What the score measures, and what it doesn’t.
The score is a reading of where the tasks you describe sit on a public augmentation-automation spectrum, localized to the Philippines using country-level signals from the same dataset.
Looking for the methodology behind the pivot plan at /intake? See /methodology. Different methodology, different output.
You answer 15 quick questions about your role, your employer, your AI tool familiarity, and how your time actually splits across tasks. We compute a single number from 0 to 100 that says: of the work you described, how much of it sits in the augmentation lane (where AI helps you do the work) versus the automation lane (where AI is being asked to do the work directly).
Higher means more augmentation, which is the better outcome for the worker. Lower means more automation, which is the signal that warrants a reskilling conversation.
The score draws from a public, regularly refreshed dataset of real AI-tool usage observed across thousands of occupational tasks. The dataset is released by an external research team, free of charge, with documentation. It includes both:
- Task-level breakdowns of how AI is being used (directive, feedback loop, task iteration, validation, learning) across roughly 3,300 distinct O*NET-coded tasks. From this we compute, per task, what share of usage looks like automation versus augmentation.
- Country-level usage data, including a Philippine slice with 9,075 observations. From this we compute a Philippine-specific tilt: how the country’s overall augmentation-automation split compares to the global average.
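The per-task split can be sketched as follows. The mode-to-lane grouping shown here (directive and feedback-loop usage counted as automation; iteration, validation, and learning counted as augmentation) is illustrative; the upstream dataset's documentation defines the authoritative mapping.

```python
def task_split(mode_counts):
    """Given usage-mode observation counts for one task, return
    (automation_share, augmentation_share), or None if the task has
    no classified usage.

    The mode-to-lane mapping below is an illustrative assumption,
    not the upstream dataset's official grouping.
    """
    automation_modes = {"directive", "feedback loop"}
    augmentation_modes = {"task iteration", "validation", "learning"}
    auto = sum(v for k, v in mode_counts.items() if k in automation_modes)
    aug = sum(v for k, v in mode_counts.items() if k in augmentation_modes)
    total = auto + aug
    if total == 0:
        return None  # no classified usage for this task
    return auto / total, aug / total

# Example: a task with mostly iterative usage leans augmentation.
print(task_split({"directive": 30, "task iteration": 50, "validation": 20}))
# → (0.3, 0.7)
```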
The Philippine slice tells us, among other things, that per-capita AI tool usage in the Philippines is roughly 0.45x the global average and that the country’s task-level interaction tilt is more directive (1.07x global) and more iteration-heavy (1.27x global) than typical. Top-level automation in the Philippine slice sits at 52% and augmentation at 48%.
We also reference Philippine labor-market analyses from the IMF (Cucio & Hennig, Feb 2025), AMRO Asia (Dec 2025), the ILO research brief by Phu Huynh (Feb 2026), the World Bank, and the Philippine Statistics Authority for context on which sectors carry which exposure profiles. We do not stack their numbers; each comes from a different methodology and we treat them as separately attributed signals.
For each of your 12 task answers, we look up the corresponding task rows in the public dataset. We compute a weighted average of automation and augmentation shares across those rows, weighted by how much time you said you spend on each task. We then apply a small Philippine-specific tilt (approximately ×0.95 on the global aug share) to localize the answer, plus a minor employer modifier (±3 percentage points) reflecting how aggressively AI rolls out at different employer types. The raw country/global ratio sits closer to 0.87, but we clip the tilt to the [0.95, 1.05] band so the country signal nudges the answer rather than dominating it.
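The aggregation step above can be sketched in a few lines. Function and parameter names here are illustrative, not our production code; the 0.95 tilt and the ±3-point employer modifier are the values stated above.

```python
def augmentation_score(tasks, ph_tilt=0.95, employer_modifier=0.0):
    """Weighted augmentation score, 0-100 (sketch; names are illustrative).

    tasks: list of (time_share, augmentation_share) pairs from the
      dataset lookup, where time_share values sum to 1.0.
    ph_tilt: country tilt on the global augmentation share, already
      clipped to the [0.95, 1.05] band (the raw PH/global ratio of
      ~0.87 clips to 0.95).
    employer_modifier: at most +/- 3 percentage points.
    """
    weighted_aug = sum(w * aug for w, aug in tasks)
    score = weighted_aug * ph_tilt * 100 + employer_modifier
    return max(0.0, min(100.0, score))

# Half the time on a 60%-augmentation task, the rest split across a
# 40% and an 80% task, at an AI-aggressive employer (+3 points).
print(augmentation_score(
    [(0.5, 0.60), (0.25, 0.40), (0.25, 0.80)],
    employer_modifier=3.0,
))
```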
The score is the augmentation share, rounded to an integer 0-100. The tier comes from fixed thresholds: ≥60 highly resilient, 50-59 resilient, 40-49 worth reviewing, below 40 migration recommended.
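The fixed thresholds map to tiers as a simple cascade; a minimal sketch:

```python
def tier(score: int) -> str:
    """Map a rounded 0-100 augmentation score to the fixed tiers."""
    if score >= 60:
        return "highly resilient"
    if score >= 50:
        return "resilient"
    if score >= 40:
        return "worth reviewing"
    return "migration recommended"

print(tier(57))  # → resilient
```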
A coverage indicator tells you what share of your tasks had direct evidence in the dataset versus what we imputed from your occupational group. Below 40% coverage we flag the result as lower-confidence. Below 25% we fall back to a group-level average. Below 10% we refuse to score.
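The coverage thresholds form a simple policy ladder; a sketch, with coverage expressed as a fraction rather than a percentage (labels are illustrative):

```python
def coverage_policy(coverage: float) -> str:
    """Decide how to treat a result given its task-coverage fraction."""
    if coverage < 0.10:
        return "refuse"         # too little direct evidence to score
    if coverage < 0.25:
        return "group_average"  # fall back to occupational-group average
    if coverage < 0.40:
        return "low_confidence" # score, but flag the result
    return "ok"

print(coverage_policy(0.65))  # → ok
```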
Public AI-usage data has been released across multiple refresh windows. For each occupational major group we hold a directional slope drawn from observed cross-release deltas in the upstream dataset during Phase 1 research. We use that slope to project your score 18 months and 36 months out.
The 18-month and 36-month numbers are directional, not predictive. The slopes are constants today, calibrated against early cross-release deltas; we plan to re-derive them empirically once enough enriched releases land. Treat the trajectory as “here is the way the wind has been blowing for similar tasks,” not as a forecast for your specific role. Your slope can change at any time as new dataset refreshes arrive. We refresh the inputs every time the upstream dataset publishes.
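The projection step can be sketched as a constant per-year slope applied at the two horizons. The slope value below is illustrative, not one of our calibrated group constants.

```python
def project(score: float, annual_slope_pts: float) -> dict:
    """Directional projection (sketch): apply a constant per-year slope,
    in score points, clamped to the 0-100 range at each horizon."""
    def clamp(s: float) -> float:
        return max(0.0, min(100.0, s))
    return {
        "now": score,
        "18mo": clamp(score + annual_slope_pts * 1.5),
        "36mo": clamp(score + annual_slope_pts * 3.0),
    }

# A score of 57 with a hypothetical -1.2 points/year group slope.
print(project(57.0, -1.2))
```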
- About 38% of Philippine AI-usage observations in the dataset fall outside the public O*NET classification. Those rows are excluded from the score. The remaining classified portion is what we draw from.
- Different occupations have different task coverage in the dataset. Software developer tasks have direct evidence on roughly two-thirds of their tasks; voice customer service representative tasks have direct evidence on roughly one-third. We use a coverage-based fallback so the answer doesn’t depend on us pretending we have data we don’t.
- The score uses the 2010 O*NET-SOC vintage to match the upstream dataset. Newer SOC equivalents (Software Developers reclassified as 15-1252) would not match the 19,500-task corpus.
- Voice customer service representatives land in the “worth reviewing” tier in our calibration, not the “migration recommended” tier. This is contrary to the most common public narrative about AI and BPO. The dataset shows that current AI usage on customer-service tasks is more augmentation than automation. The trajectory is declining, slowly.
- The score uses only the answers you give us. We can’t measure your employer’s strategy, your manager’s decisions, or the local labor market in your specific BPO campus.
From the country-level Philippine slice of the public dataset, this is how classified usage distributes across occupational major groups.
| Occupational group | Share of PH usage |
|---|---|
| Computer and Mathematical | 31.0% |
| Educational Instruction and Library | 15.4% |
| Arts, Design, Entertainment, Sports, and Media | 7.5% |
| Office and Administrative Support | 3.5% |
| Life, Physical, and Social Science | 1.4% |
| Community and Social Service | 1.0% |
| Production | 0.9% |
| Sales and Related | 0.6% |
| Architecture and Engineering | 0.4% |
| Farming, Fishing, and Forestry | 0.3% |
| Personal Care and Service | 0.2% |
| Management | 0.2% |
Filipino professionals have free or low-cost options for picking up adjacent skills:
- TESDA: government-funded vocational courses including Programming NC III, Bookkeeping NC III, and many others.
- DICT: Department of Information and Communications Technology runs free tech-skills programs for Filipinos.
- Coursera financial aid: full course access for free if you apply for aid.
- LinkedIn Learning: sometimes free through a public library card or employer benefit.
We don’t take affiliate payments. None of the above links pay us. We recommend whichever path you can actually finish.
Disclaimer. The score is a statistical indicator derived from public data. Patterns may have legitimate explanations. This score is one signal among many; it is not career advice. We don’t name the upstream vendor on user-facing pages so the score reads as a methodology, not as a logo; the dataset attribution is tracked internally and refreshed monthly.