<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0"><channel><title><![CDATA[Marah's blog]]></title><description><![CDATA[Marah's blog]]></description><link>https://blog.marahshahin.com</link><generator>RSS for Node</generator><lastBuildDate>Sun, 17 May 2026 04:57:28 GMT</lastBuildDate><atom:link href="https://blog.marahshahin.com/rss.xml" rel="self" type="application/rss+xml"/><language><![CDATA[en]]></language><ttl>60</ttl><item><title><![CDATA[The Agent Paradox]]></title><description><![CDATA[The popularity and interest in Artificial Intelligence (“AI”) agents (or “Agentic AI”) have boomed in the past couple of years. This idea of AI “doing” as opposed to simply “informing” has captured so much interest that large platforms such as AWS, G...]]></description><link>https://blog.marahshahin.com/the-agent-paradox</link><guid isPermaLink="true">https://blog.marahshahin.com/the-agent-paradox</guid><category><![CDATA[agentic AI]]></category><category><![CDATA[Ethical AI]]></category><category><![CDATA[Artificial Intelligence]]></category><dc:creator><![CDATA[Marah Shahin]]></dc:creator><pubDate>Tue, 03 Jun 2025 21:20:01 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1748963075453/992ff468-4e68-460d-a389-80c89f752e13.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>The popularity and interest in Artificial Intelligence (“AI”) agents (or “Agentic AI”) have boomed in the past couple of years. 
This idea of AI “doing” as opposed to simply “informing” has captured so much interest that large platforms such as <a target="_blank" href="https://aws.amazon.com/bedrock/agents/">AWS</a>, <a target="_blank" href="https://developers.googleblog.com/en/agent-development-kit-easy-to-build-multi-agent-applications/">GCP</a>, <a target="_blank" href="https://techcommunity.microsoft.com/blog/azure-ai-services-blog/introducing-azure-ai-agent-service/4298357">Azure</a>, <a target="_blank" href="https://langchain-ai.github.io/langgraph/agents/prebuilt/#available-libraries">LangChain</a>, and <a target="_blank" href="https://www.ilsilfverskiold.com/articles/agentic-aI-comparing-new-open-source-frameworks">others</a> have developed services, packages, examples, blueprints, frameworks, and more to make starting your AI agents journey seamless.</p>
<p>At a high level, the structure of an agent with a Large Language Model (“LLM”) embedded usually looks something like this:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1747595256481/39294f65-0c4f-4b92-b569-46e77694f262.png" alt class="image--center mx-auto" /></p>
<p>An LLM-powered agent is set up with a goal to reach or a task to complete. The LLM has access to a range of utilities, or “tools” such as relevant files, APIs, functions, etc. Given the tools and the task at hand, the LLM “decides” what action to proceed with using the tools at its disposal; this may repeat until the “desired” outcome has been fulfilled.</p>
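<p>The loop described above can be sketched in a few lines of Python. This is a minimal illustration only, not any particular framework's API; the <code>llm_decide</code> function and the tool names are hypothetical stand-ins.</p>

```python
# Minimal sketch of an LLM-powered agent loop (illustrative; llm_decide
# and the tool names are hypothetical stand-ins, not a real framework).

def run_agent(task, tools, llm_decide, max_steps=10):
    """Repeatedly ask the LLM to choose a tool until it signals completion."""
    history = []
    for _ in range(max_steps):
        decision = llm_decide(task, list(tools), history)
        if decision["action"] == "finish":
            return decision["result"]
        # Execute the chosen tool and feed the observation back to the LLM.
        observation = tools[decision["action"]](decision["input"])
        history.append((decision["action"], observation))
    raise RuntimeError("Agent did not finish within the step budget")
```

<p>Note the <code>max_steps</code> cap: without it, a model that never emits a “finish” action loops forever.</p>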
<p>While the support these companies provide can be valuable, it’s clear to engineers that the overuse and needless adoption of AI will keep growing, and that the likelihood and severity of the associated risks will grow with it. Organisations are making a critical error: they’re deploying AI agents as a solution in search of a problem, when traditional automation would deliver better results with lower risk in most situations. Here, we argue that, in many cases, especially those involving structured processes, the risks of adding an LLM-powered agent can outweigh the benefits. As much as we’d love AI to be a set-and-forget endeavour, we are learning, now more than ever, why this will not work.</p>
<h1 id="heading-determinism-vs-free-will-stochastic-programmes">Determinism vs “free will” (stochastic programmes)</h1>
<p>Arguably, one of the greatest debates in philosophy takes its own form in the AI world - does AI have agency? That is, given an input, would we know the output of an LLM without running it? Fundamentally, LLMs are deterministic: given the same input and the same weights, the model computes the same probability distribution over possible next tokens every time. However, because we want LLMs to appear more realistic and creative, produce better outputs, and handle a wide range of prompts, we deliberately sample from that distribution with “randomness” at inference time. This stochastic sampling is what makes the final output of an LLM inherently unpredictable (<a target="_blank" href="https://arxiv.org/html/2408.11863v1">see here for the mathematically curious</a>). This can be acceptable when the model is merely informing and an expert is reviewing, but when we consider the actual end-to-end completion of a task, the stochastic nature of these models can be dangerous.</p>
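<p>The effect of sampling “randomness” can be demonstrated with a toy next-token distribution. The sketch below (plain Python, with made-up logits) shows how temperature scaling reshapes the softmax: near-zero temperature collapses sampling onto the most likely token, while higher temperatures make the output stochastic.</p>

```python
import math
import random

def sample_with_temperature(logits, temperature, rng):
    """Sample an index from logits after temperature-scaled softmax."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)                          # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    r, cumulative = rng.random(), 0.0
    for i, e in enumerate(exps):
        cumulative += e / total
        if r < cumulative:
            return i
    return len(exps) - 1

rng = random.Random(0)
logits = [2.0, 1.0, 0.1]                     # made-up "next-token" scores
cold = {sample_with_temperature(logits, 0.05, rng) for _ in range(100)}
hot = {sample_with_temperature(logits, 2.0, rng) for _ in range(100)}
# cold collapses onto the argmax token; hot spreads across several tokens.
```

<p>Real inference stacks add further sources of nondeterminism (batching, floating-point reduction order), which is why even temperature zero does not fully pin down outputs.</p>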
<p>For example, in customer support, AI agents can safely handle routine queries (eg, tracking orders, managing shipping changes, etc) - minor output variations are tolerable. However, in healthcare, even a 5% error rate in diagnosis recommendations becomes ethically unjustifiable. Here, we explore the latter example to highlight the intensity of the situation and potential complications.</p>
<p>If we want to build a medical diagnosis agent, we can equip it with textbooks (as part of its knowledge base), APIs to professional databases, lab results, a medical history, and potentially a recording of the initial consultation with the patient. Putting aside data privacy and regulatory issues, a “random” or inconsistent result here can harm the patient (eg, through a misaligned treatment plan). One could argue that two doctors with the same level of experience, presented with the same information, may provide two different diagnoses; however, that disagreement would stem from nuanced clinical judgement. Unlike a doctor whose decision-making process is grounded in years of contextual training, an LLM’s response is the product of, among other things, probabilistic sampling, often untethered from coherent reasoning or any true understanding.</p>
<p>The main issues with the stochastic nature of LLM-driven agents are twofold. First, it’s not possible to trace every step the agent took to reach its conclusion - it is essentially a black box. Second, regardless of whether a human reviews every diagnosis, an inexperienced doctor might look at what the agent produces and have their reasoning influenced or anchored by the AI’s output, even if it is incorrect; an expert, in contrast, would likely dismiss it quickly. In contexts where the outcome leaves no room for error, or where the bulk of decisions should follow predictable, rules-based logic, introducing stochastic behaviour adds unnecessary noise and risk.</p>
<h2 id="heading-the-single-point-of-failure">The single point of failure</h2>
<p>Including an LLM in a workflow adds a point of failure due to 1) the stochastic outputs, and 2) LLMs lacking true agency. The greatest risk these agents pose is the assumption of autonomy, which inevitably leads to over-reliance on such models. It’s important to note that some agents do an incredible job under their thin agency veil; it’s easy to forget that LLMs cannot understand, verify, or truly adapt, regardless of the prompt. These models also cannot appreciate critical consequences, real-world context, etc.</p>
<p>When these LLMs are inserted into workflows they don’t need to be in, it adds this easily avoidable “point of failure,” and, if the workflow is built well, the LLM will likely be the <em>single</em> point of failure. The diagram below illustrates that because the LLM is the point of failure and it bridges the expertise to the actions, the point of failure ultimately compromises the whole workflow.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1747595492780/6781bb5c-4b52-4107-a141-e3936e48fa5a.png" alt class="image--center mx-auto" /></p>
<p>The following are some recent examples of how this single point of failure has negatively affected an entire system:</p>
<ul>
<li><p><a target="_blank" href="https://scet.berkeley.edu/the-next-next-big-thing-agentic-ais-opportunities-and-risks/">Research</a> showed an AI agent (based on ChatGPT) tasked with achieving a goal “at all costs” tried to disable its monitoring mechanisms to avoid shutdown. It covertly copied its own model weights to a new server and lied to developers about it.</p>
</li>
<li><p><a target="_blank" href="https://www.alvarezandmarsal.com/thought-leadership/demystifying-ai-agents-in-2025-separating-hype-from-reality-and-navigating-market-outlook">Air Canada’s chatbot</a> gave wrong information about bereavement fares, leading to a legal ruling against the company and a requirement to compensate the customer.</p>
</li>
<li><p><a target="_blank" href="https://www.darkreading.com/vulnerabilities-threats/ai-agents-fail-novel-put-businesses-at-risk">Microsoft researchers</a> identified new failure modes where attackers can inject malicious commands into AI agent memory, such as embedding harmful instructions in emails. This can cause AI agents to take unwanted actions like forwarding sensitive information to attackers, posing serious security risks for businesses deploying AI agents.</p>
</li>
<li><p><a target="_blank" href="https://www.applify.co/blog/mcdonalds-ai-drive-thru-failure">McDonald’s</a> ended its AI drive-thru ordering partnership with IBM in 2024 after widespread customer complaints. The AI system, tested in over 100 restaurants, was plagued by frequent misinterpretations, order errors, and slow response times.</p>
</li>
</ul>
<p>When an LLM agent is added to a workflow, we introduce a fundamentally unpredictable component, one that resists full control in otherwise deterministic systems; we ultimately cannot know what the agent will do with a given set of tools and a task. Consider an AI chatbot on a company website replacing a simple FAQ page: it’s entirely possible the chatbot hallucinates and provides incorrect or misleading information. The consequence may be minor, but there was no need for the LLM to be there in the first place, and ergo, the risk shouldn’t have existed. This is a classic example of AI overuse, akin to mobilising a bulldozer to plant a flower: simpler, more fit-for-purpose solutions exist that pose minimal risk. The fact remains that, despite the risks, companies will implement AI models whether or not they provide value. The question is: what is being done about this AI implementation frenzy?</p>
<h1 id="heading-attempts-to-address-the-risks">Attempts to Address the Risks</h1>
<p>So, is AI not being regulated at all? Some countries have introduced policies, regulations, or frameworks to provide guidance on building AI systems. Currently, some of the <a target="_blank" href="https://www.ibm.com/think/topics/eu-ai-act">strictest and most comprehensive are from the EU</a>; however, the US and China have also developed respectable frameworks of their own. The diagram below (taken from <a target="_blank" href="https://doi.org/10.1080/08839514.2025.2463722">Radanliev's 2025 AI Ethics paper</a>) compares the focus of each.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1747329842735/7684ad66-bd58-4e46-a6f6-4869aefbce21.png" alt="taken from Radanliev, P. (2025) https://doi.org/10.1080/08839514.2025.2463722" class="image--center mx-auto" /></p>
<p>Alongside the advances in regulation, there have been some positive signs recently from leading companies, academia, and the public. First, AWS has recognised the importance of guardrails and has added a <a target="_blank" href="https://aws.amazon.com/bedrock/guardrails/">feature within Bedrock</a> to support their implementation. From research, a <a target="_blank" href="https://arxiv.org/html/2412.17114v2">2024 paper</a> introduced the Ethical Technology and Holistic Oversight System (“ETHOS”) framework, which proposes a decentralised global registry of AI agents; more papers like this are being released, and the subject is gaining traction. Finally, consider the <a target="_blank" href="https://economictimes.indiatimes.com/magazines/panache/duolingo-ceo-sparks-outrage-with-ai-first-shift-is-the-owl-phasing-out-people-power-for-automation/articleshow/120756338.cms?from=mdr">public backlash Duolingo</a> received when the company revealed it was going “AI-first” - replacing teachers with AI. Users and employees voiced their concerns and are clearly unhappy with the decision. These examples illustrate a growing understanding of the complications that come with AI over-reliance and overuse.</p>
<p>Furthermore, there are endless responsible/ethical/trustworthy AI frameworks developed by countless organisations. Most highlight transparency and accountability, such as <a target="_blank" href="https://www.alvarezandmarsal.com/insights/ai-ethics-part-two-ai-framework-best-practices">NIST’s AI ethics framework</a>, or security, such as <a target="_blank" href="https://computing.mit.edu/wp-content/uploads/2023/11/AIPolicyBrief.pdf">MIT’s AI Policy Brief</a> mandating ‘security-first’ agent design. Nonetheless, catastrophes still occur - the fatal self-driving accident (<a target="_blank" href="https://www.bbc.com/news/technology-54175359">Uber 2018</a>), <a target="_blank" href="https://www.nbcnews.com/tech/tech-news/feds-say-tesla-autopilot-linked-hundreds-collisions-critical-safety-ga-rcna149512">Tesla’s</a> autopilot crashes, <a target="_blank" href="https://www.cbsnews.com/news/google-ai-chatbot-threatening-message-human-please-die/">Google's Gemini</a> telling a student to “please die” - often with no explanation, accountability, or, sometimes, even acknowledgement.</p>
<h1 id="heading-how-can-we-continue-using-ai-agents-especially-in-critical-systems">How can we continue using AI Agents, especially in critical systems?</h1>
<p>When does adding AI agents to workflows create more risk than value, and how can organisations make better decisions about where AI belongs?</p>
<p>The potential of AI holds undeniable value, so the answer to mitigating the risks of AI and AI agents can’t be “avoid AI at all costs”. The correct approach is to build and use AI systems responsibly, and only where it makes sense. The decision matrix below provides guidance on when to embed AI, based on the use case’s potential risk and how standardised the steps in the workflow are (its process variability). The matrix is split into four sections:</p>
<ol>
<li><p>Low risk, high process variability: ideal use cases for LLM-powered agents with little human oversight required</p>
</li>
<li><p>High risk, high process variability: LLM-powered agents can be used, but only as an aid and in conjunction with human efforts</p>
</li>
<li><p>Low risk, low process variability: can use LLM integrated solutions, though likely not required in the workflow</p>
</li>
<li><p>High risk, low process variability: not suitable use cases for an LLM to be part of the workflow</p>
</li>
</ol>
<p>Any task that falls in the red quadrants is not a valid use case for AI and should be handled with traditional automation methods if possible, or human experts where needed.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1748956378765/eab31c2b-fec1-4a93-8757-a76209be3651.png" alt class="image--center mx-auto" /></p>
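<p>The four quadrants above can be encoded as a simple lookup. This is a sketch under the assumption that risk and process variability have each already been assessed as "low" or "high"; the recommendation strings paraphrase the matrix.</p>

```python
def agent_recommendation(risk, process_variability):
    """Map the (risk, process variability) quadrant to a recommendation.

    Both arguments must be "low" or "high"; the labels follow the
    decision matrix described in the text.
    """
    matrix = {
        ("low", "high"): "Good fit: LLM-powered agent with light human oversight",
        ("high", "high"): "Aid only: LLM used in conjunction with human efforts",
        ("low", "low"): "Possible, but an LLM is likely not required",
        ("high", "low"): "Not suitable: use traditional automation or human experts",
    }
    return matrix[(risk, process_variability)]
```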
<p>As a further aid, if any of the following have been said, note them as red flags and assume AI is not needed in the situation or the company is not production-ready:</p>
<ul>
<li><p>“We want to use AI because it's innovative”</p>
</li>
<li><p>“The current system works fine, but AI would be cooler”</p>
</li>
<li><p>“Can we add AI to this?”</p>
</li>
<li><p>“We'll figure out the edge cases later”</p>
</li>
<li><p>“It works in the demo environment so let’s deploy”</p>
</li>
</ul>
<p>Some questions that can be considered to gauge a use case’s AI readiness and/or appropriateness are:</p>
<ul>
<li><p>Does this task genuinely require creative problem-solving?</p>
</li>
<li><p>What happens if the agent makes the wrong decision 5% of the time?</p>
</li>
<li><p>Who will debug this when it behaves unexpectedly at 3 AM?</p>
</li>
<li><p>Is the perceived benefit worth the added complexity overhead?</p>
</li>
</ul>
<p>In scenarios where AI is used, we cannot be satisfied with systems that are merely compliant with policies and regulations; those rules simply cannot keep up with the advancements in this rapidly growing field and do not yet truly govern AI. Working this way will always leave gaps and room for catastrophic errors. What we can do, however, is implement guardrails and mechanisms to reduce the risk of uncertainty.</p>
<p>Guardrails can be applied at the input layer (screening what is passed to the LLM) and at the output layer (evaluating what the LLM has generated). Engineers can implement technical guardrails such as encouraging deterministic outputs by setting the temperature to zero, quantifying (and attempting to reduce) uncertainty through <a target="_blank" href="https://arxiv.org/html/2504.05278v1">modelling techniques</a>, using an <a target="_blank" href="https://www.confident-ai.com/blog/llm-guardrails-the-ultimate-guide-to-safeguard-llm-systems">LLM-as-a-judge</a>, applying rules-based logic to reduce hallucinations, detecting bias, tracking drift, and so on. Still, the complexity of LLMs means even these technical methods aren’t always effective (eg, <a target="_blank" href="https://www.vincentschmalbach.com/does-temperature-0-guarantee-deterministic-llm-outputs/">setting temp=0 does not guarantee 100% determinism</a>). There are, of course, non-technical controls too - some of the most effective are as fundamental as ensuring a diverse team or, where possible, having a human-in-the-loop (<a target="_blank" href="https://hdsr.mitpress.mit.edu/pub/812vijgg">HITL</a>).</p>
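<p>At their simplest, input and output guardrails are pre- and post-checks wrapped around the model call. The sketch below is illustrative only: <code>call_llm</code>, the blocked-topic list, and the citation check are hypothetical stand-ins for what would, in practice, be trained classifiers, allow-lists, or an LLM-as-a-judge.</p>

```python
# Sketch of input- and output-layer guardrails around an LLM call.
# The specific checks are hypothetical; real systems would use trained
# classifiers or an LLM-as-a-judge rather than substring matching.

BLOCKED_TOPICS = {"diagnosis", "prescription"}   # assumed policy for this sketch

def input_guardrail(prompt):
    """Reject prompts touching topics the system must not handle."""
    if any(topic in prompt.lower() for topic in BLOCKED_TOPICS):
        raise ValueError("Prompt touches a blocked topic; route to a human")
    return prompt

def output_guardrail(response, required_marker="[source]"):
    """Only release answers that cite a source; otherwise fall back safely."""
    if required_marker not in response:
        return "I can't answer that reliably; connecting you to a person."
    return response

def guarded_call(prompt, call_llm):
    return output_guardrail(call_llm(input_guardrail(prompt)))
```

<p>The design choice worth noting is that the fallback path degrades to a human, never to an unchecked model answer.</p>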
<p>It’s important not to get too caught up in the AI agent hype and understand the right use cases for LLM-powered agents (low-risk situations). For problems that can be solved following a known set of steps, it’s always better to fall back on traditional automation methods than to add a point of failure in the system. Predictable workflows may be considered boring, but they’re safe, reliable, and ethical. AI leaders need to start asking, are we truly adding value to our company, or just additional and unnecessary risk?</p>
]]></content:encoded></item><item><title><![CDATA[Humanitarian Machine Learning]]></title><description><![CDATA[Introduction & Background
Machine Learning (“ML”) has predominantly been dismissed as futuristic and obscure. However, with the right tools and methodology, ML can be used to improve the lives of millions around the world. In many humanitarian suppor...]]></description><link>https://blog.marahshahin.com/humanitarian-machine-learning</link><guid isPermaLink="true">https://blog.marahshahin.com/humanitarian-machine-learning</guid><category><![CDATA[humanitarian engineering]]></category><category><![CDATA[Machine Learning]]></category><category><![CDATA[AWS]]></category><dc:creator><![CDATA[Marah Shahin]]></dc:creator><pubDate>Mon, 01 Apr 2024 14:54:04 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1711983225329/29e0c496-4496-4b8b-805d-ffc0cc8648fa.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h1 id="heading-introduction-amp-background">Introduction &amp; Background</h1>
<p>Machine Learning (“ML”) has predominantly been dismissed as <em>futuristic</em> and <em>obscure</em>. However, with the right tools and methodology, ML can be used to improve the lives of millions around the world. In many humanitarian support instances, human intervention may not be possible or safe; these cases specifically are where ML could be a viable option. Defined simply, ML <em>is the field of study that gives computers the ability to learn without being explicitly programmed.</em> ML algorithms have a wide range of applications and are present in many everyday activities, from the automatic chat boxes encountered on a clothing website to the technology within a smartwatch that identifies whether a person is running or swimming. Here, we review literature on how ML has shaped humanitarian projects in an efficient, impactful manner. Where information is missing, the author draws on concepts and experience as a machine learning engineer to fill the gaps and provide further practical detail. Research is then linked to illustrate how these concepts can be built upon to support those in need effectively.</p>
<h2 id="heading-artificial-intelligence-amp-machine-learning">Artificial Intelligence &amp; Machine Learning</h2>
<p>Artificial Intelligence (“AI”) inherently mimics how humans perceive information, devise insights based on experience and make decisions accordingly. AI comprises many popular areas, as shown in the figure below.</p>
<p><img src="https://lh7-us.googleusercontent.com/UqMC27Tz7xqA_BFEfIWHmU0puW2Dk0NJhRB2u40Tcm9ISXQv_fHciNdPtVDleE8o_k5zsQG3wI5zCe-Htnt1h9zavzYCswzUNXY7QNHciCOBhbzULVI3jz_gOC2JLF0D1_P2ElbLHYyqt-V9MwTXFpY" alt class="image--center mx-auto" /></p>
<p>ML, the focus of this article, is a growing subset of AI - as seen in the figure above. While AI is a vast field encompassing familiar applications such as Siri or Alexa, this article concentrates on ML techniques that have been, or can be, applied in the humanitarian space. ML models differ from traditional modelling approaches. Typically, an input is pushed through a known model to produce an output - this is referred to as a forward problem. The reverse process (estimating the input from the output and the model) is an inverse problem. In ML, the model itself is learned from data rather than specified by hand.</p>
<p>Machine learning is generally used for one of two problem types:</p>
<ul>
<li><p>Problems for which existing solutions require a lot of hand-tuning or long lists of rules</p>
</li>
<li><p>Complex problems for which there is no good solution at all using a traditional approach</p>
</li>
</ul>
<p>Here, ML is used for both types of problems with the outcome of relief to communities in need.</p>
<h1 id="heading-validating-war-crimes">Validating War Crimes</h1>
<p>Currently, evidence for crimes is collected by eyewitnesses such as journalists. Additionally, images, footage and other media can be doctored and tampered with to an alarmingly realistic state. The quality of this evidence (if any) can be deemed untrustworthy as it may encompass human bias or may not portray the situation in its entirety. Thus, the opportunities presented here are twofold. First, ML can be used to identify war crimes in areas where human intervention is dangerous. Second, where evidence may have been altered to disprove war crimes, ML can support the recognition of any potential edits.</p>
<p>Using ML methods to prove war crimes is potentially the most promising application. Civilians, hospitals, and schools are often intentionally targeted and dismissed as “collateral damage”  regardless of the direct violation of international law. Certain illegal weaponry is also used. While there are numerous counts of other war crimes happening, these two scenarios are particularly difficult to prove. The ability to provide reliable, undeniable, verified evidence of crimes in court enhances accountability, enforces action, and, ideally, will reduce the frequency of said crimes. The investment required to develop such a programme is minimal. In some instances, the data necessary to train and apply the model is either readily available or can be substituted with synthetic data. Applying ML in this context has shown to be extremely powerful.</p>
<p>In 2015, a group of human rights activists and researchers gathered more than 350,000 hours of footage of potential war crimes evidence from Syria. Reviewing this footage manually would have been a painstaking task, with no guarantee of identifying small snippets of information easily missed by the human eye. Enter an ML programme: it leveraged neural networks and provided promising results, identifying further proof of illegal weapons being used by the assailant. An example is shown in a snippet from actual footage below.</p>
<p><img src="https://lh7-us.googleusercontent.com/bwr5_f5W0f-XdqqHtZC9tIq68gFL9oFT2rYp72uJ-ufvXp-PdrfzuUpYHh4E_jx0bCTwaHktcCwhL6xJnY9suYBAzDr3wBfzeiU3c4hwtbAIVU_OjR7mPqJuVBI6hNFe2UZADO7WMKx5KLRy3TyCVEk" alt /></p>
<p>Hidden objects, such as the illegal weapon above, can easily be missed by the human eye. Machines, however, see the world differently. ML reduces human error, greatly improves efficiency, and protects the mental health of those otherwise required to search through the footage manually.</p>
<h1 id="heading-predicting-aerial-strikes">Predicting Aerial Strikes</h1>
<p>Residents remaining in certain occupied areas live with the uncertainty that, at any given time, a ten-minute warning may be given for an incoming airstrike. This could happen in the middle of the night, first thing in the morning, or seemingly at random during the day. Ten minutes alone isn’t sufficient to process the loss of your home. Not enough time to wake yourself and your son up. Not enough time to remember your passport, birth certificate, or any significant belongings. The residents are left homeless, evidence of their lives scattered around the street in the form of rubble and unrecognisable fragments.</p>
<p>While those in air strike "hotspots" can sometimes estimate which times and places are most prone to strikes, they cannot account for strikes that appear random. They cannot say that on day X, there will be an attack in location Y. This is where ML could prove invaluable. An ML algorithm has been applied in a similar context, showing promising results with an accuracy of around 90% for predicting attacks a week in advance. As with all prediction algorithms, the shorter the forecasting horizon, the better the results. Nevertheless, a week can make all the difference when lives are on the line.</p>
<p>Again, in support of Syria, a group of engineers from Hala Systems developed a model, trained on social media data, to identify airstrike-relevant posts and, ultimately, improve accountability. The classification model showed an accuracy of 96% when tested on real data. This is the first step towards predicting airstrikes, as the model simply classified whether data was relevant. Having that data is a gain in itself, providing the same benefit as the war crimes validation example: researchers' mental well-being is protected as the painful task is passed on to a machine.</p>
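<p>To make the classification step concrete, here is a toy bag-of-words Naive Bayes relevance classifier in plain Python. The training posts and labels below are invented for illustration; this is not Hala Systems' actual pipeline, which would use far richer features and data.</p>

```python
import math
from collections import Counter

def train_nb(posts, labels):
    """Count words per class (0 = irrelevant, 1 = airstrike-relevant)."""
    counts = {0: Counter(), 1: Counter()}
    class_totals = Counter(labels)
    for text, label in zip(posts, labels):
        counts[label].update(text.lower().split())
    return counts, class_totals

def predict_nb(text, counts, class_totals):
    """Choose the class with the higher Laplace-smoothed log-likelihood."""
    vocab = set(counts[0]) | set(counts[1])
    n = sum(class_totals.values())
    best_class, best_score = None, None
    for c in (0, 1):
        total = sum(counts[c].values())
        score = math.log(class_totals[c] / n)      # class prior
        for word in text.lower().split():
            score += math.log((counts[c][word] + 1) / (total + len(vocab)))
        if best_score is None or score > best_score:
            best_class, best_score = c, score
    return best_class
```

<p>Trained on a handful of labelled posts, the model scores unseen text against each class and picks the likelier one; smoothing keeps unseen words from zeroing out a class.</p>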
<h1 id="heading-additional-ai-enabled-humanitarian-aid-applications">Additional AI-enabled Humanitarian Aid Applications</h1>
<p>AI/ML has been used for a wide range of applications across the humanitarian sector, as documented in the sections above. Notable mentions include forecasting displacement, optimising resource allocation, improving health care, digitalising physical records, breaking communication barriers, disaster relief, and predicting food scarcity. Some challenges are preventing these models from being scaled widely. Presently, a lack of data, awareness, talent and funding are the largest barriers to adopting AI within NGOs focusing on humanitarian aid. Other challenges include unethical AI implications, lack of tools/services, and leveraging AI insights to an actionable plan.</p>
<p>Given displacement is a common occurrence in struggling countries, a model to predict such trends is of clear interest. A tool developed by the Danish Refugee Council in early 2022 forecasts displacement globally with encouraging accuracy. Five high-level factors were considered when building the model: economy, security, politics/governance, environment, and society. These categories were found to have the largest impact on displacement. The tool takes an ensemble model approach, meaning multiple models are combined to reach a prediction. In this case, the primary algorithm applied was gradient boosting - an ensemble technique that builds a strong predictor from many weak learners, each fitted to the errors of those before it. The reported accuracy stands optimistically high, with almost 67% of predictions within 15% of actual values. An example prediction for displacement in Afghanistan is shown in the chart below.</p>
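<p>To illustrate the idea behind gradient boosting, the sketch below implements a tiny boosted regressor for squared error using one-dimensional decision stumps as weak learners. It is a teaching toy on made-up data, not the Danish Refugee Council's model; production work would use a mature library.</p>

```python
# Toy gradient boosting for regression: each round fits a decision stump
# to the current residuals and adds a damped correction to the prediction.

def fit_stump(x, residuals):
    """Find the 1-D threshold split minimising squared error on residuals."""
    best = None
    for t in sorted(set(x))[:-1]:          # the largest x would leave the right side empty
        left = [r for xi, r in zip(x, residuals) if xi <= t]
        right = [r for xi, r in zip(x, residuals) if xi > t]
        left_mean = sum(left) / len(left)
        right_mean = sum(right) / len(right)
        err = (sum((r - left_mean) ** 2 for r in left)
               + sum((r - right_mean) ** 2 for r in right))
        if best is None or err < best[0]:
            best = (err, t, left_mean, right_mean)
    return best[1:]                        # (threshold, left_mean, right_mean)

def fit_gbm(x, y, n_rounds=20, lr=0.5):
    base = sum(y) / len(y)                 # start from the mean prediction
    pred = [base] * len(y)
    stumps = []
    for _ in range(n_rounds):
        residuals = [yi - pi for yi, pi in zip(y, pred)]
        t, lm, rm = fit_stump(x, residuals)
        stumps.append((t, lm, rm))
        pred = [p + lr * (lm if xi <= t else rm) for xi, p in zip(x, pred)]
    return base, stumps

def predict_gbm(xi, base, stumps, lr=0.5):
    return base + sum(lr * (lm if xi <= t else rm) for t, lm, rm in stumps)
```

<p>On a simple step-shaped dataset, the residuals shrink geometrically with each round, which is the essence of boosting: many weak corrections compounding into a strong model.</p>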
<p><img src="https://lh7-us.googleusercontent.com/j3GqFskpw8hnFY9x4xFYbzvK_7Xbbx5WFdr6gQUGiNvObCrT39VyK-tKeEEi8oXCP-v01ERSwnsau9HnKhV7kqqtU7IpreRc9xg98m-oO-MMpTRWOBFXYz4LUNyLl3mbavhcg7aHRBFSMbP7a_xLvAQ" alt /></p>
<p>Given the complexity of the factors that constitute displacement (shown below) is far too great for a human to decipher, ML was used here to enhance forecasts and enable groups to take action. Ideally, the forecasts prompt preventative action that renders them incorrect: the tool's purpose is to inspire action and instil confidence in investment decision-making.</p>
<p><img src="https://lh7-us.googleusercontent.com/CRbS-AoBSTRABdkSIC9R3cVhUeDOm0p4TIGfbgyPfoGoad0Yy4xy5ByYjoXS1oDX0a-Yuq7IFGcrE2EF9AJ_8uvFotqdRz-RyYRrrsJkEUO-SmsOy8LKwOCtMrHQlpgpfguVcK7oa2IOHEzO32DikaA" alt /></p>
<p>Another popular application is optimising aid such as resources, food or employment. This has been done in many parts of the world, like Jordan for Syrian refugees, Nepal after the 2015 earthquake, Bangladesh post-Cyclone Yaas in 2021, and globally through tools such as HungerMap that monitors the severity of hunger in real-time and Microsoft’s AI Sowing App that provides information to improve crop production. There isn’t a limit on what can be done with AI - spreading awareness of existing tools or what can be created drives momentum for innovation and better ways of working.</p>
<h1 id="heading-unmanned-aerial-vehicles">Unmanned Aerial Vehicles</h1>
<p>An unmanned aerial vehicle (“UAV”) is an aircraft that does not require a pilot, such as a drone. These vehicles remove the risks a human pilot would face and allow travel into dangerous or unsafe areas, which makes them particularly useful for humanitarian applications. In areas like Gaza or, more recently, Nablus, where movement in and out is intensely restricted, those within require much support. Coupling UAVs with the aforementioned concepts, applications, and algorithms opens opportunities to provide humanitarian aid in countless innovative ways.</p>
<p>First and foremost, data collection and real-time processing. As noted in the previous sections, no aid can be possible without a sufficient amount of clean data. UAVs can be used to record footage and programmed to capture images when a frame of interest appears. Additionally, the machine can move closer to certain objects, such as potential weaponry, to gain a clearer view and provide an indisputable full image of the situation. This process can be constructed at minimal cost and effort via AWS. An example architecture of a potential set-up of the flow is shown below.</p>
<p><img src="https://lh7-us.googleusercontent.com/a0qOLPj1wjoi0oTCN_MtKOdwB6055d_2uoq38ygGm2ygshjQ45r2wrLYfEdhMHrRbPU_qZwmZ81JOE922xZ-olhvg113-LAUueE1lR8f-U4EG0zkgOHmn2acC49OiDXv3qfItqBgiXUs9YSXm-X6g_E" alt /></p>
<p>Each box is an AWS service with each function as follows: Kinesis Video Streams to Fargate (Docker) for consuming the video stream in real-time, Fargate to DynamoDB to checkpoint the stream and process the status, Fargate to SageMaker where frames are sent and decoded for ML-based inference, Fargate to Kinesis Data Streams to publish the inference results, and Kinesis Data Streams to AWS Lambda to push notifications or potentially trigger an action based on the analysis.</p>
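<p>The "capture images when a frame of interest appears" step can be sketched as a simple threshold filter over per-frame detection scores. The <code>detect</code> callable below is a hypothetical stand-in for the SageMaker inference call in the architecture above.</p>

```python
# Sketch of the frame-of-interest step: keep only frames whose detection
# score clears a threshold. detect() stands in for a SageMaker endpoint
# returning the model's confidence that an object of interest is present.

def capture_frames(frames, detect, threshold=0.8):
    """Return (frame_id, score) pairs for frames worth escalating."""
    captured = []
    for frame_id, frame in frames:
        score = detect(frame)
        if score >= threshold:
            captured.append((frame_id, score))
    return captured
```

<p>Downstream, each captured frame could be written to storage and pushed onto the data stream for notification, mirroring the Fargate-to-Kinesis leg of the diagram.</p>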
<p>Next, providing and improving medical and resource support. UAVs can drop medical and/or general supplies based on an algorithm that identifies the highest-priority residents. This application builds on the examples in the previous sections while ensuring safety and accessibility in certain regions. Monitoring and predicting is the first step; taking action is next. UAVs present a safe and sustainable method for reaching those in need. The technology required for these aerial vehicles exists and is readily available. However, practically implementing such an endeavour may prove difficult, given many areas where resources are in highest demand have bans on drones.</p>
<h1 id="heading-conclusion">Conclusion</h1>
<p>This article surveys what has been done with ML methodologies and what can be done in the humanitarian space in years to come. Presently, the literature is fairly scarce, with little research on applying ML to humanitarian aid projects. Nevertheless, the examples highlighted here demonstrate real movement in the field and its application to humanitarianism. There is still a great deal to learn within AI/ML for NGOs and other entities alike; however, with the right tools and guidance, making AI/ML accessible is the first step to enhancing the way people are supported around the world.</p>
<h1 id="heading-bibliography">Bibliography</h1>
<ul>
<li><p><a target="_blank" href="https://wfpinnovation.medium.com/5-innovations-powered-by-artificial-intelligence-that-tackle-world-hunger-81c59247759e">https://wfpinnovation.medium.com/5-innovations-powered-by-artificial-intelligence-that-tackle-world-hunger-81c59247759e</a></p>
</li>
<li><p><a target="_blank" href="https://nethope.org/webinars/lessons-learned-from-practical-implementations-of-ai-in-the-humanitarian-sector/">https://nethope.org/webinars/lessons-learned-from-practical-implementations-of-ai-in-the-humanitarian-sector/</a></p>
</li>
<li><p><a target="_blank" href="https://www.technologyreview.com/2020/06/25/1004466/ai-could-help-human-rights-activists-prove-war-crimes/">https://www.technologyreview.com/2020/06/25/1004466/ai-could-help-human-rights-activists-prove-war-crimes/</a></p>
</li>
<li><p><a target="_blank" href="https://www.stabilityjournal.org/articles/10.5334/sta.cr/">https://www.stabilityjournal.org/articles/10.5334/sta.cr/</a></p>
</li>
<li><p><a target="_blank" href="https://borgenproject.org/artificial-intelligence-is-helping-developing-countries/">https://borgenproject.org/artificial-intelligence-is-helping-developing-countries/</a></p>
</li>
<li><p><a target="_blank" href="https://hungermap.wfp.org/?_ga=2.154070320.946694085.1667783392-986344008.1667783392">https://hungermap.wfp.org/?_ga=2.154070320.946694085.1667783392-986344008.1667783392</a></p>
</li>
<li><p><a target="_blank" href="https://nethope.org/articles/ai-in-the-humanitarian-sector/">https://nethope.org/articles/ai-in-the-humanitarian-sector/</a></p>
</li>
<li><p><a target="_blank" href="https://pro.drc.ngo/news#:~:text=The%20forecast%2C%20which%20covers%2026,total%20increase%20of%206.8%20million">https://pro.drc.ngo/news#:~:text=The%20forecast%2C%20which%20covers%2026,total%20increase%20of%206.8%20million</a></p>
</li>
<li><p><a target="_blank" href="https://www.ft.com/content/8399873e-0dda-4c87-ba59-0e2678166fba">https://www.ft.com/content/8399873e-0dda-4c87-ba59-0e2678166fba</a></p>
</li>
<li><p><a target="_blank" href="https://app.box.com/s/fzll5uow2wi4r0frlels9txs4rqbdwnq">https://app.box.com/s/fzll5uow2wi4r0frlels9txs4rqbdwnq</a></p>
</li>
<li><p><a target="_blank" href="https://international-review.icrc.org/sites/default/files/reviews-pdf/2021-03/biases-machine-learning-big-data-analytics-ihl-implications-913.pdf">https://international-review.icrc.org/sites/default/files/reviews-pdf/2021-03/biases-machine-learning-big-data-analytics-ihl-implications-913.pdf</a></p>
</li>
<li><p>Application of Machine Learning Techniques in Humanitarian Aid Forecasts (thesis, ERASMUS UNIVERSITY ROTTERDAM)</p>
</li>
<li><p><a target="_blank" href="https://aibusiness.com/ml/ai-for-good-the-role-of-machine-learning-in-responding-to-humanitarian-crises">https://aibusiness.com/ml/ai-for-good-the-role-of-machine-learning-in-responding-to-humanitarian-crises</a></p>
</li>
<li><p><a target="_blank" href="https://academic.oup.com/jicj/article-abstract/19/1/35/6181758">https://academic.oup.com/jicj/article-abstract/19/1/35/6181758</a></p>
</li>
</ul>
]]></content:encoded></item></channel></rss>