Indonesia's National Artificial Intelligence strategy: One year on from a health perspective

By Jum’atil Fajar*

(Source: Unsplash | Irwan Iwe)

In August 2020, the Indonesian Agency for Technology Research and Development launched the National Strategy for Artificial Intelligence 2020-2045. The strategy sets out five priority areas, one of which is health services.

The roadmap for the artificial intelligence (AI) programme for the health sector includes three programmes: the preparation of health data, the assessment of the 4Ps of Health (Predictive, Preventive, Personalised and Participatory) with the support of AI, and the application of the 4P paradigm to health workers and health facilities.

One data preparation programme, which must be completed in 2020/2021, includes the implementation of an electronic medical record system in government health facilities, regulation of electronic medical records, interoperability of health data, and the use of data for research. The institutions responsible for implementing the programme include the Ministry of Health, the Ministry of Communication and Information, the Technology Research and Development Agency, hospital associations, healthtech associations, industry associations and other related stakeholders.

Currently, the government-owned health facilities that have implemented an electronic medical record system are Type A hospitals (registered and licensed by the Ministry of Health) and Type B hospitals (registered and licensed by provincial governments), which are generally located in big cities. An example of a Type A hospital that has implemented electronic medical records is the National Central General Hospital dr. Sardjito in Yogyakarta. According to medical professionals at the hospital, medical records are fully electronic, which means that all patient examination results can be retrieved from the hospital management information system (HMIS).

An example of a Type B hospital that has implemented an electronic medical record system is the Damanhuri Barabai Regional General Hospital. According to their IT team, their electronic medical record system for outpatients and inpatients greatly helps the hospital accreditation process.

Despite this progress, however, the application of electronic medical records is not evenly distributed across all regions. In Central Kalimantan province, not one Type C hospital (registered and licensed by district/municipality governments) has fully implemented the system. A more limited implementation exists at the Type C District General Hospital dr. H. Soemarno Sosroatmodjo in Kuala Kapuas, where a new electronic medical record system is used to enter COVID-19 patient data into the HMIS, in particular the results of x-ray images and the analyses carried out by radiology specialists.

The existence of electronic medical records is a prerequisite for the interoperability of health data. By providing a centralised platform, the government can ensure that all health information is accessible from various locations. The information can not only be used by doctors to access patients' clinical histories, but also for research. Furthermore, the platform allows patients to access their own data, helping them better understand their own clinical records.
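
To make the idea of interoperability concrete, here is a minimal sketch of how a clinician's application might retrieve a patient's record from such a centralised platform over a web API. Everything in it is an assumption for illustration: the endpoint, resource names and token are hypothetical, and the national strategy does not prescribe any particular exchange standard; an HL7 FHIR-style layout is used only as a familiar example of how shared records could be queried.

```python
# Hypothetical sketch of reading a shared patient record from a centralised
# health-data platform. The base URL, token and resource layout are
# illustrative only and do not correspond to any real Indonesian system.
import requests

BASE_URL = "https://health-platform.example.go.id/fhir"  # hypothetical endpoint
TOKEN = "..."  # placeholder credential issued to an authorised facility


def fetch_patient_record(patient_id: str) -> dict:
    """Retrieve a patient's record in a FHIR-like JSON format."""
    response = requests.get(
        f"{BASE_URL}/Patient/{patient_id}",
        headers={"Authorization": f"Bearer {TOKEN}"},
        timeout=10,
    )
    response.raise_for_status()
    return response.json()


def fetch_observations(patient_id: str) -> list:
    """Retrieve examination results (observations) linked to the patient."""
    response = requests.get(
        f"{BASE_URL}/Observation",
        params={"patient": patient_id},
        headers={"Authorization": f"Bearer {TOKEN}"},
        timeout=10,
    )
    response.raise_for_status()
    return response.json().get("entry", [])


if __name__ == "__main__":
    record = fetch_patient_record("example-patient-id")
    results = fetch_observations("example-patient-id")
    print(record.get("name"), len(results), "observations on file")
```

In practice, access control and patient consent would sit in front of calls like these, which is precisely where the planned regulation of electronic medical records matters.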

Regarding the assessment of the 4Ps of health, a programme using AI was started in 2020 to map the genomes of healthy and sick Indonesians across the complete life cycle. Since 2009, research on the Indonesian human genome has been carried out by the Eijkman Institute, which has mapped the genetic diversity of various regions and ethnicities in the archipelago. The results of this human genetic research have been uploaded to a data centre that can be accessed by researchers.

Other initiatives under the assessment of the 4Ps of health include the mapping of medical algorithms for symptoms and signs to support diagnosis by artificial intelligence (predictive, preventive, participatory). However, to date, the mapping has not yet been completed. (See the author's previous article, Overview of AI in healthcare in Indonesia.)

Regarding the application of the 4P paradigm to health workers and health facilities, basic training on the 4P paradigm for medical personnel has been conducted, covering big data, AI, the Internet of Things (IoT) and genetics. In the past year, there have been many webinars discussing these topics, organised by the Indonesian Hospital Association (PERSI) in collaboration with companies that use AI and IoT. However, only a few of the planned training activities have been carried out so far.

Dr Gregorius Bimantoro, the only doctor on the national strategy drafting team, explained that, as a follow-up to the National Strategy, an Artificial Intelligence Innovation Center (PIKA) was formed, which will be transformed into the Industrial Collaboration and Artificial Intelligence Innovation (Korika). Korika is expected to have four pillars of work: (1) digital talent; (2) research and innovation; (3) physical and digital infrastructure; and (4) supporting regulations and policies. He also explained that many programmes from the national strategy are still being implemented. For example, the Ministry of Health has established a Digital Transformation Office with a roadmap to achieve health data interoperability.

Dr Bimantoro added that data mapping – of human genomes, medical algorithms, and physical sensor interpretation data – is expected to be carried out by universities with a Centre of Excellence in Biomedical Research.

It has been a year since the launch of the National Strategy for Artificial Intelligence. While there is still much to be done, it is important to highlight that, despite a very challenging year, implementation is underway. Hopefully the efforts of the various parties working to realise this strategy will continue and be strengthened over time at a national level.


*Jum’atil Fajar is an AI enthusiast. He holds a Masters degree in Health Sciences. He helped develop the hospital management information system. He currently manages the Hospital Accreditation Data Management Information System.


Upcoming Leading with AI Events

Using AI for Safer Cities

This Friday - 24th September 2021 @ 17.00 CEST

Free registration via EventBrite. Follow link here.



Upcoming Webinar (in Spanish)
Reconocimiento Facial - Perspectiva legal
7th October 2021 @ 20.00 CEST

Free registration via EventBrite. Follow link here.

Organised by the Berkeley Center on Comparative Equality & Anti-Discrimination Law, Leading with AI, and WebJusticia.

Register on EventBrite here


Additional Events in September

AI Forward Forum Series

23rd September, 30th September, 21st October, 18th November (AI Forward Forum)
Register here

AI Forward Forum features talks given by prominent speakers and moderated discussions on human and machine intelligence.

What is necessary to build socially aware machines that can interact with humans naturally? Despite recent advances in AI, this is one of the most difficult questions to answer. While this question is often posed to computer scientists, it is seldom addressed to specialists in other fields.

Are you an anthropologist, a sociologist, a psychologist, a biologist, an artist, you name it... willing to think afresh about how truly intelligent systems can be achieved?

23rd September - Professor Catherine Pelachaud "Conversing with Socially Interactive Agents"

30th September - Debate event between Dr. Stefan Buijsman and Mark Saroufim, ML engineer at Facebook AI, on human and machine superpowers

7th October - Dr. Maryam Alimardani "Brain-computer interfaces and the future of human-machine interaction" (in collaboration with LT Big Brother)

21st October - Professor Josh Bongard

18th November - Professor Michael Levin

AI & Climate Change

28th September (Hyper Island and AI Sweden) | Register here

We bring in Sarah Juhl Gregersen & Erik Wilson to inspire us on the topic, and to open up a conversation amongst the participants to share views and experiences. Together we will explore questions like "How can one be sure to keep complex technological solutions sustainable?" and "Where does sustainability come with innovation and AI?"

Sarah Juhl Gregersen is a sustainability consultant and an Associate of the Stockholm Resilience Centre. Erik Wilson is a Project Manager at AI Sweden and led its work together with RISE and the Stockholm Resilience Centre to produce the report "AI in the service of the climate", released in February 2021.

AI & Big Data in FinTech Forum

29th September (Virtual Fin Tech Fair) | Register here

With the rise of big data and the enhancement of artificial intelligence technologies, financial services are undergoing rapid transformation across sectors. In a post-pandemic world, embedded AI and big data analytics will form the backbone of financial services, especially as they are leveraged to deliver digital services.

ABFF will showcase the best AI & big data solutions in Asia, with thought leaders and technology experts providing insights on what lies ahead for financial services as the adoption of digital and AI technologies enables fast-changing consumer behaviours and demands.

Learning about Artificial Intelligence: Courses and resources

By Leading with AI Team

(Source: Andrew Neel, Unsplash)

In this back-to-school edition of the Leading with AI newsletter, we wanted to offer some ideas for those wanting to learn more about artificial intelligence (AI) and related technologies. All of the courses listed below are for a non-technical audience and can be completed in a number of hours as self-paced online courses. (Descriptions are provided by the course providers.)

AI for everyone

Total duration: 7 hours

The course is targeted to non-technical people and organisations who want to learn more about AI. Though this course is largely non-technical, engineers can also take the course to learn the business aspects of AI.

If you want your organisation to become better at using AI, this is the course to tell your non-technical colleagues to take. The content of the course includes: 

The meaning behind common AI terminology, including neural networks, machine learning, deep learning, and data science.

What AI realistically can, and cannot, do

How to spot opportunities to apply AI to problems in your own organisation

What it feels like to build machine learning and data science projects

How to work with an AI team and build an AI strategy in your company

How to navigate ethical and societal discussions surrounding AI

Digital Skills: Artificial Intelligence

Total duration: 6 hours (3 weeks, 2 hours per week)

AI is used in many businesses to improve the way employees work. On this course, trainees will learn more about the past, present and future of AI and explore its potential in the workplace. You will enhance your understanding with interesting facts, trends, and insights about using AI. You will also explore the working relationship between humans and AI and the predicted skills needed to work with AI.

Artificial Intelligence for Healthcare: Opportunities and Challenges

Total duration: 4 hours (4 weeks, 1 hour per week)

The use of AI has been a major development in healthcare. With the availability of vast amounts of health data, and the increasing possibilities of data analytics, understanding AI, and the challenges and opportunities it creates, has never been more important.

On this course you will consider why we might need AI in healthcare, exploring the possible applications and the issues they might cause such as whether AI is dehumanizing healthcare. You should leave the course more confident in your knowledge of AI and how it might improve today’s healthcare systems.

AI for Legal Professionals (I): Law and Policy

Total duration: 12 hours (4 weeks, 3 hours per week)

This course is aimed at lawyers, legal educators, regulators, and anyone interested in legal and policy issues regarding the development and application of AI. In this course, participants will explore what AI is, evaluate its rationale and objectives, and consider how it can be regulated by law. This includes discussing key topics such as compliance, privacy, governance, and the bias found in AI-powered systems.

Artificial Intelligence for Students

(*Note: a subscription to LinkedIn Learning is required for access to this course)

Total duration: 1 hour 28 minutes

AI is a growing area of interest for the future of work. However, it can feel overwhelming to begin learning about AI given the vastness of the subject. In this course, Jim Sterne, a longtime marketing analyst, shares the basics of what you need to know to get started using artificial intelligence. First, Jim reviews the basics of what AI truly is. He then explores some additional concepts related to AI, such as natural language processing, computer vision, and machine learning. Then, Jim shares applications of machine learning, AI, and how the two work together. He also reviews the relationship between humans and AI, and how you can use AI to your benefit. He closes by reviewing the future state of AI. Upon completion of this course, you will have a solid base of knowledge to begin leveraging AI to your advantage.

Additional resources for online learning

Guided projects - One of the new offerings from the online learning platform Coursera is "guided projects". These are short courses, usually less than two hours, that offer opportunities for applied learning to those who already have, or want to develop, experience with machine learning technologies. Examples in the AI space include Build & Deploy AI Messenger Chatbot using IBM Watson, Fake News Detection with Machine Learning, Predicting House Prices with Regression using TensorFlow, and Bank Loan Approval Prediction With Artificial Neural Nets.

AI for Good is a year-round digital platform where AI innovators and problem owners learn, build and connect to identify practical AI solutions to advance the UN SDGs. Check out their calendar for frequent free webinars and online meetings.


Upcoming Webinar (in Spanish)
Reconocimiento Facial - Perspectiva legal
7 October 2021 @20.00 CEST

Free registration via EventBrite. Follow link here.

Organised by the Berkeley Center on Comparative Equality & Anti-Discrimination Law, Leading with AI, and WebJusticia.

Register on EventBrite here

Challenging abuse and violence on social media through AI

By ElsaMarie D’Silva*

(Image: Unsplash | Dole777)

Several years ago, a woman journalist shared with me one of the rape threats she had received on Twitter because of her work. Not only did the person threaten her with rape, he also told her that he knew where she lived. This direct threat made her fearful and she filed a police complaint. All the police advised her to do was to move to a friend's place for a couple of weeks, which she did. She was upset at being displaced from her home, which had been her safe haven. It was not an ideal solution, but she did not want to be at risk should the man follow up on his threat.

This story is not unique. Many women journalists, politicians, activists and influencers are subjected to hate and violence on social media platforms. Many believe it is easier to type hateful messages such as rape and death threats on social media than it would be to deliver them in a physical setting.

A survey of 1,210 international media workers, conducted by the International Center for Journalists and the United Nations Educational, Scientific and Cultural Organization (UNESCO), found that 73% of the female respondents had experienced online abuse, harassment, threats and attacks. What was even more distressing is that 20% of these women reported that they had also been targeted offline with abuse and attacks.

A recent study of the 2020 US election found that women candidates, and especially women candidates from ethnic minority backgrounds, were more likely than white men to receive abusive content on mainstream social media platforms (e.g. Facebook and Twitter).

Global movements like #NotTheCost, #NameItChangeIt, #ReclaimTheInternet, #ByteBack have highlighted violence against women online, making it more visible and helping to shift the narrative about it with social media companies and government legislators. Yet, despite these campaigns, social media organisations are still not doing much to address these problems in a comprehensive and effective way. 

Recently, my organisation, Red Dot Foundation, which works on preventing violence against women and girls through our crowdmapping platform Safecity, was invited by a social media company to attend a training on priority channels for reporting online harassment and violence, as both a preventive and a reactive measure. Whilst non-profits can be roped in to help these companies flag malicious and inappropriate content, I believe Artificial Intelligence (AI) can aid these efforts more efficiently, without putting the burden on organisations that are often already overstretched and underfunded.

There are several examples of AI-powered social monitoring and listening tools that gauge people's consumption preferences, or even serve as early warning systems for large-scale violence. On Twitter, for example, if you retweet a news article, you are prompted to read it first. During COVID-19, if you post about the pandemic on Facebook, Instagram or Twitter, a pop-up appears prompting you to consult local health information.

So, if social media companies are already designing tools to help people reflect on certain aspects of what they are posting, why can't they do the same for online gender-based violence? The UN Secretary-General Antonio Guterres called gender-based violence a shadow pandemic alongside COVID-19. The WHO confirms that this pervasive violence affects one in three women around the world. The virtual space is simply another channel through which this violence is directed towards women and girls.

Furthermore, why should social media companies wait for an abusive post to appear online, create panic and cross boundaries before it is attended to? Why not invest in preventive tools, where words involving rape, sexual assault and the like trigger a prompt warning the user not to post malicious, harmful and often criminal content? These could be a digital version of the "Bell Bajao", or Ring the Bell, campaign by Breakthrough, which encourages interrupting domestic violence with the simple action of ringing the doorbell. Nudges like these give a person time to pause and reflect. When a neighbour rings the doorbell in the midst of a domestic violence incident, for example, the abuser knows that there are witnesses who could report him. Online nudges might similarly distract a person with information, or educate them on legislation that might label the act a crime, prompting them to refrain from posting.
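
As a rough illustration of what such a preventive nudge could look like under the hood, the sketch below checks a draft post against a small list of harmful phrases before publication and, if any match, asks the user to pause and reconsider. This is not how any platform actually implements it: real systems would rely on trained, multilingual classifiers, and the phrase list, messages and function names here are purely hypothetical.

```python
# Hypothetical sketch of a pre-posting "nudge" that interrupts abusive content.
# Real platforms would use trained multilingual classifiers rather than a
# simple phrase list; this only illustrates the preventive idea.
import re

HARMFUL_PATTERNS = [
    r"\brape\b",
    r"\bsexual assault\b",
    r"\bkill you\b",
]

NUDGE_MESSAGE = (
    "Your draft may contain threatening or abusive language. "
    "Such content can be harmful and, in many jurisdictions, criminal. "
    "Do you still want to post it?"
)


def needs_nudge(draft: str) -> bool:
    """Return True if the draft matches any harmful pattern."""
    return any(re.search(p, draft, flags=re.IGNORECASE) for p in HARMFUL_PATTERNS)


def submit_post(draft: str, confirm) -> bool:
    """Publish the draft unless the user backs out after seeing the nudge."""
    if needs_nudge(draft) and not confirm(NUDGE_MESSAGE):
        return False  # the user chose not to post
    # ... hand the draft to the normal publishing pipeline here ...
    return True


if __name__ == "__main__":
    # Example: the confirmation callback simply asks on the command line.
    posted = submit_post(
        "I will find you and rape you",
        confirm=lambda msg: input(msg + " [y/N] ").strip().lower() == "y",
    )
    print("Posted" if posted else "Post withheld")
```

The point of the design is the pause itself: the post is not blocked outright, but the user is made aware that someone, or something, is watching before the harm is done.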

Some years ago, activist Soraya Chemaly and a few others set up a system to tackle Facebook pages, some with names like "Raping your Girlfriend", after the company failed to respond to complaints to take them down. They encouraged people to take screenshots of these pages, tweet at the companies whose ads appeared on them, and publicly shame them for hosting their ads on offensive pages. The companies would then withdraw their ads, until Facebook was eventually pressured into removing the pages. This kind of manual action by activists and organisations could easily be done with AI tools.

As we think of building back a better world post COVID-19, we need to use the resources available to end gender-based violence. We have the technology and tools to design interventions that are preventive rather than reactive; it is time to start using them.


ElsaMarie D’Silva is the Founder of Red Dot Foundation (India) and President of Red Dot Foundation Global (USA). Its platform, Safecity, crowdsources personal experiences of sexual violence and abuse in public spaces. ElsaMarie is a 2020 Gratitude Network Fellow, 2019 IWF Fellow and Reagan Fascell Fellow, a 2018 Yale World Fellow and an alumna of the Stanford Draper Hills Summer School, the US State Department’s Fortune Mentoring Program, Oxford Chevening Gurukul and the Duke of Edinburgh’s Commonwealth Leadership Program. She is also a fellow with Rotary Peace, Aspen New Voices, Vital Voices and a BMW Foundation Responsible Leader. She co-founded the Gender Alliance, a cross-network initiative bringing together feminists from the BMW Foundation Herbert Quandt’s Responsible Leaders Network, the Global Diplomacy Lab, the Bosch Alumni Network and the Global Leadership Academy Community (by GIZ). She is listed as one of BBC Hindi’s 100 Women and has won several awards, including the Government of India Niti Aayog’s #WomenTransformingIndia award and The Digital Woman Award in Social Impact by SheThePeople. In 2017, she was awarded the Global Leadership Award by Vital Voices in the presence of Secretary Hillary Clinton. She is also the recipient of the Gold Stevie Award for Female Executive of the Year - Government or Non Profit - 10 or Less Employees in 2016.


Announcement: Summer Schedule

Dear Readers,

For the months of July and August, the Leading with AI newsletter will move to a bi-weekly schedule. Our next issue will follow as usual on Thursday, 9th September. Wishing everyone a safe and healthy summer.

Warm wishes,
The Leading with AI Team

What the Leading with AI Team is reading and listening to this summer

By Leading with AI Team

(Image: Unsplash | Perfecto Capucine)

In this week’s edition, Leading with AI team members recommend their summer reads and listens. They include books, reports, articles and podcasts we have found informative and thought provoking. Wherever and however you are spending your summer, we hope those of you who are learning more about the fast-developing and fascinating world of artificial intelligence (AI) find these interesting and inspiring.

Books

Kate Crawford - Atlas of AI: Power, Politics, and the Planetary Costs of Artificial Intelligence

What happens when artificial intelligence saturates political life and depletes the planet? How is AI shaping our understanding of ourselves and our societies? Drawing on more than a decade of research, award‑winning scholar Kate Crawford reveals how AI is a technology of extraction: from the minerals drawn from the earth, to the labour pulled from low-wage information workers, to the data taken from every action and expression. This book reveals how this planetary network is fueling a shift toward undemocratic governance and increased inequity. Rather than taking a narrow focus on coding and algorithms, Crawford offers us a material and political perspective on what it takes to make AI and how it centralises power. This is an urgent account of what is at stake as technology companies use artificial intelligence to reshape the world. (Yale University Press)

Erik J. Larson - The Myth of Artificial Intelligence

Futurists insist that AI will soon eclipse the capacities of the most gifted human mind. What hope do we have against superintelligent machines? But we aren’t really on the path to developing intelligent machines. In fact, we don’t even know where that path might be.

A tech entrepreneur and pioneering research scientist working at the forefront of natural language processing, Erik Larson takes us on a tour of the landscape of AI to show how far we are from superintelligence, and what it would take to get there. Ever since Alan Turing, AI enthusiasts have equated artificial intelligence with human intelligence. This is a profound mistake. AI works on inductive reasoning, crunching data sets to predict outcomes. But humans don’t correlate data sets: we make conjectures informed by context and experience. Human intelligence is a web of best guesses, given what we know about the world. We haven’t a clue how to program this kind of intuitive reasoning, known as abduction. Yet it is the heart of common sense. That’s why Alexa can’t understand what you are asking, and why AI can only take us so far. (Harvard University Press)

Max Tegmark - Life 3.0

AI is the future - but what will that future look like? Will superhuman intelligence be our slave, or become our god?

Taking us to the heart of the latest thinking about AI, Max Tegmark, the MIT professor whose work has helped mainstream research on how to keep AI beneficial, separates myths from reality, utopias from dystopias, to explore the next phase of our existence.

How can we grow our prosperity through automation, without leaving people lacking income or purpose? How can we ensure that future AI systems do what we want without crashing, malfunctioning or getting hacked? Should we fear an arms race in lethal autonomous weapons? Will AI help life flourish as never before, or will machines eventually outsmart us at all tasks, and even, perhaps, replace us altogether? (Vantage | Penguin)

Report

Steering AI and Advanced ICTs for Knowledge Societies was published by UNESCO earlier this year, and highlights the opportunities and challenges for the use of AI applications to help achieve the Sustainable Development Goals (SDGs). It reports on findings from a survey of 32 member countries in Africa, identifying needs and priority areas for more work and research on the continent. 

Articles

Jason Fagone, San Francisco Chronicle - The Jessica Simulation: Love and loss in the age of AI

The death of the woman he loved was too much to bear. Could a mysterious website allow him to speak with her once more?

Alex Beard, The Guardian - Can Computers Ever Replace the Classroom?

With 850 million children worldwide shut out of schools, tech evangelists claim now is the time for AI education. But as the technology’s power grows, so too do the dangers that come with it.

Podcasts

Kyle Polich - Data Skeptic

Data Skeptic is your source for a perspective of scientific skepticism on topics in statistics, machine learning, big data, artificial intelligence, and data science. The weekly podcast and blog bring stories and tutorials to help understand our data-driven world.

Center for Democracy and Technology - Tech Talks

CDT’s Tech Talk is a podcast where they discuss tech and Internet policy, while also explaining what these policies mean to our daily lives. 

Center for Humane Technology - Your Undivided Attention

In this podcast from the Center for Humane Technology, co-hosts Tristan Harris and Aza Raskin expose how social media’s race for attention manipulates our choices, breaks down truth, and destabilises our real-world communities. Tristan and Aza also explore solutions: what it means to become sophisticated about human nature by interviewing anthropologists, researchers, cultural and faith-based leaders, activists, and experts on everything from conspiracy theories to existential global threats.

Daniel Faggella - AI in Business

In this weekly podcast, Daniel Faggella, founder of the AI research firm Emerj, interviews AI executives from start-ups and Fortune 500 companies to explore the diverse uses of AI technologies. Guests have included leaders from diverse industries, including healthcare, life sciences, finance, and gaming. Related issues such as global trends, policy initiatives and data strategies are also discussed.

Chris Benson and Daniel Whitenack - Practical AI

In this weekly podcast, released every Monday, co-hosts Chris Benson and Daniel Whitenack focus on the practical application of AI and related technologies, including machine learning and neural networks. With a focus on real-world applications and scenarios, the content is accessible to non-technical audiences.

Re-work - Women in AI

Women in AI is a bi-weekly podcast featuring discussions with female leaders in AI, machine learning, and deep learning. Guests include start-up founders, academics, and engineers at leading global companies including IBM and Google.

Notable Podcast Episodes

For those interested in specific applications of AI, here are some notable episodes:



Announcement: Summer Schedule

Dear Readers,

For the months of July and August, the Leading with AI newsletter will move to a bi-weekly schedule. Our next issue will follow as usual on Thursday, 26th August. Wishing everyone a safe and healthy summer.

Warm wishes,
The Leading with AI Team



Artificial Intelligence and Radiology in Indonesia

By Jum’atil Fajar*

Image: National Cancer Institute | Unsplash

Despite the common belief that Artificial Intelligence (AI) is a recent phenomenon, historical records show that the use of AI in radiology started in the United States in the 1960s. In 1963, the study Computer Diagnosis of Primary Bone Tumors reported on progress in developing a computer program to evaluate bone cancer as shown on x-rays. To our knowledge, this was the first study of its kind linking AI and radiology, setting the basis for the development of the field.

More recently, efforts in the field have included the digitisation of 22,864 images from 1,664 radiology cases of bone tumors that were collected by Professor Henry H. Jones from Stanford Medical Center between 1955 and 2005. During the process, researchers annotated key images from 811 cases using the Annotation and Image Markup (AIM) standard. These data are now used for machine learning.

Progress in the field raises the question: how is AI in radiology being used in Indonesia, and how has its use developed in the country? To answer these questions, I conducted a series of interviews to better understand how the sector has evolved over time.

The interviews revealed different approaches in public and private hospitals. In private hospitals, the use of AI in radiology is more widespread and better supported. Public hospitals, on the other hand, are lagging behind, mainly due to funding constraints and the topic not being a priority at the moment.

Dr Pandu, a radiology specialist at Omni International Hospital in Jakarta, mentioned that hospital management has provided X-ray equipment with AI technology. This tool has also been used for COVID-19, but has faced some challenges along the way.

A radiology specialist from Doris Sylvanus Hospital, a public hospital in Central Kalimantan province, explained that he has not been able to implement AI due to funding constraints and sub-standard facilities in the radiology room. He added that implementing AI in the near future is still not seen as a priority. Similar observations were conveyed by Dr Denny Muda Permana, a radiology specialist at Murjani Hospital in Sampit, Central Kalimantan. He mentioned that despite offers to implement the technology, they had been unable to do so because it is still not a priority in his department.

The issue also goes beyond funding and priorities. Dr Ceva Wicaksono Pitoyo, a specialist in internal medicine and consultant in pulmonary diseases at the Cipto Mangunkusumo National General Hospital in Jakarta, argues that if AI relies only on images, it will never be able to go beyond a radiology professor. Specialists and consultants still want to meet patients, take their history (anamnesis) and review their clinical data. Dr Pitoyo also warns against relying on AI alone for diagnoses. He reminds us of the important work of doctors and of the additional information they need to make a diagnosis, such as medical history, physical examinations, an understanding of pathogenesis and pathophysiology, laboratory data, and anatomical pathology data. Dr Pitoyo's comments highlight the unique human skills and capacities that should always be part of the diagnostic process.

As stated in this study:

while the ultimate goal of machine learning algorithms and artificial intelligence may be to automatically learn from the data with limited or no human interaction, it needs to be recognized that achieving accurate results for complex image interpretation tasks such as medical images may require higher levels of cognitive processing.

In this context, "human-in-the-loop" integration, or so-called "interactive machine learning", becomes important and shows promise for complex interpretation tasks in radiology. Approaches like this could help ensure that AI in radiology assists the diagnostic process without diminishing the importance of the human presence.
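
A minimal sketch of what "human-in-the-loop" triage might look like in practice is shown below: a model scores an imaging study, only findings above a confidence threshold are accepted automatically, and everything else is routed to a radiologist whose corrections are kept for later retraining. The model call, threshold and record structures are assumptions for illustration, not a description of any system mentioned in this article.

```python
# Hypothetical human-in-the-loop routing for an imaging model. The threshold,
# record structure and radiologist callback are illustrative assumptions only.
from dataclasses import dataclass, field
from typing import Callable, List


@dataclass
class Finding:
    study_id: str
    label: str            # e.g. "suspected lesion", as proposed by the model
    confidence: float     # model's probability for the label
    reviewed_by_human: bool = False
    final_label: str = ""


@dataclass
class TriageQueue:
    threshold: float = 0.9
    accepted: List[Finding] = field(default_factory=list)
    for_review: List[Finding] = field(default_factory=list)
    feedback: List[Finding] = field(default_factory=list)  # corrections kept for retraining

    def route(self, finding: Finding) -> None:
        """High-confidence findings pass through; the rest go to a radiologist."""
        if finding.confidence >= self.threshold:
            finding.final_label = finding.label
            self.accepted.append(finding)
        else:
            self.for_review.append(finding)

    def review(self, radiologist: Callable[[Finding], str]) -> None:
        """The radiologist decides every queued case; disagreements become training feedback."""
        for finding in self.for_review:
            finding.final_label = radiologist(finding)
            finding.reviewed_by_human = True
            if finding.final_label != finding.label:
                self.feedback.append(finding)
        self.for_review.clear()


if __name__ == "__main__":
    queue = TriageQueue(threshold=0.9)
    queue.route(Finding("CXR-001", "normal", 0.97))
    queue.route(Finding("CXR-002", "suspected lesion", 0.62))
    queue.review(radiologist=lambda f: "normal")  # placeholder human decision
    print(len(queue.accepted), "auto-accepted,", len(queue.feedback), "corrections stored")
```

The design choice here reflects Dr Pitoyo's point: the human decision remains final, and the machine's role is to prioritise work and learn from the clinician's corrections.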


*Jum’atil Fajar is an AI enthusiast. He holds a Masters degree in Health Sciences. He helped develop the hospital management information system. He currently manages the Hospital Accreditation Data Management Information System.


Next Event: AI Leadership Webinar Series
Friday, 30th July @ 16.00 CEST

Register here for this free event.




Announcement: New Summer Schedule

Dear Readers,

For the months of July and August, the Leading with AI newsletter will move to a bi-weekly schedule. Our next issue will follow as usual on Thursday, 12th August. Wishing everyone a safe and healthy summer.

Warm wishes,
The Leading with AI Team

