
Thoughts to Share and Collect




On 2 February 2024, the House of Lords Communications and Digital Committee published its report on large language models (LLMs) and generative AI (GAI). The main take-away messages were summarised by the Committee as follows:


  1. The Committee considered the risks around LLMs and concluded that apocalyptic concerns about threats to human existence are exaggerated and must not distract policy makers from responding to more immediate issues.

  2. The report commented on the near-term security risks of LLMs, including cyber attacks, child sexual exploitation material, terrorist content and disinformation. The Committee judged catastrophic risks to be less likely but noted that they cannot be ruled out, given the possibility of a rapid and uncontrollable proliferation of dangerous capabilities and the lack of early warning indicators. The report called for mandatory safety tests for high-risk models and more focus on safety by design.

  3. The Committee called on the Government to support copyright holders and rebuked tech firms for using data without permission or compensation. The report called for a suite of measures, including a way for rightsholders to check training data for copyright breaches, investment in new datasets to encourage tech firms to pay for licensed content, and a requirement for tech firms to declare what their web crawlers are being used for.


The Committee set out ten core recommendations to boost opportunities, address risks, support effective regulatory oversight, introduce new standards, and resolve copyright disputes.


For the complete report, please follow this link: https://publications.parliament.uk/pa/ld5804/ldselect/ldcomm/54/5402.htm


Further reading: The EU’s AI Act at a Crossroads for Rights, Institute for Ethics in AI, University of Oxford.


In the next blog, I will share thoughts on Ethics and governance of artificial intelligence for health: Guidance on large multi-modal models, published by the World Health Organisation on 18 January 2024.






Human health phenomena are complex and vary naturally over time. Sensors are widely used to monitor human health and to support predictive health monitoring, and the rapidly expanding quantities of sensor data they produce can reside within electronic health records (EHRs) collected throughout people's lives. The dynamics of healthcare sensor data are complex and difficult to model. Scalable and explainable models must be developed for these high-dimensional, noisy, artefactual time series – typically collected in different user environments – to help humans understand the biological and physical rationale behind diseases and their progression.


Mathematical models empowered by large language models and agent-based simulations can, in theory, help. Agents in a simulation system act as experts with domain knowledge and skills who can take action; this is a well-established method often used in engineering and project management. Agents and their actions are typically managed by an organisational network that can be represented as a graph (e.g., a directed acyclic graph, DAG), a decision tree, or logic that can then be implemented as a circuit. Nature is complex and our data are limited, so mathematical and statistical modelling approaches (e.g., linear regression and Gaussian mixture models) are often outperformed by deep learning approaches. However, when we find that the performance of an explainable mathematical model (white box) and a non-explainable model (black box) is similar, we can infer that we have understood the principal factors (or confounders) in the feature space. This in turn allows us to explore new knowledge in the feature space and to confirm novelty-detection decision boundaries that highlight extreme cases, e.g., a rare disease. Two minimal sketches of these ideas follow below.
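First, the organisational network. Here is a minimal Python sketch of agents managed by a DAG; the three agents, their placeholder logic, and the graph are my own illustrative assumptions, not any particular framework:

  # A minimal sketch of agents organised by a DAG, as described above.
  # The agents and the graph are illustrative assumptions.
  from graphlib import TopologicalSorter

  def sensor_agent(state):
      # Domain expert: cleans raw sensor readings (placeholder logic).
      state["clean"] = [x for x in state["raw"] if x is not None]
      return state

  def modelling_agent(state):
      # Domain expert: fits a simple summary statistic as a stand-in model.
      state["mean"] = sum(state["clean"]) / len(state["clean"])
      return state

  def reporting_agent(state):
      # Domain expert: turns model output into a human-readable report.
      state["report"] = f"mean signal = {state['mean']:.2f}"
      return state

  agents = {"sensor": sensor_agent,
            "model": modelling_agent,
            "report": reporting_agent}

  # Organisational network: each agent lists the agents it depends on.
  dag = {"sensor": set(), "model": {"sensor"}, "report": {"model"}}

  state = {"raw": [1.0, None, 2.0, 3.0]}
  for name in TopologicalSorter(dag).static_order():
      state = agents[name](state)  # run agents in dependency order

  print(state["report"])

The topological order guarantees that each expert acts only after the agents it depends on have finished, which is the point of representing the organisation as a DAG.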
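Second, the white-box versus black-box comparison. This sketch uses synthetic "sensor" data (an assumption for illustration only) and compares linear regression against a small neural network:

  # White-box vs black-box comparison on synthetic sensor-like data.
  import numpy as np
  from sklearn.linear_model import LinearRegression
  from sklearn.neural_network import MLPRegressor
  from sklearn.model_selection import train_test_split

  rng = np.random.default_rng(0)
  X = rng.normal(size=(500, 5))          # five noisy sensor features
  y = 2.0 * X[:, 0] - X[:, 1] + rng.normal(scale=0.1, size=500)

  X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

  white = LinearRegression().fit(X_tr, y_tr)                   # explainable
  black = MLPRegressor(max_iter=2000, random_state=0).fit(X_tr, y_tr)

  print("white-box R2:", white.score(X_te, y_te))
  print("black-box R2:", black.score(X_te, y_te))
  # Similar scores suggest the principal factors in the feature space
  # are already captured by the explainable model.

When the two scores are close, the extra capacity of the black box is buying little, which is the signal that the explainable model has captured the principal factors.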


I have created an agent-based tool called MetaMathModelling: https://chat.openai.com/g/g-DOCgzESMk-metamathmodelling. The tool uses the intuition of GPT-4 (acting as a consortium of experts, from data scientists to policy makers) and makes decisions with a human in the loop. Meta-learning is then used to optimise model performance after the decisions from the LLM and the human are taken. A conceptual sketch of this loop follows below. Please try the tool and let me know your experience, and please save your results before closing each session, as your data will not be stored by OpenAI or by MetaMathModelling.
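At a high level, the workflow is: the LLM proposes, the human accepts or rejects, and meta-learning tunes the accepted choice. The Python sketch below is purely conceptual; every function name in it is a hypothetical placeholder of mine, not MetaMathModelling's actual implementation:

  # Conceptual human-in-the-loop workflow; all names are hypothetical
  # placeholders, not MetaMathModelling's actual implementation.
  CANDIDATES = ["linear regression", "Gaussian mixture", "small MLP"]

  def llm_propose(rejected):
      # Stand-in for the LLM consortium: propose the next candidate model.
      remaining = [c for c in CANDIDATES if c not in rejected]
      return remaining[0] if remaining else None

  def human_approves(proposal):
      # Human in the loop: accept or reject the proposed model family.
      return input(f"Use '{proposal}'? [y/n] ").strip().lower() == "y"

  def meta_optimise(model_family):
      # Stand-in for meta-learning over past runs: return tuned settings.
      return {"family": model_family, "hyperparams": "tuned-by-meta-learning"}

  rejected = []
  while True:
      proposal = llm_propose(rejected)
      if proposal is None:
          print("no candidate accepted")
          break
      if human_approves(proposal):
          print("final configuration:", meta_optimise(proposal))
          break
      rejected.append(proposal)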


Tell me your experience using MetaMathModelling

  • It took me 60 mins to build a model

  • It took me 30 mins to build a model

  • It doesn't work

  • I have more feedback to share and will send a message


