Chatbot, am not
Two little mounds of flesh,
So soft, so round, so fair,
Upon a chest of snow,
Two little buds of air.

Two little peaks of joy,
So sensitive, so sweet,
That when touched,
They stand erect, entice.

Two little points of love,
So full of life, so warm,
That when they’re kissed
They give thrills of joy.

Two little cups of milk,
So sweet, so pure, so white,
That filled they give
A nourishment so bright.

Two little symbols of
The love that God has given,
To womankind to show
Motherhood’s joys

— Bard, waxing poetic on women’s bosoms

ChatGPT, while still in continuous development since its release in November 2022, has taken the world by storm because its use of artificial intelligence, or AI, has allowed it to hold open-ended conversations with humans in written form. But rave about ChatGPT we’d rather not, because OpenAI’s creation is just a more advanced and sophisticated variant of what personal assistants like Alexa, Siri, or Hey Google have been doing for many years now through smart devices, albeit only verbally or graphically. ChatGPT, I’d dare say, is more evolutionary than revolutionary.

Suffice it to say, though, that we can see, sooner rather than later, chatbots and personal assistants crossing over into each other’s digital realms to cover all platforms, whether written, verbal, photos, videos, or robotics. Their very human creators (greed is exclusive to man, right?) would all do that because convergence holds the key to making money from disruptive or game-changing concepts like AI.

Mind you, AI can be made to do a lot more things other than Google’s Bard being told to generate (did I say write?) that paean of a poem above on women’s nurturing nature. AI, as you read this, is already powering self-driving cars, smart cities, and customer service, and revolutionizing education through customized tutorials. But AI, as a branch of computer science, deserves a series of columns. So enough of this digression and let’s go back to the specifics of those AI-powered chatbots, with me wearing my Daily Tribune technology editor’s hat.

Am no expert here, but who can claim to be one when even Google’s top executives admitted in an interview with 60 Minutes that they could not put a finger on how Bard, their answer to ChatGPT, works? You know, just like opening Pandora’s or Forrest Gump’s box of chocolates: Either you get all of the evils of the world pouring out because of AI, or life’s gooey, mouth-watering goodness, or both. Who knows what the future holds?

Going back to ChatGPT, which has gotten a head start over Bard but may eventually cave in from the power of Google’s superlative search engine and wealth, there’s a certain depth of response that people get from it. Alexa or Siri could not hold a candle to ChatGPT in making some lazy people think they can use the AI chatbot instead of doing actual research, or in passing themselves off as writers even at the risk of being exposed as plagiarists.

One daughter told me that, in college, they have unmasked ChatGPT users among their peers, having analyzed the plagiarized works vis-a-vis what the chatbot does and its formula for responding to prompts. In most cases, ChatGPT throws in a structured response broken down into three parts: introduction, body (where it does its data dumping, information overload), and conclusion, she told me. Likewise, perfection is a giveaway when somebody who could not even write a single grammatically passable sentence suddenly becomes Mark Twain-ish.

As AI chatbots can get overly formulaic, it would not take rocket science to decipher a chatbot-generated piece of writing, notwithstanding Google’s attempt at selling Bard as one infused with black magic, a pathetic attempt at marketing hype.
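For the technically curious, the three-part tell the students noticed can be turned into a crude screening rule. The Python sketch below is a toy illustration of that idea, not the method they actually used; the marker phrases and the two-checkpoint logic are assumptions for demonstration, and real AI-text detectors rely on statistical models rather than keyword matching.

# Toy heuristic for the "intro / body / conclusion" tell described above.
# Purely illustrative: the opener/closer phrases are assumed, and this
# will misfire on well-structured human prose.

OPENERS = ("in today's", "in recent years", "in the modern world")
CLOSERS = ("in conclusion", "overall", "to summarize", "in summary")

def looks_formulaic(text):
    # Split into paragraphs and check for a stock opener and closer.
    paragraphs = [p.strip().lower() for p in text.split("\n\n") if p.strip()]
    if len(paragraphs) < 3:
        return False
    return paragraphs[0].startswith(OPENERS) and paragraphs[-1].startswith(CLOSERS)

sample = "In recent years, AI has grown.\n\nIt powers many tools.\n\nIn conclusion, AI matters."
print(looks_formulaic(sample))  # True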

Microsoft to offer OpenAI’s Dall-E 3 in Bing
Microsoft on Thursday said it would integrate OpenAI's soon-to-be-released Dall-E 3 image creation tool into its Bing search engine, in its latest effort to use artificial intelligence to compete with the almighty Google.

Two versions of Dall-E were developed by OpenAI and presaged the massive explosion of interest in generative AI that came when it released ChatGPT late last year. Dall-E uses machine learning technology to generate digital images from natural language descriptions, and the third version, due to be released in October, will use ChatGPT to make it easier for users to get what they want.

Microsoft had already added the ChatGPT-like generative AI interface to Bing in February, allowing users to receive conversational responses to their queries rather than just links to websites.

Dall-E has not come without controversy, drawing lawsuits from artists who say that OpenAI illegally used their work in building its technology. In an effort to reassure potential clients, Microsoft in September said it will pay any legal damages for customers using Copilot, Bing Chat, and other AI services as long as they use built-in safeguards.

Microsoft has bet big on AI, notably with a $10-billion envelope dedicated to its partnership with OpenAI, and is now trying to monetize this technology by integrating it into its products.

The Redmond, Washington-based group also announced on Thursday that the Bing bot was now able to draw on previous conversations to propose more appropriate responses to new queries from the same user. This is a breakthrough, as generative AI software is often criticized for its lack of "memory," forcing users to repeat information each time they use it, something they wouldn't do when interacting with a human. The lack of memory was understood to be a safety feature, and Microsoft said the update would be optional.

Also on Thursday, Microsoft announced that its "Copilot" AI assistant, also backed by OpenAI technology, would be available on November 1. Integrated into the Microsoft 365 software suite and the Windows 11 operating system, Copilot uses generative AI to suggest a response to an email, summarize meetings, or create a document comparing internal company data with information gathered on the Internet.

In a similar announcement, archrival Google on Tuesday said it had integrated Gmail, YouTube and other tools into its Bard AI chatbot.
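For readers curious what generating "digital images from natural language descriptions" looks like programmatically, here is a minimal sketch against OpenAI's Python SDK. It is illustrative only: it assumes an API key is set in the environment and that the account has access to the Dall-E 3 model once it is released.

# Minimal sketch of programmatic image generation with OpenAI's Python SDK
# (openai>=1.0). Assumes OPENAI_API_KEY is set in the environment and that
# the "dall-e-3" model is available to the account.
from openai import OpenAI

client = OpenAI()
result = client.images.generate(
    model="dall-e-3",
    prompt="A watercolor painting of a lighthouse at dawn",
    size="1024x1024",
    n=1,
)
print(result.data[0].url)  # URL of the generated image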
Google’s AI chatbot goes personal tapping into Gmail
Google on Tuesday said it had integrated Gmail, YouTube and other tools into its Bard chatbot as the tech giants seek to persuade users that generative AI is useful and not dangerous or just a fad.

The search engine juggernaut has for years discreetly developed AI powers, but was caught off guard when OpenAI late last year released ChatGPT and teamed up with Microsoft to make its capabilities available to users worldwide. Google then raced out its own Bard chatbot earlier this year, making it available in more than 40 languages and overcoming data privacy concerns from regulators in Europe.

The company said its beefed-up chatbot would allow users to carry out new tasks such as summarizing a confusing string of emails into its main points or tapping into Google Maps to find the best way to a holiday destination. These so-called Bard Extensions would also be available to retrieve key points from content across Google Docs and Google Drive, including PDFs, the company said.

The new powers would also help expose incorrect answers with a new button that compares Bard output with the results of a Google search query on the same topic, flagging discrepancies. This would hopefully give comfort to those put off by the so-called "hallucinations," or bad responses, that are a constant danger when using Bard, ChatGPT or Microsoft's Bing.

Bard's new capabilities closely match offerings from Microsoft that infuse its Office 365 apps with AI powers, though those come at an extra cost to customers and are not available through the Bing chatbot.

To assuage privacy concerns, a pop-up on the Bard webpage said the new powers would only access personal data "with your permission." Any scraping of personal content from Google's workplace tools -- such as Docs, Drive or Gmail -- would not be used to target ads, train Bard, or be seen by human reviewers, it said. "You’re always in control of your privacy settings when deciding how you want to use these extensions, and you can turn them off at any time," the company said in a blog post.

The new product comes as the staying power of generative AI chatbots is yet to be confirmed, with usage of ChatGPT trending lower over the past several months, according to industry data. Moreover, the integration of the Bing chatbot into Microsoft's search engine earlier this year failed to make a dent in Google's overwhelming dominance of search. Governments and tech companies however insist that generative AI is technology's next big chapter and have ramped up spending on new products, research and infrastructure.
Oona Insurance makes flight delays into moments of delight
Hardly a day goes by without a passenger recounting how an airline spoiled his or her trip due to delays. Data reveals that 30 percent of flights are delayed, and a single disruption, caused perhaps by aircraft maintenance checks, a rundown runway, or stormy weather, leads to consequential customer frustrations.

Starting September, Oona will be turning these frustrations into moments of delight with the launch of its innovative Smart Flight Delay Insurance targeted at the Filipino traveling community. When a traveler comes across such setbacks, Oona alleviates the flight delay experience with an instant lounge voucher the moment a delay is announced, turning the moment of inconvenience into a moment of delight.

Smart Flight Delay Insurance

Oona’s Smart Flight Delay Insurance offers paperless and instantly accessible purchase options on its website (myoona.ph). In addition, Oona has enabled purchases via WhatsApp and chatbot, all with GenAI capabilities, the first in the Philippines.

“Oona is focused on becoming the best customer-driven provider of non-life insurance in Southeast Asia and we want to start by solving a very common pain point of Filipinos — the inconvenient experience concerning flight delays,” Oona Insurance founder and group chief executive officer Abhishek Bhatia said.

“Besides this, we’ve also solved another pain point by making the product totally paperless, with no hassle of filing claims,” Bhatia continued. “We see a great opportunity to serve the Philippines as demand for international travel has started to return to normal. We are excited to lead the way in disrupting and providing cutting-edge products that are truly valuable to Filipinos,” Bhatia explained.

Affordability for international travelers

What makes the product appealing, Oona Insurance Philippines president and chief executive officer Ramon Zandueta maintains, is its affordability for Filipino international travelers.
ChatGPT diagnoses ER patients ‘like a human doctor’: study
Artificial intelligence chatbot ChatGPT diagnosed patients rushed to emergency at least as well as doctors and in some cases outperformed them, Dutch researchers have found, saying AI could "revolutionize the medical field." But the report published Wednesday also stressed ER doctors needn't hang up their scrubs just yet, with the chatbot potentially able to speed up diagnosis but not replace human medical judgment and experience.

Scientists examined 30 cases treated in an emergency service in the Netherlands in 2022, feeding anonymized patient history, lab tests, and the doctors' own observations into ChatGPT and asking it to provide five possible diagnoses. They then compared the chatbot's shortlist to the five diagnoses suggested by ER doctors with access to the same information, and cross-checked both against the correct diagnosis in each case.

Doctors had the correct diagnosis in their top five in 87 percent of cases, compared to 97 percent for ChatGPT version 3.5 and 87 percent for version 4.0.

"Simply put, this indicates that ChatGPT was able to suggest medical diagnoses much like a human doctor would," said Hidde ten Berg, from the emergency medicine department at the Netherlands' Jeroen Bosch Hospital. Co-author Steef Kurstjens told AFP the survey did not indicate that computers could one day be running the ER, but that AI can play a vital role in assisting under-pressure medics.

"The key point is that the chatbot doesn't replace the physician but it can help in providing a diagnosis and it can maybe come up with ideas the doctor hasn't thought of," Kurstjens told AFP. Large language models such as ChatGPT are not designed as medical devices, he stressed, and there would also be privacy concerns about feeding confidential and sensitive medical data into a chatbot.

'Bloopers'

And as in other fields, ChatGPT showed some limitations. The chatbot's reasoning was "at times medically implausible or inconsistent, which can lead to misinformation or incorrect diagnosis, with significant implications," the report noted.

The scientists also admitted some shortcomings with the research. The sample size was small, with 30 cases examined. In addition, only relatively simple cases were looked at, with patients presenting a single primary complaint. It was not clear how well the chatbot would fare with more complex cases. "The efficacy of ChatGPT in providing multiple distinct diagnoses for patients with complex or rare diseases remains unverified."

Sometimes the chatbot did not provide the correct diagnosis in its top five possibilities, Kurstjens explained, notably in the case of an abdominal aneurysm, a potentially life-threatening complication in which the aorta swells up. The only consolation for ChatGPT: in that case the doctor got it wrong too.

The report sets out what it calls the medical "bloopers" the chatbot made, for example diagnosing anaemia (low haemoglobin levels in the blood) in a patient with a normal haemoglobin count.

"It's vital to remember that ChatGPT is not a medical device and there are concerns over privacy when using ChatGPT with medical data," concluded ten Berg. "However, there is potential here for saving time and reducing waiting times in the emergency department. The benefit of using artificial intelligence could be in supporting doctors with less experience, or it could help in spotting rare diseases," he added.

The findings -- published in the medical journal Annals of Emergency Medicine -- will be presented at the European Emergency Medicine Congress (EUSEM) 2023 in Barcelona.
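The study's headline comparison is a top-five inclusion rate: in what share of cases does the correct diagnosis appear anywhere in the five-item shortlist? Below is a small Python sketch of that metric; the shortlist data it would take are hypothetical placeholders, and only the 30-case total and the reported percentages come from the article above.

# Sketch of the "correct diagnosis in the top five" metric the study reports.
# Any shortlists passed to top5_rate would be hypothetical; only the 30-case
# total and the reported rates below come from the article.

def top5_rate(shortlists, truths):
    # Fraction of cases whose true diagnosis appears in its 5-item shortlist.
    hits = sum(truth in top5 for top5, truth in zip(shortlists, truths))
    return hits / len(truths)

# The reported rates imply these hit counts out of 30 cases:
print(round(0.87 * 30))  # 26 -> doctors and ChatGPT 4.0 (87 percent)
print(round(0.97 * 30))  # 29 -> ChatGPT 3.5 (97 percent)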
Chipmaker Arm aims for $52-B valuation in NY listing
British chipmaker Arm, owned by Japan's SoftBank, will target a valuation of up to $52 billion when it lists in New York later this month, the company said Tuesday. The company is looking to raise between $4.5 billion and $5.2 billion in its initial public offering (IPO), it announced in a filing, which would make it one of the largest tech IPOs in recent years.

Arm is a world leader in designing chips that are used in smartphones across the world and aims to be a major player in artificial intelligence. Arm's IPO comes on the heels of a surge in the share price of chipmakers like Nvidia amid a boom in interest in companies building the hardware needed for AI to flourish in the wake of the successful launch of the chatbot ChatGPT.

Rare tech IPO

Arm's IPO is being closely watched by the financial markets, with large tech IPOs something of a rarity in recent months, as rising interest rates have pushed traders toward less risky financial decisions. In 2022, the number of IPOs worldwide fell by more than 60 percent year-on-year, while the value of these deals dropped by 45 percent. Under these conditions, Arm's deal would be one of the largest IPOs in the tech sector since Alibaba's Wall Street IPO in 2014, which raised $25 billion at the time.

The valuation target announced by Arm on Tuesday is much lower than SoftBank's earlier estimate of more than $60 billion. However, it is still considerably more than the approximately $32 billion SoftBank paid for Arm back in 2016.

Majority shareholder

The document filed with the US Securities and Exchange Commission said more than 95 million shares would initially be offered on the Nasdaq exchange at a price of between $47 and $51 per share. The number of shares listed could rise to 102.5 million in case of strong demand. All of the shares being sold are existing shares owned by SoftBank, and all of the money from the IPO would go to the Japanese company. SoftBank will continue to own around 90 percent of the company after the listing.

Tech giants including Nvidia, Apple, Samsung Electronics, and Intel are interested in investing in Arm once the company is listed, according to numerous press reports. Arm will remain headquartered in the British city of Cambridge and may consider a second listing on the London Stock Exchange, where it was previously listed before its takeover by SoftBank in 2016.

Founded in 1990, the British company has some 6,000 employees in Europe, Asia, and the United States. Its sales for 2022 were stable at $2.7 billion. Its processors "provided cutting-edge computing for over 99 percent of the world's smartphones," the company said in 2022, estimating that "around 70 percent of the world's population uses products" based on its technology.

Arm's parent company SoftBank has experienced numerous difficulties in recent years. Its most high-profile failure came with the dramatic collapse of the American shared office giant WeWork. Once valued at $47 billion, WeWork saw its valuation plummet amid investor concerns over its corporate governance under its controversial chief executive Adam Neumann.
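The raise range in the filing follows directly from the share count and the price band. A quick back-of-the-envelope check in Python, using only the figures reported above; the implied share count on the last line is an inference from the stated valuation, not a number from the filing.

# Back-of-the-envelope check of the figures reported from Arm's filing.
low_price, high_price = 47, 51            # dollars per share
base_shares, max_shares = 95e6, 102.5e6   # shares offered (base and upsized)

print(base_shares * low_price / 1e9)      # 4.465  -> roughly the $4.5B low end
print(max_shares * high_price / 1e9)      # 5.2275 -> roughly the $5.2B high end
print(52e9 / high_price / 1e9)            # ~1.02B shares implied outstanding
                                          #   at the top of the band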
Meta challenges OpenAI and Google with open-source AI
Facebook owner Meta on Tuesday released a new and free-of-charge version of its artificial intelligence model, making a play against ChatGPT-maker OpenAI and Google. OpenAI and Google have developed impressive large language models that serve as the foundations of the ChatGPT and Bard chatbots, which have drawn excitement with their capabilities to mimic human creativity and expertise.

Meta, meanwhile, has avoided releasing generative AI products straight to the consumer and instead developed LLaMA (Large Language Model Meta AI), a language model specifically developed for researchers so that they could perfect it. Crucially, LLaMA is open-source, meaning that its inner workings are available to all to be tinkered with and modified, unlike the headline-grabbing AIs developed by OpenAI and Google. Those models, including OpenAI's world-leading GPT-4, are closed and proprietary, with the clients that use them denied access to their programming code or detailed answers as to how their data is handled.

"Open source drives innovation because it enables many more developers to build with new technology," Meta CEO Mark Zuckerberg said in a Facebook post. "It also improves safety and security because when software is open, more people can scrutinize it to identify and fix potential issues," he added. The stress on safety also underlines a divergence from OpenAI's models, which have caused alarm by generating false information or by going off the rails in chatbot interactions.

The new, more powerful version of Meta's model, called Llama 2, would be available to any business for download or through Microsoft's Azure cloud service in a special partnership with the Windows maker. The Microsoft tie-in comes on top of that company's major partnership with OpenAI, signaling Microsoft is attempting to diversify its AI offerings with products that put businesses in more control of their data and software.

Microsoft, which has been the most aggressive big tech player to enter the AI market, saw its share price skyrocket on Tuesday when it said it would be charging $30 per user, per month for an AI-enhanced version of Microsoft 365, its office platform. This would be a significant price hike for its business customers and could potentially lead to a vast increase in revenue for Microsoft if AI is seen as a necessary cost going forward.
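In practice, "open" means the weights themselves can be downloaded and run on one's own hardware. Here is a minimal sketch using the Hugging Face transformers library; it assumes Meta's license request for the gated meta-llama repository has been approved and that the machine has enough memory for a 7B-parameter checkpoint.

# Minimal sketch of running Llama 2 locally with Hugging Face transformers.
# Assumes `pip install transformers torch`, an approved access request for
# the gated repo, and enough GPU/CPU memory for the 7B checkpoint.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "meta-llama/Llama-2-7b-chat-hf"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

inputs = tokenizer("Explain open-source AI in one sentence.", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))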
7 ways tech and AI can help parents raise digitally responsible kids
Kids today have access to information in a way that previous generations never had growing up. The challenge for parents is to help them learn how to be responsible digital citizens. Thankfully, parents also have access to AI-powered apps and devices that can help them safeguard their kids’ online and real-life safety. Tech and AI can provide valuable, accessible and round-the-clock assistance in safeguarding children’s well-being. Here are seven ways tech and AI can help parents raise digitally responsible kids and make parenting a little easier.

Making learning about Internet safety more interesting. Besides capturing their attention, it’s also equally important to make sure that kids are interested in what parents are trying to engage them in. If you’re looking for fun, creative ways to teach your kids about Internet safety, try asking Bard, Google’s conversational generative AI chatbot. Simple questions like “What’s a fun way to teach Internet safety to kids?” or prompts like “Interactive websites that teach Internet safety” can provide an entire list of ideas and activities that you can try out with your kids, from book and story recommendations to interactive online games and conversation starters.

Monitoring and limiting kids’ online interactions. Tech can help protect kids from potential risks through content filtering based on their age. Tools by Google, for instance, make it possible for parents to monitor their children’s screen time, set limits, and shut off their devices at bedtime. If your kid is under 13, you can download Google’s Family Link to track and control online activity, including text messaging and social media, using your own phone. TikTok’s Digital Wellbeing features also allow you to remotely manage your kid’s TikTok from your phone. Circle Home Plus is a device and subscription service that pairs with your existing router and lets you pause access to the Internet, create time limits and add content filters to all devices on your home network (including WiFi devices), plus manage phones and tablets outside the home.

Teaching kids empathy through real-life situations. PLDT Home and Google’s Be Internet Awesome video series features Sam and his AI friend Robo-Berto in fun and enlightening adventures that illustrate how kids can be smart, alert, strong, kind, and brave online. The songs and situations that Sam finds himself in impart valuable lessons on digital responsibility such as thinking before you click and bullying. In one episode, Sam discovers that forwarding the posts of his friends can hurt them.

Using AI tools to raise critical thinkers. AI can analyze web content and online interactions to detect and block inappropriate or harmful websites and social media platforms. It can also identify signs of cyberbullying and harassment and analyze patterns and potential risks by monitoring various data sources. At a young age, children can be trained to think critically about the accuracy of the information they read or watch. Is the story or post backed by credible evidence? AI can help filter information and trace the source of a story and its authenticity. Because they are so used to technology in a way their parents aren’t, kids can learn for themselves how to evaluate deepfake technology when it’s being used to sway or harm their young minds.

Keeping yourself informed about your kids’ location. AI-powered GPS and geolocation technologies can help parents keep abreast of their children’s whereabouts. Are they in school, with friends, in the mall? Or are they in a place where they would normally not go? Wearable devices like watches or smartphone apps use AI algorithms to provide real-time location updates and geofencing capabilities. They can also alert parents if their child goes beyond predetermined safe zones (a minimal sketch of that distance check follows this article).

Making homes more secure. Surveillance and security systems are now so advanced that you can monitor your home 24/7 from your phone. AI-powered cameras and systems use algorithms to analyze video feeds to detect unusual behavior, identify potential dangers — like if someone left the door open — and send an alert.

Monitoring the family’s health. Tech has made it possible for wearable devices and smartphones to track vital signs and provide health monitoring. They can detect irregularities in users’ sleep patterns, heart rate and activity levels, among many others. When these irregularities are detected in your children, the device will alert you to potential health issues.

Assessing children’s development. Apps and online tools use AI algorithms to analyze videos or recordings of a child’s behavior and provide insights on their cognitive, motor and social development. Parents can also use AI-powered scheduling apps to manage their kids’ school routines and homework to simplify the family’s daily schedule.

PLDT Home is committed to keeping children safe online and at the same time giving them access to information that helps in their education, well-being, and growth. This contributes to the PLDT Group’s broader commitment to help the country attain UN Sustainable Development Goal No. 16, which promotes just, peaceful, and inclusive societies including the end to abuse, exploitation, trafficking and all forms of violence against and torture of children.
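As promised above, here is what a geofencing alert reduces to: a distance check between a reported position and a safe zone's center. The coordinates, radius, and alerting behavior in this Python sketch are illustrative assumptions, not any specific vendor's implementation.

# Minimal geofence check: flag when a reported position falls outside a
# circular safe zone. Coordinates and radius below are illustrative.
from math import radians, sin, cos, asin, sqrt

def haversine_m(lat1, lon1, lat2, lon2):
    # Great-circle distance in meters between two lat/lon points.
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371000 * asin(sqrt(a))

SCHOOL = (14.5995, 120.9842)  # safe-zone center (illustrative: Manila)
RADIUS_M = 500                # safe-zone radius in meters

def outside_safe_zone(lat, lon):
    return haversine_m(lat, lon, *SCHOOL) > RADIUS_M

print(outside_safe_zone(14.6005, 120.9850))  # False: roughly 140 m away
print(outside_safe_zone(14.6500, 121.0500))  # True: several km away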
Malware masking as ChatGPT targets users
A cybersecurity firm has warned individuals against malware pretending to be the popular chatbot ChatGPT, as it sends text messages to its victims that lead to charges and fraud.
Google launches ChatGPT rival Bard in EU, Brazil
Google launched its AI chatbot Bard in the European Union, Brazil and a dozen other countries on Thursday and unveiled new features as it expands access to its answer to Microsoft-backed ChatGPT. The US tech giant unveiled Bard in February but delayed its release in the European Union as the bloc plans to regulate artificial intelligence amid concerns about risks associated with the rapidly growing technology. Google has raced to catch up with rival Microsoft, which has rushed to integrate ChatGPT-like powers into a wide array of its products, including the Bing search engine.

Bard is "now available in most of the world, and in the most widely spoken languages," Bard's product lead Jack Krawczyk and vice president Amarnag Subramanya wrote in a blog. "As part of our bold and responsible approach to AI, we've proactively engaged with experts, policymakers and privacy regulators on this expansion," they said. The company said it would incorporate user feedback and take steps to protect people's privacy and data as it broadens access to Bard.

The AI tool can now be used in over 40 languages including Arabic, Chinese, German, Hindi and Spanish. It was previously available in three languages -- English, Japanese and Korean. Google also announced new features, including receiving audio responses from Bard or answers in five different styles: simple, long, short, professional or casual. Another new feature allows users to upload photos that Bard can analyze for information.

The rise of AI has raised both excitement and concerns about its potential to improve or replace tasks done by humans. AI tools have shown in recent months the ability to generate essays, create realistic images, mimic the voices of famous singers and even pass medical exams, among a slew of uses. Common worries include the possibility that chatbots could flood the web with disinformation, that biased algorithms will churn out racist material, or that AI-powered automation could lay waste to entire industries.

'Extinction' fears

Experts -- even the founder of ChatGPT-maker OpenAI, Sam Altman -- have warned about the potential existential risks that the technology poses to humanity. Altman and dozens of other specialists signed a statement in May urging global leaders to reduce "the risk of extinction" from AI. But the warnings have not stopped the rapid development of AI. Tesla and Twitter owner Elon Musk, who has issued his own warnings about the risks, launched an AI company named xAI on Wednesday. The xAI website said Musk would run the company separately from his other companies but that the technology developed would benefit those businesses, including Twitter.

Last month, the European Parliament backed a draft law that will be the basis for the world's first comprehensive rules for AI. It includes specific provisions for generative AI systems, such as ChatGPT and Dall-E, capable of producing text, images and other media. The parliament and the EU's member states will negotiate on the regulation before it is approved, and the bloc wants to strike a deal by the end of the year. The rules stipulate that AI-generated content must be declared as such and ban some AI uses, including real-time facial recognition systems.
UN talks aim to harness AI power and potential
The United Nations is convening this week a global gathering to try to map out the frontiers of artificial intelligence and to harness its potential for empowering humanity. The UN hopes to lay out a clear blueprint for handling AI, as the development of the technology races ahead of the capacity to set its boundaries.

The "AI for Good Global Summit", being held in Geneva on Thursday and Friday, will bring together around 3,000 experts from companies like Microsoft and Amazon as well as from universities and international organizations to try to sculpt frameworks for handling AI.

"This technology is moving fast," said Doreen Bogdan-Martin, head of the International Telecommunication Union, the UN's information and communications technology agency that convened the summit. "It's a real opportunity for the world's leading voices on AI to come together on the global stage and to address governance issues," she told reporters. "Doing nothing is not an option. Humanity is dependent upon it. So we have to engage and try and ensure a responsible future with AI." She said the summit would examine possible frameworks and guardrails to support safe AI use.

Listed participants include Amazon's chief technology officer Werner Vogels, Google DeepMind chief operating officer Lila Ibrahim and former Spain football captain Iker Casillas -- who suffered a heart attack in 2019 and now advocates for AI use in heart attack prevention. They will be joined by dozens of robots, including several humanoids like Ai-Da, the first ultra-realistic robot artist; Ameca, the world's most advanced life-like robot; the humanoid rock singer Desdemona; and Grace, the most advanced healthcare robot.

Benefiting humanity?

The Geneva-based ITU feels it can bring its experience to bear on AI governance. Founded in 1865, the ITU is the oldest agency in the UN fold. It established "SOS" as the Morse code international maritime distress call in 1906, and coordinates everything from radio frequencies to satellites and 5G. The summit wants to identify ways of using AI to advance the UN's lagging sustainable development goals on issues such as health, the climate, poverty, hunger and clean water. Bogdan-Martin said AI must not exacerbate social inequalities or introduce biases on race, gender, politics, culture, religion or wealth. "This summit can help ensure that AI charts the course that benefits humanity," UN chief Antonio Guterres said.

However, while AI proponents hail the technology for how it can transform society, including work, healthcare and creative pursuits, others are worried by its potential to undermine democracy.

'Perfect storm'

"We're kind of in a perfect storm of suddenly having this powerful new technology -- I don't think it's super-intelligent -- being spread very widely and empowered in our lives, and we're really not prepared," said serial AI entrepreneur Gary Marcus. "We're at a critical moment in history when we can either get this right and build the global governance we need, or get it wrong and not succeed and wind up in a bad place where a few companies control the fates of many, many people without sufficient forethought," he said.

Last month, EU lawmakers pushed the bloc closer to passing one of the world's first laws regulating systems like OpenAI's ChatGPT chatbot. There is also growing clamor to regulate AI in the United States.

ChatGPT has become a global sensation since it was launched late last year for its ability to produce human-like content, including essays, poems and conversations from simple prompts. It has sparked a mushrooming of generative AI content, leaving lawmakers scrambling to figure out how to regulate such bots.

Juan Lavista Ferres, chief data scientist of the Microsoft AI For Good Lab, gave an example of how AI could be used "to make our world a better place". He compared the more than 400 million people diagnosed with diabetes, a major cause of blindness, with the small number of ophthalmologists. "It's physically impossible to diagnose every patient. Yet we and others have built AI models that today can detect this condition with an accuracy that matches a very good ophthalmologist. This is something that can even be done from a smartphone," he said. "Here AI is not just a solution, but it's the only solution."
Access to information now easier with improved InLife Chatbot and website
Conversing with an AI chatbot
Many market pundits are befuddled by the strong performance of US equities despite the prevalence of high interest rates, persistent inflation, the Russia-Ukraine war, US-China tensions, and fears of a global recession...
US lawyer sorry after ChatGPT creates ‘bogus’ cases
What happened when a US lawyer used ChatGPT to prepare a court filing? The artificial intelligence program invented fake cases and rulings, leaving the attorney rather red-faced. New York-based lawyer Steven Schwartz apologized to a judge this week for submitting a brief full of falsehoods generated by the OpenAI chatbot. "I simply had no idea that ChatGPT was capable of fabricating entire case citations or judicial opinions, especially in a manner that appeared authentic," Schwartz wrote in a court filing.

The blunder occurred in a civil case being heard in Manhattan federal court involving a man who is suing the Colombian airline Avianca. Roberto Mata claims he was injured when a metal serving plate hit his leg during a flight in August 2019 from El Salvador to New York. After the airline's lawyers asked the court to dismiss the case, Schwartz filed a response that claimed to cite more than half a dozen decisions to support why the litigation should proceed. They included Petersen v. Iran Air, Varghese v. China Southern Airlines, and Shaboon v. Egyptair. The Varghese case even included dated internal citations and quotes. There was one major problem, however: neither Avianca's attorneys nor the presiding judge, P. Kevin Castel, could find the cases. Schwartz was forced to admit that ChatGPT had made up everything.

"The court is presented with an unprecedented circumstance," Judge Castel wrote last month. "Six of the submitted cases appear to be bogus judicial decisions with bogus quotes and bogus internal citations," he added. The judge ordered Schwartz and his law partner to appear before him to face possible sanctions.

'Ridiculed'

In a filing on Tuesday, ahead of the hearing, Schwartz said that he wanted to "deeply apologize" to the court for his "deeply regrettable mistake." He said his college-educated children had introduced him to ChatGPT and it was the first time he had ever used it in his professional work. "At the time that I performed the legal research in this case, I believed that ChatGPT was a reliable search engine. I now know that was incorrect," he wrote. Schwartz added that it "was never my intention to mislead the court."

ChatGPT has become a global sensation since it was launched late last year for its ability to produce human-like content, including essays, poems, and conversations from simple prompts. It has sparked a mushrooming of generative AI content, leaving lawmakers scrambling to figure out how to regulate such bots. A spokesperson for OpenAI did not immediately respond to a request for comment on Schwartz's snafu. The story was first reported by The New York Times.

Schwartz said he and his firm, Levidow, Levidow & Oberman, had been "publicly ridiculed" in the media coverage. "This has been deeply embarrassing on both a personal and professional level as these articles will be available for years to come," he wrote. Schwartz added: "This matter has been an eye-opening experience for me and I can assure the court that I will never commit an error like this again."
Artificial intelligence and the legal practice
The Sony World Photography Awards of 2023 chose the entry of Boris Eldagsen to receive the first prize in its creative open category. Yet this German artist refused to accept the award. According to a CNN news article, this was because what he submitted was generated by an Artificial Intelligence or AI program.

AI can be defined as the simulation of human intelligence by software-coded problem-solving shortcuts. The topic of AI has recently generated buzz. If before it was only a theme of futuristic movies, now the concept is materializing in present-day reality. AI has not only affected the industries of manufacturing, media, and transportation; it has now found its way into the field of law.

OpenAI, a San Francisco-based AI research laboratory, launched ChatGPT in late 2022. This is a chatbot, which is an application that can imitate real-world and human-like functions. Some of these functions cover what comprises much of a lawyer’s work — drafting pleadings, reviewing contracts and writing memoranda, among others. Since the possibilities for the development of AI are endless, one cannot help but weigh the pros and cons of using it.

In an interview with Reuters, Suffolk University Law School Dean Andrew Perlman said that, much like conducting research in Westlaw and LexisNexis, first-year law students should learn about using ChatGPT as a tool in their legal research and legal writing classes. However, just like any creation undergoing development, AI is far from perfect. According to a recent national daily news article, an American lawyer is facing a sanctions hearing after the court document he submitted cited six fake cases generated by ChatGPT. He claimed that he was unaware that the AI program could produce fake content.

This is not the first time that AI has generated misleading content. Fake photos of former US President Donald Trump being arrested, as well as fake photos of Pope Francis wearing a puffer jacket, went viral on the Internet. With the plethora of information accessible online, it is getting harder to detect what is true and easier to fabricate what is false.

In an article published on the official website of the Supreme Court, Chief Justice Alexander G. Gesmundo revealed in a virtual meeting that the Court had proposed the use of AI for improving court operations. The proposal aims to build on the areas of preparation of transcripts of stenographic notes and digitalization of judgments already rendered.

Since it has been established that AI can further progress as a powerful tool, the question arises: “What lies ahead for us in the legal practice?” As a new lawyer, I regularly use electronic legal research tools like CD Asia. Compared to the traditional way of going to the library, electronic tools greatly save time. How much more effort can be saved when one uses AI that can produce output from a simple typed instruction?

It is my humble opinion, however, that while AI programs indeed promise cost-cutting benefits, there is nothing wrong with sticking to our old ways of diligently doing our legal work, especially when AI research programs are still problematic. We must err on the side of caution when using Artificial Intelligence because the stakes are high when we talk about what we represent before the courts. Putting myself in the shoes of clients, it would also be disconcerting if the lawyers they hired billed for work that was only generated by an AI tool.

Artificial Intelligence truly has its potential in legal practice. But when it comes to the core of what lawyering is and what the profession means, nothing beats our human touch.

(Atty. Kristine Arlyce R. De Guzman became a Member of the Philippine Bar in 2023. She received her Juris Doctor degree from the Ateneo de Manila University School of Law. She is currently an Associate at the Aranas Cruz Araneta Parker & Faustino Law Offices.)
Robots, we are not
A chicken and beer place somewhere in South Korea has an adorable robot going around serving orders. Perhaps it made some customers order more, so they could keep seeing this contraption approach, bearing food. It is still such a novelty and, right now, this robot looks exactly like a machine — just moving parts with no face and “personality.” What if, in this lifetime, robots such as these begin to take on more character? It may not be too far-fetched. Movies have given us previews of these already, making viewers laugh or cry for and with a robot character.

The truth is, robotics has been around for a long time, and so has artificial intelligence. It’s just a matter of harnessing the technology for global advancement, and this requires planning and strategy. The need to be at par with our neighbors, for one, calls for the addition of such subjects in the school curricula. The Department of Information and Communications Technology is correct in raising the possibility of making robotics and computer programming a part of the elementary school curriculum. New technology classes are needed, said DICT Assistant Secretary Jeffrey Dy at the opening of the Singapore-based robotics learning center Nullspace in Taguig recently.

Catching up or keeping up is one thing, but there is a whole other consideration we may have to take note of at the same time. It has to do with speed, and more so with capacity. Human versus machine? Fiction is no longer too far-fetched these days. AI, former Google chief Eric Schmidt told a magazine, represents an “existential risk” that could threaten humanity. The article said, “He doesn’t feel that threat is serious at the moment, but he sees a near future where AI could help find software security flaws or new biology types. It’s important to ensure these systems aren’t ‘misused by evil people.’”

Meanwhile, machine learning tools present the threat of surpassing the human capacity to learn. It took ChatGPT two months to learn something that its developers expected would take it six years. ChatGPT is “an AI chatbot that uses natural language processing to create humanlike conversational dialogue.” Of course, “a natural language processing tool” may seem harmless, but AI technology can now perform functions only humans could before, including “composing emails, essays, and (eventually write) code.”

Robotics is mostly applied in the manufacturing sector, but combined with AI and machine learning, humans are in for some tough competition. A robot that can talk, reason, and make decisions? A robot that learns at the speed of light and becomes self-aware? We, humans, have never learned from our mistakes and history. We refuse to see reason, make laws we break, and create machines that destroy. It’s even said that “super soldier” robots may be in the works at this point. How are we expected to be in control of AI and intelligent machines when we are not even in control of ourselves?
Humans must stay in control of AI, European trade union chief warns
No employee should be "subject to the will of a machine", European trade union chief Esther Lynch has warned, calling for regulation to ensure humans remain in control as artificial intelligence technology advances at breakneck speed. In the same way that European Union treaties protect health and safety in the workplace, rules are needed to guarantee "the human-in-control principle" when it comes to AI, Lynch said in an interview ahead of a major gathering of union representatives in Berlin. "We need to be guaranteed that no worker is subject to the will of a machine," Lynch told AFP, a scenario she said would be "dystopian".

Lynch, general secretary of the European Trade Union Confederation since last December, will head the four-day ETUC Congress that kicks off in the German capital on Tuesday. The event, held every four years, brings together hundreds of union officials from more than 40 countries to discuss topics ranging from workers' rights to the future of work, environmental protection, inequality and cross-border union cooperation. German Chancellor Olaf Scholz and European Commission President Ursula von der Leyen are among the speakers scheduled to address the congress.

'Not just the 1 percent'

Ever since the wildly popular AI chatbot ChatGPT burst onto the scene late last year, debate has been swirling about how the technology will upend the world of work, potentially transforming many jobs along the way. While supporters point out that AI tools can take over automated or repetitive tasks and free up staff to do more creative work, sceptics worry about job cuts, data protection and losing a human element in some decision-making processes.

Lynch, 60, said AI regulation was one of the topics she would be discussing with the EU's Jobs and Social Rights Commissioner Nicolas Schmit during the congress. With every technology there's "a positive side and a negative side, and the same will be true of AI," the Irish woman said. "What we have seen is that whenever you involve workers and their unions in the introduction of technology... the outcomes are better."

The EU is currently debating a draft text calling for curbs on how artificial intelligence can be used in Europe, bringing the bloc a step closer to an AI law. It is "critically important" that AI is introduced "in a way that works for working people rather than against them", Lynch said. "It can't be the case that only the top one percent take all of the benefits of AI, and leave everybody else not benefiting from the productivity gains that will come from AI," she went on. "We need to make sure that where parts of jobs or whole jobs or whole industries are displaced, that there are other quality jobs created."

Inflation costs

Division of wealth will be a key theme at the congress as employees across Europe feel the pain from a cost-of-living squeeze as a result of high inflation. Lynch said while workers were struggling to make ends meet, many companies had benefited from rising prices and enjoyed higher profits and dividend payouts. "Europe's top 1,200 companies' dividends increased by 14 percent" last year, she said, whereas wages only rose by four percent on average. "So it's quite clear who's driving inflation. It's not working people," Lynch said.

The European Central Bank's series of interest rate hikes, aimed at cooling inflation, were only worsening the inequality, she added. Higher borrowing costs "aren't the solution for treating dividends in a fairer way," according to Lynch. "The solution for that is: tax those dividends and then redistribute the wealth," she said.
Metaverse shapes recruitment future
Sun Life Asia Service Center Philippines is tapping into the power of the Metaverse for its talent acquisition and recruitment activities as it rolls out its campus hiring among technology university students across the region. The campus hiring in the Metaverse aligns with the company's overall recruitment efforts to equip university students in their career journey right from graduation to internship and formal employment. Furthermore, Sun Life ASCP envisions achieving a more streamlined and efficient hiring operation, identifying the right talents for each available role, and ultimately providing a positive candidate experience throughout the hiring process.

The Metaverse will mirror the existing facilities of the Sun Life ASCP office through 3D technology — allowing candidates to create their fun avatars and easily roam around, engage, and interact with Sun Life ASCP's HR recruiters and hiring managers — just like in real life.

Apart from launching the Metaverse, Sun Life ASCP will also roll out its own Sinag Chatbot, which will be the candidate's companion — guiding them through their queries, concerns, and other application requirements when they engage with it on ASCP's Facebook page. Utilizing this automated tool will enable faster turnaround time in processing applications while saving resources and expenses for both the candidate and the company.

With the launch of these new digital innovations, Sun Life ASCP aims to reach 300 potential candidates — contributing 10-20 percent of new hires for the company. Through this, it expects to generate a minimum of P100,000 in savings per hire.

"Sun Life ASCP has constantly been innovating and leveraging new technologies to enhance its operations, improve customer experiences and overall deliver best-in-class services," shared Chandan Barve, VP and chief administrative officer of Sun Life ASC. "The launch of campus hiring in the Metaverse and Sinag Chatbot at ASCP aligns with our digital-first approach. It will help improve the company's overall recruitment effort as we establish a more efficient and effective hiring process. As a result, candidates can enjoy a positive and seamless experience throughout the hiring journey."

Some of the past innovations implemented by Sun Life ASCP were the gamification of its performance management, where it leveraged game-like elements such as scoring, rewards, and competitions to create a more engaging workplace experience, and the hosting of virtual job fairs and webinars for job seekers to apply easily and follow up on their applications in the comfort of their homes. Since then, the company has achieved scale, growth, and operational maturity by providing business processing, IT, investment research, and enterprise infrastructure to Sun Life's global businesses. Moreover, it continues to enhance its business model by building a digital culture and mindset — all enabled by the latest technologies, data-driven insights, skill sets, talent, and agile and innovative frameworks.

Metaverse campus hiring a critical digital intervention

In this age of rapid digitalization, companies leverage more technologies and tools to engage with their audiences effectively. Gartner, a leading technology research and consulting firm, defines digitalization as the use of digital technologies to transform and enhance business models — creating new revenue and value-creation opportunities. By embracing digitalization, companies can stay ahead of the curve and remain competitive in an ever-evolving marketplace.
AI rules urged at U.S. Senate hearing
A United States Senate hearing on artificial intelligence opened with the computer-generated voice of a senator reading a text written by a chatbot. Democrat Senator Richard Blumenthal used the dramatics to demonstrate the risk of disinformation from AI, while the chief executive officer of OpenAI, developer of the bot called ChatGPT, told US lawmakers during Tuesday's hearing on Capitol Hill that regulation of the technology is a must.

After Blumenthal clarified that his opening speech and the voice that spoke it were not his, OpenAI CEO Sam Altman began his testimony before the US Senate judiciary subcommittee with this warning: "If this technology goes wrong, it can go quite wrong."

Altman insisted that in time, generative AI developed by OpenAI will "address some of humanity's biggest challenges, like climate change and curing cancer." However, given concerns about disinformation, job security and other hazards, "we think that regulatory intervention by governments will be critical to mitigate the risks of increasingly powerful models," he said.

Altman suggested the US government license and test powerful AI models before their release, and revoke permits if AI rules were broken. He also recommended labeling and increased global coordination in setting up rules over the technology, as well as the creation of a dedicated US agency to handle AI.

Governments worldwide are under pressure to move quickly after the release of ChatGPT, a bot that can churn out human-like content in an instant, went viral and both wowed and spooked users. Altman has since become the global face of AI as he both pushes out his company's technology, including to Microsoft and scores of other companies, and warns that the work could have nefarious effects on society.

"OpenAI was founded on the belief that artificial intelligence has the potential to improve nearly every aspect of our lives, but also that it creates serious risks," Altman told the hearing.
AI Chatbot-generated cocktails on the rise; ChatGPT recipe for world's best cocktail
As conversations continue over whether ChatGPT and other similar platforms are beneficial or a threat to humanity, a number of individuals have turned to artificial intelligence for new ideas about recipes for cocktails...