01. WSORP2011: Web-Scale OCR Research Project
Subject Area: Computational Linguistics, Optical Character Recognition
Date: 2011
Status: Completed
Funding: LACSC
Description: The aim of this research project was to design and
develop optical character recognition error-correction algorithms based on
web-scale data to detect and correct OCR misspellings using information
collected from online web search engines.
Findings & Publications: The results were
two research papers published in
international refereed journals:
Paper 1: Youssef Bassil, Mohammad Alwani, “OCR Post-Processing Error Correction
Algorithm using Google Online Spelling Suggestion”, International Journal of
Emerging Trends in Computing and Information Sciences, vol. 3, no. 1, pp. 90-99, 2012.
[pdf]
Abstract: With the advent of digital optical scanners, a lot of paper-based books, textbooks, magazines, articles, and documents are being transformed into an electronic version that can be manipulated by a computer.
For this purpose, OCR, short for Optical Character Recognition, was developed to translate scanned graphical text into editable computer text. Unfortunately, OCR is still imperfect, as it occasionally mis-recognizes letters
and falsely identifies scanned text, leading to misspellings and linguistic errors in the OCR output text. This paper proposes a post-processing context-based error correction algorithm for detecting and correcting OCR non-word and real-word errors.
The proposed algorithm is based on Google’s online spelling suggestion, which harnesses an internal database containing a huge collection of terms and word sequences gathered from all over the web, making it well suited to suggest possible replacements for words that
have been misspelled during the OCR process. Experiments carried out revealed a significant improvement in the OCR error correction rate. Future research can improve upon the proposed algorithm so that it can be parallelized and executed over multiprocessing platforms.
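To make the post-processing loop concrete, below is a minimal Python sketch of the token-by-token correction cycle described in the abstract, assuming a hypothetical suggest() helper that stands in for the online spelling-suggestion service (a tiny hard-coded table plays the role of the web-scale back end):

```python
# Illustrative sketch only: the real system queries Google's online spelling
# suggestion service; suggest() below is a hypothetical stand-in.
import re

def suggest(token):
    """Return a suggested spelling for `token`, or None if it looks correct.
    A tiny hard-coded table plays the role of the web-scale service here."""
    fake_service = {"tbe": "the", "docoment": "document", "scholar5hip": "scholarship"}
    return fake_service.get(token.lower())

def correct_ocr_text(ocr_text):
    """Replace each token for which the service returns a suggestion."""
    corrected = []
    for token in re.findall(r"\S+", ocr_text):
        replacement = suggest(token)
        corrected.append(replacement if replacement else token)
    return " ".join(corrected)

if __name__ == "__main__":
    print(correct_ocr_text("tbe scanned docoment mentions a scholar5hip award"))
    # -> "the scanned document mentions a scholarship award"
```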
Paper 2: Youssef Bassil, Mohammad Alwani, “OCR Context-Sensitive Error
Correction Based on Google Web 1T 5-Gram Data Set”, American Journal of
Scientific Research, no. 50, pp. 14-25, 2012.
[pdf]
Abstract: Since the dawn of the computing era, information has been represented digitally so that it can be processed by electronic computers. Paper books and documents were abundant and widely published at the time; hence, there was a need to convert them into
digital format. OCR, short for Optical Character Recognition, was conceived to translate paper-based books into digital e-books. Regrettably, OCR systems are still erroneous and inaccurate, as they produce misspellings in the recognized text, especially when the source document is of low printing quality.
This paper proposes a post-processing OCR context-sensitive error correction method for detecting and correcting non-word and real-word OCR errors. The cornerstone of the proposed approach is the use of the Google Web 1T 5-gram data set as a dictionary of words to spell-check OCR text. The Google data set incorporates a very
large vocabulary and word statistics entirely gathered from the Internet, making it a reliable source for dictionary-based error correction. The core of the proposed solution is a combination of three algorithms: the error detection, candidate spellings generator, and error correction algorithms, which all exploit
information extracted from the Google Web 1T 5-gram data set. Experiments conducted on scanned images written in different languages showed a substantial improvement in the OCR error correction rate. As a future development, the proposed algorithm is to be parallelized so as to support parallel and distributed computing architectures.
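The three-stage pipeline (error detection, candidate generation, contextual correction) can be illustrated with the short Python sketch below; a tiny in-memory unigram/trigram table stands in for the Google Web 1T 5-gram data set, and the candidate generator is a simple edit-distance-1 expansion:

```python
# Sketch of the detection / candidate-generation / contextual-correction idea,
# with a tiny in-memory table in place of the Web 1T 5-gram data.
import string

UNIGRAMS = {"the": 1000, "cat": 50, "sat": 40, "on": 900, "mat": 30, "man": 60}
TRIGRAMS = {("the", "cat", "sat"): 12, ("the", "man", "sat"): 3}

def edits1(word):
    """All strings one edit away from `word` (deletes, substitutions, inserts)."""
    letters = string.ascii_lowercase
    splits = [(word[:i], word[i:]) for i in range(len(word) + 1)]
    deletes = {a + b[1:] for a, b in splits if b}
    subs = {a + c + b[1:] for a, b in splits if b for c in letters}
    inserts = {a + c + b for a, b in splits for c in letters}
    return deletes | subs | inserts

def correct(prev, word, nxt):
    if word in UNIGRAMS:                      # detection: in-vocabulary word
        return word
    candidates = [c for c in edits1(word) if c in UNIGRAMS]
    if not candidates:
        return word
    # contextual correction: prefer the candidate with the highest trigram count
    return max(candidates, key=lambda c: (TRIGRAMS.get((prev, c, nxt), 0), UNIGRAMS[c]))

print(correct("the", "cbt", "sat"))  # -> "cat"
```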
02. WSSCRP2011: Web-Scale Spell-Checking Research Project
Subject Area: Computational Linguistics, Spell-Checking
Date: 2011
Status: Completed
Funding: LACSC
Description: The aim of this research project was to investigate and build text spell-checking algorithms based on the web-scale information of web search engines, which index millions of public web pages containing trillions of word collocations and n-gram sequences, suitable for emulating a universal dictionary that can be used in spell-checking applications.
Findings & Publications: The results were
two research papers published in
international refereed journals:
Paper 1: Youssef Bassil, Mohammad Alwani, “Context-Sensitive Spelling Correction using Google Web 1T 5-Gram Information”, Computer and Information Science, vol. 5, no. 3, pp. 23-31, 2012.
[pdf]
Abstract: In computing, spell checking is the process of detecting and sometimes providing spelling suggestions for incorrectly spelled words in a text. Basically, a spell checker is a computer program that uses a dictionary of words to perform spell checking. The bigger the dictionary is, the higher
the error detection rate. Because spell checkers are based on regular dictionaries, they suffer from a data-sparseness problem, as they cannot capture the large vocabulary of words that includes proper names, domain-specific terms, technical jargon, special acronyms, and terminologies. As a result, they exhibit a low error
detection rate and often fail to catch major errors in the text. This paper proposes a new context-sensitive spelling correction method for detecting and correcting non-word and real-word errors in digital text documents. The approach hinges on data statistics from the Google Web 1T 5-gram data set, which consists of a large
volume of n-gram word sequences extracted from the World Wide Web. Fundamentally, the proposed method comprises an error detector that detects misspellings, a candidate spellings generator based on a character 2-gram model that generates correction suggestions, and an error corrector that performs contextual error correction.
Experiments conducted on a set of text documents from different domains and containing misspellings showed an outstanding spelling error correction rate and a drastic reduction of both non-word and real-word errors. In a further study, the proposed algorithm is to be parallelized so as to lower the computational cost of the error detection and correction processes.
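As an illustration of a candidate spellings generator based on a character 2-gram model, the Python sketch below ranks a toy vocabulary (standing in for the Web 1T word list) by character-bigram overlap with the misspelled word, using the Dice coefficient as the similarity measure:

```python
# Minimal sketch of candidate generation by character 2-gram similarity;
# the toy VOCAB stands in for the Web 1T vocabulary.
def char_bigrams(word):
    padded = f"#{word}#"                      # pad so first/last letters count
    return {padded[i:i + 2] for i in range(len(padded) - 1)}

def dice(a, b):
    """Dice coefficient between the bigram sets of two words."""
    x, y = char_bigrams(a), char_bigrams(b)
    return 2 * len(x & y) / (len(x) + len(y))

VOCAB = ["language", "luggage", "garage", "passage", "linguistics"]

def candidates(misspelling, k=3):
    """Top-k vocabulary words ranked by bigram similarity to the misspelling."""
    return sorted(VOCAB, key=lambda w: dice(misspelling, w), reverse=True)[:k]

print(candidates("langage"))  # -> ['language', 'luggage', 'garage']
```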
Paper 2: Youssef Bassil, “Parallel Spell-Checking Algorithm Based on Yahoo! N-Grams Dataset”, International Journal of Research and Reviews in Computer Science, vol. 3, no. 1, pp. 1429-1435, 2012.
[pdf]
Abstract: Spell-checking is the process of detecting and sometimes providing suggestions for incorrectly spelled words in a text. Basically, the larger the dictionary of a spell-checker is, the higher the error detection rate; otherwise, misspellings would pass undetected. Unfortunately, traditional dictionaries suffer from out-of-vocabulary and data-sparseness
problems, as they do not encompass the large vocabulary of words indispensable to cover proper names, domain-specific terms, technical jargon, special acronyms, and terminologies. As a result, spell-checkers incur low error detection and correction rates and fail to flag all errors in the text. This paper proposes a new parallel shared-memory spell-checking
algorithm that uses rich real-world word statistics from the Yahoo! N-Grams Dataset to correct non-word and real-word errors in computer text. Essentially, the proposed algorithm can be divided into three sub-algorithms that run in a parallel fashion: the error detection algorithm that detects misspellings, the candidates generation algorithm that generates correction suggestions,
and the error correction algorithm that performs contextual error correction. Experiments conducted on a set of text articles containing misspellings showed a remarkable spelling error correction rate that resulted in a radical reduction of both non-word and real-word errors in electronic text. In a further study, the proposed algorithm is to be optimized for message-passing systems
so as to become more flexible and less costly to scale over distributed machines.
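The shared-memory parallel layout can be pictured with the small Python sketch below, where a pool of worker processes checks tokens concurrently; check_token() is a deliberately trivial stand-in for the Yahoo! N-Grams-based detection and correction logic:

```python
# Sketch of the parallel layout only: tokens are checked in worker processes;
# check_token() is a trivial stand-in for the n-gram-based logic.
from multiprocessing import Pool

KNOWN = {"parallel", "spell", "checking", "is", "fast"}

def check_token(token):
    """Flag the token if it is out of vocabulary (detection stage stand-in)."""
    return (token, token in KNOWN)

if __name__ == "__main__":
    tokens = "parallel spel checking is fsat".split()
    with Pool(processes=4) as pool:
        results = pool.map(check_token, tokens)   # detection runs in parallel
    misspelled = [t for t, ok in results if not ok]
    print(misspelled)                             # -> ['spel', 'fsat']
```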
03. WSSRRP2011: Web-Scale Speech Recognition Research Project
Subject Area: Computational Linguistics, Speech Recognition
Date: 2011
Status: Completed
Funding: LACSC
Description: The aim of this research project was to design and experiment with error-correction algorithms for speech recognition systems using web-scale data. Such web-scale data can be seamlessly provided by online search engines, which incorporate gigantic repositories of terms, jargon, expressions, and n-gram word sequences.
Findings & Publications: The results were
two research papers published in
international refereed journals:
Paper 1: Youssef Bassil, Mohammad Alwani, “Post-Editing Error Correction Algorithm for Speech Recognition using Bing Spelling Suggestion”, International Journal
of Advanced Computer Science and Applications, vol. 3, no. 2, pp. 95-101, 2012.
[pdf]
Abstract: ASR, short for Automatic Speech Recognition, is the process of converting spoken speech into text that can be manipulated by a computer. Although ASR has several applications, it is still erroneous and imprecise, especially if used in a harsh environment where the input speech is of low quality. This paper proposes a post-editing
ASR error correction method and algorithm based on Bing’s online spelling suggestion. In this approach, the ASR recognized output text is spell-checked using Bing’s spelling suggestion technology to detect and correct misrecognized words. More specifically, the proposed algorithm breaks down the ASR output text into several word-tokens that are submitted as
search queries to the Bing search engine. A returned spelling suggestion implies that a query is misspelled, and thus it is replaced by the suggested correction; otherwise, no correction is performed and the algorithm continues with the next token until all tokens are validated. Experiments carried out on various speeches in different languages indicated a successful
decrease in the number of ASR errors and an improvement in the overall error correction rate. Future research can improve upon the proposed algorithm so that it can be parallelized to take advantage of multiprocessor computers.
Paper 2: Youssef Bassil, Paul Semaan, “ASR Context-Sensitive Error Correction Based on Microsoft N-Gram Dataset”, Journal of Computing, vol. 4, no. 1, pp. 34-42, 2012.
[pdf]
Abstract: At the present time, computers are employed to solve complex tasks and problems, ranging from simple calculations to intensive digital image processing, intricate algorithmic optimization problems, and computationally demanding weather forecasting problems. ASR, short for Automatic Speech Recognition, is yet another type of
computational problem whose purpose is to recognize human spoken speech and convert it into text that can be processed by a computer. Although ASR has many versatile and pervasive real-world applications, it is still relatively erroneous and not perfectly solved, as it is prone to produce spelling errors in the recognized text, especially if the ASR system is operating
in a noisy environment, its vocabulary size is limited, or its input speech is of low quality. This paper proposes a post-editing ASR error correction method based on the Microsoft N-Gram dataset for detecting and correcting spelling errors generated by ASR systems. The proposed method comprises an error detection algorithm for detecting word errors; a candidate corrections
generation algorithm for generating correction suggestions for the detected word errors; and a context-sensitive error correction algorithm for selecting the best candidate for correction. The virtue of using the Microsoft N-Gram dataset is that it contains real-world data and word sequences extracted from the web, which can mimic a comprehensive dictionary of words with a large
and all-inclusive vocabulary. Experiments conducted on numerous speeches, performed by different speakers, showed a remarkable reduction in ASR errors. Future research can improve upon the proposed algorithm so that it can be parallelized to take advantage of multiprocessor and distributed systems.
04. WIRRP2011: Web Information Retrieval Research Project
Subject Area: Computational Linguistics, Information Retrieval
Date: 2011
Status: Completed
Funding: LACSC
Description: The aim of this research project was to develop information retrieval (IR) models that are suitable for the indexing and retrieving of web documents. These models should not be based on keyword matching but on hybrid methods that combine syntactic, semantic, and visual properties of HTML documents.
Findings & Publications: The results were three research papers published in international refereed journals:
Paper 1: Youssef Bassil, “Hybrid Information Retrieval Model For Web Images”, International Journal of Computer Science & Emerging Technologies, vol. 3, no. 1, pp. 23-31, 2012.
[pdf]
Abstract: The Big Bang of the Internet in the early 1990s dramatically increased the number of images being distributed and shared over the web. As a result, image information retrieval systems were developed to index and retrieve image files spread over the Internet. Most of these systems are keyword-based, searching for
images based on their textual metadata; thus, they are imprecise, as describing an image with a human language is inherently vague. There also exist content-based image retrieval systems, which search for images based on their visual information. However, content-based systems are still immature and not very effective, as they suffer from
low retrieval recall/precision rates. This paper proposes a new hybrid image information retrieval model for indexing and retrieving web images published in HTML documents. The distinguishing mark of the proposed model is that it is based on both graphical content and textual metadata. The graphical content is denoted by the color features and color histogram
of the image, while the textual metadata are denoted by the terms that surround the image in the HTML document, more particularly the terms that appear in the p, h1, and h2 tags, in addition to the terms that appear in the image’s alt attribute, filename, and class-label. Moreover, this paper presents a new term weighting scheme called VTF-IDF, short for
Variable Term Frequency-Inverse Document Frequency, which, unlike traditional schemes, exploits the HTML tag structure and assigns an extra bonus weight to terms that appear within particular HTML tags correlated to the semantics of the image. Experiments conducted to evaluate the proposed IR model showed a high retrieval precision rate that
outpaced other current models. As future work, the proposed model is to be extended to support not only web images but also web videos and audio clips, as well as other types of multimedia files.
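The tag-bonus idea behind VTF-IDF can be sketched in a few lines of Python; the bonus values and tag names below are illustrative placeholders rather than the weights defined in the paper:

```python
# Sketch of a tag-weighted TF-IDF in the spirit of VTF-IDF; the tag bonus
# values are illustrative, not the weights used in the paper.
import math
from collections import Counter

TAG_BONUS = {"alt": 3.0, "h1": 2.0, "h2": 1.5, "p": 1.0, "filename": 2.5}

def weighted_tf(tagged_terms):
    """tagged_terms: list of (term, tag) pairs extracted around one image."""
    tf = Counter()
    for term, tag in tagged_terms:
        tf[term] += TAG_BONUS.get(tag, 1.0)   # bonus for semantically rich tags
    return tf

def vtf_idf(tagged_terms, doc_freq, n_docs):
    tf = weighted_tf(tagged_terms)
    return {t: w * math.log(n_docs / (1 + doc_freq.get(t, 0))) for t, w in tf.items()}

terms = [("sunset", "alt"), ("beach", "h1"), ("beach", "p"), ("hotel", "p")]
print(vtf_idf(terms, doc_freq={"beach": 10, "hotel": 200}, n_docs=1000))
```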
Paper 2: Youssef Bassil, Paul Semaan, “Semantic-Sensitive Web Information Retrieval Model for HTML Documents”, European Journal of Scientific Research, vol. 69, no. 4, pp. 550-559, 2012.
[pdf]
Abstract: With the advent of the Internet, a new era of digital information exchange has begun. Currently, the Internet encompasses more than five billion online sites and this number is exponentially increasing every day. Fundamentally, Information Retrieval (IR) is the science and practice of storing documents and retrieving information from within
these documents. Mathematically, IR systems are at their core based on a feature vector model coupled with a term weighting scheme that weights terms in a document according to their significance with respect to the context in which they appear. Practically, the Vector Space Model (VSM), Term Frequency (TF), and Inverse Document Frequency (IDF) are among the long-established
techniques employed in mainstream IR systems. However, present IR models only target generic text documents, in that they do not consider specific file formats such as HTML web documents. This paper proposes a new semantic-sensitive web information retrieval model for HTML documents. It consists of a vector model called SWVM and a weighting scheme called BTF-IDF,
particularly designed to support the indexing and retrieval of HTML web documents. The chief advantage of the proposed model is that it assigns extra weights to terms that appear in certain pre-specified HTML tags that are correlated to the semantics of the document. Additionally, the model is semantic-sensitive, as it generates synonyms for every term being indexed and later
weights them appropriately to increase the likelihood of retrieving documents with similar context but different vocabulary terms. Experiments conducted revealed a marked enhancement in the precision of web IR systems and a radical increase in the number of relevant documents being retrieved. As further research, the proposed model is to be upgraded so as to support the indexing
and retrieval of web images in multimedia-rich web documents.
Paper 3: Youssef Bassil, “A Survey on Information Retrieval, Text Categorization, and Web Crawling”, Journal of Computer Science & Research, vol. 1, no. 6, pp. 1-11, 2012.
[pdf]
Abstract: This paper is a survey discussing Information Retrieval concepts, methods, and applications. It goes deep into the document and query modelling involved in IR systems, in addition to pre-processing operations such as stop-word removal and synonym-based search techniques. The paper also tackles text categorization along with its application in neural networks
and machine learning. Finally, the architecture of web crawlers is discussed, shedding light on how Internet spiders index web documents and how they allow users to search for items on the web.
05. ACRP2011: Autonomic Computing Research Project
Subject Area: Autonomic Computing, Sustainable Computing
Date: 2011
Status: Completed
Funding: LACSC
Description: The aim of this research project was to investigate autonomic computing theories to build new models capable of self-configuring computer applications, relieving IT specialists from the burden of manually maintaining and customizing computing systems.
Findings & Publications: The results were
two research papers published in
international refereed journals:
Paper 1: Youssef Bassil, Paul Semaan, “Autonomic Model for Self-Configuring C#.NET Applications”, International Journal of Research Studies in Computing, vol. 1, no. 1, pp. 21-34, 2012.
[pdf]
Abstract: With the advances in computational technologies over the last decade, large organizations have been investing in Information Technology to automate their internal processes, cut costs, and efficiently support their business projects. However, this comes at
a price. Business requirements always change. Likewise, IT systems constantly evolve as developers release new versions of them, which requires endless manual administrative work to customize and configure them, especially if they are being used in different contexts, by different types of users,
and for different requirements. Autonomic computing was conceived to provide an answer to these ever-changing requirements. Essentially, autonomic systems are self-configuring, self-healing, self-optimizing, and self-protecting; hence, they can automate all complex IT processes without human intervention.
This paper proposes an autonomic model based on Venn diagrams and set theory for self-configuring C#.NET applications, namely the self-customization of their GUI, event-handlers, and security permissions. The proposed model does not require altering the source code of the original application; rather, it uses an
XML-based customization file to turn the internal attributes of the application on and off. Experiments conducted on the proposed model showed a successful automatic customization of C# applications and an effective self-adaptation based on dynamic business requirements. As future work, other programming languages
such as Java and C++ are to be supported, in addition to other operating systems such as Linux and Mac, so as to provide a standard platform-independent autonomic self-configuring model.
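To illustrate the customization-file idea, the sketch below loads an XML file that switches interface controls and permissions on and off at start-up without touching application code; the element and attribute names are hypothetical, and Python is used here for illustration even though the paper targets C#.NET:

```python
# Minimal sketch of an XML-driven customization file; element/attribute names
# are hypothetical, and the paper's target is C#.NET rather than Python.
import xml.etree.ElementTree as ET

CUSTOMIZATION = """
<application>
  <control name="exportButton" visible="false"/>
  <control name="adminPanel"   visible="true"/>
  <permission role="guest" allowDelete="false"/>
</application>
"""

def load_customization(xml_text):
    root = ET.fromstring(xml_text)
    controls = {c.get("name"): c.get("visible") == "true" for c in root.iter("control")}
    permissions = {p.get("role"): p.get("allowDelete") == "true" for p in root.iter("permission")}
    return controls, permissions

controls, permissions = load_customization(CUSTOMIZATION)
print(controls)      # {'exportButton': False, 'adminPanel': True}
print(permissions)   # {'guest': False}
```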
Paper 2: Youssef Bassil, Mohammad Alwani, “Autonomic HTML Interface Generator for Web Applications”, International Journal of Web & Semantic Technology, vol. 3, no. 1, pp. 33-47, 2012.
[pdf]
Abstract: Recent advances in computing systems have led to a new digital era in which every area of life is nearly interrelated with information technology. However, with the trend towards large-scale IT systems, a new challenge has emerged. The complexity of IT systems is becoming an obstacle that
hampers the manageability, operability, and maintainability of modern computing infrastructures. Autonomic computing emerged to provide an answer to these ever-growing pitfalls. Fundamentally, autonomic systems are self-configuring, self-healing, self-optimizing, and self-protecting; hence, they can automate all
complex IT processes without human intervention. This paper proposes an autonomic HTML web-interface generator based on XML Schema and Style Sheet specifications for self-configuring graphical user interfaces of web applications. The goal of this autonomic generator is to automate the process of customizing GUI web-interfaces
according to the ever-changing business rules, policies, and operating environment with the least IT labor involvement. The conducted experiments showed a successful automation of web-interface customization that dynamically self-adapts to keep up with always-changing business requirements. Future research can improve upon the
proposed solution so that it supports the self-configuring of not only web applications but also desktop applications.
06. DERP2011: Digital Ecosystem Research Project
Subject Area: Digital Ecosystem, Service Science
Date: 2011
Status: Completed
Funding: LACSC
Description: The aim of this research project was to investigate and study the emerging Digital Ecosystems and Ecosystem-Oriented Architectures. The research centred on defining standards and reference frameworks for digital ecosystems in terms of communication, management, interoperation, and sustainability.
Findings & Publications: The results were
one book and
three research papers published in
international refereed journals:
Book 1: Youssef Bassil, “Digital Ecosystems: Design and Implementation: A Practical Approach for Building Digital Ecosystems”, LAP LAMBERT Academic Publishing, ISBN13: 978-3659253041, 2012.
Description: Digital ecosystems have been around for a while now. Most publications and books on digital ecosystems deal with theory while ignoring practice. This book discusses digital ecosystems from a design and implementation perspective. With this book, you will learn the inner workings of digital ecosystems,
including the architecture of their components, their languages, their protocols, their management, their communication, and of course their implementation. It is about the know-how of digital ecosystems - how to put them into real action.
Paper 1: Youssef Bassil, “Building Sustainable Ecosystem-Oriented Architectures”, International Journal in Foundations of Computer Science & Technology, vol. 2, no. 1, pp. 1-13, 2012.
[pdf]
Abstract: Currently, organizations are transforming their business processes into e-services and service-oriented architectures to improve coordination across sales, marketing, and partner channels, to build flexible and scalable systems, and to reduce integration-related maintenance and development costs.
However, this new paradigm is still fragile and lacks many features crucial for building sustainable and progressive computing infrastructures able to rapidly respond and adapt to the ever-changing market and business environment. This paper proposes a novel framework for building sustainable Ecosystem-Oriented Architectures (EOA)
using e-service models. The backbone of this framework is an ecosystem layer comprising several computing units whose aim is to deliver universal interoperability, transparent communication, automated management, self-integration, self-adaptation, and security to all the interconnected services, components, and devices in the ecosystem.
Overall, the proposed model seeks to deliver a comprehensive and generic sustainable business IT model for developing agile e-enterprises that constantly keep up with new business constraints, trends, and requirements. Future research can improve upon the proposed model so that it supports computational intelligence to help in decision making and problem solving.
Paper 2: Youssef Bassil, “Communication Language Specifications For Digital Ecosystems”, International Journal of Advanced Research in Computer Science, vol. 3, no. 1, pp. 31-35, 2012.
[pdf]
Abstract: Service-based IT infrastructures are today’s trend and the future for every enterprise willing to support dynamic and agile business and contend with ever-changing e-demands and requirements. A digital ecosystem is an emerging business IT model for developing agile e-enterprises made out of self-adaptable, self-manageable, self-organizing,
and sustainable service components. This paper defines the specifications of a communication language for exchanging data between connecting entities in digital ecosystems. It is called ECL, short for Ecosystem Communication Language, and is based on XML to format its request and response messages. An ECU, short for Ecosystem Communication Unit, is also presented, which interprets,
validates, and parses ECL messages and routes them to their destination entities. ECL is open and provides transparent, portable, and interoperable communication between the different heterogeneous distributed components, allowing them to send requests to and receive responses from each other regardless of their incompatible protocols, standards, and technologies. As future research, digital signatures
for ECL are to be investigated so as to deliver data integrity as well as message authenticity for the digital ecosystem.
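As a rough illustration of what such an XML request/response exchange might look like, the Python sketch below builds a toy ECL-style request and hands it to a minimal ECU-like dispatcher; all tag names, attributes, and services are invented for illustration and are not the specification defined in the paper:

```python
# Sketch of an ECL-style XML request and a toy ECU-like dispatcher; the tag
# and attribute names are illustrative, not the specification from the paper.
import xml.etree.ElementTree as ET

REQUEST = """
<ecl version="1.0" type="request">
  <sender>inventory-service</sender>
  <receiver>billing-service</receiver>
  <operation name="getInvoice">
    <param name="orderId">A-1029</param>
  </operation>
</ecl>
"""

def route(message, registry):
    """Validate minimally, then hand the operation to the destination entity."""
    root = ET.fromstring(message)
    if root.tag != "ecl" or root.get("type") != "request":
        raise ValueError("not an ECL request")
    receiver = root.findtext("receiver")
    op = root.find("operation")
    params = {p.get("name"): p.text for p in op.findall("param")}
    return registry[receiver](op.get("name"), params)

registry = {"billing-service": lambda op, params: f"<ecl type='response'>{op}:{params['orderId']}</ecl>"}
print(route(REQUEST, registry))
```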
Paper 3: Youssef Bassil, “Management Language Specifications For Digital Ecosystems”, Journal of Global Research in Computer Science, vol. 3, no. 1, pp. 1-6, 2012.
[pdf]
Abstract: This paper defines the specifications of a management language intended to automate the control and administration of the various service components connected to a digital ecosystem. It is called EML, short for Ecosystem Management Language; it is based on a proprietary syntax and notation and contains a set of managerial commands issued by the system
administrator via a command console. Additionally, EML is shipped with a collection of self-adaptation procedures called SAP. Their purpose is to provide self-adaptation properties to the ecosystem, allowing it to self-optimize based on the state of its execution environment. On top of that, there exists the EMU, short for Ecosystem Management Unit, which interprets, validates, parses, and executes EML
commands and SAP procedures. Future research can extend EML to support a larger set of commands in addition to a larger set of SAP procedures.
07. SOARRP2012: Service-Oriented Architecture Robotics Research Project
Subject Area: Service-Oriented Architecture, Neural Networks
Date: 2012
Status: Completed
Funding: LACSC
Description: The aim of this research project was to study how SOA architectures can be applied in the robotics field to build scalable, reusable, maintainable, survivable, and interoperable component-based automated robot systems.
Findings & Publications: The results were
three research papers published in
international refereed journals:
Paper 1: Youssef Bassil, “Service-Oriented Architecture for Weaponry and Battle Command and Control Systems in Warfighting”, International Journal of Information and Communication Technology Research, vol. 2, no. 2, pp. 189-196, 2012.
[pdf]
Abstract: The military is one of many industries that are more computer-dependent than ever before, from soldiers with computerized weapons and tactical wireless devices to commanders with advanced battle management and command
and control systems. Fundamentally, command and control is the process of planning, monitoring, and commanding military personnel, weaponry equipment, and combat vehicles to execute military missions. In fact, command and control systems are being revolutionized as warfighting
shifts toward cyber, technology, information, and unmanned warfare. As a result, a new design model that supports scalability, reusability, maintainability, survivability, and interoperability is needed to allow commanders, hundreds of miles away from the battlefield,
to plan, monitor, evaluate, and control the war events in a dynamic, robust, agile, and reliable manner. This paper proposes a service-oriented architecture for weaponry and battle command and control systems, made out of loosely-coupled and distributed web services. The proposed architecture
consists of three elementary tiers: the client tier that corresponds to any computing military equipment; the server tier that corresponds to the web services that deliver the basic functionalities for the client tier; and the middleware tier that corresponds to an enterprise service bus that
promotes interoperability between all the interconnected entities. A command and control system was simulated and tested, and it successfully exhibited the desired features of SOA. Future research can improve upon the proposed architecture so that it supports encryption for securing
the exchange of data between the various communicating entities of the system.
Paper 2: Youssef Bassil, “Service-Oriented Architecture for Space Exploration Robotic Rover Systems”, International Journal of Science & Emerging Technologies, vol. 3, no.2, pp. 61-70, 2012.
[pdf]
Abstract: Currently, industrial sectors are transforming their business processes into e-services and component-based architectures to build flexible, robust, and scalable systems, and reduce integration-related maintenance and development costs. Robotics is yet another promising
and fast-growing industry that deals with the creation of machines that operate in an autonomous fashion and serve various applications, including space exploration, weaponry, laboratory research, and manufacturing. In space exploration, the most common type of robot is the planetary rover,
which moves across the surface of a planet and conducts thorough geological studies of the celestial surface. This type of rover system is still ad hoc in that it incorporates its software into its core hardware, making the whole system monolithic, tightly coupled, more susceptible to shortcomings, less flexible,
hard to scale and maintain, and impossible to adapt to other purposes. This paper proposes a service-oriented architecture for space exploration robotic rover systems made out of loosely-coupled and distributed web services. The proposed architecture consists of three elementary tiers: the client tier
that corresponds to the actual rover; the server tier that corresponds to the web services; and the middleware tier that corresponds to an Enterprise Service Bus which promotes interoperability between the interconnected entities. The niche of this architecture is that the rover’s software components are decoupled and
isolated from the rover’s body and can be deployed at a distant location. The service-oriented architecture promotes integrability, scalability, reusability, maintainability, and interoperability for client-to-server communication. Future research can improve upon the proposed architecture so that it supports
encryption standards so as to deliver data security as well as message concealment for the various communicating entities of the system.
Paper 3: Youssef Bassil, “Neural Network Model for Path-Planning Of Robotic Rover Systems”, International Journal of Science and Technology, vol. 2, no. 2, pp. 94-100, 2012.
[pdf]
Abstract: Today, robotics is an auspicious and fast-growing branch of technology that involves the manufacturing, design, and maintenance of robot machines that can operate in an autonomous fashion and can be used in a wide variety of applications, including space exploration, weaponry, household,
and transportation. More particularly, in space applications, a common type of robot has been in widespread use in recent years: the planetary rover, a robotic vehicle that moves across the surface of a planet and conducts detailed geological studies pertaining to the properties of the landing environment.
However, rovers are always impeded by obstacles along the traveling path, which can destabilize the rover’s body and prevent it from reaching its goal destination. This paper proposes an ANN model that allows rover systems to carry out autonomous path-planning to successfully navigate through challenging planetary terrains and reach
their goal location while avoiding dangerous obstacles. The proposed ANN is a multilayer network made out of three layers: an input, a hidden, and an output layer. The network is trained in offline mode using the back-propagation supervised learning algorithm. A software-simulated rover was tested and shown to be able to follow
the safest trajectory despite existing obstacles. As future work, the proposed ANN is to be parallelized so as to speed up the execution time of the training process.
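For readers unfamiliar with the setup, the sketch below shows a three-layer network of this kind trained offline with back-propagation on a toy sensor-to-steering task; the task, layer sizes, and learning rate are all invented for illustration and are not the rover model from the paper:

```python
# Toy three-layer network trained with back-propagation; the sensor-to-steering
# task and all sizes are illustrative, not the rover model from the paper.
import numpy as np

rng = np.random.default_rng(0)
X = rng.random((200, 3))                          # 3 obstacle-distance "sensor" inputs
y = (X[:, 0] > X[:, 2]).astype(float)[:, None]    # steer left (1) or right (0)

W1 = rng.normal(0, 0.5, (3, 8)); b1 = np.zeros((1, 8))   # input -> hidden
W2 = rng.normal(0, 0.5, (8, 1)); b2 = np.zeros((1, 1))   # hidden -> output
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

for epoch in range(2000):                         # offline supervised training
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    d_out = (out - y) * out * (1 - out)           # backprop through output layer
    d_h = (d_out @ W2.T) * h * (1 - h)            # backprop through hidden layer
    W2 -= 0.5 * h.T @ d_out / len(X); b2 -= 0.5 * d_out.mean(axis=0)
    W1 -= 0.5 * X.T @ d_h / len(X);  b1 -= 0.5 * d_h.mean(axis=0)

print(f"training accuracy: {((out > 0.5) == y).mean():.2f}")
```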
08. ESTRP2012: Expert System Troubleshooting Research Project
Subject Area: Computational Intelligence, Expert Systems
Date: 2012
Status: Completed
Funding: LACSC
Description: The aim of this research project was to exploit expert systems and knowledge-based systems for troubleshooting applications, in addition to modeling real-world troubleshooting parameters using fuzzy logic and carrying out machine learning using intelligent agents.
Findings & Publications: The results were
one research paper published in an international refereed journal:
Paper 1: Youssef Bassil, “Expert PC Troubleshooter With Fuzzy-Logic And Self-Learning Support”, International Journal of Artificial Intelligence & Applications, vol. 3, no. 2, pp. 11-21, 2012.
[pdf]
Abstract: Expert systems use human knowledge, often stored as rules within the computer, to solve problems that would generally require human intelligence. Today, with information systems becoming more pervasive and with the myriad advances in information technologies,
automating computer fault diagnosis is becoming so fundamental that soon every enterprise will have to adopt it. This paper proposes an expert system called Expert PC Troubleshooter for diagnosing computer problems. The system is composed of a user interface, a rule-base, an inference engine, and an expert interface.
Additionally, the system features a fuzzy-logic module to troubleshoot POST beep errors, and an intelligent agent that assists in the knowledge acquisition process. The proposed system is meant to automate the maintenance, repair, and operations (MRO) process, and free up human technicians from manually performing routine,
laborious, and time-consuming maintenance tasks. As future work, the proposed system is to be parallelized so as to boost its performance and speed up its various operations.
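The rule-base plus inference-engine pairing can be pictured with the tiny forward-chaining sketch below; the rules and symptom names are made up for illustration and are far simpler than the knowledge base described in the paper:

```python
# Tiny forward-chaining inference sketch; rules and symptoms are illustrative.
RULES = [
    ({"no_power", "fans_spin"}, "suspect_motherboard"),
    ({"no_power"}, "check_power_supply"),
    ({"one_long_two_short_beeps"}, "suspect_video_card"),
]

def infer(facts):
    """Fire every rule whose conditions are satisfied; repeat until stable."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in RULES:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

print(infer({"no_power", "fans_spin"}))
# -> includes 'suspect_motherboard' and 'check_power_supply'
```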
09. DWRP2012: Data Warehouse Research Project
Subject Area: Database Systems, Data Mining
Date: 2012
Status: Completed
Funding: LACSC
Description: The aim of this research project was to design a data warehouse for an information system suitable for decision makers to carry out data analysis and capture patterns and trends.
Findings & Publications: The results were
one research paper published in an international refereed journal:
Paper 1: Youssef Bassil, “A Data Warehouse Design for A Typical University Information System”, Journal of Computer Science & Research, vol. 1, no. 6, pp. 12-17, 2012.
[pdf]
Abstract: Presently, large enterprises rely on database systems to manage their data and information. These databases are useful for conducting daily business transactions. However, the tight competition in the marketplace has led to the concept
of data mining, in which data are analyzed to derive effective business strategies and discover better ways of carrying out business. In order to perform data mining, regular databases must be converted into what are called informational databases, also known as data
warehouses. This paper presents a design model for building a data warehouse for a typical university information system. It is based on transforming an operational database into an informational warehouse useful for decision makers to conduct data analysis, prediction,
and forecasting. The proposed model is based on four stages of data migration: data extraction, data cleansing, data transformation, and data indexing and loading. The complete system is implemented under MS Access 2010 and is meant to serve as a repository of data for data mining operations.
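The four-stage migration can be illustrated end to end with the short Python sketch below, which runs the extract, cleanse, transform, and index-and-load steps against an in-memory SQLite database; the schema and sample rows are invented, and the paper's actual implementation is built in MS Access 2010:

```python
# Sketch of the extract -> cleanse -> transform -> load pipeline using an
# in-memory SQLite database; schema and sample data are illustrative only.
import sqlite3

src = sqlite3.connect(":memory:")
src.execute("CREATE TABLE enrollments (student TEXT, course TEXT, grade TEXT)")
src.executemany("INSERT INTO enrollments VALUES (?, ?, ?)",
                [("jane", "CS101", " 85 "), ("omar", "CS101", "90"), ("jane", "CS102", None)])

dwh = sqlite3.connect(":memory:")
dwh.execute("CREATE TABLE fact_grades (student TEXT, course TEXT, grade INTEGER)")

rows = src.execute("SELECT student, course, grade FROM enrollments").fetchall()  # extract
cleansed = [(s, c, g.strip()) for s, c, g in rows if g is not None]              # cleanse
transformed = [(s.title(), c, int(g)) for s, c, g in cleansed]                   # transform
dwh.executemany("INSERT INTO fact_grades VALUES (?, ?, ?)", transformed)         # load
dwh.execute("CREATE INDEX idx_course ON fact_grades (course)")                   # index

print(dwh.execute("SELECT course, AVG(grade) FROM fact_grades GROUP BY course").fetchall())
```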
10. EPRP2012: Evaluation & Performance Research Project
Subject Area: Performance Testing
Date: 2012
Status: Completed
Funding: LACSC
Description: The aim of this research project was to evaluate and test different computing applications from a performance perspective. This included DBMSs, algorithms, and operating systems.
Findings & Publications: The results were
three research papers published in
international refereed journals:
Paper 1: Youssef Bassil, “A Comparative Study on the Performance of Permutation Algorithms”, Journal of Computer Science & Research, vol. 1, no. 1, pp. 7-19, 2012.
[pdf]
Abstract: A permutation is one of the different arrangements that can be made of a given number of things, taking some or all of them at a time. The notation P(n,r) is used to denote the number of permutations of n things taken r at a time. Permutation is
used in various fields such as mathematics, group theory, statistics, and computing to solve several combinatorial problems, such as the job assignment problem and the traveling salesman problem. In effect, permutation algorithms have been studied and experimented with for
many years now. Bottom-Up, Lexicography, and Johnson-Trotter are three of the most popular permutation algorithms that emerged during the past decades. In this paper, we implement these three eminent permutation algorithms: the
Bottom-Up, Lexicography, and Johnson-Trotter algorithms. The implementation of each algorithm is carried out using two different approaches: brute-force and divide and conquer. The algorithms’ code is tested using a computer simulation tool to measure and evaluate
the execution time between the different implementations.
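For concreteness, P(n,r) = n!/(n-r)!, so for example P(5,2) = 5!/3! = 20. The Python sketch below implements one of the three algorithms named above, lexicographic next-permutation, together with a simple timing harness in the spirit of the comparison; measured times are of course machine-dependent:

```python
# Lexicographic next-permutation (one of the three algorithms compared),
# plus a simple timing harness; measured times are machine-dependent.
import time

def next_permutation(a):
    """Rearrange list `a` into its next lexicographic permutation in place.
    Returns False when `a` is already the last (descending) permutation."""
    i = len(a) - 2
    while i >= 0 and a[i] >= a[i + 1]:
        i -= 1
    if i < 0:
        return False
    j = len(a) - 1
    while a[j] <= a[i]:
        j -= 1
    a[i], a[j] = a[j], a[i]
    a[i + 1:] = reversed(a[i + 1:])
    return True

def count_permutations(n):
    a, count = list(range(n)), 1
    while next_permutation(a):
        count += 1
    return count

start = time.perf_counter()
print(count_permutations(8), "permutations in", round(time.perf_counter() - start, 3), "s")
```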
Paper 2: Youssef Bassil, “A Comparative Study on the Performance of the Top DBMS Systems”, Journal of Computer Science & Research, vol. 1, no. 1, pp. 20-31, 2012.
[pdf]
Abstract: Database management systems are today’s most reliable means to organize data into collections that can be searched and updated. However, many DBMS systems are available on the market, each having its pros and cons in terms of reliability, usability,
security, and performance. This paper presents a comparative study on the performance of the top DBMS systems, namely MS SQL Server 2008, Oracle 11g, IBM DB2, MySQL 5.5, and MS Access 2010. The testing is aimed at executing different SQL queries with different levels of
complexity over the five DBMSs under test. This paves the way for a head-to-head comparative evaluation that shows the average execution time, memory usage, and CPU utilization of each DBMS after completion of the test.
Paper 3: Youssef Bassil, “Windows And Linux Operating Systems From A Security Perspective”, Journal of Global Research in Computer Science, vol. 3, no. 2, pp. 17-24, 2012.
[pdf]
Abstract: Operating systems are vital system software without which humans would not be able to manage and use computer systems. In essence, an operating system is a collection of software programs whose role is to manage computer resources and provide an
interface for client applications to interact with the different computer hardware. Most of the commercial operating systems available on the market today have buggy code and exhibit security flaws and vulnerabilities. In effect, building a trusted operating system that can mostly
resist attacks and provide a secure computing environment to protect the important assets of a computer is the goal of every operating system manufacturer. This paper deeply investigates the various security features of the two most widespread and successful operating systems, Microsoft Windows and Linux.
The different security features, designs, and components of the two systems are covered in detail, pinpointing the key similarities and differences between them. In due course, a head-to-head comparison is drawn for each security aspect, exposing the advantage of one system over the other.
11. STRP2012: Simulation & Testing Research Project
Subject Area: Computational Simulation, Testing Architecture
Date: 2012
Status: Completed
Funding: LACSC
Description: The aim of this research project was to develop testing architectures and simulation models for complex and dynamic systems to help in decision making and validation and verification processes.
Findings & Publications: The results were
two research papers published in
international refereed journals:
Paper 1: Youssef Bassil, “Distributed, Cross-Platform, and Regression Testing”, Advances in Computer Science and its Applications, vol. 1, no. 1, pp. 9-15, 2012.
[pdf]
Abstract: As per leading IT experts, today’s large enterprises are going through business transformations. They are adopting service-based IT models such as SOA to develop their enterprise information systems and applications.
In fact, SOA is an integration of loosely-coupled interoperable components, possibly built using heterogeneous software technologies and hardware platforms. As a result, traditional testing architectures are no longer adequate for verifying and validating the quality of
SOA systems and whether they are operating to specification. This paper first discusses the various state-of-the-art methods for testing SOA applications, and then proposes a novel automated, distributed, cross-platform, and regression testing architecture for SOA systems.
The proposed testing architecture consists of several testing units, which include test engine, test code generator, test case generator, test executor, and test monitor units. Experiments conducted showed that the proposed testing architecture managed to use parallel agents to test
heterogeneous web services whose technologies were incompatible with the testing framework. As future work, the testing of non-functional aspects of SOA applications is to be investigated so as to allow the testing of such properties as performance, security, availability, and scalability.
Paper 2: Youssef Bassil, “A Simulation Model for the Waterfall Software Development Life Cycle”, International Journal of Engineering & Technology, vol. 2, no. 5, pp. 23-31, 2012.
[pdf]
Abstract: The software development life cycle, or SDLC for short, is a methodology for designing, building, and maintaining information and industrial systems. So far, there exist many SDLC models, one of which is the Waterfall model, which comprises five phases to be
completed sequentially in order to develop a software solution. However, the SDLC of software systems has always encountered problems and limitations that result in significant budget overruns, late or suspended deliveries, and dissatisfied clients. The major reason for these deficiencies is that
project directors do not wisely assign the required number of workers and resources to the various activities of the SDLC. Consequently, some SDLC phases with insufficient resources may be delayed, while others with excess resources may sit idle, leading to a bottleneck between the arrival
and delivery of projects and to a failure to deliver an operational product on time and within budget. This paper proposes a simulation model for the Waterfall development process using the Simphony.NET simulation tool, whose role is to assist project managers in determining how to achieve
maximum productivity with the minimum number of expenses, workers, and hours. It helps maximize the utilization of development processes by keeping all employees and resources busy all the time, to keep pace with the arrival of projects and to decrease waste and idle time. As future work, other
SDLC models such as the spiral and incremental models are to be simulated, giving project executives the choice of a diversity of software development methodologies.
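To give a flavor of how staffing choices drive the simulated outcome, the toy Python sketch below pushes one project through the five Waterfall phases and compares two staffing plans of equal headcount; the phase workloads and staffing numbers are illustrative, and the paper's actual model is built in Simphony.NET:

```python
# Toy sequential-phase calculation in the spirit of the paper's model; phase
# workloads and staffing levels are illustrative (the paper uses Simphony.NET).
PHASES = ["analysis", "design", "implementation", "testing", "maintenance"]
WORK = {"analysis": 40, "design": 60, "implementation": 200, "testing": 80, "maintenance": 20}

def project_duration(staffing):
    """Total elapsed hours for one project, given workers assigned per phase."""
    return sum(WORK[p] / max(staffing.get(p, 1), 1) for p in PHASES)

even = {p: 2 for p in PHASES}                                     # 10 workers, spread evenly
weighted = {"analysis": 1, "design": 2, "implementation": 4,      # 10 workers, weighted toward
            "testing": 2, "maintenance": 1}                       # the heaviest phase
print("even staffing:    ", project_duration(even), "hours")      # -> 200.0
print("weighted staffing:", project_duration(weighted), "hours")  # -> 180.0
```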
12. TWNRP2012: TCP for Wireless Networks Research Project
Subject Area: Computer Networking, Wireless Network
Date: 2012
Status: Completed
Funding: LACSC
Description: The aim of this research project was to investigate and find a solution for TCP congestion problems over wireless networks.
Findings & Publications: The results were
one research paper published in an international refereed journal:
Paper 1: Youssef Bassil, “TCP Congestion Control Scheme for Wireless Networks based on TCP Reserved Field and SNR Ratio”, International Journal of Research and Reviews in Information Sciences, vol. 2, no. 2, pp. 180-186, 2012.
[pdf]
Abstract: Currently, TCP is the most popular and widely used network transmission protocol. In actual fact, about 90% of connections on the Internet use TCP to communicate. Through several upgrades and improvements, TCP became well optimized for highly reliable wired networks. As a result, TCP attributes
all packet timeouts in wired networks to network congestion and not to bit errors. However, with networking becoming more heterogeneous, providing wired as well as wireless topologies, TCP suffers from performance degradation over error-prone wireless links, as it has no mechanism to differentiate error losses from congestion losses.
It therefore treats all packet losses as due to congestion and consequently reduces its packet burst, diminishing the network throughput at the same time. This paper proposes a new TCP congestion control scheme appropriate for wireless as well as wired networks that is capable of distinguishing congestion losses from error losses.
The proposed scheme is based on using the reserved field of the TCP header to indicate whether the established connection is over a wired or a wireless link. Additionally, the proposed scheme leverages the SNR (signal-to-noise ratio) to detect the reliability of the link and decide whether to reduce the packet burst or retransmit a timed-out packet. Experiments
conducted revealed that the proposed scheme behaves correctly in situations where timeouts were due to errors and not to congestion. Future work can improve upon the proposed scheme so that it leverages CRC and HEC errors to better determine the cause of transmission timeouts in wireless networks.
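The decision logic at the heart of the scheme can be sketched as follows: a flag (carried in the TCP reserved field in the paper) marks the connection as wireless, and a link-quality reading decides whether a timeout is treated as a transmission error or as congestion. The threshold value and sender-state fields below are invented for illustration:

```python
# Sketch of the loss-differentiation decision only; the SNR threshold and the
# sender-state fields are illustrative, not values from the paper.
SNR_THRESHOLD_DB = 20.0     # below this, the wireless link is considered error-prone

def on_timeout(sender, wireless_flag, snr_db):
    """Decide how to react to a retransmission timeout."""
    if wireless_flag and snr_db < SNR_THRESHOLD_DB:
        # likely an error loss: retransmit without shrinking the window
        sender["retransmissions"] += 1
    else:
        # likely a congestion loss: classic multiplicative decrease
        sender["cwnd"] = max(sender["cwnd"] // 2, 1)
        sender["retransmissions"] += 1
    return sender

print(on_timeout({"cwnd": 32, "retransmissions": 0}, wireless_flag=True, snr_db=12.0))
print(on_timeout({"cwnd": 32, "retransmissions": 0}, wireless_flag=False, snr_db=35.0))
```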
13. SSRP2012: Stealthy Steganography Research Project
Subject Area: Information Security, Steganography
Date: 2012
Status: Completed
Funding: LACSC
Description: The aim of this research project was to devise advanced, out-of-the-box steganography algorithms supporting stealthy and highly robust secret communication.
Findings & Publications: The results were
one book and
seven research papers published in
international refereed journals:
Book 1: Youssef Bassil, “Steganography: From Black Magic to the Magic of Science”, LAP LAMBERT Academic Publishing, ISBN13: 978-3659299315, 2012.
Description: Steganography may have emerged in ancient times as a dark magic, but it certainly evolved at large during the computer age. Currently, it has many techniques, methods, and applications,
making it worth having a closer look at. This book presents a comprehensive overview of steganography and of the different techniques that have been proposed in the literature during the last decades. It additionally
sheds light on its history before and after the advent of digital computers, its various algorithms, requirements, and processes.
Paper 1: Youssef Bassil, “An Image Steganography Scheme using Randomized Algorithm and Context-Free Grammar”, Journal of Advanced Computer Science & Technology, vol. 2, no. 4, pp. 291-305, 2012.
[pdf]
Abstract: Currently, cryptography is in wide use as it is being exploited in various domains from data confidentiality to data integrity and message authentication. Basically, cryptography shuffles data so that they become unreadable by unauthorized parties. However, clearly visible
encrypted messages, no matter how unbreakable, will arouse suspicion. A better approach is to hide the very existence of the message using steganography. Fundamentally, steganography conceals secret data within innocent-looking mediums called carriers, which can then travel from the sender to the
receiver safely and unnoticed. This paper proposes a novel steganography scheme for hiding digital data in uncompressed image files using a randomized algorithm and a context-free grammar. Moreover, the proposed scheme uses two mediums to deliver the secret data: a carrier image in which the secret data
are hidden in random pixels, and a well-structured English text that encodes the locations of the random carrier pixels. The English text is generated at runtime using a context-free grammar coupled with a lexicon of English words. The proposed scheme is stealthy and hard to notice, detect, and recover.
Experiments conducted showed how the covering and uncovering processes of the proposed scheme work. As future work, a semantic analyzer is to be developed so as to make the English text medium semantically correct, and consequently safer to transmit without drawing any attention.
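The random-pixel portion of the scheme can be illustrated with the Python sketch below, where a seeded pseudo-random generator selects the carrier positions; the flat byte list stands in for an uncompressed image, and the seed plays the role of the location information that the paper encodes inside a grammar-generated English text:

```python
# Sketch of random-position LSB embedding; the carrier "image" is a flat byte
# list, and the seed stands in for the location info that the paper encodes
# inside a grammar-generated English text.
import random

def embed(pixels, bits, seed):
    positions = random.Random(seed).sample(range(len(pixels)), len(bits))
    for pos, bit in zip(positions, bits):
        pixels[pos] = (pixels[pos] & ~1) | bit       # overwrite the LSB
    return pixels

def extract(pixels, n_bits, seed):
    positions = random.Random(seed).sample(range(len(pixels)), n_bits)
    return [pixels[pos] & 1 for pos in positions]

carrier = list(range(256))                            # toy 256-byte carrier
secret = [1, 0, 1, 1, 0, 0, 1, 0]                     # one secret byte, as bits
stego = embed(carrier, secret, seed=42)
print(extract(stego, len(secret), seed=42))           # -> [1, 0, 1, 1, 0, 0, 1, 0]
```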
Paper 2: Youssef Bassil, “A Generation-Based Text Steganography Method using SQL Queries”, International Journal of Computer Applications, vol. 57, no. 12, pp. 27-31, 2012.
[pdf]
Abstract: Cryptography and steganography are two techniques commonly used to secure and safely transmit digital data. Nevertheless, they differ in important ways. In fact, cryptography scrambles data so that they become unreadable by eavesdroppers, while steganography hides
the very existence of data so that they can be transferred unnoticed. Basically, steganography is a technique for hiding data such as messages within another form of data such as images. Currently, many types of steganography are in use; however, there is yet no known steganography application for
query languages such as SQL. This paper proposes a new steganography method for textual data. It encodes input text messages into SQL carriers made up of SELECT queries. In effect, the output SQL carrier is dynamically generated out of the input message using a dictionary of words implemented as a
hash table and organized into 65 categories, each of which represents a particular character in the language. Generally speaking, every character in the message to hide is mapped to a random word from the corresponding category in the dictionary. Eventually, all input characters are transformed into
output words, which are then put together to form an SQL query. Experiments conducted showed how the proposed method operates on real examples, proving the theory behind it. As future work, other types of SQL queries are to be researched, including INSERT, DELETE, and UPDATE queries, making it
quite puzzling for malicious third parties to recover the secret message that the SQL carrier encodes.
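The generation idea can be sketched as follows: each character of the message is mapped to a random word from its own category list, and the chosen words are stitched into a SELECT query. The tiny dictionary and the fixed table name below are placeholders standing in for the paper's 65-category lexicon:

```python
# Sketch of the category-per-character generation idea; this toy dictionary
# covers only a few characters, whereas the paper uses 65 categories.
import random

DICTIONARY = {
    "h": ["height", "hours", "hire_date"],
    "i": ["id", "income", "invoice"],
    " ": ["FROM"],                      # a separator category, for illustration
}

def encode(message, seed=7):
    rng = random.Random(seed)
    words = [rng.choice(DICTIONARY[ch]) for ch in message.lower()]
    return "SELECT " + ", ".join(words[:-1]) + " " + words[-1] + " employees;"

def decode(query):
    body = query[len("SELECT "):].rsplit(" employees;", 1)[0]
    tokens = body.replace(",", "").split()
    reverse = {w: ch for ch, ws in DICTIONARY.items() for w in ws}
    return "".join(reverse[t] for t in tokens)

carrier = encode("hi ")
print(carrier)          # e.g. SELECT hire_date, invoice FROM employees;
print(decode(carrier))  # -> "hi "
```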
Paper 3: Youssef Bassil, “Image Steganography Method Based On Brightness Adjustment”, Advances in Computer Science and its Applications, vol. 2, no. 2, pp. 350-356, 2012.
[pdf]
Abstract: Steganography is an information hiding technique in which secret data are secured by covering them within a computer carrier file without damaging the file or changing its size. The difference between steganography and cryptography is that steganography is a stealthy method of
communication that only the communicating parties are aware of, while cryptography is an overt method of communication that anyone can be aware of, even though its payload is scrambled. Typically, an irrecoverable steganography algorithm is one that makes it hard for malicious third parties to discover
how it works and how to recover the secret data out of the carrier file. One popular way to achieve irrecoverability is to digitally process the carrier file after hiding the secret data in it. However, such a process is problematic, as it would normally destroy the concealed data. This paper proposes a new image
steganography method for textual data, as well as for any form of digital data, based on adjusting the brightness of the carrier image after covering the secret data in it. The algorithm used is parameterized, as it can be configured using three different parameters defined by the communicating parties.
They include the amount of brightness to apply to the carrier image after the completion of the covering process, the color channels whose brightness should be adjusted, and the bytes that should carry the secret data. The novelty of the proposed method is that it embeds bits of the secret data into the
three LSBs of the bytes that compose the carrier image in such a way that the secret data are not destroyed when the original brightness of the carrier image is restored. The simulation conducted proved that the proposed algorithm is valid and correct. As future work, other image processing techniques are
to be examined, such as adjusting the contrast or the gamma level of the carrier image, enabling the communicating parties to more flexibly configure their secret communication.
Paper 4: Youssef Bassil, “Steganography & The Art of Deception: A Comprehensive Survey”, International Journal of Latest Trends in Computing, vol. 3, no. 4, pp. 78-88, 2012.
[pdf]
Abstract: Ever since the beginning of human civilization, mankind has always had confidential things to hide or share secretly. Endless methods were devised; an ingenious one is called steganography, which refers to secret writing. In essence, steganography is the science of hiding secret data in
innocuous-looking mediums in such a way that only the communicating parties are aware of this trick. Steganography may have started during the Stone Age and greatly evolved during the computer age. Currently, it has many techniques, methods, and applications, making it worth having a closer look at. This paper presents
a comprehensive overview of steganography and of the different techniques that have been proposed in the literature during the last decades. It additionally sheds light on its history before and after the computer age, its various models, requirements, and processes.
Paper 5: Youssef Bassil, “Image Steganography Based on a Parameterized Canny Edge Detection Algorithm”, International Journal of Computer Applications, vol. 60, no. 4, pp. 35-40, 2012.
[pdf]
Abstract: Steganography is the science of hiding digital information in such a way that no one can suspect its existence. Unlike cryptography, which may arouse suspicion, steganography is a stealthy method that enables data communication in total secrecy. Steganography has many requirements,
the foremost of which is irrecoverability, which refers to how hard it is for someone apart from the original communicating parties to detect and recover the hidden data out of the secret communication. A good strategy to guarantee irrecoverability is to cover the secret data not with a trivial method based on a
predictable algorithm, but with a specific random pattern based on a mathematical algorithm. This paper proposes an image steganography technique based on the Canny edge detection algorithm. It is designed to hide secret data within a digital image in the pixels that make up the boundaries of the objects detected
in the image. More specifically, bits of the secret data replace the three LSBs of every color channel of the pixels detected by the Canny edge detection algorithm as part of the edges in the carrier image. Additionally, the algorithm is parameterized by three parameters: the size of the Gaussian filter, a low threshold value,
and a high threshold value. These parameters can yield different outputs for the same input image and secret data. As a result, discovering the inner workings of the algorithm would be considerably harder, misguiding steganalysts away from the exact location of the covert data. Experiments were demonstrated with a simulation tool codenamed
GhostBit, meant to cover and uncover secret data using the proposed algorithm. As future work, other image processing techniques such as brightness and contrast adjustment are to be examined to see how they can be taken advantage of in steganography, giving the communicating parties more options to manipulate their secret communication.
Paper 6: Youssef Bassil, “A Two Intermediates Audio Steganography Technique”, Journal of Emerging Trends in Computing and Information Sciences, vol. 3, no. 10, pp. 1459-1465, 2012.
[pdf]
Abstract: With the rise of the Internet, digital data became openly public, which has driven IT industries to pay special attention to data confidentiality. At present, two main techniques are being used: Cryptography and Steganography. In effect, cryptography garbles a secret message,
turning it into a meaningless form, while steganography hides the very existence of the message by embedding it into an intermediate such as a computer file. In audio steganography, this computer file is a digital audio file in which secret data are concealed, predominantly in the bits that
make up its audio samples. This paper proposes a novel steganography technique for hiding digital data in uncompressed audio files using a randomized algorithm and a context-free grammar coupled with a lexicon of words. Furthermore, the proposed technique uses two intermediates to transmit the secret
data between communicating parties: the first intermediate is an audio file whose randomly selected audio samples are used to conceal the secret data, whereas the second intermediate is a grammatically correct English text, generated at runtime using a context-free grammar, that encodes
the locations of the random audio samples in the audio file. The proposed technique is stealthy and irrecoverable in the sense that it is difficult for unauthorized third parties to detect the presence of and recover the secret data. The experiments conducted showed how the covering and uncovering processes of the
proposed technique work. As future work, a semantic analyzer is to be developed so as to make the intermediate text not only grammatically correct but also semantically plausible.
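The first intermediate described above can be sketched as follows. This Python snippet is an illustrative assumption of the general idea only: it conceals secret bits in the LSBs of randomly selected 16-bit audio samples and returns the chosen indices, which the method would encode in the CFG-generated English text (not shown here). The fixed seed and helper names are assumptions.

```python
# Minimal sketch of the first intermediate only: conceal secret bits in the
# LSBs of randomly selected 16-bit PCM samples. The CFG-generated cover text
# that encodes the chosen indices (the second intermediate) is not shown.
import numpy as np

def embed_in_samples(samples, secret_bits, seed=42):
    """samples: 1-D numpy int16 array of PCM audio; secret_bits: list of 0/1."""
    rng = np.random.default_rng(seed)
    # Pick one distinct random sample index per secret bit.
    indices = rng.choice(len(samples), size=len(secret_bits), replace=False)
    stego = samples.copy()
    for bit, idx in zip(secret_bits, indices):
        stego[idx] = (int(stego[idx]) & ~1) | bit   # overwrite the sample's LSB
    return stego, indices       # indices would be hidden in the cover text

def extract_from_samples(stego, indices):
    return [int(stego[idx]) & 1 for idx in indices]
```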
Paper 7: Youssef Bassil, “A Text Steganography Method Using Pangram and Image Mediums”, International Journal of Scientific and Engineering Research, vol. 3, no. 12, pp. 1-6, 2012.
[pdf]
Abstract: Steganography is the art and science of writing hidden messages in such a way that no one apart from the sender and the receiver would realize that a secret communication is taking place. Unlike cryptography, which only scrambles secret data while keeping them overt, steganography
covers secret data in medium files such as image files and transmits them in total secrecy, avoiding eavesdroppers' suspicions. However, considering that the public channel is monitored by eavesdroppers, it is vulnerable to stego-attacks, which refer to randomly trying to break the medium file
and recover the secret data from it. That is often true because steganalysts assume that the secret data are encoded into a single medium file and not into multiple ones that complement each other. This paper proposes a text steganography method for hiding secret textual data using two mediums:
a Pangram sentence containing all the characters of the alphabet, and an uncompressed image file. The algorithm searches for every character of the secret message in the Pangram text. The search starts from a random index called the seed and ends at the index of the first occurrence of the
character being searched for. As a result, two indexes are obtained, the seed and the offset. Together they are embedded into the three LSBs of the color channels of the image medium. Ultimately, both mediums, namely the Pangram and the image, are sent to the receiver. The advantage of the proposed
method is that it makes the covert data hard to recover by unauthorized parties, as it uses two mediums, instead of one, to deliver the secret data. The experiments conducted illustrated an example that explains how to encode and decode a secret text message using the Pangram and image mediums. As future work,
other file formats for the second medium are to be supported, enabling the proposed method to be generically employed for a wide range of applications.
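A minimal sketch of the Pangram half of the method is shown below; it is not the paper's exact algorithm. Each character of the secret message is encoded as a (seed, offset) pair of indexes into a pangram, and decoding simply reads the characters back at the offset indexes. The pangram, the fixed random seed, and the circular search are illustrative assumptions, and embedding the pairs into the image LSBs is omitted.

```python
# Illustrative sketch: encode each message character as a (seed, offset) pair
# of pangram indexes; the LSB embedding of the pairs into an image is omitted.
import random

PANGRAM = "The quick brown fox jumps over the lazy dog"

def encode(message, rng=None):
    rng = rng or random.Random(7)           # fixed seed purely for reproducibility
    pairs = []
    for ch in message:                      # every character must appear in the pangram
        seed = rng.randrange(len(PANGRAM))  # random starting index ("seed")
        # Search circularly from the seed for the first occurrence of ch.
        step = next(i for i in range(len(PANGRAM))
                    if PANGRAM[(seed + i) % len(PANGRAM)] == ch)
        pairs.append((seed, (seed + step) % len(PANGRAM)))   # (seed, offset)
    return pairs

def decode(pairs):
    return "".join(PANGRAM[offset] for _, offset in pairs)

print(decode(encode("fox jumps")))   # -> fox jumps
```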
Software: The results were the development of GhostBit, a Steganography application that implements several novel and proprietary Steganography algorithms and techniques. GhostBit is capable of concealing secret data such as text, images, documents, PDFs, executables, music, and video into other forms of data. Some of the algorithms implemented are LSB, Canny Edge Detection, Double Intermediates, Pangrams, Brightness Adjustment, NLP-based, Generation-based, and Injection-based.
14. DFRP2016: Digital Forensics Research Project
Subject Area: Computer Security, Digital Forensics
Date: 2016
Status: Completed
Funding: LACSC
Description: The aim of this research project was to create an Anti-Forensics method able to prevent File Carving and Data Recovery on the NTFS file system.
Findings & Publications: The results were the development of La Rose-Croix File System, a Steganography file system that layers up to the NTFS file system. Its purpose is to store user files in a ciphered way to prevent their recovery using digital forensics file carving techniques. La Rose-Croix File System is also protected by a novel 4 stages Time-based One-time Password (TOTP) mechanism, where the user authenticates himself to his computer using a combination of a thumb drive, a textual security token, and an Android app that generates Time-Based Cryptograms.
15. RATRP2017: Remote Access Trojan Research Project
Subject Area: Computer Security, Trojan and Backdoor
Date: 2017
Status: Completed
Funding: LACSC
Description: The aim of this research project was to create a RAT or Backdoor that can infiltrate computer systems deployed in a highly hostile environment where security is ultra-tight.
Findings & Publications: The results were the development of RevSneak. It is a RAT (Remote Access Trojan) and Backdoor that allows an administrator to control a remote computer system. RevSneak is a RAT that can 1) Bypass Antivirus software & Internet Security Suites, 2) Bypass NAT devices, 3) Bypass Proxy servers, 4) Bypass Firewalls, and 5) Bypass IPS, IDS, and Active Directory. RevSneak provides several sneaking features and functionalities, such as but not limited to: Remote Sneaking, File Sneaking, Typo Sneaking, Zombie Unleash, Password Digger, Hardware Nuke, Geolocation Detection, WebCam and Audio Capture, and SMS Attack.
16. EDIRP2017: Electronic Data Interchange Research Project
Subject Area: Service Science, Distributed Computing
Date: 2017
Status: Completed
Funding: LACSC
Description: The aim of this research project was to design a Service Oriented Architecture based Electronic Data Interchange platform that allows computer-to-computer interchange of electronic business documents in a Distributed fashion.
Findings & Publications: The results were the development of D-EDI. D-EDI (Distributed Electronic Data Interchange) is a business data communication platform that provides standards for exchanging digital data via electronic means. The system is designed to support electronic ordering, shipping logistics, inventory information, stock information, and many other functionalities. The technology behind D-EDI is a Service Oriented Architecture (SOA) composed of multiple services operating in a distributed fashion. The electronic format employed in the system is a proprietary standard language that allows common business procedures to be transformed into a standard data format and transferred between trading partners. The system is currently being managed and operated by a US company and is processing gigabytes of data every day.
17. DWRP2019: Deep Web Research Project
Subject Area: Computer Security
Date: 2019
Status: Completed
Funding: LACSC
Description: The aim of this research project was to create a special HTTP Steganography-based protocol that permits accessing certain web resources that cannot be reached by regular browsers.
Findings & Publications: The results were
three research papers published in
international refereed journals:
Paper 1: Youssef Bassil, “The Deep Web: Implementation using Steganography”, International Journal of Emerging Science and Engineering, vol. 6, no. 1, pp. 1-5, 2019.
[pdf]
Abstract: The Deep Web refers to web content that is invisible to the public and not indexed by search engines. The purpose of the Deep Web is to ensure the privacy and anonymity of web publishers who want to remain anonymous and untraceable.
A popular method to create a Deep Web is to host web content on a private network that is secret and restricted. Tor, short for The Onion Router, is a private Deep Web network that is accessible only by using a special web browser called the Tor browser.
It uses special non-standard communication protocols to provide anonymity between its users and websites. Although the Tor network delivers exceptional capabilities in protecting the privacy of data and their publishers, the fact that it is free, open-source, and
accessible can raise suspicions that confidential, sometimes illicit, data exist. Moreover, Tor traffic can be easily blocked and its nodes blacklisted. This paper proposes an innovative method for building Deep Web networks on the public World Wide Web using Steganography.
In a nutshell, the method uses a steganography algorithm to hide secret web content in a benign carrier image that is hosted on a carrier website in the public domain. When using a regular browser, the carrier website displays the benign carrier image. However, when
a special proprietary browser is used, the secret web page is displayed. Experiments showed that the proposed method is plausible and can be implemented. Likewise, results showed that the entire process is seamless and transparent, as a particular piece of web content can simultaneously
be part of the Deep Web and the Surface Web while drawing no suspicion whatsoever regarding the existence of any secret data. As future work, more advanced steganography algorithms are to be studied and developed in an attempt to provide an irreversible yet reliable algorithm.
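As an illustration of the hide/reveal idea (not the paper's exact algorithm), the sketch below hides a secret HTML page, prefixed by a 32-bit length, in the least significant bits of a carrier image and shows the extraction a special browser would perform. The use of Pillow and the length-prefix framing are assumptions.

```python
# Minimal sketch (not the paper's exact algorithm): hide a secret HTML page,
# prefixed by a 32-bit length, in the LSB of every byte of a carrier image;
# the "proprietary browser" side simply reads the bits back.
import struct
import numpy as np
from PIL import Image

def hide_page(carrier_png, secret_html, out_png):
    pixels = np.array(Image.open(carrier_png).convert("RGB"))
    body = secret_html.encode("utf-8")
    payload = struct.pack(">I", len(body)) + body            # length header + page
    bits = np.unpackbits(np.frombuffer(payload, dtype=np.uint8))
    flat = pixels.reshape(-1)
    assert bits.size <= flat.size, "carrier image too small for the secret page"
    flat[:bits.size] = (flat[:bits.size] & 0xFE) | bits       # overwrite the LSBs
    Image.fromarray(pixels).save(out_png)

def reveal_page(stego_png):
    flat = np.array(Image.open(stego_png).convert("RGB")).reshape(-1) & 1
    length = struct.unpack(">I", np.packbits(flat[:32]).tobytes())[0]
    return np.packbits(flat[32:32 + 8 * length]).tobytes().decode("utf-8")
```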
Paper 2: Youssef Bassil, “Text Steganography: The Deep Web in Plain Sight”, International Journal of Inventive Engineering and Sciences, vol. 5, no. 2, pp. 16-21, 2019.
[pdf]
Abstract: Essentially, the Deep Web, also known as the Invisible Web, is a hidden web whose content cannot be found by search engines and thus is inaccessible by conventional means. With the rise of activism, many have started using the Deep Web as a way to
bypass regulations and distribute their ideologies while keeping their identities completely secret. Tor, short for The Onion Router, is a Deep Web network that has for many years been used by many people, from whistleblowers to cybercriminals, to disguise their identities.
However, as the Tor network is free and open to the public, its inner workings and protocols can readily be reverse-engineered. As a result, security experts have been able to restrict Tor traffic and block its network ports and IPs, making it prone to constant investigation by
intelligence and security bodies and law enforcement agencies. This paper proposes a novel method for implementing the Deep Web on the public Internet using Text Steganography. In short, the proposed method hides a secret page inside another benign page, called the carrier page, using
Cascading Style Sheets. When the carrier page is accessed using a regular browser, the benign page is rendered. Nonetheless, when the very same carrier page is accessed using a proprietary browser that implements the proposed algorithm, the hidden version of the page is rendered,
namely the secret web page that was originally concealed in the carrier page. The experiments conducted showed that the proposed method is plausible, seamless, and transparent, as it allowed a single web page to exhibit two versions, one that is part of the Surface Web and another
that is part of the Deep Web. As future work, the proposed Text Steganography algorithm can be improved so as to make it more robust and harder to reverse-engineer.
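Since the abstract does not detail the exact CSS encoding, the following is only a simplified sketch of the general idea: the secret page is hex-encoded into an unused CSS custom property inside the carrier page, which regular browsers silently ignore, while a custom client extracts and decodes it. All names, the property, and the regular expression are assumptions.

```python
# Simplified sketch of the general idea only (the paper's exact CSS encoding
# is not reproduced here): the secret page is hex-encoded into an unused CSS
# custom property, which regular browsers ignore; a custom client decodes it.
import re

def build_carrier(benign_html_body, secret_html):
    hidden = secret_html.encode("utf-8").hex()
    return (
        "<html><head><style>\n"
        f":root {{ --bg-data: \"{hidden}\"; }}\n"   # unused custom property
        "</style></head>\n"
        f"<body>{benign_html_body}</body></html>"
    )

def extract_secret(carrier_html):
    match = re.search(r'--bg-data:\s*"([0-9a-f]+)"', carrier_html)
    return bytes.fromhex(match.group(1)).decode("utf-8") if match else None

carrier = build_carrier("<h1>Cooking recipes</h1>", "<h1>Hidden page</h1>")
print(extract_secret(carrier))   # -> <h1>Hidden page</h1>
```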
Paper 3: Youssef Bassil, “Audio Steganography Method for Building the Deep Web”, American Journal of Engineering Research, vol. 8, no. 5, pp. 45-51, 2019.
[pdf]
Abstract: The Deep Web is the portion of the Internet that cannot be crawled or indexed by common web search engines. The idea behind the Deep Web is to host web content on a private network that is only accessible using proprietary software. For instance, the
Tor network is a private network that hosts petabytes of data that are confidential by nature and not open to public view. Additionally, Tor uses non-standard protocols to ensure the anonymity of its users. Although the Tor network delivers unprecedented capabilities
in protecting the privacy of data published on its network while ensuring the total anonymity of their owners, the fact that it is a free, open-source platform accessible via community tools can raise suspicions that something illicit and secret exists; as a result,
it can easily be shut down and have its network nodes censored. This paper proposes a new method for implementing the Deep Web using Audio Steganography. In essence, the proposed method camouflages a secret web page in an audio carrier file that is hosted on a carrier website in
the public domain. When users access the carrier website using a regular browser, they see only the innocuous version of the website, namely the website that plays the benign audio clip; whereas, when users access the carrier website using a proprietary browser that implements
our algorithm, they see a totally different website, namely the secret website that was originally hidden inside the benign audio file using Steganography. Experiments showed that the proposed method is feasible to build and that it supports implementing the Deep Web in plain sight
without drawing any suspicions whatsoever regarding the existence of any secret data. As future work, file types other than audio are to be investigated and experimented including image files, video files, and text files.
Software: The results were the development of TDWB (The Deep Web Browser), an Internet browser that features proprietary protocols allowing normal HTTP browsing, Proxy-based browsing, Trackable Proxy-based browsing, Censored browsing, and Deep Web browsing.
18. SOASCRP2019: Service-Oriented Architecture Smart Cities Research Project
Subject Area: Service-Oriented Architecture, Smart City
Date: 2019
Status: Completed
Funding: LACSC
Description: The aim of this research project was to exploit Distributed computing to build open, scalable, and extendable architectures for Smart Cities, which can be either Service-based or P2P and are composed of granular, interoperable, and heterogeneous microservices.
Findings & Publications: The results were
two research papers published in
international refereed journals:
Paper 1: Youssef Bassil, “4-Tier Service-Oriented Architecture For Building Smart Cities”, International Journal of Soft Computing and Engineering, vol. 8, no. 6, pp. 18-21, 2019.
[pdf]
Abstract: Currently, the world is increasingly focusing on transforming its traditional way of living into a digital, intelligent, mobile, and futuristic new urban environment called the Smart City. This paradigm shift relies heavily on information
and communication technologies and has led to the rise of web services, service-oriented architectures, digital ecosystems, intelligent transport systems, e-services, and online social collaboration. This paper proposes a 4-Tier, Distributed, Open, and Service-Oriented Architecture
for building Smart Cities. It is a 4-Tier architecture comprising Presentation, Middleware, Service, and Data tiers. It exploits Distributed computing as it is made up of small computational units operating over distant machines. It is open due to its scalable and extendable architecture,
and it is Service-based as it is composed of granular, interoperable, and heterogeneous microservices. At the core of the proposed architecture is the middleware, which provides a Standardization and Communication Language, an Application Programming Interface, a Service Registry, and Security Services.
All in all, the proposed architecture could prove to be a role model for building sustainable, interoperable, scalable, agile, open, and collaborative Smart Cities for the 21st century. Future research can improve upon the proposed architecture so that data intelligence can be integrated
into the middleware, allowing the system to infer, reason, and help in decision-making and problem-solving.
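To illustrate one middleware responsibility named above, the Service Registry, the following is a toy in-memory register/lookup sketch; it is not the paper's middleware, and the service names and endpoints are made up.

```python
# Toy sketch of a service registry (one middleware responsibility named above)
# as an in-memory register/lookup table. Names and endpoints are assumptions.
from dataclasses import dataclass

@dataclass
class ServiceRecord:
    name: str        # e.g. "traffic-monitoring"
    endpoint: str    # e.g. "http://city.example/api/traffic"
    version: str

class ServiceRegistry:
    def __init__(self):
        self._services = {}

    def register(self, record: ServiceRecord):
        self._services.setdefault(record.name, []).append(record)

    def lookup(self, name: str):
        """Return all registered endpoints for a service name."""
        return list(self._services.get(name, []))

registry = ServiceRegistry()
registry.register(ServiceRecord("traffic-monitoring",
                                "http://city.example/api/traffic", "1.0"))
print(registry.lookup("traffic-monitoring"))
```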
Paper 2: Youssef Bassil, “Standard Protocols for Heterogeneous P2P Vehicular Networks”, International Journal of Trend in Scientific Research and Development, vol. 3, no. 3, pp. 698-703, 2019.
[pdf]
Abstract: Vehicular Communication Systems are a developing form of network in which moving vehicles and roadside units are the main communicating nodes. In such networks, vehicular nodes provide information to other nodes via Vehicle-to-Vehicle communication protocols.
A vehicular communication system can be used to support smart road applications such as accident and traffic congestion avoidance, collision warning forwarding, forensic accident assistance, crime scene investigation, and alert notification. However, current Vehicular Communication Systems
suffer from many issues and challenges, one of which is their poor interoperability as they lack standardization due to the inconsistent technologies and protocols they use. This paper proposes several standard protocols and languages for P2P vehicular networks that are built using heterogeneous
technologies and platforms. These standards consist of three protocols: a Standard Communication Protocol which enables the interoperable operation between the heterogeneous nodes of a P2P Vehicular network; an Autonomous Peers Integration Protocol which enables the self-integration and
self-disintegration of functionalities; and a Standard Information Retrieval Protocol which allows the P2P network to be queried using a standard high-level language. In the experiments, a case study was presented as a proof of concept which demonstrated the feasibility of the proposed protocols
and that they can be used as a standard platform for data exchange in P2P Vehicular Communication Systems. As future work, Service-oriented architectures for vehicular networks are to be investigated while addressing security issues such as confidentiality, integrity, and availability.
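Because the abstract does not specify the message layout of the Standard Communication Protocol, the sketch below uses a purely hypothetical JSON envelope to illustrate how heterogeneous vehicular nodes could exchange standardized messages; every field name is an assumption.

```python
# Hypothetical sketch of a standardized message envelope for heterogeneous
# vehicular nodes; the actual field layout of the paper's Standard
# Communication Protocol is not given, so every field here is an assumption.
import json, time, uuid

def make_message(sender_id, msg_type, payload):
    return json.dumps({
        "message_id": str(uuid.uuid4()),
        "timestamp": time.time(),
        "sender": sender_id,          # vehicle or roadside-unit identifier
        "type": msg_type,             # e.g. "collision-warning"
        "payload": payload,           # free-form, type-specific body
    })

def parse_message(raw):
    msg = json.loads(raw)
    # A receiving node only needs the standard envelope fields to interoperate.
    return msg["type"], msg["payload"]

raw = make_message("vehicle-42", "collision-warning", {"lat": 33.89, "lon": 35.50})
print(parse_message(raw))
```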
19. BDRP2019: Big Data Research Project
Subject Area: Big Data, Software Engineering
Date: 2019
Status: Completed
Funding: LACSC
Description: The aim of this research project was to develop novel methods for processing Big Data using a memory-based, multi-processing, one-server architecture operating in a non-distributed computing environment.
Findings & Publications: The results were
three research papers published in
international refereed journals:
Paper 1: Youssef Bassil, “Memory-Based Multi-Processing Method for Big Data Computation”, International Journal of Advanced Research and Publications, vol. 3, no. 3, pp. 141-146, 2019.
[pdf]
Abstract: The evolution of the Internet and computer applications has generated colossal amounts of data, referred to as Big Data: huge-volume, high-velocity, and variable datasets that need to be managed at the right speed and within
the right time frame to allow real-time data processing and analysis. Several Big Data solutions have been developed; however, they are all based on distributed computing, which can sometimes be expensive to build, manage, troubleshoot, and secure. This paper proposes a novel method for
processing Big Data using a memory-based, multi-processing, one-server architecture. It is memory-based because data are loaded into memory prior to processing. It is multi-processing because it leverages the power of parallel programming using shared memory and multiple
threads running over several CPUs in a concurrent fashion. It is one-server because it only requires a single server that operates in a non-distributed computing environment. The foremost advantages of the proposed method are high performance, low cost, and ease of management.
The experiments conducted showed outstanding results as the proposed method outperformed other conventional methods that currently exist on the market. Further research can improve upon the proposed method so that it supports message passing between its different processes using
remote procedure calls among other techniques.
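A minimal sketch of the memory-based, multi-process idea on a single server is shown below: the dataset is loaded once into shared memory and several worker processes scan disjoint chunks concurrently. It is an illustration only, not the paper's engine; the dataset size, worker count, and the summation workload are assumptions.

```python
# Minimal sketch: load a dataset into shared memory once and let several
# worker processes on one server sum disjoint chunks concurrently.
import numpy as np
from multiprocessing import Pool, shared_memory

N, WORKERS = 10_000_000, 4

def partial_sum(args):
    shm_name, start, stop = args
    shm = shared_memory.SharedMemory(name=shm_name)        # attach, no copy
    data = np.ndarray((N,), dtype=np.float64, buffer=shm.buf)
    result = float(data[start:stop].sum())
    shm.close()
    return result

if __name__ == "__main__":
    shm = shared_memory.SharedMemory(create=True, size=N * 8)
    data = np.ndarray((N,), dtype=np.float64, buffer=shm.buf)
    data[:] = np.random.rand(N)                             # load the data once
    bounds = np.linspace(0, N, WORKERS + 1, dtype=int)
    chunks = [(shm.name, int(a), int(b)) for a, b in zip(bounds, bounds[1:])]
    with Pool(WORKERS) as pool:                             # one process per chunk
        total = sum(pool.map(partial_sum, chunks))
    print(f"total = {total:.2f}")
    shm.close(); shm.unlink()
```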
Paper 2: Youssef Bassil, “Software Engineering Approach For Designing Retail Information Systems”, International Journal of Scientific & Engineering Research, vol. 10, no. 5, pp. 201-207, 2019.
[pdf]
Abstract: Software Engineering is an engineering discipline that is concerned with all aspects of software production including both scientific and technological knowledge, methods, design, implementation, testing, and documentation of software. Today, a
successful software project is initially designed by individuals who follow well-defined engineering approaches to problem-solving. This paper discusses the design of an information system for a computer retail store using thorough and strict software engineering practices
and principles. The Waterfall software development life cycle model is exploited to analyze, design, and build the proposed system by following a development process made up of a sequence of phases that must be executed in order. Furthermore, the functional and non-functional
requirements, in addition to the business rules, project scheduling and planning, and design specifications, are to be discussed. In the design specifications, several detailed illustrations are presented; they include drawings, diagrams, and schemas, including but not limited to Viewpoints,
Context Models, Use-Cases, Data Flows, and User/System Interactions. As future work, the implementation phase is to be tackled, showing how the design specifications can be transformed into tangible software, databases, and components through algorithm writing, coding, and initial deployment.
Paper 3: Youssef Bassil, “A Digital Forensics Framework For Facebook Activity Logs”, IOSR Journal of Computer Engineering, vol. 21, no. 2, pp. 12-18, 2019.
[pdf]
Abstract: Facebook is one of the most widely used social networks, with over two billion active users. According to recent surveys, five new users are created on Facebook every second, of which 3.6% are fake. Fake users are generally created to hide people's real identities;
nonetheless, they are sometimes created to commit illegal activities and cybercrimes. Facebook has lately introduced to its platform a feature called “Activity Logs”. It is a tool that lists all online activities performed by a particular user on his Facebook account, including posts, comments,
likes, tags, friends added, connections made, locations visited, and people searched for. As a result, Facebook Activity Logs can represent valuable forensic evidence, as they maintain a history of a user's online behavior. This paper proposes a framework for formalizing, processing, and analyzing
Facebook Activity Logs in a digital forensics context. It comprises four processes: 1) an ontology which formally represents the knowledge contained in the Facebook Activity Logs domain using OWL or RDF, 2) an automated data extractor which extracts Activity Logs data into structured XML datasets,
3) a data visualization model with data mining and Social Network Analysis (SNA) features which discovers intelligence, patterns, and trends from the digital evidence extracted from the Facebook Activity Logs, and 4) a query language which provides broad retrieval capabilities for searching the
acquired digital evidence. The experiments conducted demonstrated how the Vector Space Model and the Cosine Similarity metric can be used to classify Facebook users' comments as either malicious or innocent.
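The classification step mentioned in the last sentence can be sketched as follows: comments are turned into term-frequency vectors and compared, via cosine similarity, against a small reference set of known-malicious comments. The reference phrases and the 0.5 threshold are illustrative assumptions, not the paper's actual model.

```python
# Minimal sketch of vector-space classification with cosine similarity.
# Reference phrases and the 0.5 threshold are illustrative assumptions.
from collections import Counter
import math

def tf_vector(text):
    return Counter(text.lower().split())          # simple term-frequency vector

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in set(a) & set(b))
    norm = math.sqrt(sum(v * v for v in a.values())) * \
           math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

MALICIOUS_REFERENCE = [tf_vector("i will hack your account"),
                       tf_vector("send money or else")]

def classify(comment, threshold=0.5):
    vec = tf_vector(comment)
    score = max(cosine(vec, ref) for ref in MALICIOUS_REFERENCE)
    return "malicious" if score >= threshold else "innocent"

print(classify("I will hack your account tonight"))   # -> malicious
print(classify("Happy birthday, have a great day"))   # -> innocent
```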
20. PPARP2019: Parallel Programming Algorithms Research Project
Subject Area: Algorithms, Parallel Programming
Date: 2019
Status: Completed
Funding: LACSC
Description: The aim of this research project was to implement several computational algorithms using parallel programming techniques and distributed message passing. The algorithms to be investigated were the Mandelbrot set, Bucket Sort, Monte Carlo, Grayscale Image Transformation, and Insertion Sort, among others.
Findings & Publications: The results were
two research papers published in
international refereed journals:
Paper 1: Youssef Bassil, “Implementation of Combinatorial Algorithms using Optimization Techniques”, International Journal of Trend in Scientific Research and Development, vol. 3, no. 3, pp. 660-666, 2019.
[pdf]
Abstract: In theoretical computer science, combinatorial optimization problems are about finding an optimal item from a finite set of objects. Combinatorial optimization is the process of searching for the maxima or minima of an objective function whose domain is a discrete
and large configuration space. It often involves determining how to efficiently allocate the resources used to find solutions to mathematical problems. Applications of combinatorial optimization include determining the optimal way to deliver packages in logistics applications, determining
a taxi's best route to reach a destination address, and determining the best allocation of jobs to people. Some common problems involving combinatorial optimization are the Knapsack problem, the Job Assignment problem, and the Travelling Salesman problem. This paper proposes three new optimized
algorithms for solving three combinatorial optimization problems, namely the Knapsack problem, the Job Assignment problem, and the Traveling Salesman problem. The Knapsack problem is about finding the most valuable subset of items that fit into the knapsack. The Job Assignment problem is about
assigning a person to a job with the lowest total cost possible. The Traveling Salesman problem is about finding the shortest tour through a given set of cities. Each problem is tackled separately: first, the design is proposed, then the pseudocode is created
and its time complexity analyzed. Finally, the algorithm is implemented using a high-level programming language. As future work, the proposed algorithms are to be parallelized so that they can execute in multiprocessing environments, making their execution faster and more scalable.
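As the abstract does not reproduce the proposed optimizations, the snippet below shows a standard dynamic-programming solution to the first of the three problems, the 0/1 Knapsack, purely as an illustration of the kind of algorithm discussed; the item data are made up.

```python
# Standard 0/1 Knapsack dynamic program, shown as an illustration of the first
# of the three problems discussed; it is not the paper's optimized algorithm.
def knapsack(values, weights, capacity):
    """Return the maximum total value of items fitting within `capacity`.
    Time complexity O(n * capacity), space O(capacity)."""
    best = [0] * (capacity + 1)
    for value, weight in zip(values, weights):
        # Iterate capacities downwards so each item is used at most once.
        for c in range(capacity, weight - 1, -1):
            best[c] = max(best[c], best[c - weight] + value)
    return best[capacity]

# Made-up example data: the optimal subset is items 2 and 3, worth 220.
print(knapsack(values=[60, 100, 120], weights=[10, 20, 30], capacity=50))  # -> 220
```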
Paper 2: Youssef Bassil, “Implementation of Computational Algorithms using Parallel Programming”, International Journal of Trend in Scientific Research and Development, vol. 3, no. 3, pp. 704-710, 2019.
[pdf]
Abstract: Parallel computing is a type of computation in which many computations are performed concurrently, often by dividing large problems into smaller ones that execute independently of each other. There are several types of parallel computing. The first is the
shared memory architecture, which harnesses the power of multiple processors and multiple cores on a single machine and uses program threads and shared memory to exchange data. The second is the distributed architecture, which harnesses the power of multiple machines
in a networked environment and uses message passing so that processes can communicate their actions to one another. This paper implements several computational algorithms using parallel programming techniques, namely distributed message passing. The algorithms are the Mandelbrot set, Bucket Sort, Monte Carlo, Grayscale
Image Transformation, Array Summation, and Insertion Sort algorithms. All these algorithms are to be implemented using C#.NET and tested in a parallel environment using the MPI.NET SDK and the DeinoMPI API. Experiments conducted showed that the proposed parallel algorithms have faster execution times than
their sequential counterparts. As future work, the proposed algorithms are to be redesigned to operate on shared memory multi-processor and multi-core architectures.
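The paper implements its algorithms with C#.NET and MPI.NET; purely for illustration, the Python sketch below conveys the same master/worker message-passing pattern for the Monte Carlo algorithm, using multiprocessing pipes in place of MPI messages.

```python
# Sketch of the master/worker message-passing pattern behind the Monte Carlo
# estimate of pi. The paper uses C#.NET with MPI.NET; multiprocessing pipes
# stand in for MPI messages here purely for illustration.
import random
from multiprocessing import Process, Pipe

def worker(conn, samples, seed):
    rng = random.Random(seed)
    hits = sum(1 for _ in range(samples)
               if rng.random() ** 2 + rng.random() ** 2 <= 1.0)
    conn.send(hits)          # "message passing": report the partial count
    conn.close()

if __name__ == "__main__":
    workers, samples_each = 4, 250_000
    pipes, procs = [], []
    for i in range(workers):
        parent, child = Pipe()
        p = Process(target=worker, args=(child, samples_each, i))
        p.start()
        pipes.append(parent)
        procs.append(p)
    total_hits = sum(conn.recv() for conn in pipes)   # gather the partial results
    for p in procs:
        p.join()
    print("pi ~", 4 * total_hits / (workers * samples_each))
```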
21. APLRP2019: Arabic Programming Language Research Project
Subject Area: Programming Languages, Compiler Design
Date: 2019
Status: Completed
Funding: LACSC
Description: The aim of this research project was to design and create an easy-to-learn, simple-to-use, yet powerful Arabic programming language for developing computer applications using the Arabic language.
Findings & Publications: The results were
two research papers published in
international refereed journals:
Paper 1: Youssef Bassil, “Phoenix: The Arabic Object-Oriented Programming Language”, International Journal of Computer Trends and Technology, vol. 67, no. 2, pp. 7-11, 2019.
[pdf]
Abstract: A computer program is a set of electronic instructions executed from within the computer’s memory by the computer's central processing unit. Its purpose is to control the functionalities of the computer allowing it to perform various tasks. Basically,
a computer program is written by humans using a programming language. A programming language is the set of grammatical rules and vocabulary that governs the correct writing of a computer program. In practice, the majority of the existing programming languages are written in
English-speaking countries and thus use the English language to express their syntax and vocabulary. However, many other programming languages have been written in non-English languages, for instance, Chinese BASIC, Chinese Python, the Russian Rapira, and the Arabic Loughaty.
This paper discusses the design and implementation of a new programming language called Phoenix. It is a General-Purpose, High-Level, Imperative, Object-Oriented, and Compiled Arabic programming language that uses the Arabic language for its syntax and vocabulary. The core of Phoenix is a
compiler system made up of six components: the preprocessor, the scanner, the parser, the semantic analyzer, the code generator, and the linker. The experiments conducted illustrated several powerful features of the Phoenix language, including functions, while-loops,
and arithmetic operations. As future work, more advanced features are to be developed, including inheritance, polymorphism, file processing, graphical user interfaces, and networking.
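To illustrate just the scanner stage of the six-component pipeline, the toy sketch below tokenizes a line of source text; since the abstract does not list Phoenix's actual Arabic keyword set, the two keywords mapped here are hypothetical examples.

```python
# Toy sketch of the scanner stage only (one of the six compiler components
# listed above). Phoenix's real Arabic keyword set is not given in the
# abstract, so the two keywords mapped below are hypothetical examples.
import re

KEYWORDS = {"اذا": "IF", "اطبع": "PRINT"}   # hypothetical Arabic keywords

TOKEN_SPEC = [
    ("NUMBER", r"\d+"),
    ("IDENT",  r"[^\W\d]\w*"),     # Unicode-aware identifier (covers Arabic letters)
    ("OP",     r"[+\-*/=()<>]"),
    ("SKIP",   r"\s+"),
]
TOKEN_RE = re.compile("|".join(f"(?P<{name}>{pattern})" for name, pattern in TOKEN_SPEC))

def scan(source):
    for match in TOKEN_RE.finditer(source):
        kind, text = match.lastgroup, match.group()
        if kind == "SKIP":
            continue
        if kind == "IDENT" and text in KEYWORDS:
            kind = KEYWORDS[text]              # promote identifiers to keywords
        yield kind, text

print(list(scan("اطبع (42 + 7)")))   # -> PRINT, OP, NUMBER, OP, NUMBER, OP tokens
```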
Paper 2: Youssef Bassil, “Compiler Design for Legal Document Translation in Digital Government”, International Journal of Engineering Trends and Technology, vol. 67, no. 3, pp. 100-104, 2019.
[pdf]
Abstract: One of the main purposes of a computer is automation. In fact, automation is the technology by which a manual task is performed with minimal or zero human assistance. Over the years, automation has proved to reduce operation cost and maintenance time, in
addition to increasing system productivity, reliability, and performance. Today, most computerized automation is done by a computer program, a set of instructions executed from within the computer's memory by the computer's central processing unit to control the computer's
various operations. This paper proposes a compiler program that automates the validation and translation of input documents written in the Arabic language into XML output files that can be read by a computer. The input document is by nature unstructured plain text, as it is
written manually by people, while the generated output is a structured, machine-readable XML file. The proposed compiler program is actually part of a bigger project related to digital government and is meant to automate the processing and archiving of juridical data and documents.
In essence, the proposed compiler program is composed of a scanner, a parser, and a code generator. Experiments showed that such automation practices could prove to be a starting point for a future digital government platform for the Lebanese government. As further research, other types
of juridical documents are to be investigated, mainly those that require error detection and correction.
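The structural idea of the translation, turning manually written text into machine-readable XML, can be sketched as follows; the real compiler parses Arabic juridical documents, so the field names, tags, and "field: value" layout below are hypothetical.

```python
# Toy sketch of the structural idea only: turn manually written "field: value"
# lines into a machine-readable XML document. The real compiler parses Arabic
# juridical documents; field names and tag layout here are hypothetical.
import xml.etree.ElementTree as ET

def to_xml(plain_text):
    root = ET.Element("document")
    for line in plain_text.strip().splitlines():
        if ":" not in line:
            continue                        # skip lines this toy "parser" cannot handle
        field, value = (part.strip() for part in line.split(":", 1))
        ET.SubElement(root, field.replace(" ", "_")).text = value
    return ET.tostring(root, encoding="unicode")

sample = """case number: 1042
court: Beirut First Instance
ruling date: 2019-03-14"""
print(to_xml(sample))
# -> <document><case_number>1042</case_number><court>Beirut First Instance</court>...</document>
```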
Software: The results were the development of Phoenix, a High-Level, Imperative, Object-Oriented, Compiled, Arabic computer programming language. Phoenix is a C# syntax-like language that uses the Arabic language to express its syntax, keywords, variable and function names, and other declarations and programming structures. Phoenix features global and local variable Scopes, Conditional Structures, Control Structures, Data Structures, Function declaration, Arithmetic calculation, Classes, Objects, Inheritance, and Polymorphism.
A. Independent Research Studies & Projects
Subject Area: Political Science
Date: 2012
Funding: LACSC
Book 1: Youssef Bassil, “The 2003 Iraq War: Operations, Causes, and Consequences”, LAP LAMBERT Academic Publishing, ISBN13: 978-3659299612, 2012.
Description: The Iraq war is the Third Gulf War, initiated with the military invasion of Iraq in March 2003 by the United States of America and its allies to put an end to the Baath Party of Saddam Hussein, the fifth President of Iraq and a prominent leader of the Baath party in the Iraqi region. The chief cause of this war was the Global War on Terrorism (GWOT) that George W. Bush declared in response to the attacks of September 11. The events of this war were brutal and severe for both parties, as it resulted in the defeat of the Iraqi army and the deposition and execution of Saddam Hussein, in addition to thousands of casualties and billions of dollars in expenses. This book discusses the overt as well as the covert reasons behind the Iraq war, in addition to its different objectives. It also discusses the course of the war and its aftermath, including the consequences of the war on the political, economic, social, and humanitarian levels.
Book 2: Youssef Bassil, “Lebanon under Syrian Hegemony Post Lebanese Civil War”, LAP LAMBERT Academic Publishing, ISBN13: 978-3659299667, 2012.
Description: This book examines carefully the Lebanese-Syrian relations upon the end of the French mandate, and prior to, during, and after the Lebanese civil war. It systematically discusses the Syrian military intervention in Lebanon during the Lebanese civil war and its consequences for Lebanon as a sovereign country, which led to numerous hegemonizing, Syrian-biased joint agreements, accords, pacts, and treaties, in addition to a Syrian-controlled puppet regime installed in Lebanon whose impact continues to be seen to the present day in matters of freedom of speech, human rights, international law, and political repression. This book approaches the problem of Syrian hegemony over Lebanon from the theory of political hegemony in modern political economy, which analyses and evaluates the control of wealth, the control of resources and raw materials, and the control of the market exerted by the Syrian government over Lebanon.
Paper 1: Youssef Bassil, “The 2003 Iraq War: Operations, Causes, and Consequences”, Journal of Humanities and Social Science, vol. 4, no. 5, pp. 29-47, 2012.
[pdf]
Abstract: The Iraq war is the Third Gulf War, initiated with the military invasion of Iraq in March 2003 by the United States of America and its allies to put an end to the Baath Party of Saddam Hussein, the fifth President of Iraq and a prominent leader of the Baath party in the Iraqi region. The chief cause of this war was the Global War on Terrorism (GWOT) that George W. Bush declared in response to the attacks of September 11. The events of this war were brutal and severe for both parties, as it resulted in the defeat of the Iraqi army and the deposition and execution of Saddam Hussein, in addition to thousands of casualties and billions of dollars in expenses. This paper discusses the overt as well as the covert reasons behind the Iraq war, in addition to its different objectives. It also discusses the course of the war and its aftermath. This sheds light on the consequences of the war on the political, economic, social, and humanitarian levels. Finally, the true intentions of the war are speculated upon.
Paper 2: Youssef Bassil, “Syrian Hegemony over Lebanon after the Lebanese Civil War”, Journal of Science, vol. 2, no. 3, pp. 136-147, 2012.
[pdf]
Abstract: This paper examines carefully the Lebanese-Syrian relations upon the end of the French mandate, and prior to, during, and after the Lebanese civil war. It systematically discusses the Syrian military intervention in Lebanon during the Lebanese civil war and its consequences for Lebanon as a sovereign country, which led to numerous hegemonizing, Syrian-biased joint agreements, accords, pacts, and treaties, in addition to a Syrian-controlled puppet regime installed in Lebanon whose impact continues to be seen to the present day in matters of freedom of speech, human rights, international law, and political repression. This paper approaches the problem of Syrian hegemony over Lebanon from the theory of political hegemony in modern political economy, which analyses and evaluates the control of wealth, the control of resources and raw materials, and the control of the market exerted by the Syrian government over Lebanon.
Subject Area: Socioeconomic & Geopolitics
Date: 2012
Funding: LACSC
Book 1: Youssef Bassil, “Water in the Middle East: A Socioeconomic & Geopolitical Approach”, LAP LAMBERT Academic Publishing, ISBN13: 978-3659299520, 2012.
Description: Water is the most precious and valuable natural resource in the world, vital for the growth of society, economy, agriculture, and industry. This book deals with the socioeconomic and geopolitical water problems in the Middle East. It is an analytical and comprehensive study from a socioeconomic and geopolitical perspective that examines the water status-quo, facts, challenges, problems, and solutions in several Middle Eastern countries including Lebanon, Jordan, Egypt, and Palestine. The different topics that are discussed in this book are the water resources of the Middle East and their management; water problems, their challenges, and their possible solutions; climate change and its impact on the economy and the social life; water geopolitics; international laws for water exploitation during the war; shared water and their legal framework; water wars and conflicts; among many other topics.
Paper 1: Youssef Bassil, “The Socioeconomic Water Problems & Challenges In the Middle East”, Journal of Advances in Applied Economics and Finance, vol. 3, no. 3, pp. 148-159, 2012.
[pdf]
Abstract: Water is the most precious and valuable natural resource in the world, vital for the growth of society, economy, agriculture, and industry. This paper deals with the socio-economic water problems in the Middle East. It is an analytical and comprehensive study from a socio-economic perspective that examines the water status-quo, facts, challenges, problems, and solutions in several Middle Eastern countries including Lebanon, Jordan, Egypt, and Palestine. The different topics that are discussed in this paper are the water resources in the Middle East and their management including surface and ground water, water supply and demands, rainfalls and precipitations, rivers and basin, and water hydrological properties; the water problems and their challenges including water pollution, shortage of supply, and scarcity of rainfalls; the possible water solutions including water reuse, desalination, and reduction of population growth; the climate change and its impact on the economy and the social life; among many other issues and topics.
Paper 2: Youssef Bassil, “Water Geopolitics in the Middle East”, Journal of Science, vol. 2, no. 3, pp. 70-83, 2012.
[pdf]
Abstract: According to many experts, water is the new gold of the century, as water crises are increasingly being observed throughout the world and billions of dollars are being spent to solve water shortage problems, particularly in Middle Eastern countries. As the countries of the Middle East are generally scarce in water supplies, they will try to use their economic, political, and military power to seize neighboring lands that are rich in water resources such as surface and ground water, rivers, and basins. This paper deals with the geopolitical water problems and challenges in the Middle East. It is an analytical study that examines the geopolitical issues related to water in several Middle Eastern countries including Lebanon, Jordan, Egypt, Israel, and Palestine. It sheds light on the relation between the geographical characteristics of the water capitals in the Middle East and national and regional politics, disputes, and conflicts. Furthermore, the international laws for water exploitation, including the humanitarian laws, the Geneva Convention, and the Helsinki Rules, in addition to other legislative rules and resolutions pertaining to water conservation and protection, are all to be examined. Another issue discussed is the problem of water sharing between the different riparian states and the legislative framework that governs them. This would pave the way to discussing the various conflicts and wars waged to seize water wealth in the Middle East, stressing the different water clashes between Israel, Lebanon, Syria, Palestine, and Jordan.
Subject Area: Anthropology
Date: 2012
Funding: LACSC
Paper 1: Youssef Bassil, “Spiritual Asia: An Anthropological Review”, ARPN Journal of Science and Technology, vol. 2, no. 10, pp. 886-891, 2012.
[pdf]
Abstract: Asia is the largest and most densely inhabited continent in the world, comprising a wide variety of ethnic groups and races, each of which follows a diversity of religions, beliefs, and rituals. Asia is regarded as the origin of the world's mainstream religions, including Christianity, Islam, Judaism, Hinduism, and Buddhism, among others. This paper discusses, from an anthropological perspective, the major Far Eastern religions in relation to each other, shedding light on their origins and histories, their different religious beliefs and doctrines, their sacred rituals, and their practices across cultures. The East Asian religions tackled are Hinduism, Buddhism, Sikhism, Confucianism, Jainism, Taoism, and Zoroastrianism.
Subject Area: Computer Science
Date: 2012
Funding: LACSC
Book 1: Youssef Bassil, “Fast Algorithms for Arithmetic Computations of Big Numbers: The Fastest Algorithms for Computing Big Numbers”, LAP LAMBERT Academic Publishing, ISBN13: 978-3659246753, 2012.
Description: This book is meant for computer scientists, researchers, practitioners, and students looking for a fast algorithm for performing arithmetic computations over big-integer numbers. As it provides complete pseudo-code, implementation, and source-code, this book is also a great reference for application developers to build big numbers-capable applications. In fact, four new algorithms are proposed in this book for handling arithmetic addition and subtraction of big-integer numbers whose length is much greater than 64 bits. The algorithms’ execution runtime is outstanding as they outperform other existing solutions by wide margins.
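For context, the snippet below shows the basic limb-based (base 2^32) schoolbook addition that such algorithms improve upon; it is not one of the book's four optimized algorithms, and the example operands are made up.

```python
# Basic limb-based (base 2^32) addition of big integers stored as lists of
# 32-bit digits, least significant limb first. This is the schoolbook scheme
# the book improves upon, not the book's four optimized algorithms.
BASE = 1 << 32

def big_add(a, b):
    """a, b: lists of limbs in [0, 2**32), least significant first."""
    result, carry = [], 0
    for i in range(max(len(a), len(b))):
        total = (a[i] if i < len(a) else 0) + (b[i] if i < len(b) else 0) + carry
        result.append(total % BASE)        # keep the low 32 bits
        carry = total // BASE              # propagate the overflow
    if carry:
        result.append(carry)
    return result

# (2**64 - 1) + 1 = 2**64, i.e. limbs [0, 0, 1]
print(big_add([0xFFFFFFFF, 0xFFFFFFFF], [1]))   # -> [0, 0, 1]
```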
Book 2: Joe Saliby, “Beyond Computer Graphics: A Programmer's Perspective”, LAP LAMBERT Academic Publishing, ISBN13: 978-3659418891, 2013.
Description: Digital Image Processing (DIP) is the use of computer programs to carry out image processing tasks on digital images. Currently, DIP has many techniques, methods, algorithms, and applications, making it worth having a practical look at. This book examines carefully the most popular image processing algorithms that are standards in modern image processing applications such as Photoshop. The book is filled with source code, all written in the C# language, whose purpose is to match theory with reality. Furthermore, this book presents a complete overview of several hot topics including parallel processing, steganography, optical character recognition, and digital photography.
Paper 1: Paul Semaan, “Natural Language Generation: An Overview”, Journal of Computer Science & Research, vol. 1, no. 3, pp. 50-57, 2012.
[pdf]
Abstract: In this paper, we discuss the basic concepts and fundamentals of Natural Language Generation (NLG), a field in Natural Language Engineering that deals with the conversion of non-linguistic data into natural language. We will start our investigation by introducing the NLG system and its different types. We will also pinpoint the major differences between NLG and NLU, also known as Natural Language Understanding. Afterwards, we will shed light on the architecture of a basic NLG system and its advantages and disadvantages. Later, we will examine the different applications of NLG, showing a case study that illustrates how an NLG system operates from an algorithmic point of view. Finally, we will review some of the existing NLG systems, taken from the real world, together with their features.
Paper 2: Paul Semaan, “WiMAX Security: Problems & Solutions”, Journal of Computer Science & Research, vol. 1, no. 4, pp. 14-20, 2012.
[pdf]
Abstract: This paper is a survey discussing the WiMAX technology and its security features. The paper starts with the history of WiMAX, then it goes into reviewing its security features and properties such as data association and user authorization. Next, data encryption algorithms are to be examined including DES and AES. Finally, the various security threats and vulnerabilities that face WiMAX technology are to be discussed elaborately.
Paper 3: Joe Saliby, “Design & Implementation of Digital Image Transformation Algorithms”, International Journal of Trend in Scientific Research and Development, vol. 3, no. 3, pp. 623-631, 2019.
[pdf]
Abstract: In computer science, Digital Image Processing (DIP) is the use of computer hardware and software to perform image processing and computations on digital images. Generally, digital image processing requires the use of complex algorithms and can therefore be demanding, from a performance perspective, even for simple tasks. Many applications exist for digital image processing, one of which is Digital Image Transformation. Basically, Digital Image Transformation (DIT) is an algorithmic and mathematical function that converts one set of digital objects into another set after performing some operations. Some techniques used in DIT are image filtering; brightness, contrast, hue, and saturation adjustment; blending and dilation; histogram equalization; the discrete cosine transform; the discrete Fourier transform; and edge detection, among others. This paper proposes a set of digital image transformation algorithms that deal with converting digital images from one domain to another. The algorithms to be implemented are grayscale transformation, contrast and brightness adjustment, hue and saturation adjustment, histogram equalization, blurring and sharpening adjustment, blending and fading transformation, erosion and dilation transformation, and finally edge detection and extraction. As future work, some of the proposed algorithms are to be investigated with parallel processing, paving the way to make their execution faster and more scalable.
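Two of the listed transformations, grayscale conversion and brightness adjustment, can be sketched as follows; the luminance weights are the standard ITU-R BT.601 coefficients, and the brightness offset is an arbitrary example, not taken from the paper.

```python
# Sketch of two of the transformations listed above: grayscale conversion
# (standard BT.601 luminance weights) and brightness adjustment with clamping.
import numpy as np

def to_grayscale(rgb):
    """rgb: HxWx3 uint8 array -> HxW uint8 grayscale array."""
    weights = np.array([0.299, 0.587, 0.114])
    return (rgb.astype(np.float64) @ weights).clip(0, 255).astype(np.uint8)

def adjust_brightness(image, offset):
    """Add `offset` to every pixel, clamping the result to the 0-255 range."""
    return np.clip(image.astype(np.int16) + offset, 0, 255).astype(np.uint8)

demo = np.random.randint(0, 256, size=(4, 4, 3), dtype=np.uint8)
print(adjust_brightness(to_grayscale(demo), offset=40))
```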
Paper 4: Joe Saliby, “Survey on Natural Language Generation”, International Journal of Trend in Scientific Research and Development, vol. 3, no. 3, pp. 618-622, 2019.
[pdf]
Abstract: NLG or Natural Language Generation is the process of constructing natural language outputs from nonlinguistic inputs. One of the central goals of NLG is to investigate how computer programs can be made to produce high-quality, expressive, uncomplicated, and natural language text from computer-internal sophisticated representations of information.
Subject Area: Technology
Date: 2014
Funding: LACSC
Book 1: Joe Saliby, “Inventions Shaping our History”, CreateSpace Independent Publishing Platform, ISBN13: 978-1503121775, 2014.
Description: There are simple gadgets, sometimes considered primitive, that have improved the quality of our daily life over the past decades. Many of these gadgets can be categorized as scientific, medical, technological, or even linguistic inventions that have radically changed the course of human history. Furthermore, as we are living in an ever-evolving world, some promising inventions are on their way to seeing the light. They are anticipated to shape our present as well as our future. This book invites the reader to discover the top inventions that changed our world and that will change our future. The topics covered range from inventions tied to the rise of civilization, such as the emergence of language and agriculture, to the invention of the vaccine and the microprocessor. Other futuristic yet realizable inventions are to be discussed thoroughly, including but not limited to 3D TV, the invisibility cloak, flying cars, anti-smoking drugs, and artificial blood.
Subject Area: Psychology
Date: 2017
Funding: LACSC
Book 1: Samar Bassil, “Le Suicide Hysterique (French Edition)”, CreateSpace Independent Publishing Platform, ISBN13: 978-1542497220, 2017.
Description: This book deals with the object relation and its loss in individuals with a hysterical structure. The concept of narcissism will be addressed in order to understand the effects of the loss of the love object on narcissism in individuals with a neurotic structure. The meaning of hysterical symptoms, and in particular the suicide attempt, will also be addressed.