derderi

Eldirdiri Fadol Ibrahim

Recently Published

How to Do and Interpret the Results of a Meta-Analysis
Meta-analysis is a systematic method for synthesizing quantitative results of different empirical studies regarding the effect of an independent variable (or determinant, or intervention, or treatment) on a defined outcome (or dependent variable). Mainly developed in medical and psychological research as a tool for synthesizing empirical information about the outcomes of a treatment, meta-analysis is now increasingly used in the social sciences as a tool for hypothesis testing. However, the assumptions underlying meta-analytic hypothesis testing in the social sciences will usually not be met under real-life conditions. This is the reason why meta-analysis is increasingly conducted with a different aim, based on more realistic assumptions. That aim is to explore the dispersion of effect sizes.
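The summary above is conceptual; a minimal sketch of a random-effects meta-analysis in R may make it concrete. It uses the metafor package and its bundled dat.bcg dataset, which are assumptions of this example rather than anything named in the original; the tau^2 estimate in the output speaks to the dispersion of effect sizes mentioned above.

```r
# A minimal sketch of a random-effects meta-analysis with metafor,
# using the package's bundled BCG vaccine trials (illustrative choice)
library(metafor)
data(dat.bcg)
# compute log risk ratios and their sampling variances from 2x2 counts
dat <- escalc(measure = "RR", ai = tpos, bi = tneg, ci = cpos, di = cneg,
              data = dat.bcg)
res <- rma(yi, vi, data = dat)   # random-effects model
summary(res)                     # pooled effect plus tau^2 (dispersion)
forest(res)                      # forest plot of individual effect sizes
```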
Credit Risk Return
Financial Risk Analytics involves the systematic use of statistical and mathematical techniques to assess and manage financial risks in contexts such as banking, investment management, insurance, and corporate finance. The field is crucial for organizations to understand and mitigate potential financial losses stemming from market fluctuations, credit defaults, operational failures, and other unforeseen events.

Types of financial risks:
- Market risk: arises from changes in market prices, such as stocks, bonds, commodities, and currencies.
- Credit risk: potential losses due to default by borrowers or counterparties.
- Operational risk: risks from internal processes, systems, human errors, and external events.
- Liquidity risk: concerns the ability to quickly convert assets into cash without loss.

Importance of financial risk analytics:
- Risk measurement: quantifies risks using models like Value-at-Risk (VaR), stress testing, and scenario analysis.
- Risk management: helps in devising strategies to mitigate risks, allocate capital effectively, and comply with regulatory requirements.
- Decision support: provides insights for investment decisions, hedging strategies, and overall financial planning.

Techniques and models (a small sketch of two of these follows below):
- Statistical analysis: utilizes probability distributions, correlation analysis, and regression to model risks.
- Machine learning: applies algorithms to identify patterns, forecast market movements, and detect anomalies.
- Simulation methods: Monte Carlo simulation for assessing the impact of uncertain events on portfolios.
- Optimization techniques: mathematical models to optimize asset allocation and risk-adjusted returns.

Tools and software:
- Risk management systems: integrated platforms for risk assessment, reporting, and compliance.
- Data analytics platforms: big data frameworks and analytics tools for processing large datasets.
- Visualization tools: dashboards and reporting tools for visual representation of risk metrics.

Challenges:
- Data quality: ensuring accuracy and reliability of the data inputs to risk models.
- Model validation: assessing the effectiveness and reliability of risk models under various scenarios.
- Regulatory compliance: adhering to requirements such as Basel III, Solvency II, and IFRS 9.
- Dynamic environment: adapting to changing market conditions and emerging risks.

Applications:
- Financial institutions: banks, investment firms, and insurance companies use risk analytics to manage portfolios and assess creditworthiness.
- Corporate finance: helps in managing currency exposures, interest rate risks, and operational risks.
- Government and regulatory bodies: monitor systemic risks and enforce regulatory standards.

In conclusion, Financial Risk Analytics plays a pivotal role in modern finance by providing insights into potential risks, enabling proactive risk management strategies, and supporting informed decision-making in an increasingly complex financial landscape.
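Since the overview names Value-at-Risk and Monte Carlo simulation among the core techniques, here is a small R sketch of both on simulated returns; all numbers and parameter choices are illustrative assumptions, not data from the original document.

```r
# Illustrative sketch: historical VaR and a Monte Carlo simulation,
# both on synthetic daily returns (not real market data)
set.seed(1)
returns <- rnorm(1000, mean = 0.0005, sd = 0.01)

# Historical 1-day VaR: the loss not exceeded with 95% confidence
var_95 <- -quantile(returns, probs = 0.05)
cat("1-day 95% historical VaR:", round(var_95, 4), "\n")

# Monte Carlo: simulate 10,000 one-year return paths and inspect the tail
n_sims <- 10000
horizon <- 252
terminal <- replicate(n_sims, prod(1 + rnorm(horizon, 0.0005, 0.01)) - 1)
cat("5th percentile of simulated 1-year returns:",
    round(quantile(terminal, 0.05), 4), "\n")
```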
Credit Risk Returns
<!-- R Commander Markdown Template -->

Credit Risk Return
=======================

### Dr. Eldirdiri Fadol Ibrahim Fadol (Scientific Research Center)

### `r as.character(Sys.Date())`

```{r echo=FALSE}
# include this code chunk as-is to set options
knitr::opts_chunk$set(comment=NA, prompt=TRUE, out.width=750, fig.height=8, fig.width=8)
library(Rcmdr)
library(car)
library(RcmdrMisc)
```

```{r}
# Load necessary libraries
library(quantmod)
library(ggplot2)
```

```{r}
# Step 1: Fetch historical stock prices using the quantmod package
ticker <- "AAPL"  # Example: Apple Inc.
start_date <- "2021-01-01"
end_date <- "2021-12-31"
```

```{r}
getSymbols(ticker, from = start_date, to = end_date)
```

```{r}
# Step 2: Extract adjusted closing prices
stock_prices <- Ad(get(ticker))
```

```{r}
# Step 3: Calculate daily returns
daily_returns <- diff(log(stock_prices))
```

```{r}
# Step 4: Calculate key statistics
mean_return <- mean(daily_returns, na.rm = TRUE)
volatility <- sd(daily_returns, na.rm = TRUE)
```

```{r}
cat("Mean Daily Return:", mean_return, "\n")
cat("Volatility (Standard Deviation of Daily Returns):", volatility, "\n")
```

```{r}
# Step 5: Visualize daily returns
dates <- index(daily_returns)
returns_data <- data.frame(Date = as.Date(dates), Daily_Return = as.numeric(daily_returns))
```

```{r}
ggplot(returns_data, aes(x = Date, y = Daily_Return)) +
  geom_line(color = "blue") +
  labs(title = paste("Daily Returns of", ticker), x = "Date", y = "Daily Returns") +
  theme_minimal()
```

### Summarize Data Set: returns_data

```{r}
summary(returns_data)
```

### Normality Test: ~Daily_Return

```{r}
normalityTest(~Daily_Return, test="shapiro.test", data=returns_data)
```

```{r}
library(abind, pos=23)
```

```{r}
library(e1071, pos=24)
```

### Numerical Summaries: returns_data

```{r}
numSummary(returns_data[,"Daily_Return", drop=FALSE],
           statistics=c("mean", "sd", "IQR", "quantiles"),
           quantiles=c(0,.25,.5,.75,1))
```

### Single-Sample t-Test: Daily_Return

```{r}
with(returns_data, (t.test(Daily_Return, alternative = "two.sided",
                           mu = 0.0, conf.level = .95)))
```
Meat Production
This is just a start with the global beef production data.
Artificial Bee Colony Optimization
This is about Artificial Bee Colony (ABC) optimization, an algorithm proposed by Karaboga & Akay (2009). It is inspired by the foraging behavior of bees. There are three types of bees: employed, onlooker, and scout. Employed bees work by exploiting food sources. Onlooker bees work by looking for food sources better than those the employed bees found. Scout bees work by replacing abandoned food sources. Each candidate solution in the ABC algorithm is represented as a bee, and the bees move through three phases: employed, onlooker, and scout. To find the optimal solution, the algorithm follows these steps (a sketch in R follows the list):

1. Initialize the population randomly.
2. Employed bee phase: perform a local search with greedy selection for each candidate solution.
3. Onlooker bee phase: perform a local search with greedy selection for some candidate solutions, chosen in proportion to their fitness.
4. Scout bee phase: replace abandoned candidate solutions with random ones.
5. If a termination criterion is met (a maximum number of iterations or a sufficiently good fitness), exit the loop; otherwise return to the employed bee phase.
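The following is a minimal R sketch of the three phases described above, applied to minimizing a test function. The tuning values (n_bees, limit, max_iter) and the sphere objective are illustrative assumptions, not part of the original description.

```r
# Compact ABC sketch for minimizing fn over box constraints [lower, upper]
abc_optim <- function(fn, lower, upper, n_bees = 20, limit = 10, max_iter = 200) {
  d <- length(lower)
  pop <- matrix(runif(n_bees * d, lower, upper), n_bees, d, byrow = TRUE)
  fit <- apply(pop, 1, fn)
  trials <- rep(0, n_bees)
  # Local move: perturb one dimension relative to a random neighbour
  try_move <- function(i) {
    k <- sample(setdiff(seq_len(n_bees), i), 1)
    j <- sample(seq_len(d), 1)
    cand <- pop[i, ]
    cand[j] <- cand[j] + runif(1, -1, 1) * (pop[i, j] - pop[k, j])
    cand[j] <- min(max(cand[j], lower[j]), upper[j])
    cand
  }
  for (iter in seq_len(max_iter)) {
    # Employed bee phase: one greedy local search per food source
    for (i in seq_len(n_bees)) {
      cand <- try_move(i); f <- fn(cand)
      if (f < fit[i]) { pop[i, ] <- cand; fit[i] <- f; trials[i] <- 0 }
      else trials[i] <- trials[i] + 1
    }
    # Onlooker bee phase: better (lower-cost) sources attract more searches
    p <- max(fit) - fit + 1e-12
    p <- p / sum(p)
    for (s in seq_len(n_bees)) {
      i <- sample(seq_len(n_bees), 1, prob = p)
      cand <- try_move(i); f <- fn(cand)
      if (f < fit[i]) { pop[i, ] <- cand; fit[i] <- f; trials[i] <- 0 }
    }
    # Scout bee phase: abandon sources that stopped improving
    for (i in which(trials > limit)) {
      pop[i, ] <- runif(d, lower, upper)
      fit[i] <- fn(pop[i, ]); trials[i] <- 0
    }
  }
  list(par = pop[which.min(fit), ], value = min(fit))
}

# Usage: minimize the 3-dimensional sphere function
sphere <- function(x) sum(x^2)
abc_optim(sphere, lower = rep(-5, 3), upper = rep(5, 3))
```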
Fuzzy_Log dirdiri
For those who need to know more about classification with Fuzzy_Log. Eldirdiri Fadol, Cairo, 18/03/2023.
Market Basket Analysis Technique
# Load libraries
library(DBI)
library(dplyr)
library(dbplyr)
library(odbc)
require(RJDBC)
require(RODBC)
library(arules)
library(arulesViz)
library(plyr)   # needed for ddply() below

# Inspect the data and check for missing values
head(bread_basket, 10)
sum(is.na(bread_basket))

# Sort by transaction id
sorted <- bread_basket[order(bread_basket$Transaction), ]
sorted$Transaction <- as.numeric(sorted$Transaction)
str(sorted)

# Collapse the items of each transaction into one comma-separated string
itemList <- ddply(sorted, c("Transaction", "date_time"),
                  function(df1) paste(df1$Item, collapse = ","))
itemList$Transaction <- NULL
itemList$date_time <- NULL
itemList$period_day <- NULL
itemList$weekday_weekend <- NULL
colnames(itemList) <- c("itemList")

# Write the basket-format file and read it back as transactions
write.csv(itemList, "itemList.csv", quote = FALSE, row.names = TRUE)
head(itemList, 10)
str(itemList)
transaksi <- read.transactions("./itemList.csv", format = "basket", sep = ",", cols = 1)
transaksi

# Mine association rules with the apriori algorithm
basket_rules <- apriori(transaksi,
                        parameter = list(sup = 0.01, minlen = 3, conf = 0.1, target = "rules"))
print(length(basket_rules))
summary(basket_rules)
inspect(basket_rules)

# Visualize the rules and the most frequent items
plot(basket_rules, jitter = 0)
itemFrequencyPlot(transaksi, topN = 10)
plot(basket_rules, method = "graph")
plot(basket_rules, method = "paracoord")
Document
Arab GDP in COVID. By: Dr. Eldirdiri Fadol (Sudan), for Dr. Asma Salem (Libya)
This is the analysis of GDP indicators for the Arab countries during the COVID-19 disaster.
Covid.analytics for the Libyan doctor UNB UNB, for her article to be published later
This work was done for Dr. Unb Unb.
Covid19
dirdiri1
Covid19
Covid impact
NY COVID-19: My Own Job
Analysis
VAR in R. By: Dr. Eldirdiri Fadol Ibrahim Fadol
Estimating a VAR Model
=============

A VAR model can be used when the variables under study are I(1) but not cointegrated. The model is the one in equations ??????, but in differences, as specified in equations 8 and 9:

$$\Delta y_t = \beta_{11}\Delta y_{t-1} + \beta_{12}\Delta x_{t-1} + \nu_t^{\Delta y} \quad (8)$$
$$\Delta x_t = \beta_{21}\Delta y_{t-1} + \beta_{22}\Delta x_{t-1} + \nu_t^{\Delta x} \quad (9)$$

Let us look at the relationship between income and consumption based on the FRED dataset, where consumption and income are already in logs and the period is 1960:1 to 2009:4. Figure 13.2 shows that both series have a trend.

data("fred", package="PoEdata")
fred <- ts(fred, start=c(1960,1), end=c(2009,4), frequency=4)
ts.plot(fred[,"c"], fred[,"y"], type="l", lty=c(1,2), col=c(1,2))
legend("topleft", border=NULL, legend=c("c","y"), lty=c(1,2), col=c(1,2))

Are the two series cointegrated?

Acf(fred[,"c"])
Acf(fred[,"y"])
adf.test(fred[,"c"])
adf.test(diff(fred[,"y"]))

Figure 13.3 shows a long serial-correlation sequence; therefore, I will let R calculate the lag order in the ADF test. As the ADF and cointegration test results above show, both series are I(1) but they fail the cointegration test (the series are not cointegrated). (Please remember that the adf.test function uses a constant and a trend in the test equation; therefore, the critical values are not the same as those in the textbook. However, the test results should be the same most of the time.)

library(vars)
Dc <- diff(fred[,"c"])
Dy <- diff(fred[,"y"])
varmat <- as.matrix(cbind(Dc,Dy))
varfit <- VAR(varmat) # `VAR()` from package `vars`
summary(varfit)

The VAR() function, part of the vars package (Pfaff 2013), accepts the following main arguments: y = a matrix containing the endogenous variables of the VAR model, p = the desired lag order (the default is 1), and exogen = a matrix of exogenous variables. (VAR is a more powerful tool than is indicated here; please type ?VAR for more information.) The results of a VAR model are most useful for analyzing the response over time to shocks in the variables, which is the topic of the next section.

Impulse Responses and Variance Decompositions
========================

Impulse responses are best represented in graphs showing the responses of a VAR endogenous variable over time.

impresp <- irf(varfit)
plot(impresp)

The forecast variance decomposition estimates the contribution of a shock in each variable to the responses of both variables. Figure 13.5 shows that almost 100 percent of the variance in Dc is caused by Dc itself, while only about 80 percent of the variance in Dy is caused by Dy and the rest is caused by Dc. The R function fevd() in the vars package performs the forecast variance decomposition.

I will post here the link to the site where this work is published, insha'Allah. #Eldirdiri
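The text above mentions fevd() without showing a call; here is a one-line sketch, assuming the varfit object estimated earlier in this section:

```r
# Forecast error variance decomposition of the VAR fitted above
plot(fevd(varfit))
```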
VEC Model by Dr. Eldirdiri Fadol Ibrahim Fadol
You may find it difficult to install this package, so use the following (Eldirdiri):

install.packages("remotes")
remotes::install_github("ccolonescu/PoEdata")

What Is the Correct Specification of a VEC Model?
=========================

The corresponding VEC model is:

$$\Delta y_t = \alpha_{10} + \alpha_{11}(y_{t-1} - \beta_0 - \beta_1 x_{t-1}) + \nu_t^{\Delta y}$$
$$\Delta x_t = \alpha_{20} + \alpha_{21}(y_{t-1} - \beta_0 - \beta_1 x_{t-1}) + \nu_t^{\Delta x}$$

In this simple model, the only right-hand-side variable is the error-correction term. In long-run equilibrium, this term is zero. However, if the variables deviate from long-run equilibrium, the error-correction term is nonzero and each variable adjusts to partially restore the equilibrium relationship.

How Is Vector Error Correction Used in VEC Models?
==================================

In the first example, data on the real GDP of Australia and the United States are used to estimate a VEC model. We decided to use a vector error correction model because (1) the time series are not stationary in their levels but are stationary in their differences, and (2) the variables are cointegrated.

Why Do We Use VEC for Time Series?
========================

The VEC specification restricts the long-run behavior of the endogenous variables to converge to their cointegrating relationships while allowing a wide range of short-run dynamics.

Vector Autoregression (VAR)
====================

is a statistical model used to capture the relationship between multiple quantities as they change over time. VAR is a type of stochastic process model. VAR models generalize the (univariate) autoregressive model by allowing for multivariate time series.

What Is the Structure of a VAR Model?
===================

VAR models (vector autoregressive models) are used for multivariate time series. The structure is that each variable is a linear function of its own past lags and the past lags of the other variables. Equations 1 and 2 show a general vector autoregressive model of order 1, VAR(1), which can be estimated if both series are I(0). If they are I(1), the same equations must be estimated in first differences:

$$y_t = \beta_{10} + \beta_{11} y_{t-1} + \beta_{12} x_{t-1} + \nu_t^{y} \quad (1)$$
$$x_t = \beta_{20} + \beta_{21} y_{t-1} + \beta_{22} x_{t-1} + \nu_t^{x} \quad (2)$$

If the two variables in equations 1 and 2 are cointegrated, their cointegrating relationship should be taken into account in the model, because it is valuable information; such a model is called a vector error correction model. Remember that the cointegrating relationship is, as shown in equation 3, one in which the error term has been shown to be stationary:

$$y_t = \beta_0 + \beta_1 x_t + e_t \quad (3)$$

Estimating a VEC Model
==============

The simplest method is a two-step procedure. First, estimate the cointegrating relationship given in equation 3 and create the lagged series of its residuals (a sketch follows below).
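The excerpt breaks off before showing code for the two-step procedure, so here is a minimal sketch with the dynlm package; the package choice and the names y and x (assumed to be ts objects holding the two cointegrated series) are assumptions of this example.

```r
# Two-step VEC sketch; y and x are assumed to be ts objects
library(dynlm)

# Step 1: estimate the cointegrating relationship y_t = b0 + b1 x_t + e_t
coint <- dynlm(y ~ x)
ehat  <- resid(coint)

# Step 2: regress the differenced series on the lagged residual
# (the error-correction term); significant coefficients indicate
# adjustment back toward long-run equilibrium
vec_y <- dynlm(d(y) ~ L(ehat, 1))
vec_x <- dynlm(d(x) ~ L(ehat, 1))
summary(vec_y)
summary(vec_x)
```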
Capture & Re-Capture Technique with R. Dr. Eldirdiri Fadol Ibrahim
This article introduces Rcapture, an R package for capture-recapture experiments. The data for analysis consist of the frequencies of the observable capture histories over the t capture occasions of the experiment. A capture history is a vector of zeros and ones, where one stands for a capture and zero for a miss. Rcapture can fit three types of models. With a closed population model, the goal of the analysis is to estimate the size N of the population, which is assumed to be constant throughout the experiment. The estimator depends on the way in which the capture probabilities of the animals vary. Rcapture features several models for these capture probabilities that lead to different estimators for N. In an open population model, immigration and death occur between sampling periods. The estimation of survival rates is of primary interest. Rcapture can fit the basic Cormack-Jolly-Seber and Jolly-Seber models to such data. The third type of model fitted by Rcapture is the robust design model. It features two levels of sampling: closed population models apply within primary periods and an open population model applies between periods. Most models in Rcapture have a loglinear form; they are fitted by carrying out a Poisson regression with the R function glm. Estimates of the demographic parameters of interest are derived from the loglinear parameter estimates; their variances are obtained by linearization. The novel feature of this package is the provision of several new options for modeling heterogeneity in capture probabilities between animals, in both closed population models and the primary periods of a robust design. It also implements many of the techniques developed by R. M. Cormack for open population models.
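As a concrete starting point, here is a minimal closed-population sketch with Rcapture, using the package's bundled snowshoe hare capture histories; the dataset choice is an assumption for illustration, not something taken from the article.

```r
# Minimal closed-population sketch with Rcapture (illustrative dataset)
library(Rcapture)
data(hare)              # snowshoe hare capture histories over 6 occasions
desc <- descriptive(hare)   # summaries of the capture histories
plot(desc)
closedp(hare)           # fits M0, Mt, Mh, ... and reports estimates of N
```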
Binomial Generalized Linear Mixed Models
Binomial generalized linear mixed models, or binomial GLMMs, are useful for modeling binary outcomes from repeated or clustered measurements. For example, suppose we designed a study that tracks what college students eat over two weeks, and we are interested in whether or not they eat vegetables each day. For each student we would have 14 binary events: ate vegetables or not. Using a binomial GLMM, we can model the probability of eating vegetables each day given several predictors, such as the student's sex, the student's race, and/or some 'treatment' we applied to a subset of the students, such as a nutrition class. Because each student is observed over several days, we have repeated measures and hence the need for a mixed-effects model.
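A hypothetical sketch of the vegetable example as a binomial GLMM with lme4; the data frame dat and all variable names are invented for illustration, not from a real dataset.

```r
# Hypothetical binomial GLMM for the vegetable-eating example
library(lme4)
# dat: one row per student-day, with columns
#   ate_veg (0/1), treatment (nutrition class or not), sex, student (id)
m <- glmer(ate_veg ~ treatment + sex + (1 | student),
           data = dat, family = binomial)
summary(m)   # fixed effects on the log-odds scale; student-level variance
```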
Mediation Analysis, by Dr. Eldirdiri Fadol Ibrahim Fadol
This post intends to introduce the basics of mediation analysis and does not explain statistical details. For details, please refer to the articles at the end of this post.
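For readers who want to try the mechanics anyway, here is a minimal sketch with the mediation package and its bundled framing data; the package and dataset are assumptions of this example, not something the post specifies.

```r
# Minimal mediation-analysis sketch with the `mediation` package
library(mediation)
data(framing)
# mediator model: treatment -> emotional response
med.fit <- lm(emo ~ treat + age + educ, data = framing)
# outcome model: mediator and treatment -> sending a message to Congress
out.fit <- glm(cong_mesg ~ emo + treat + age + educ,
               data = framing, family = binomial("probit"))
med.out <- mediate(med.fit, out.fit, treat = "treat", mediator = "emo",
                   sims = 100)
summary(med.out)   # ACME = indirect effect, ADE = direct effect
```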
COVID 19
For teaching purposes
RECOVERY/DEATH RATIO FOR COVID
Our mission in the Scientific Research Center & AMOS & R is to remove illiteracy and make knowledge attainable for all.
Work with Shiny on COVID-19, and more.
I did this job to make information easy to obtain for my kids, your kids, and our families, and I handled it as a challenge to stop COVID.
Corona COVID-19
This interactive document was made to support people in the Scientific Research & AMOS & R Facebook groups exclusively.
Gaza Strip Cancer Data
GAZA-STRIP CANCER
This job is done and is not for reuse; it belongs to Mr. Sharif Musleh.
GAZA-STRIP CANCER
This job is done according to the request.
Chronic Kidney Diseases Prediction
I made this sample and the long-term analysis to give a chance to those who think that dealing with data is a mystery; really, it is a nice job for everyone. I also made it easy for the professionals in the FB group (Scientific Research Center & AMOS & R), especially for my colleague Dr. Nasir, and for all members of the groups who think they can make something nice.
Shanghai Academic Rankings for World Universities (ARWU)
Shanghai Academic Rankings for World Universities (ARWU)

The Shanghai world rankings for universities have a World Rank (smaller value is best), a National Rank (per country, smaller value is best), and scores from 0 to 100 for:

- Total Score;
- Alumni of an institution winning Nobel Prizes and Fields Medals (alumni);
- Staff of an institution winning Nobel Prizes and Fields Medals (award);
- Highly cited researchers (hici);
- Number of papers published in Nature & Science (ns);
- Papers indexed in the Science Citation Index-Expanded and the Social Science Citation Index (pub);
- Per-capita academic performance of an institution (pcp).

All these values are given for each year between 2005 and 2015. As these values all range between 0 and 100, it is natural to represent them as spider-web graphs.

Calculating missing values for Total Score

The World Rank is a bit misleading, since it is either a single number (like 1, 2, 3) or an interval (100-400). This would be complicated to use for a hierarchy. We can use the Total Score instead (a number between 0 and 100). First we need to solve an issue: many rows are missing the Total Score (NA). Let's see first how many such rows there are. There are 3796 rows with Total Score NA. Fortunately, we can calculate the total score if we have the other values. The formula is:

total_score = 0.1 * alumni + 0.2 * award + 0.2 * hici + 0.2 * ns + 0.2 * pub + 0.1 * pcp

Let's calculate the Total Score from shanghaiData with this formula for the entries missing the Total Score (see the sketch below).
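A short dplyr sketch of that imputation, assuming the data frame is named shanghaiData with a total_score column and the component columns named as above:

```r
# Fill missing Total Score values using the weighted ARWU formula
library(dplyr)
shanghaiData <- shanghaiData %>%
  mutate(total_score = ifelse(is.na(total_score),
                              0.1 * alumni + 0.2 * award + 0.2 * hici +
                                0.2 * ns + 0.2 * pub + 0.1 * pcp,
                              total_score))
sum(is.na(shanghaiData$total_score))   # how many rows remain NA
```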
Shanghai Ranking Universities
This is a part of the job: how to make university rankings.
Reproducibility of Revisiting World Bank stats
I did this to train my students in how to carry out such an application.
Extracting Climate Data from PRISM: Correcting Faults in the Data Analysis
Many faults were made by Mr. Shannon Carter (February 22, 2019) in his simulation of data from PRISM: https://rpubs.com/shannon_carter/471278. The model was corrected by Dr. Eldirdiri Fadol Ibrahim Fadol.
HTML
An educational and explanatory lesson for members of the Scientific Research Center and the (UP) initiative.
Sudan vs. COVID-19, by Dr. Eldirdiri Fadol Ibrahim
Johns Hopkins University Data - Coronavirus, Sudan
Dr. Eldirdiri Fadol's Work: Coronavirus (COVID-19) in Egypt
The work is about infection in Egypt in comparison with other countries.
Dr. Eldirdiri Fadol's work on coronavirus (COVID-19)
This analysis is done to clear the way for those souls that were buried because we did nothing for them. Now is the time for everyone at the Scientific Research Center and the AMOS & R groups, as well as the other statisticians all over the world and the medical staff, to carry out the job. Dr. Eldirdiri Fadol, Sudan.
Plot : Goodness-of-fit plot for some items with confidence ellipses.
eRm (R package): We see that the model fits, and a graphical representation of this result (for a subset of items only) is given in Figure 3 by means of a goodness-of-fit plot with confidence ellipses.

> library("eRm", lib.loc="~/R/win-library/3.5")
> library("crayon", lib.loc="~/R/win-library/3.5")
> library("libcoin", lib.loc="~/R/win-library/3.5")
> library("eRm")
> res.rasch <- RM(raschdat1)
> pres.rasch <- person.parameter(res.rasch)
> lrres.rasch <- LRtest(res.rasch, splitcr = "mean")
> lrres.rasch

Andersen LR-test:
LR-value: 30.288
Chi-square df: 29
p-value: 0.4

> plotGOF(lrres.rasch, beta.subset = c(14, 5, 18, 7, 1), tlab = "item",
+   conf = list(ia = FALSE, col = "blue", lty = "dotted"))
HTML
library(threejs)   # provides the LeMis data and graphjs()
data(LeMis)
g <- graphjs(LeMis, main="Les Misérables", showLabels=TRUE)
print(g)
Plot: Dr. Eldirdiri Fadol Ibrahim (Leisure Time Revisions)
library(grid)
rm(list = ls())
grid.newpage()
pmax <- 5 # Depth of the fractal
vp1 <- viewport(x=0.5, y=0.5, w=1, h=1)
vp2 <- viewport(w=0.5, h=0.5, just=c("centre", "bottom"))
vp3 <- viewport(w=0.5, h=0.5, just=c("left", "top"))
vp4 <- viewport(w=0.5, h=0.5, just=c("right", "top"))
pushViewport(vp1)
m <- as.matrix(expand.grid(rep(list(2:4), pmax)))
for (j in 1:nrow(m)) {
  for (k in 1:ncol(m)) { pushViewport(get(paste("vp", m[j,k], sep=""))) }
  grid.rect(gp=gpar(col="dark grey", lty="solid",
    fill=rgb(sample(0:255, 1), sample(0:255, 1), sample(0:255, 1), alpha=95, max=255)))
  upViewport(pmax)
}
HTML ~ Dr. Eldirdiri Fadol Ibrahim Fadol ~ threejs example: “stacked” scatterplots
The threejs package: three.js widgets for R
HTML~ Dr.Eldirdiri Fadol Ibrahim Fadol
Adding bars to make a spiky globe
HTML~ Reproducibility in three.js widgets for R~Dr.Eldirdiri Fadol Ibrahim
The threejs package provides interactive 3D scatterplots and globe plots using three.js and the htmlwidgets package for R. These examples render like normal R plots in RStudio. They also work in R Markdown documents, shiny, and from the R command line.
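To give a flavor of the "spiky globe" mentioned above, here is a minimal globejs() sketch with random points; the coordinates and bar heights are invented for illustration, not the data from the original example.

```r
# Minimal spiky-globe sketch with threejs::globejs(); random points
library(threejs)
set.seed(42)
lat  <- runif(100, -90, 90)
long <- runif(100, -180, 180)
globejs(lat = lat, long = long,
        value = runif(100, 10, 60),   # bar heights at each location
        color = "red", atmosphere = TRUE)
```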
HTML~ Dr.Eldirdiri Fadol Ibrahim Fadol
Reproducibility of previous works as wordcloud training
HTML~ Dr.Eldirdiri Fadol Ibrahim Fadol
Reproducibility of the wordcloud2 library
HTML~ Dr.Eldirdiri Fadol Ibrahim Fadol
# Load libraries
library(shiny)
library(leaflet)

# Make data with several positions
data_red <- data.frame(LONG=42+rnorm(10), LAT=23+rnorm(10), PLACE=paste("Red_place_", seq(1,10)))
data_blue <- data.frame(LONG=42+rnorm(10), LAT=23+rnorm(10), PLACE=paste("Blue_place_", seq(1,10)))

# Initialize the leaflet map:
leaflet() %>%
  setView(lng=42, lat=23, zoom=8) %>%
  # Add two tiles
  addProviderTiles("Esri.WorldImagery", group="background 1") %>%
  addTiles(options = providerTileOptions(noWrap = TRUE), group="background 2") %>%
  # Add 2 marker groups
  addCircleMarkers(data=data_red, lng=~LONG, lat=~LAT, radius=8, color="black",
    fillColor="red", stroke = TRUE, fillOpacity = 0.8, group="Red") %>%
  addCircleMarkers(data=data_blue, lng=~LONG, lat=~LAT, radius=8, color="black",
    fillColor="blue", stroke = TRUE, fillOpacity = 0.8, group="Blue") %>%
  # Add the control widget
  addLayersControl(overlayGroups = c("Red","Blue"),
    baseGroups = c("background 1","background 2"),
    options = layersControlOptions(collapsed = FALSE))
HTML reproducibility of library(networkD3)
# libraries
library(networkD3)

# Load data
data(MisLinks)
data(MisNodes)

# Plot
forceNetwork(Links = MisLinks, Nodes = MisNodes,
  Source = "source", Target = "target",
  Value = "value", NodeID = "name",
  Group = "group", opacity = 0.8,
  linkDistance = JS('function(){d3.select("body").style("background-color", "#DAE3F9"); return 50;}'))
Reproducing a graph using library(networkD3)
# libraries
library(networkD3)

# Load data
data(MisLinks)
data(MisNodes)

# Plot
forceNetwork(Links = MisLinks, Nodes = MisNodes,
  Source = "source", Target = "target",
  Value = "value", NodeID = "name",
  Group = "group", opacity = 0.8,
  linkDistance = JS('function(){d3.select("body").style("background-color", "#DAE3F9"); return 50;}'))
HTML
threejs includes a 3D scatterplot and a 3D globe (you can directly manipulate the scatterplot below with the mouse).

library(threejs)
z <- seq(-10, 10, 0.01)
x <- cos(z)
y <- sin(z)
scatterplot3js(x, y, z, color=rainbow(length(z)))
Examples For work in "R"
library(wordcloud2)
wordcloud2(demoFreq, size = 1, shape = 'star')
Eldindir Group of Family( Cairo University Vet.Med. Faculty)
Code by Eldirdiri Fadol Ibrahim Fadol, for the sake of the late, insha'Allah, Dr. Mamoun.
HTML
Training
HTML
The periodic table in R
HTML ~ How to draw two chart types from the same data in the same figure
How to make two types of charts of the same data using R:

> install.packages("rbokeh")
> library(rbokeh)

Histogram of Old Faithful geyser data with density overplotted:

> h <- figure(width = 600, height = 400) %>%
+   ly_hist(eruptions, data = faithful, breaks = 40, freq = FALSE) %>%
+   ly_density(eruptions, data = faithful)
> h
My Publication on Data Development
This is the job again.
My-Slidy
This work is for learning methods
Publish Document
Adding Many Markers

Adding one marker at a time is often not practical if you want to display many markers. If you have a data frame with columns lat and lng, you can pipe that data frame into leaflet() to add all the points at once.

set.seed(2016-04-25)
df <- data.frame(lat = runif(20, min = 39.2, max = 39.3),
                 lng = runif(20, min = -76.6, max = -76.5))
df %>%
  leaflet() %>%
  addTiles() %>%
  addMarkers()
This Is My First Assignment Map
Adding Many Markers

Adding one marker at a time is often not practical if you want to display many markers. If you have a data frame with columns lat and lng, you can pipe that data frame into leaflet() to add all the points at once.

set.seed(2016-04-25)
df <- data.frame(lat = runif(20, min = 39.2, max = 39.3),
                 lng = runif(20, min = -76.6, max = -76.5))
df %>%
  leaflet() %>%
  addTiles() %>%
  addMarkers()
My First Map
According to the assignment directions, leaflet was used and the map was dated.
My First Map
Your First Map

library(leaflet)
my_map <- leaflet() %>% addTiles()
my_map
My First Map
Peer-graded Assignment: R Markdown and Leaflet
My First Map
It is the first lesson on leaflet
My Publication on Data Development
R Markdown

R Markdown is a file format for making dynamic documents with R. An R Markdown document is written in markdown (an easy-to-write plain text format) and contains chunks of embedded R code, like the document below.

---
output: html_document
---

This is an R Markdown document. Markdown is a simple formatting syntax for authoring HTML, PDF, and MS Word documents. For more details on using R Markdown see .

When you click the **Knit** button a document will be generated that includes both content as well as the output of any embedded R code chunks within the document. You can embed an R code chunk like this:

```{r}
summary(cars)
```

You can also embed plots, for example:

```{r, echo=FALSE}
plot(cars)
```

Note that the `echo = FALSE` parameter was added to the code chunk to prevent printing of the R code that generated the plot.

R Markdown files are designed to be used with the rmarkdown package. rmarkdown comes installed with the RStudio IDE, but you can acquire your own copy of rmarkdown from CRAN with the command

install.packages("rmarkdown")

R Markdown files are the source code for rich, reproducible documents. You can transform an R Markdown file in two ways.

knit - You can knit the file. The rmarkdown package will call the knitr package. knitr will run each chunk of R code in the document and append the results of the code to the document next to the code chunk. This workflow saves time and facilitates reproducible reports. Consider how authors typically include graphs (or tables, or numbers) in a report. The author makes the graph, saves it as a file, and then copies and pastes it into the final report. This process relies on manual labor. If the data changes, the author must repeat the entire process to update the graph. In the R Markdown paradigm, each report contains the code it needs to make its own graphs, tables, numbers, etc. The author can automatically update the report by re-knitting.

convert - You can convert the file. The rmarkdown package will use the pandoc program to transform the file into a new format. For example, you can convert your .Rmd file into an HTML, PDF, or Microsoft Word file. You can even turn the file into an HTML5 or PDF slideshow. rmarkdown will preserve the text, code results, and formatting contained in your original .Rmd file. Conversion lets you do your original work in markdown, which is very easy to use. You can include R code to knit, and you can share your document in a variety of formats.

In practice, authors almost always knit and convert their documents at the same time. In this article, I will use the term render to refer to the two-step process of knitting and converting an R Markdown file. You can manually render an R Markdown file with rmarkdown::render(). This is what the above document looks like when rendered as an HTML file.
Publish Document
This is an assignment in a Coursera course, done according to the instructions given by the course instructor and as part of exercising the use of machine learning in prediction. I hope to practice more to get used to the R tools and packages.
Publish Document
My correction of the first published version
Publish Document
My Rep.2Assignment