May 14 23:48:49.322896 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
May 14 23:48:49.322921 kernel: Linux version 6.6.89-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.43 p3) 2.43.1) #1 SMP PREEMPT Wed May 14 22:22:56 -00 2025
May 14 23:48:49.322930 kernel: KASLR enabled
May 14 23:48:49.322935 kernel: earlycon: pl11 at MMIO 0x00000000effec000 (options '')
May 14 23:48:49.322942 kernel: printk: bootconsole [pl11] enabled
May 14 23:48:49.322948 kernel: efi: EFI v2.7 by EDK II
May 14 23:48:49.322956 kernel: efi: ACPI 2.0=0x3fd5f018 SMBIOS=0x3e580000 SMBIOS 3.0=0x3e560000 MEMATTR=0x3f20f698 RNG=0x3fd5f998 MEMRESERVE=0x3e477598
May 14 23:48:49.322962 kernel: random: crng init done
May 14 23:48:49.322968 kernel: secureboot: Secure boot disabled
May 14 23:48:49.322974 kernel: ACPI: Early table checksum verification disabled
May 14 23:48:49.322980 kernel: ACPI: RSDP 0x000000003FD5F018 000024 (v02 VRTUAL)
May 14 23:48:49.322985 kernel: ACPI: XSDT 0x000000003FD5FF18 00006C (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
May 14 23:48:49.322992 kernel: ACPI: FACP 0x000000003FD5FC18 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001)
May 14 23:48:49.322999 kernel: ACPI: DSDT 0x000000003FD41018 01DFCD (v02 MSFTVM DSDT01 00000001 INTL 20230628)
May 14 23:48:49.323007 kernel: ACPI: DBG2 0x000000003FD5FB18 000072 (v00 VRTUAL MICROSFT 00000001 MSFT 00000001)
May 14 23:48:49.323013 kernel: ACPI: GTDT 0x000000003FD5FD98 000060 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
May 14 23:48:49.323019 kernel: ACPI: OEM0 0x000000003FD5F098 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
May 14 23:48:49.323027 kernel: ACPI: SPCR 0x000000003FD5FA98 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
May 14 23:48:49.323033 kernel: ACPI: APIC 0x000000003FD5F818 0000FC (v04 VRTUAL MICROSFT 00000001 MSFT 00000001)
May 14 23:48:49.323039 kernel: ACPI: SRAT 0x000000003FD5F198 000234 (v03 VRTUAL MICROSFT 00000001 MSFT 00000001)
May 14 23:48:49.323046 kernel: ACPI: PPTT 0x000000003FD5F418 000120 (v01 VRTUAL MICROSFT 00000000 MSFT 00000000)
May 14 23:48:49.323052 kernel: ACPI: BGRT 0x000000003FD5FE98 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
May 14 23:48:49.323058 kernel: ACPI: SPCR: console: pl011,mmio32,0xeffec000,115200
May 14 23:48:49.323064 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff]
May 14 23:48:49.323070 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x1bfffffff]
May 14 23:48:49.323076 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1c0000000-0xfbfffffff]
May 14 23:48:49.323082 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff]
May 14 23:48:49.323089 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff]
May 14 23:48:49.323096 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff]
May 14 23:48:49.323102 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff]
May 14 23:48:49.323109 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff]
May 14 23:48:49.323115 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff]
May 14 23:48:49.323121 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff]
May 14 23:48:49.323127 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff]
May 14 23:48:49.323133 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x800000000000-0xffffffffffff]
May 14 23:48:49.323139 kernel: NUMA: NODE_DATA [mem 0x1bf7ef800-0x1bf7f4fff]
May 14 23:48:49.323146 kernel: Zone ranges:
May 14 23:48:49.323152 kernel: DMA [mem 0x0000000000000000-0x00000000ffffffff]
May 14 23:48:49.323159 kernel: DMA32 empty
May 14 23:48:49.325225 kernel: Normal [mem 0x0000000100000000-0x00000001bfffffff]
May 14 23:48:49.325251 kernel: Movable zone start for each node
May 14 23:48:49.325258 kernel: Early memory node ranges
May 14 23:48:49.325265 kernel: node 0: [mem 0x0000000000000000-0x00000000007fffff]
May 14 23:48:49.325271 kernel: node 0: [mem 0x0000000000824000-0x000000003e45ffff]
May 14 23:48:49.325278 kernel: node 0: [mem 0x000000003e460000-0x000000003e46ffff]
May 14 23:48:49.325286 kernel: node 0: [mem 0x000000003e470000-0x000000003e54ffff]
May 14 23:48:49.325293 kernel: node 0: [mem 0x000000003e550000-0x000000003e87ffff]
May 14 23:48:49.325299 kernel: node 0: [mem 0x000000003e880000-0x000000003fc7ffff]
May 14 23:48:49.325306 kernel: node 0: [mem 0x000000003fc80000-0x000000003fcfffff]
May 14 23:48:49.325312 kernel: node 0: [mem 0x000000003fd00000-0x000000003fffffff]
May 14 23:48:49.325319 kernel: node 0: [mem 0x0000000100000000-0x00000001bfffffff]
May 14 23:48:49.325326 kernel: Initmem setup node 0 [mem 0x0000000000000000-0x00000001bfffffff]
May 14 23:48:49.325333 kernel: On node 0, zone DMA: 36 pages in unavailable ranges
May 14 23:48:49.325339 kernel: psci: probing for conduit method from ACPI.
May 14 23:48:49.325346 kernel: psci: PSCIv1.1 detected in firmware.
May 14 23:48:49.325353 kernel: psci: Using standard PSCI v0.2 function IDs
May 14 23:48:49.325359 kernel: psci: MIGRATE_INFO_TYPE not supported.
May 14 23:48:49.325368 kernel: psci: SMC Calling Convention v1.4
May 14 23:48:49.325374 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x0 -> Node 0
May 14 23:48:49.325381 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1 -> Node 0
May 14 23:48:49.325387 kernel: percpu: Embedded 31 pages/cpu s86632 r8192 d32152 u126976
May 14 23:48:49.325394 kernel: pcpu-alloc: s86632 r8192 d32152 u126976 alloc=31*4096
May 14 23:48:49.325401 kernel: pcpu-alloc: [0] 0 [0] 1
May 14 23:48:49.325407 kernel: Detected PIPT I-cache on CPU0
May 14 23:48:49.325414 kernel: CPU features: detected: GIC system register CPU interface
May 14 23:48:49.325420 kernel: CPU features: detected: Hardware dirty bit management
May 14 23:48:49.325427 kernel: CPU features: detected: Spectre-BHB
May 14 23:48:49.325433 kernel: CPU features: kernel page table isolation forced ON by KASLR
May 14 23:48:49.325442 kernel: CPU features: detected: Kernel page table isolation (KPTI)
May 14 23:48:49.325448 kernel: CPU features: detected: ARM erratum 1418040
May 14 23:48:49.325455 kernel: CPU features: detected: ARM erratum 1542419 (kernel portion)
May 14 23:48:49.325462 kernel: CPU features: detected: SSBS not fully self-synchronizing
May 14 23:48:49.325468 kernel: alternatives: applying boot alternatives
May 14 23:48:49.325476 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=bfa141d6f8686d8fe96245516ecbaee60c938beef41636c397e3939a2c9a6ed9
May 14 23:48:49.325483 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
May 14 23:48:49.325490 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
May 14 23:48:49.325497 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
May 14 23:48:49.325504 kernel: Fallback order for Node 0: 0
May 14 23:48:49.325510 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1032156
May 14 23:48:49.325518 kernel: Policy zone: Normal
May 14 23:48:49.325525 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
May 14 23:48:49.325532 kernel: software IO TLB: area num 2.
May 14 23:48:49.325538 kernel: software IO TLB: mapped [mem 0x0000000036540000-0x000000003a540000] (64MB)
May 14 23:48:49.325545 kernel: Memory: 3983592K/4194160K available (10368K kernel code, 2186K rwdata, 8100K rodata, 38336K init, 897K bss, 210568K reserved, 0K cma-reserved)
May 14 23:48:49.325551 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
May 14 23:48:49.325558 kernel: rcu: Preemptible hierarchical RCU implementation.
May 14 23:48:49.325565 kernel: rcu: RCU event tracing is enabled.
May 14 23:48:49.325572 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
May 14 23:48:49.325579 kernel: Trampoline variant of Tasks RCU enabled.
May 14 23:48:49.325585 kernel: Tracing variant of Tasks RCU enabled.
May 14 23:48:49.325594 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
May 14 23:48:49.325601 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
May 14 23:48:49.325607 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
May 14 23:48:49.325614 kernel: GICv3: 960 SPIs implemented
May 14 23:48:49.325620 kernel: GICv3: 0 Extended SPIs implemented
May 14 23:48:49.325627 kernel: Root IRQ handler: gic_handle_irq
May 14 23:48:49.325633 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
May 14 23:48:49.325640 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000effee000
May 14 23:48:49.325646 kernel: ITS: No ITS available, not enabling LPIs
May 14 23:48:49.325653 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
May 14 23:48:49.325660 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 14 23:48:49.325666 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
May 14 23:48:49.325675 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
May 14 23:48:49.325681 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
May 14 23:48:49.325688 kernel: Console: colour dummy device 80x25
May 14 23:48:49.325695 kernel: printk: console [tty1] enabled
May 14 23:48:49.325702 kernel: ACPI: Core revision 20230628
May 14 23:48:49.325709 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
May 14 23:48:49.325716 kernel: pid_max: default: 32768 minimum: 301
May 14 23:48:49.325722 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
May 14 23:48:49.325729 kernel: landlock: Up and running.
May 14 23:48:49.325737 kernel: SELinux: Initializing.
May 14 23:48:49.325744 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
May 14 23:48:49.325751 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
May 14 23:48:49.325758 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
May 14 23:48:49.325765 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
May 14 23:48:49.325772 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3a8030, hints 0xe, misc 0x31e1
May 14 23:48:49.325779 kernel: Hyper-V: Host Build 10.0.22477.1619-1-0
May 14 23:48:49.325794 kernel: Hyper-V: enabling crash_kexec_post_notifiers
May 14 23:48:49.325801 kernel: rcu: Hierarchical SRCU implementation.
May 14 23:48:49.325808 kernel: rcu: Max phase no-delay instances is 400.
May 14 23:48:49.325815 kernel: Remapping and enabling EFI services.
May 14 23:48:49.325822 kernel: smp: Bringing up secondary CPUs ...
May 14 23:48:49.325831 kernel: Detected PIPT I-cache on CPU1
May 14 23:48:49.325838 kernel: GICv3: CPU1: found redistributor 1 region 1:0x00000000f000e000
May 14 23:48:49.325846 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 14 23:48:49.325853 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
May 14 23:48:49.325860 kernel: smp: Brought up 1 node, 2 CPUs
May 14 23:48:49.325869 kernel: SMP: Total of 2 processors activated.
May 14 23:48:49.325876 kernel: CPU features: detected: 32-bit EL0 Support
May 14 23:48:49.325883 kernel: CPU features: detected: Instruction cache invalidation not required for I/D coherence
May 14 23:48:49.325891 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
May 14 23:48:49.325899 kernel: CPU features: detected: CRC32 instructions
May 14 23:48:49.325906 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
May 14 23:48:49.325913 kernel: CPU features: detected: LSE atomic instructions
May 14 23:48:49.325920 kernel: CPU features: detected: Privileged Access Never
May 14 23:48:49.325927 kernel: CPU: All CPU(s) started at EL1
May 14 23:48:49.325936 kernel: alternatives: applying system-wide alternatives
May 14 23:48:49.325943 kernel: devtmpfs: initialized
May 14 23:48:49.325950 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
May 14 23:48:49.325958 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
May 14 23:48:49.325965 kernel: pinctrl core: initialized pinctrl subsystem
May 14 23:48:49.325972 kernel: SMBIOS 3.1.0 present.
May 14 23:48:49.325979 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 09/28/2024
May 14 23:48:49.325986 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
May 14 23:48:49.325993 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
May 14 23:48:49.326002 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
May 14 23:48:49.326009 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
May 14 23:48:49.326016 kernel: audit: initializing netlink subsys (disabled)
May 14 23:48:49.326024 kernel: audit: type=2000 audit(0.047:1): state=initialized audit_enabled=0 res=1
May 14 23:48:49.326031 kernel: thermal_sys: Registered thermal governor 'step_wise'
May 14 23:48:49.326038 kernel: cpuidle: using governor menu
May 14 23:48:49.326045 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
May 14 23:48:49.326053 kernel: ASID allocator initialised with 32768 entries
May 14 23:48:49.326060 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
May 14 23:48:49.326068 kernel: Serial: AMBA PL011 UART driver
May 14 23:48:49.326076 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
May 14 23:48:49.326083 kernel: Modules: 0 pages in range for non-PLT usage
May 14 23:48:49.326090 kernel: Modules: 509264 pages in range for PLT usage
May 14 23:48:49.326097 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
May 14 23:48:49.326104 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
May 14 23:48:49.326111 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
May 14 23:48:49.326118 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
May 14 23:48:49.326126 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
May 14 23:48:49.326135 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
May 14 23:48:49.326142 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
May 14 23:48:49.326149 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
May 14 23:48:49.326156 kernel: ACPI: Added _OSI(Module Device)
May 14 23:48:49.326174 kernel: ACPI: Added _OSI(Processor Device)
May 14 23:48:49.326182 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
May 14 23:48:49.326190 kernel: ACPI: Added _OSI(Processor Aggregator Device)
May 14 23:48:49.326197 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
May 14 23:48:49.326204 kernel: ACPI: Interpreter enabled
May 14 23:48:49.326212 kernel: ACPI: Using GIC for interrupt routing
May 14 23:48:49.326220 kernel: ARMH0011:00: ttyAMA0 at MMIO 0xeffec000 (irq = 12, base_baud = 0) is a SBSA
May 14 23:48:49.326227 kernel: printk: console [ttyAMA0] enabled
May 14 23:48:49.326234 kernel: printk: bootconsole [pl11] disabled
May 14 23:48:49.326241 kernel: ARMH0011:01: ttyAMA1 at MMIO 0xeffeb000 (irq = 13, base_baud = 0) is a SBSA
May 14 23:48:49.326248 kernel: iommu: Default domain type: Translated
May 14 23:48:49.326255 kernel: iommu: DMA domain TLB invalidation policy: strict mode
May 14 23:48:49.326262 kernel: efivars: Registered efivars operations
May 14 23:48:49.326269 kernel: vgaarb: loaded
May 14 23:48:49.326278 kernel: clocksource: Switched to clocksource arch_sys_counter
May 14 23:48:49.326285 kernel: VFS: Disk quotas dquot_6.6.0
May 14 23:48:49.326292 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
May 14 23:48:49.326300 kernel: pnp: PnP ACPI init
May 14 23:48:49.326307 kernel: pnp: PnP ACPI: found 0 devices
May 14 23:48:49.326314 kernel: NET: Registered PF_INET protocol family
May 14 23:48:49.326321 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
May 14 23:48:49.326328 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
May 14 23:48:49.326335 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
May 14 23:48:49.326344 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
May 14 23:48:49.326351 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
May 14 23:48:49.326358 kernel: TCP: Hash tables configured (established 32768 bind 32768)
May 14 23:48:49.326366 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
May 14 23:48:49.326373 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
May 14 23:48:49.326380 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
May 14 23:48:49.326387 kernel: PCI: CLS 0 bytes, default 64
May 14 23:48:49.326394 kernel: kvm [1]: HYP mode not available
May 14 23:48:49.326401 kernel: Initialise system trusted keyrings
May 14 23:48:49.326410 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
May 14 23:48:49.326417 kernel: Key type asymmetric registered
May 14 23:48:49.326425 kernel: Asymmetric key parser 'x509' registered
May 14 23:48:49.326432 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
May 14 23:48:49.326439 kernel: io scheduler mq-deadline registered
May 14 23:48:49.326446 kernel: io scheduler kyber registered
May 14 23:48:49.326453 kernel: io scheduler bfq registered
May 14 23:48:49.326460 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
May 14 23:48:49.326467 kernel: thunder_xcv, ver 1.0
May 14 23:48:49.326476 kernel: thunder_bgx, ver 1.0
May 14 23:48:49.326483 kernel: nicpf, ver 1.0
May 14 23:48:49.326490 kernel: nicvf, ver 1.0
May 14 23:48:49.326676 kernel: rtc-efi rtc-efi.0: registered as rtc0
May 14 23:48:49.326750 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-05-14T23:48:48 UTC (1747266528)
May 14 23:48:49.326760 kernel: efifb: probing for efifb
May 14 23:48:49.326767 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k
May 14 23:48:49.326774 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1
May 14 23:48:49.326784 kernel: efifb: scrolling: redraw
May 14 23:48:49.326791 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
May 14 23:48:49.326798 kernel: Console: switching to colour frame buffer device 128x48
May 14 23:48:49.326805 kernel: fb0: EFI VGA frame buffer device
May 14 23:48:49.326812 kernel: SMCCC: SOC_ID: ARCH_SOC_ID not implemented, skipping ....
May 14 23:48:49.326819 kernel: hid: raw HID events driver (C) Jiri Kosina
May 14 23:48:49.326826 kernel: No ACPI PMU IRQ for CPU0
May 14 23:48:49.326833 kernel: No ACPI PMU IRQ for CPU1
May 14 23:48:49.326840 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 1 counters available
May 14 23:48:49.326849 kernel: watchdog: Delayed init of the lockup detector failed: -19
May 14 23:48:49.326856 kernel: watchdog: Hard watchdog permanently disabled
May 14 23:48:49.326863 kernel: NET: Registered PF_INET6 protocol family
May 14 23:48:49.326870 kernel: Segment Routing with IPv6
May 14 23:48:49.326877 kernel: In-situ OAM (IOAM) with IPv6
May 14 23:48:49.326884 kernel: NET: Registered PF_PACKET protocol family
May 14 23:48:49.326891 kernel: Key type dns_resolver registered
May 14 23:48:49.326899 kernel: registered taskstats version 1
May 14 23:48:49.326906 kernel: Loading compiled-in X.509 certificates
May 14 23:48:49.326914 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.89-flatcar: cdb7ce3984a1665183e8a6ab3419833bc5e4e7f4'
May 14 23:48:49.326922 kernel: Key type .fscrypt registered
May 14 23:48:49.326928 kernel: Key type fscrypt-provisioning registered
May 14 23:48:49.326936 kernel: ima: No TPM chip found, activating TPM-bypass!
May 14 23:48:49.326943 kernel: ima: Allocated hash algorithm: sha1
May 14 23:48:49.326950 kernel: ima: No architecture policies found
May 14 23:48:49.326957 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
May 14 23:48:49.326964 kernel: clk: Disabling unused clocks
May 14 23:48:49.326971 kernel: Freeing unused kernel memory: 38336K
May 14 23:48:49.326980 kernel: Run /init as init process
May 14 23:48:49.326988 kernel: with arguments:
May 14 23:48:49.326995 kernel: /init
May 14 23:48:49.327002 kernel: with environment:
May 14 23:48:49.327009 kernel: HOME=/
May 14 23:48:49.327016 kernel: TERM=linux
May 14 23:48:49.327022 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
May 14 23:48:49.327031 systemd[1]: Successfully made /usr/ read-only.
May 14 23:48:49.327043 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
May 14 23:48:49.327051 systemd[1]: Detected virtualization microsoft.
May 14 23:48:49.327058 systemd[1]: Detected architecture arm64.
May 14 23:48:49.327066 systemd[1]: Running in initrd.
May 14 23:48:49.327073 systemd[1]: No hostname configured, using default hostname.
May 14 23:48:49.327081 systemd[1]: Hostname set to .
May 14 23:48:49.327088 systemd[1]: Initializing machine ID from random generator.
May 14 23:48:49.327096 systemd[1]: Queued start job for default target initrd.target.
May 14 23:48:49.327106 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 14 23:48:49.327113 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 14 23:48:49.327122 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
May 14 23:48:49.327130 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
May 14 23:48:49.327139 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
May 14 23:48:49.327147 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
May 14 23:48:49.327156 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
May 14 23:48:49.329225 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
May 14 23:48:49.329244 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 14 23:48:49.329253 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
May 14 23:48:49.329262 systemd[1]: Reached target paths.target - Path Units.
May 14 23:48:49.329269 systemd[1]: Reached target slices.target - Slice Units.
May 14 23:48:49.329278 systemd[1]: Reached target swap.target - Swaps.
May 14 23:48:49.329286 systemd[1]: Reached target timers.target - Timer Units.
May 14 23:48:49.329294 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
May 14 23:48:49.329308 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
May 14 23:48:49.329317 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
May 14 23:48:49.329324 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
May 14 23:48:49.329332 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
May 14 23:48:49.329340 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
May 14 23:48:49.329348 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
May 14 23:48:49.329356 systemd[1]: Reached target sockets.target - Socket Units.
May 14 23:48:49.329363 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
May 14 23:48:49.329371 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
May 14 23:48:49.329380 systemd[1]: Finished network-cleanup.service - Network Cleanup.
May 14 23:48:49.329388 systemd[1]: Starting systemd-fsck-usr.service...
May 14 23:48:49.329396 systemd[1]: Starting systemd-journald.service - Journal Service...
May 14 23:48:49.329403 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
May 14 23:48:49.329445 systemd-journald[218]: Collecting audit messages is disabled.
May 14 23:48:49.329468 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 14 23:48:49.329478 systemd-journald[218]: Journal started
May 14 23:48:49.329497 systemd-journald[218]: Runtime Journal (/run/log/journal/471a3f8bc15749568f14a8182203240d) is 8M, max 78.5M, 70.5M free.
May 14 23:48:49.338903 systemd-modules-load[220]: Inserted module 'overlay'
May 14 23:48:49.354961 systemd[1]: Started systemd-journald.service - Journal Service.
May 14 23:48:49.361530 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
May 14 23:48:49.394336 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
May 14 23:48:49.394366 kernel: Bridge firewalling registered
May 14 23:48:49.375537 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
May 14 23:48:49.388838 systemd-modules-load[220]: Inserted module 'br_netfilter'
May 14 23:48:49.397912 systemd[1]: Finished systemd-fsck-usr.service.
May 14 23:48:49.409617 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
May 14 23:48:49.420931 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 14 23:48:49.449459 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
May 14 23:48:49.465447 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
May 14 23:48:49.474384 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
May 14 23:48:49.505343 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
May 14 23:48:49.520371 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 14 23:48:49.534913 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
May 14 23:48:49.541119 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
May 14 23:48:49.553189 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
May 14 23:48:49.575485 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
May 14 23:48:49.590372 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
May 14 23:48:49.605265 dracut-cmdline[252]: dracut-dracut-053
May 14 23:48:49.607288 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
May 14 23:48:49.640816 dracut-cmdline[252]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=bfa141d6f8686d8fe96245516ecbaee60c938beef41636c397e3939a2c9a6ed9
May 14 23:48:49.632407 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 14 23:48:49.650771 systemd-resolved[258]: Positive Trust Anchors:
May 14 23:48:49.650782 systemd-resolved[258]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
May 14 23:48:49.650812 systemd-resolved[258]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
May 14 23:48:49.653006 systemd-resolved[258]: Defaulting to hostname 'linux'.
May 14 23:48:49.670828 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
May 14 23:48:49.682859 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
May 14 23:48:49.804199 kernel: SCSI subsystem initialized
May 14 23:48:49.812210 kernel: Loading iSCSI transport class v2.0-870.
May 14 23:48:49.822291 kernel: iscsi: registered transport (tcp)
May 14 23:48:49.840841 kernel: iscsi: registered transport (qla4xxx)
May 14 23:48:49.840868 kernel: QLogic iSCSI HBA Driver
May 14 23:48:49.881134 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
May 14 23:48:49.894373 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
May 14 23:48:49.927050 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
May 14 23:48:49.927129 kernel: device-mapper: uevent: version 1.0.3
May 14 23:48:49.927142 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
May 14 23:48:49.984201 kernel: raid6: neonx8 gen() 15733 MB/s
May 14 23:48:50.004185 kernel: raid6: neonx4 gen() 15811 MB/s
May 14 23:48:50.024175 kernel: raid6: neonx2 gen() 13201 MB/s
May 14 23:48:50.045181 kernel: raid6: neonx1 gen() 10505 MB/s
May 14 23:48:50.065175 kernel: raid6: int64x8 gen() 6795 MB/s
May 14 23:48:50.085175 kernel: raid6: int64x4 gen() 7357 MB/s
May 14 23:48:50.106175 kernel: raid6: int64x2 gen() 6112 MB/s
May 14 23:48:50.129533 kernel: raid6: int64x1 gen() 5062 MB/s
May 14 23:48:50.129552 kernel: raid6: using algorithm neonx4 gen() 15811 MB/s
May 14 23:48:50.153364 kernel: raid6: .... xor() 12492 MB/s, rmw enabled
May 14 23:48:50.153392 kernel: raid6: using neon recovery algorithm
May 14 23:48:50.162181 kernel: xor: measuring software checksum speed
May 14 23:48:50.168899 kernel: 8regs : 19850 MB/sec
May 14 23:48:50.168911 kernel: 32regs : 21596 MB/sec
May 14 23:48:50.176639 kernel: arm64_neon : 25890 MB/sec
May 14 23:48:50.176650 kernel: xor: using function: arm64_neon (25890 MB/sec)
May 14 23:48:50.227196 kernel: Btrfs loaded, zoned=no, fsverity=no
May 14 23:48:50.237470 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
May 14 23:48:50.254314 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 14 23:48:50.278960 systemd-udevd[439]: Using default interface naming scheme 'v255'.
May 14 23:48:50.284695 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 14 23:48:50.319468 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
May 14 23:48:50.342778 dracut-pre-trigger[452]: rd.md=0: removing MD RAID activation
May 14 23:48:50.374229 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
May 14 23:48:50.389405 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
May 14 23:48:50.424626 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
May 14 23:48:50.443376 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
May 14 23:48:50.465406 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
May 14 23:48:50.480365 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
May 14 23:48:50.496308 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
May 14 23:48:50.510088 systemd[1]: Reached target remote-fs.target - Remote File Systems.
May 14 23:48:50.526374 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
May 14 23:48:50.545828 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
May 14 23:48:50.559443 kernel: hv_vmbus: Vmbus version:5.3
May 14 23:48:50.546084 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 14 23:48:50.584719 kernel: hv_vmbus: registering driver hyperv_keyboard
May 14 23:48:50.584745 kernel: hv_vmbus: registering driver hid_hyperv
May 14 23:48:50.584755 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input0
May 14 23:48:50.574181 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
May 14 23:48:50.622214 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/MSFT1000:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input1
May 14 23:48:50.622236 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on
May 14 23:48:50.622371 kernel: pps_core: LinuxPPS API ver. 1 registered
May 14 23:48:50.622381 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
May 14 23:48:50.602528 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 14 23:48:50.602794 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
May 14 23:48:50.659571 kernel: PTP clock support registered
May 14 23:48:50.652173 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
May 14 23:48:50.686342 kernel: hv_utils: Registering HyperV Utility Driver
May 14 23:48:50.686370 kernel: hv_vmbus: registering driver hv_utils
May 14 23:48:50.686379 kernel: hv_utils: Heartbeat IC version 3.0
May 14 23:48:50.686388 kernel: hv_utils: Shutdown IC version 3.2
May 14 23:48:51.075848 kernel: hv_utils: TimeSync IC version 4.0
May 14 23:48:51.075899 kernel: hv_vmbus: registering driver hv_netvsc
May 14 23:48:51.075909 kernel: hv_vmbus: registering driver hv_storvsc
May 14 23:48:51.071169 systemd-resolved[258]: Clock change detected. Flushing caches.
May 14 23:48:51.088446 kernel: scsi host0: storvsc_host_t
May 14 23:48:51.076329 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 14 23:48:51.100604 kernel: scsi host1: storvsc_host_t
May 14 23:48:51.107633 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5
May 14 23:48:51.109347 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 0
May 14 23:48:51.125385 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
May 14 23:48:51.139368 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 14 23:48:51.155271 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 14 23:48:51.155455 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
May 14 23:48:51.174032 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
May 14 23:48:51.190619 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 14 23:48:51.224080 kernel: sr 0:0:0:2: [sr0] scsi-1 drive
May 14 23:48:51.224298 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
May 14 23:48:51.224309 kernel: hv_netvsc 000d3a6d-e69d-000d-3a6d-e69d000d3a6d eth0: VF slot 1 added
May 14 23:48:51.224451 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0
May 14 23:48:51.225527 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 14 23:48:51.247718 kernel: hv_vmbus: registering driver hv_pci
May 14 23:48:51.251599 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
May 14 23:48:51.284688 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB)
May 14 23:48:51.284858 kernel: hv_pci 480fa282-d992-42e2-96f4-626b05bbfc24: PCI VMBus probing: Using version 0x10004
May 14 23:48:51.284955 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks
May 14 23:48:51.302739 kernel: sd 0:0:0:0: [sda] Write Protect is off
May 14 23:48:51.308646 kernel: hv_pci 480fa282-d992-42e2-96f4-626b05bbfc24: PCI host bridge to bus d992:00
May 14 23:48:51.308758 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00
May 14 23:48:51.308848 kernel: pci_bus d992:00: root bus resource [mem 0xfc0000000-0xfc00fffff window]
May 14 23:48:51.308948 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA
May 14 23:48:51.309829 kernel: pci_bus d992:00: No busn resource found for root bus, will use [bus 00-ff]
May 14 23:48:51.546883 kernel: pci d992:00:02.0: [15b3:1018] type 00 class 0x020000
May 14 23:48:51.546979 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
May 14 23:48:51.546990 kernel: pci d992:00:02.0: reg 0x10: [mem 0xfc0000000-0xfc00fffff 64bit pref]
May 14 23:48:51.549368 kernel: pci d992:00:02.0: enabling Extended Tags
May 14 23:48:51.557052 kernel: sd 0:0:0:0: [sda] Attached SCSI disk
May 14 23:48:51.579415 kernel: pci d992:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at d992:00:02.0 (capable of 126.016 Gb/s with 8.0 GT/s PCIe x16 link)
May 14 23:48:51.592940 kernel: pci_bus d992:00: busn_res: [bus 00-ff] end is updated to 00
May 14 23:48:51.593186 kernel: pci d992:00:02.0: BAR 0: assigned [mem 0xfc0000000-0xfc00fffff 64bit pref]
May 14 23:48:51.597869 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 14 23:48:51.648168 kernel: mlx5_core d992:00:02.0: enabling device (0000 -> 0002)
May 14 23:48:51.654353 kernel: mlx5_core d992:00:02.0: firmware version: 16.31.2424
May 14 23:48:51.941020 kernel: hv_netvsc 000d3a6d-e69d-000d-3a6d-e69d000d3a6d eth0: VF registering: eth1
May 14 23:48:51.941236 kernel: mlx5_core d992:00:02.0 eth1: joined to eth0
May 14 23:48:51.950390 kernel: mlx5_core d992:00:02.0: MLX5E: StrdRq(1) RqSz(8) StrdSz(2048) RxCqeCmprss(0 basic)
May 14 23:48:51.962401 kernel: mlx5_core d992:00:02.0 enP55698s1: renamed from eth1
May 14 23:48:52.570628 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_Disk EFI-SYSTEM.
May 14 23:48:52.622284 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_Disk ROOT.
May 14 23:48:52.657551 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by (udev-worker) (501)
May 14 23:48:52.676560 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM.
May 14 23:48:52.716367 kernel: BTRFS: device fsid 369506fd-904a-45c2-a4ab-2d03e7866799 devid 1 transid 44 /dev/sda3 scanned by (udev-worker) (500)
May 14 23:48:52.732777 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_Disk USR-A.
May 14 23:48:52.739831 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_Disk USR-A.
May 14 23:48:52.764569 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
May 14 23:48:52.791009 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
May 14 23:48:52.798363 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
May 14 23:48:53.808440 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
May 14 23:48:53.808498 disk-uuid[606]: The operation has completed successfully.
May 14 23:48:53.875012 systemd[1]: disk-uuid.service: Deactivated successfully.
May 14 23:48:53.875135 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
May 14 23:48:53.929535 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
May 14 23:48:53.943048 sh[692]: Success
May 14 23:48:53.973380 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
May 14 23:48:54.194505 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
May 14 23:48:54.207370 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
May 14 23:48:54.217500 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
May 14 23:48:54.247188 kernel: BTRFS info (device dm-0): first mount of filesystem 369506fd-904a-45c2-a4ab-2d03e7866799
May 14 23:48:54.247245 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
May 14 23:48:54.254108 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
May 14 23:48:54.259013 kernel: BTRFS info (device dm-0): disabling log replay at mount time
May 14 23:48:54.263692 kernel: BTRFS info (device dm-0): using free space tree
May 14 23:48:54.653139 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
May 14 23:48:54.658489 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
May 14 23:48:54.678587 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
May 14 23:48:54.688652 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
May 14 23:48:54.730564 kernel: BTRFS info (device sda6): first mount of filesystem 02f9d4a0-2ee9-4834-b15d-b55399b9ff01
May 14 23:48:54.730621 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
May 14 23:48:54.735505 kernel: BTRFS info (device sda6): using free space tree
May 14 23:48:54.772475 kernel: BTRFS info (device sda6): auto enabling async discard
May 14 23:48:54.784382 kernel: BTRFS info (device sda6): last unmount of filesystem 02f9d4a0-2ee9-4834-b15d-b55399b9ff01
May 14 23:48:54.788710 systemd[1]: Finished ignition-setup.service - Ignition (setup).
May 14 23:48:54.806573 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
May 14 23:48:54.815564 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
May 14 23:48:54.834025 systemd[1]: Starting systemd-networkd.service - Network Configuration...
May 14 23:48:54.876263 systemd-networkd[873]: lo: Link UP
May 14 23:48:54.876273 systemd-networkd[873]: lo: Gained carrier
May 14 23:48:54.881755 systemd-networkd[873]: Enumeration completed
May 14 23:48:54.885440 systemd[1]: Started systemd-networkd.service - Network Configuration.
May 14 23:48:54.885916 systemd-networkd[873]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 14 23:48:54.885920 systemd-networkd[873]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
May 14 23:48:54.892037 systemd[1]: Reached target network.target - Network.
May 14 23:48:54.981358 kernel: mlx5_core d992:00:02.0 enP55698s1: Link up
May 14 23:48:55.061368 kernel: hv_netvsc 000d3a6d-e69d-000d-3a6d-e69d000d3a6d eth0: Data path switched to VF: enP55698s1
May 14 23:48:55.062467 systemd-networkd[873]: enP55698s1: Link UP
May 14 23:48:55.062556 systemd-networkd[873]: eth0: Link UP
May 14 23:48:55.062658 systemd-networkd[873]: eth0: Gained carrier
May 14 23:48:55.062667 systemd-networkd[873]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 14 23:48:55.074870 systemd-networkd[873]: enP55698s1: Gained carrier
May 14 23:48:55.095387 systemd-networkd[873]: eth0: DHCPv4 address 10.200.20.35/24, gateway 10.200.20.1 acquired from 168.63.129.16
May 14 23:48:55.849102 ignition[868]: Ignition 2.20.0
May 14 23:48:55.849115 ignition[868]: Stage: fetch-offline
May 14 23:48:55.853036 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
May 14 23:48:55.849153 ignition[868]: no configs at "/usr/lib/ignition/base.d"
May 14 23:48:55.849161 ignition[868]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
May 14 23:48:55.873521 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
May 14 23:48:55.849267 ignition[868]: parsed url from cmdline: ""
May 14 23:48:55.849271 ignition[868]: no config URL provided
May 14 23:48:55.849275 ignition[868]: reading system config file "/usr/lib/ignition/user.ign"
May 14 23:48:55.849284 ignition[868]: no config at "/usr/lib/ignition/user.ign"
May 14 23:48:55.849289 ignition[868]: failed to fetch config: resource requires networking
May 14 23:48:55.849492 ignition[868]: Ignition finished successfully
May 14 23:48:55.893957 ignition[885]: Ignition 2.20.0
May 14 23:48:55.893964 ignition[885]: Stage: fetch
May 14 23:48:55.894153 ignition[885]: no configs at "/usr/lib/ignition/base.d"
May 14 23:48:55.894162 ignition[885]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
May 14 23:48:55.894254 ignition[885]: parsed url from cmdline: ""
May 14 23:48:55.894257 ignition[885]: no config URL provided
May 14 23:48:55.894261 ignition[885]: reading system config file "/usr/lib/ignition/user.ign"
May 14 23:48:55.894268 ignition[885]: no config at "/usr/lib/ignition/user.ign"
May 14 23:48:55.894298 ignition[885]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1
May 14 23:48:56.017680 ignition[885]: GET result: OK
May 14 23:48:56.018345 ignition[885]: config has been read from IMDS userdata
May 14 23:48:56.018386 ignition[885]: parsing config with SHA512: ad81e79724b34f1f10c249a5a969a005ee7c11870a6a259ce14406aa166424573701ef4cf08a25b683a726013dc7c66b4b9eb425ce7cf09d710896bd1834c56d
May 14 23:48:56.022627 unknown[885]: fetched base config from "system"
May 14 23:48:56.023402 ignition[885]: fetch: fetch complete
May 14 23:48:56.022634 unknown[885]: fetched base config from "system"
May 14 23:48:56.023408 ignition[885]: fetch: fetch passed
May 14 23:48:56.022639 unknown[885]: fetched user config from "azure"
May 14 23:48:56.023483 ignition[885]: Ignition finished successfully
May 14 23:48:56.025353 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
May 14 23:48:56.046659 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
May 14 23:48:56.074755 ignition[891]: Ignition 2.20.0
May 14 23:48:56.074770 ignition[891]: Stage: kargs
May 14 23:48:56.074964 ignition[891]: no configs at "/usr/lib/ignition/base.d"
May 14 23:48:56.082110 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
May 14 23:48:56.074974 ignition[891]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
May 14 23:48:56.075928 ignition[891]: kargs: kargs passed
May 14 23:48:56.075978 ignition[891]: Ignition finished successfully
May 14 23:48:56.111626 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
May 14 23:48:56.127054 ignition[897]: Ignition 2.20.0
May 14 23:48:56.127065 ignition[897]: Stage: disks
May 14 23:48:56.131545 systemd[1]: Finished ignition-disks.service - Ignition (disks).
May 14 23:48:56.127243 ignition[897]: no configs at "/usr/lib/ignition/base.d"
May 14 23:48:56.138724 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
May 14 23:48:56.127253 ignition[897]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
May 14 23:48:56.147433 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
May 14 23:48:56.128160 ignition[897]: disks: disks passed
May 14 23:48:56.152858 systemd-networkd[873]: enP55698s1: Gained IPv6LL
May 14 23:48:56.128208 ignition[897]: Ignition finished successfully
May 14 23:48:56.160449 systemd[1]: Reached target local-fs.target - Local File Systems.
May 14 23:48:56.171027 systemd[1]: Reached target sysinit.target - System Initialization.
May 14 23:48:56.182817 systemd[1]: Reached target basic.target - Basic System.
May 14 23:48:56.215670 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
May 14 23:48:56.301286 systemd-fsck[905]: ROOT: clean, 14/7326000 files, 477710/7359488 blocks
May 14 23:48:56.307660 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
May 14 23:48:56.323592 systemd[1]: Mounting sysroot.mount - /sysroot...
May 14 23:48:56.379578 kernel: EXT4-fs (sda9): mounted filesystem 737cda88-7069-47ce-b2bc-d891099a68fb r/w with ordered data mode. Quota mode: none.
May 14 23:48:56.380294 systemd[1]: Mounted sysroot.mount - /sysroot.
May 14 23:48:56.389295 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
May 14 23:48:56.433432 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
May 14 23:48:56.442594 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
May 14 23:48:56.453279 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
May 14 23:48:56.476667 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/sda6 scanned by mount (916)
May 14 23:48:56.476790 kernel: BTRFS info (device sda6): first mount of filesystem 02f9d4a0-2ee9-4834-b15d-b55399b9ff01
May 14 23:48:56.476804 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
May 14 23:48:56.493278 kernel: BTRFS info (device sda6): using free space tree
May 14 23:48:56.493799 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
May 14 23:48:56.493842 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
May 14 23:48:56.529936 kernel: BTRFS info (device sda6): auto enabling async discard
May 14 23:48:56.514278 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
May 14 23:48:56.543650 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
May 14 23:48:56.556669 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
May 14 23:48:56.725504 systemd-networkd[873]: eth0: Gained IPv6LL
May 14 23:48:57.179722 coreos-metadata[918]: May 14 23:48:57.179 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1
May 14 23:48:57.189891 coreos-metadata[918]: May 14 23:48:57.189 INFO Fetch successful
May 14 23:48:57.189891 coreos-metadata[918]: May 14 23:48:57.189 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1
May 14 23:48:57.207861 coreos-metadata[918]: May 14 23:48:57.207 INFO Fetch successful
May 14 23:48:57.223157 coreos-metadata[918]: May 14 23:48:57.223 INFO wrote hostname ci-4230.1.1-n-76ed3c1841 to /sysroot/etc/hostname
May 14 23:48:57.233494 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
May 14 23:48:57.472207 initrd-setup-root[946]: cut: /sysroot/etc/passwd: No such file or directory
May 14 23:48:57.509568 initrd-setup-root[953]: cut: /sysroot/etc/group: No such file or directory
May 14 23:48:57.519023 initrd-setup-root[960]: cut: /sysroot/etc/shadow: No such file or directory
May 14 23:48:57.527733 initrd-setup-root[967]: cut: /sysroot/etc/gshadow: No such file or directory
May 14 23:48:58.532234 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
May 14 23:48:58.547641 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
May 14 23:48:58.557557 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
May 14 23:48:58.577939 kernel: BTRFS info (device sda6): last unmount of filesystem 02f9d4a0-2ee9-4834-b15d-b55399b9ff01
May 14 23:48:58.579245 systemd[1]: sysroot-oem.mount: Deactivated successfully.
May 14 23:48:58.599314 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
May 14 23:48:58.615283 ignition[1036]: INFO : Ignition 2.20.0
May 14 23:48:58.615283 ignition[1036]: INFO : Stage: mount
May 14 23:48:58.624480 ignition[1036]: INFO : no configs at "/usr/lib/ignition/base.d"
May 14 23:48:58.624480 ignition[1036]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
May 14 23:48:58.624480 ignition[1036]: INFO : mount: mount passed
May 14 23:48:58.624480 ignition[1036]: INFO : Ignition finished successfully
May 14 23:48:58.621127 systemd[1]: Finished ignition-mount.service - Ignition (mount).
May 14 23:48:58.647549 systemd[1]: Starting ignition-files.service - Ignition (files)...
May 14 23:48:58.667644 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
May 14 23:48:58.701772 kernel: BTRFS: device label OEM devid 1 transid 17 /dev/sda6 scanned by mount (1047)
May 14 23:48:58.701846 kernel: BTRFS info (device sda6): first mount of filesystem 02f9d4a0-2ee9-4834-b15d-b55399b9ff01
May 14 23:48:58.707940 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
May 14 23:48:58.711962 kernel: BTRFS info (device sda6): using free space tree
May 14 23:48:58.721367 kernel: BTRFS info (device sda6): auto enabling async discard
May 14 23:48:58.720157 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
May 14 23:48:58.747524 ignition[1064]: INFO : Ignition 2.20.0
May 14 23:48:58.753210 ignition[1064]: INFO : Stage: files
May 14 23:48:58.753210 ignition[1064]: INFO : no configs at "/usr/lib/ignition/base.d"
May 14 23:48:58.753210 ignition[1064]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
May 14 23:48:58.753210 ignition[1064]: DEBUG : files: compiled without relabeling support, skipping
May 14 23:48:58.775585 ignition[1064]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
May 14 23:48:58.775585 ignition[1064]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
May 14 23:48:58.826409 ignition[1064]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
May 14 23:48:58.834122 ignition[1064]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
May 14 23:48:58.834122 ignition[1064]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
May 14 23:48:58.831917 unknown[1064]: wrote ssh authorized keys file for user: core
May 14 23:48:58.853837 ignition[1064]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
May 14 23:48:58.853837 ignition[1064]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1
May 14 23:48:58.927019 ignition[1064]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
May 14 23:48:59.067213 ignition[1064]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
May 14 23:48:59.078199 ignition[1064]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
May 14 23:48:59.078199 ignition[1064]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
May 14 23:48:59.078199 ignition[1064]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
May 14 23:48:59.078199 ignition[1064]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
May 14 23:48:59.078199 ignition[1064]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
May 14 23:48:59.078199 ignition[1064]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
May 14 23:48:59.078199 ignition[1064]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
May 14 23:48:59.078199 ignition[1064]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
May 14 23:48:59.078199 ignition[1064]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
May 14 23:48:59.078199 ignition[1064]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
May 14 23:48:59.078199 ignition[1064]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw"
May 14 23:48:59.078199 ignition[1064]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw"
May 14 23:48:59.078199 ignition[1064]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw"
May 14 23:48:59.078199 ignition[1064]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.31.0-arm64.raw: attempt #1
May 14 23:48:59.576589 ignition[1064]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
May 14 23:49:00.414952 ignition[1064]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw"
May 14 23:49:00.414952 ignition[1064]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
May 14 23:49:00.435749 ignition[1064]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
May 14 23:49:00.435749 ignition[1064]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
May 14 23:49:00.435749 ignition[1064]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
May 14 23:49:00.435749 ignition[1064]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service"
May 14 23:49:00.435749 ignition[1064]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service"
May 14 23:49:00.435749 ignition[1064]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json"
May 14 23:49:00.435749 ignition[1064]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json"
May 14 23:49:00.435749 ignition[1064]: INFO : files: files passed
May 14 23:49:00.435749 ignition[1064]: INFO : Ignition finished successfully
May 14 23:49:00.429233 systemd[1]: Finished ignition-files.service - Ignition (files).
May 14 23:49:00.470626 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
May 14 23:49:00.485893 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
May 14 23:49:00.509758 systemd[1]: ignition-quench.service: Deactivated successfully.
May 14 23:49:00.575042 initrd-setup-root-after-ignition[1091]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
May 14 23:49:00.575042 initrd-setup-root-after-ignition[1091]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
May 14 23:49:00.509858 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
May 14 23:49:00.607142 initrd-setup-root-after-ignition[1095]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
May 14 23:49:00.537704 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
May 14 23:49:00.545002 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
May 14 23:49:00.575595 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
May 14 23:49:00.625782 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
May 14 23:49:00.625987 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
May 14 23:49:00.636002 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
May 14 23:49:00.647795 systemd[1]: Reached target initrd.target - Initrd Default Target.
May 14 23:49:00.658654 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
May 14 23:49:00.676526 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
May 14 23:49:00.705684 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
May 14 23:49:00.729659 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
May 14 23:49:00.746112 systemd[1]: initrd-cleanup.service: Deactivated successfully.
May 14 23:49:00.746396 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
May 14 23:49:00.758129 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
May 14 23:49:00.770995 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
May 14 23:49:00.784071 systemd[1]: Stopped target timers.target - Timer Units.
May 14 23:49:00.795796 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
May 14 23:49:00.795871 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
May 14 23:49:00.812170 systemd[1]: Stopped target initrd.target - Initrd Default Target.
May 14 23:49:00.824286 systemd[1]: Stopped target basic.target - Basic System.
May 14 23:49:00.836885 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
May 14 23:49:00.848118 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
May 14 23:49:00.860859 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
May 14 23:49:00.873730 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
May 14 23:49:00.887487 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
May 14 23:49:00.901253 systemd[1]: Stopped target sysinit.target - System Initialization.
May 14 23:49:00.913907 systemd[1]: Stopped target local-fs.target - Local File Systems.
May 14 23:49:00.925226 systemd[1]: Stopped target swap.target - Swaps.
May 14 23:49:00.935129 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
May 14 23:49:00.935212 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
May 14 23:49:00.950965 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
May 14 23:49:00.963545 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 14 23:49:00.976017 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
May 14 23:49:00.976066 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 14 23:49:00.988799 systemd[1]: dracut-initqueue.service: Deactivated successfully.
May 14 23:49:00.988871 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
May 14 23:49:01.006844 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
May 14 23:49:01.006898 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
May 14 23:49:01.013591 systemd[1]: ignition-files.service: Deactivated successfully.
May 14 23:49:01.013642 systemd[1]: Stopped ignition-files.service - Ignition (files).
May 14 23:49:01.023949 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
May 14 23:49:01.024000 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
May 14 23:49:01.086552 ignition[1117]: INFO : Ignition 2.20.0
May 14 23:49:01.086552 ignition[1117]: INFO : Stage: umount
May 14 23:49:01.086552 ignition[1117]: INFO : no configs at "/usr/lib/ignition/base.d"
May 14 23:49:01.086552 ignition[1117]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
May 14 23:49:01.086552 ignition[1117]: INFO : umount: umount passed
May 14 23:49:01.086552 ignition[1117]: INFO : Ignition finished successfully
May 14 23:49:01.054545 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
May 14 23:49:01.072148 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
May 14 23:49:01.072231 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
May 14 23:49:01.106476 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
May 14 23:49:01.117388 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
May 14 23:49:01.117476 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
May 14 23:49:01.130383 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
May 14 23:49:01.130454 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
May 14 23:49:01.154462 systemd[1]: sysroot-boot.mount: Deactivated successfully.
May 14 23:49:01.155139 systemd[1]: ignition-mount.service: Deactivated successfully.
May 14 23:49:01.155252 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
May 14 23:49:01.165493 systemd[1]: ignition-disks.service: Deactivated successfully.
May 14 23:49:01.165595 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
May 14 23:49:01.178192 systemd[1]: ignition-kargs.service: Deactivated successfully.
May 14 23:49:01.178262 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
May 14 23:49:01.189298 systemd[1]: ignition-fetch.service: Deactivated successfully.
May 14 23:49:01.189365 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
May 14 23:49:01.199820 systemd[1]: Stopped target network.target - Network.
May 14 23:49:01.210385 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
May 14 23:49:01.210451 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
May 14 23:49:01.221281 systemd[1]: Stopped target paths.target - Path Units.
May 14 23:49:01.231330 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
May 14 23:49:01.235366 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 14 23:49:01.243214 systemd[1]: Stopped target slices.target - Slice Units.
May 14 23:49:01.253154 systemd[1]: Stopped target sockets.target - Socket Units.
May 14 23:49:01.263829 systemd[1]: iscsid.socket: Deactivated successfully.
May 14 23:49:01.263884 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
May 14 23:49:01.274165 systemd[1]: iscsiuio.socket: Deactivated successfully.
May 14 23:49:01.274203 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
May 14 23:49:01.284687 systemd[1]: ignition-setup.service: Deactivated successfully.
May 14 23:49:01.284756 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
May 14 23:49:01.295107 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
May 14 23:49:01.295160 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
May 14 23:49:01.305754 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
May 14 23:49:01.315404 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
May 14 23:49:01.327795 systemd[1]: sysroot-boot.service: Deactivated successfully.
May 14 23:49:01.327905 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
May 14 23:49:01.339479 systemd[1]: initrd-setup-root.service: Deactivated successfully.
May 14 23:49:01.339600 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
May 14 23:49:01.351900 systemd[1]: systemd-resolved.service: Deactivated successfully.
May 14 23:49:01.541073 kernel: mlx5_core d992:00:02.0: poll_health:835:(pid 218): device's health compromised - reached miss count
May 14 23:49:01.352034 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
May 14 23:49:01.373420 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
May 14 23:49:01.373685 systemd[1]: systemd-networkd.service: Deactivated successfully.
May 14 23:49:01.375377 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
May 14 23:49:01.386104 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
May 14 23:49:01.387192 systemd[1]: systemd-networkd.socket: Deactivated successfully.
May 14 23:49:01.387262 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
May 14 23:49:01.616472 kernel: hv_netvsc 000d3a6d-e69d-000d-3a6d-e69d000d3a6d eth0: Data path switched from VF: enP55698s1
May 14 23:49:01.413536 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
May 14 23:49:01.423215 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
May 14 23:49:01.423296 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
May 14 23:49:01.435025 systemd[1]: systemd-sysctl.service: Deactivated successfully.
May 14 23:49:01.435084 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
May 14 23:49:01.451450 systemd[1]: systemd-modules-load.service: Deactivated successfully.
May 14 23:49:01.451505 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
May 14 23:49:01.457593 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
May 14 23:49:01.457650 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
May 14 23:49:01.475994 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 14 23:49:01.485929 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
May 14 23:49:01.486001 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
May 14 23:49:01.503619 systemd[1]: systemd-udevd.service: Deactivated successfully.
May 14 23:49:01.503857 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 14 23:49:01.516074 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
May 14 23:49:01.516137 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
May 14 23:49:01.535901 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
May 14 23:49:01.535938 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
May 14 23:49:01.546950 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
May 14 23:49:01.547022 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
May 14 23:49:01.564730 systemd[1]: dracut-cmdline.service: Deactivated successfully.
May 14 23:49:01.564782 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
May 14 23:49:01.575641 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
May 14 23:49:01.575703 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 14 23:49:01.625507 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
May 14 23:49:01.631915 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
May 14 23:49:01.631993 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 14 23:49:01.657819 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 14 23:49:01.657895 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
May 14 23:49:01.862029 systemd-journald[218]: Received SIGTERM from PID 1 (systemd).
May 14 23:49:01.670273 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
May 14 23:49:01.670332 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
May 14 23:49:01.670658 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
May 14 23:49:01.670758 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
May 14 23:49:01.682432 systemd[1]: network-cleanup.service: Deactivated successfully.
May 14 23:49:01.682523 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
May 14 23:49:01.693422 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
May 14 23:49:01.728600 systemd[1]: Starting initrd-switch-root.service - Switch Root...
May 14 23:49:01.772986 systemd[1]: Switching root.
May 14 23:49:01.913183 systemd-journald[218]: Journal stopped
May 14 23:49:11.830111 kernel: SELinux: policy capability network_peer_controls=1
May 14 23:49:11.830136 kernel: SELinux: policy capability open_perms=1
May 14 23:49:11.830146 kernel: SELinux: policy capability extended_socket_class=1
May 14 23:49:11.830154 kernel: SELinux: policy capability always_check_network=0
May 14 23:49:11.830163 kernel: SELinux: policy capability cgroup_seclabel=1
May 14 23:49:11.830171 kernel: SELinux: policy capability nnp_nosuid_transition=1
May 14 23:49:11.830180 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
May 14 23:49:11.830188 kernel: SELinux: policy capability ioctl_skip_cloexec=0
May 14 23:49:11.830196 kernel: audit: type=1403 audit(1747266547.735:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
May 14 23:49:11.830206 systemd[1]: Successfully loaded SELinux policy in 116.845ms.
May 14 23:49:11.830217 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 11.063ms.
May 14 23:49:11.830227 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
May 14 23:49:11.830236 systemd[1]: Detected virtualization microsoft.
May 14 23:49:11.830244 systemd[1]: Detected architecture arm64.
May 14 23:49:11.830253 systemd[1]: Detected first boot.
May 14 23:49:11.830265 systemd[1]: Hostname set to .
May 14 23:49:11.830275 systemd[1]: Initializing machine ID from random generator.
May 14 23:49:11.830284 zram_generator::config[1159]: No configuration found.
May 14 23:49:11.830293 kernel: NET: Registered PF_VSOCK protocol family
May 14 23:49:11.830302 systemd[1]: Populated /etc with preset unit settings.
May 14 23:49:11.830312 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
May 14 23:49:11.830321 systemd[1]: initrd-switch-root.service: Deactivated successfully.
May 14 23:49:11.830332 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
May 14 23:49:11.830449 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
May 14 23:49:11.830462 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
May 14 23:49:11.830472 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
May 14 23:49:11.830482 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
May 14 23:49:11.830491 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
May 14 23:49:11.830500 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
May 14 23:49:11.830513 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
May 14 23:49:11.830523 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
May 14 23:49:11.830532 systemd[1]: Created slice user.slice - User and Session Slice.
May 14 23:49:11.830541 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 14 23:49:11.830550 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 14 23:49:11.830559 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
May 14 23:49:11.830568 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
May 14 23:49:11.830578 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
May 14 23:49:11.830589 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
May 14 23:49:11.830599 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0...
May 14 23:49:11.830609 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 14 23:49:11.830620 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
May 14 23:49:11.830630 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
May 14 23:49:11.830640 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
May 14 23:49:11.830649 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
May 14 23:49:11.830659 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
May 14 23:49:11.830670 systemd[1]: Reached target remote-fs.target - Remote File Systems.
May 14 23:49:11.830680 systemd[1]: Reached target slices.target - Slice Units.
May 14 23:49:11.830689 systemd[1]: Reached target swap.target - Swaps.
May 14 23:49:11.830700 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
May 14 23:49:11.830709 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
May 14 23:49:11.830719 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
May 14 23:49:11.830730 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
May 14 23:49:11.830740 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
May 14 23:49:11.830749 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
May 14 23:49:11.830759 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
May 14 23:49:11.830768 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
May 14 23:49:11.830778 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
May 14 23:49:11.830788 systemd[1]: Mounting media.mount - External Media Directory...
May 14 23:49:11.830799 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
May 14 23:49:11.830809 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
May 14 23:49:11.830818 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
May 14 23:49:11.830828 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
May 14 23:49:11.830838 systemd[1]: Reached target machines.target - Containers.
May 14 23:49:11.830847 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
May 14 23:49:11.830857 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 14 23:49:11.830867 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
May 14 23:49:11.830878 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
May 14 23:49:11.830888 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 14 23:49:11.830897 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
May 14 23:49:11.830908 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 14 23:49:11.830917 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
May 14 23:49:11.830927 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 14 23:49:11.830937 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
May 14 23:49:11.830947 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
May 14 23:49:11.830958 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
May 14 23:49:11.830967 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
May 14 23:49:11.830976 systemd[1]: Stopped systemd-fsck-usr.service.
May 14 23:49:11.830987 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
May 14 23:49:11.830996 kernel: fuse: init (API version 7.39)
May 14 23:49:11.831005 systemd[1]: Starting systemd-journald.service - Journal Service...
May 14 23:49:11.831014 kernel: loop: module loaded
May 14 23:49:11.831022 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
May 14 23:49:11.831032 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
May 14 23:49:11.831043 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
May 14 23:49:11.831053 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
May 14 23:49:11.831062 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
May 14 23:49:11.831072 systemd[1]: verity-setup.service: Deactivated successfully.
May 14 23:49:11.831081 systemd[1]: Stopped verity-setup.service.
May 14 23:49:11.831091 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
May 14 23:49:11.831100 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
May 14 23:49:11.831111 systemd[1]: Mounted media.mount - External Media Directory.
May 14 23:49:11.831122 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
May 14 23:49:11.831131 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
May 14 23:49:11.831141 kernel: ACPI: bus type drm_connector registered
May 14 23:49:11.831150 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
May 14 23:49:11.831184 systemd-journald[1249]: Collecting audit messages is disabled.
May 14 23:49:11.831207 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
May 14 23:49:11.831218 systemd-journald[1249]: Journal started
May 14 23:49:11.831238 systemd-journald[1249]: Runtime Journal (/run/log/journal/e137f9849827447984e717955eac42d8) is 8M, max 78.5M, 70.5M free.
May 14 23:49:10.751051 systemd[1]: Queued start job for default target multi-user.target.
May 14 23:49:10.763463 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6.
May 14 23:49:10.763969 systemd[1]: systemd-journald.service: Deactivated successfully.
May 14 23:49:10.764333 systemd[1]: systemd-journald.service: Consumed 3.213s CPU time.
May 14 23:49:11.844179 systemd[1]: Started systemd-journald.service - Journal Service.
May 14 23:49:11.845113 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
May 14 23:49:11.853616 systemd[1]: modprobe@configfs.service: Deactivated successfully.
May 14 23:49:11.854404 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
May 14 23:49:11.861496 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 14 23:49:11.861704 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 14 23:49:11.868296 systemd[1]: modprobe@drm.service: Deactivated successfully.
May 14 23:49:11.868558 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
May 14 23:49:11.875213 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 14 23:49:11.876412 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 14 23:49:11.883418 systemd[1]: modprobe@fuse.service: Deactivated successfully.
May 14 23:49:11.883595 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
May 14 23:49:11.890739 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 14 23:49:11.890909 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 14 23:49:11.898787 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
May 14 23:49:11.905986 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
May 14 23:49:11.913877 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
May 14 23:49:11.922319 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
May 14 23:49:11.930696 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
May 14 23:49:11.951010 systemd[1]: Reached target network-pre.target - Preparation for Network.
May 14 23:49:11.973534 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
May 14 23:49:11.983125 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
May 14 23:49:11.991741 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
May 14 23:49:11.991791 systemd[1]: Reached target local-fs.target - Local File Systems.
May 14 23:49:11.999892 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
May 14 23:49:12.008265 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
May 14 23:49:12.016983 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
May 14 23:49:12.022862 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 14 23:49:12.047506 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
May 14 23:49:12.055840 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
May 14 23:49:12.064706 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
May 14 23:49:12.066105 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
May 14 23:49:12.072247 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
May 14 23:49:12.073479 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
May 14 23:49:12.082612 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
May 14 23:49:12.090671 systemd[1]: Starting systemd-sysusers.service - Create System Users...
May 14 23:49:12.100579 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
May 14 23:49:12.112750 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
May 14 23:49:12.123501 systemd-journald[1249]: Time spent on flushing to /var/log/journal/e137f9849827447984e717955eac42d8 is 15.560ms for 914 entries.
May 14 23:49:12.123501 systemd-journald[1249]: System Journal (/var/log/journal/e137f9849827447984e717955eac42d8) is 8M, max 2.6G, 2.6G free.
May 14 23:49:12.798021 systemd-journald[1249]: Received client request to flush runtime journal.
May 14 23:49:12.798100 kernel: loop0: detected capacity change from 0 to 113512
May 14 23:49:12.123608 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
May 14 23:49:12.139423 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
May 14 23:49:12.146849 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
May 14 23:49:12.159365 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
May 14 23:49:12.179042 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
May 14 23:49:12.187514 udevadm[1302]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
May 14 23:49:12.657468 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
May 14 23:49:12.799814 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
May 14 23:49:14.105769 systemd[1]: Finished systemd-sysusers.service - Create System Users.
May 14 23:49:14.122601 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
May 14 23:49:14.404229 systemd-tmpfiles[1315]: ACLs are not supported, ignoring.
May 14 23:49:14.404245 systemd-tmpfiles[1315]: ACLs are not supported, ignoring.
May 14 23:49:14.409319 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 14 23:49:14.615375 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
May 14 23:49:14.707366 kernel: loop1: detected capacity change from 0 to 28720
May 14 23:49:15.049389 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
May 14 23:49:15.051020 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
May 14 23:49:15.344368 kernel: loop2: detected capacity change from 0 to 189592
May 14 23:49:15.452706 kernel: loop3: detected capacity change from 0 to 123192
May 14 23:49:16.033315 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
May 14 23:49:16.045528 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 14 23:49:16.081349 systemd-udevd[1324]: Using default interface naming scheme 'v255'.
May 14 23:49:16.452907 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 14 23:49:16.474805 systemd[1]: Starting systemd-networkd.service - Network Configuration...
May 14 23:49:16.518646 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped.
May 14 23:49:16.705229 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
May 14 23:49:16.760793 kernel: hv_vmbus: registering driver hv_balloon
May 14 23:49:16.760907 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0
May 14 23:49:16.765531 kernel: hv_balloon: Memory hot add disabled on ARM64
May 14 23:49:16.825150 kernel: mousedev: PS/2 mouse device common for all mice
May 14 23:49:16.818503 systemd[1]: Started systemd-userdbd.service - User Database Manager.
May 14 23:49:16.834671 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 14 23:49:16.864413 kernel: hv_vmbus: registering driver hyperv_fb
May 14 23:49:16.875943 kernel: hyperv_fb: Synthvid Version major 3, minor 5
May 14 23:49:16.876062 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608
May 14 23:49:16.881553 kernel: Console: switching to colour dummy device 80x25
May 14 23:49:16.883367 kernel: Console: switching to colour frame buffer device 128x48
May 14 23:49:16.892691 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 14 23:49:16.892892 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
May 14 23:49:16.902752 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
May 14 23:49:16.912600 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 14 23:49:16.985412 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 44 scanned by (udev-worker) (1330)
May 14 23:49:17.055117 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM.
May 14 23:49:17.068519 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
May 14 23:49:17.121091 systemd-networkd[1339]: lo: Link UP
May 14 23:49:17.121100 systemd-networkd[1339]: lo: Gained carrier
May 14 23:49:17.123121 systemd-networkd[1339]: Enumeration completed
May 14 23:49:17.123292 systemd[1]: Started systemd-networkd.service - Network Configuration.
May 14 23:49:17.123519 systemd-networkd[1339]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 14 23:49:17.123522 systemd-networkd[1339]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
May 14 23:49:17.136668 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
May 14 23:49:17.149422 kernel: loop4: detected capacity change from 0 to 113512
May 14 23:49:17.155633 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
May 14 23:49:17.161425 kernel: loop5: detected capacity change from 0 to 28720
May 14 23:49:17.187383 kernel: loop6: detected capacity change from 0 to 189592
May 14 23:49:17.205394 kernel: loop7: detected capacity change from 0 to 123192
May 14 23:49:17.212462 (sd-merge)[1435]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-azure'.
May 14 23:49:17.213234 (sd-merge)[1435]: Merged extensions into '/usr'.
May 14 23:49:17.219359 kernel: mlx5_core d992:00:02.0 enP55698s1: Link up
May 14 23:49:17.222453 systemd[1]: Reload requested from client PID 1299 ('systemd-sysext') (unit systemd-sysext.service)...
May 14 23:49:17.222645 systemd[1]: Reloading...
May 14 23:49:17.271690 kernel: hv_netvsc 000d3a6d-e69d-000d-3a6d-e69d000d3a6d eth0: Data path switched to VF: enP55698s1
May 14 23:49:17.277497 systemd-networkd[1339]: enP55698s1: Link UP
May 14 23:49:17.277686 systemd-networkd[1339]: eth0: Link UP
May 14 23:49:17.277690 systemd-networkd[1339]: eth0: Gained carrier
May 14 23:49:17.277711 systemd-networkd[1339]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 14 23:49:17.284748 systemd-networkd[1339]: enP55698s1: Gained carrier
May 14 23:49:17.312662 systemd-networkd[1339]: eth0: DHCPv4 address 10.200.20.35/24, gateway 10.200.20.1 acquired from 168.63.129.16
May 14 23:49:17.344408 zram_generator::config[1493]: No configuration found.
May 14 23:49:17.455717 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 14 23:49:17.560549 systemd[1]: Reloading finished in 337 ms.
May 14 23:49:17.576591 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
May 14 23:49:17.585803 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
May 14 23:49:17.597039 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 14 23:49:17.605366 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
May 14 23:49:17.615565 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
May 14 23:49:17.634826 systemd[1]: Starting ensure-sysext.service...
May 14 23:49:17.643697 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
May 14 23:49:17.654070 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
May 14 23:49:17.672505 systemd[1]: Reload requested from client PID 1537 ('systemctl') (unit ensure-sysext.service)...
May 14 23:49:17.672524 systemd[1]: Reloading...
May 14 23:49:17.677703 systemd-tmpfiles[1539]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
May 14 23:49:17.678303 systemd-tmpfiles[1539]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
May 14 23:49:17.679018 systemd-tmpfiles[1539]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
May 14 23:49:17.679221 systemd-tmpfiles[1539]: ACLs are not supported, ignoring.
May 14 23:49:17.679265 systemd-tmpfiles[1539]: ACLs are not supported, ignoring.
May 14 23:49:17.738410 lvm[1538]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
May 14 23:49:17.744368 zram_generator::config[1569]: No configuration found.
May 14 23:49:17.876050 systemd-tmpfiles[1539]: Detected autofs mount point /boot during canonicalization of boot.
May 14 23:49:17.876257 systemd-tmpfiles[1539]: Skipping /boot
May 14 23:49:17.876671 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 14 23:49:17.886081 systemd-tmpfiles[1539]: Detected autofs mount point /boot during canonicalization of boot.
May 14 23:49:17.886249 systemd-tmpfiles[1539]: Skipping /boot
May 14 23:49:17.974857 systemd[1]: Reloading finished in 302 ms.
May 14 23:49:18.001377 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
May 14 23:49:18.009637 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
May 14 23:49:18.025422 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
May 14 23:49:18.037631 systemd[1]: Starting audit-rules.service - Load Audit Rules...
May 14 23:49:18.044390 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
May 14 23:49:18.053713 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
May 14 23:49:18.066307 lvm[1632]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
May 14 23:49:18.070471 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
May 14 23:49:18.086686 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
May 14 23:49:18.099611 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
May 14 23:49:18.109848 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
May 14 23:49:18.121605 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 14 23:49:18.130086 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 14 23:49:18.145206 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 14 23:49:18.160926 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 14 23:49:18.169959 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 14 23:49:18.170549 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
May 14 23:49:18.175474 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 14 23:49:18.176008 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 14 23:49:18.183766 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 14 23:49:18.185384 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 14 23:49:18.194282 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 14 23:49:18.194645 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 14 23:49:18.204409 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
May 14 23:49:18.222015 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 14 23:49:18.226609 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 14 23:49:18.234643 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... May 14 23:49:18.242745 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 14 23:49:18.253635 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 14 23:49:18.261791 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 14 23:49:18.261951 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). May 14 23:49:18.262094 systemd[1]: Reached target time-set.target - System Time Set. May 14 23:49:18.270086 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 14 23:49:18.270349 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 14 23:49:18.278469 systemd[1]: modprobe@drm.service: Deactivated successfully. May 14 23:49:18.279694 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. May 14 23:49:18.289110 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. May 14 23:49:18.299291 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 14 23:49:18.299672 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 14 23:49:18.308817 systemd[1]: modprobe@loop.service: Deactivated successfully. May 14 23:49:18.309825 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 14 23:49:18.320072 systemd[1]: Finished ensure-sysext.service. May 14 23:49:18.329606 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). 
May 14 23:49:18.329691 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. May 14 23:49:18.347635 systemd-resolved[1635]: Positive Trust Anchors: May 14 23:49:18.347656 systemd-resolved[1635]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d May 14 23:49:18.347688 systemd-resolved[1635]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test May 14 23:49:18.358492 systemd-networkd[1339]: eth0: Gained IPv6LL May 14 23:49:18.362595 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. May 14 23:49:18.380220 systemd-resolved[1635]: Using system hostname 'ci-4230.1.1-n-76ed3c1841'. May 14 23:49:18.382099 systemd[1]: Started systemd-resolved.service - Network Name Resolution. May 14 23:49:18.388749 systemd[1]: Reached target network.target - Network. May 14 23:49:18.392295 augenrules[1675]: No rules May 14 23:49:18.394381 systemd[1]: Reached target network-online.target - Network is Online. May 14 23:49:18.400809 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. May 14 23:49:18.408212 systemd[1]: audit-rules.service: Deactivated successfully. May 14 23:49:18.410438 systemd[1]: Finished audit-rules.service - Load Audit Rules. May 14 23:49:18.739546 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. 
May 14 23:49:18.747456 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). May 14 23:49:19.125529 systemd-networkd[1339]: enP55698s1: Gained IPv6LL May 14 23:49:21.814794 ldconfig[1294]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. May 14 23:49:21.832973 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. May 14 23:49:21.847536 systemd[1]: Starting systemd-update-done.service - Update is Completed... May 14 23:49:21.862448 systemd[1]: Finished systemd-update-done.service - Update is Completed. May 14 23:49:21.869208 systemd[1]: Reached target sysinit.target - System Initialization. May 14 23:49:21.875682 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. May 14 23:49:21.882923 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. May 14 23:49:21.891827 systemd[1]: Started logrotate.timer - Daily rotation of log files. May 14 23:49:21.898524 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. May 14 23:49:21.906579 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. May 14 23:49:21.914126 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). May 14 23:49:21.914168 systemd[1]: Reached target paths.target - Path Units. May 14 23:49:21.919238 systemd[1]: Reached target timers.target - Timer Units. May 14 23:49:21.925691 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. May 14 23:49:21.933567 systemd[1]: Starting docker.socket - Docker Socket for the API... 
May 14 23:49:21.942866 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). May 14 23:49:21.951390 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). May 14 23:49:21.959397 systemd[1]: Reached target ssh-access.target - SSH Access Available. May 14 23:49:21.976397 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. May 14 23:49:21.982774 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. May 14 23:49:21.992030 systemd[1]: Listening on docker.socket - Docker Socket for the API. May 14 23:49:21.999036 systemd[1]: Reached target sockets.target - Socket Units. May 14 23:49:22.005488 systemd[1]: Reached target basic.target - Basic System. May 14 23:49:22.011074 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. May 14 23:49:22.011108 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. May 14 23:49:22.025491 systemd[1]: Starting chronyd.service - NTP client/server... May 14 23:49:22.034555 systemd[1]: Starting containerd.service - containerd container runtime... May 14 23:49:22.056569 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... May 14 23:49:22.065571 (chronyd)[1687]: chronyd.service: Referenced but unset environment variable evaluates to an empty string: OPTIONS May 14 23:49:22.068249 systemd[1]: Starting dbus.service - D-Bus System Message Bus... May 14 23:49:22.074875 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... May 14 23:49:22.083612 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... 
May 14 23:49:22.084943 jq[1694]: false May 14 23:49:22.089545 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). May 14 23:49:22.089595 systemd[1]: hv_fcopy_daemon.service - Hyper-V FCOPY daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_fcopy). May 14 23:49:22.097003 systemd[1]: Started hv_kvp_daemon.service - Hyper-V KVP daemon. May 14 23:49:22.103516 systemd[1]: hv_vss_daemon.service - Hyper-V VSS daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_vss). May 14 23:49:22.105524 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 14 23:49:22.116092 KVP[1696]: KVP starting; pid is:1696 May 14 23:49:22.121656 KVP[1696]: KVP LIC Version: 3.1 May 14 23:49:22.122359 kernel: hv_utils: KVP IC version 4.0 May 14 23:49:22.123733 chronyd[1700]: chronyd version 4.6.1 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER -SIGND +ASYNCDNS +NTS +SECHASH +IPV6 -DEBUG) May 14 23:49:22.130229 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... May 14 23:49:22.137703 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... May 14 23:49:22.147105 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... May 14 23:49:22.155623 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... May 14 23:49:22.168196 chronyd[1700]: Timezone right/UTC failed leap second check, ignoring May 14 23:49:22.169029 chronyd[1700]: Loaded seccomp filter (level 2) May 14 23:49:22.170541 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... 
May 14 23:49:22.185258 extend-filesystems[1695]: Found loop4 May 14 23:49:22.192763 extend-filesystems[1695]: Found loop5 May 14 23:49:22.192763 extend-filesystems[1695]: Found loop6 May 14 23:49:22.192763 extend-filesystems[1695]: Found loop7 May 14 23:49:22.192763 extend-filesystems[1695]: Found sda May 14 23:49:22.192763 extend-filesystems[1695]: Found sda1 May 14 23:49:22.192763 extend-filesystems[1695]: Found sda2 May 14 23:49:22.192763 extend-filesystems[1695]: Found sda3 May 14 23:49:22.192763 extend-filesystems[1695]: Found usr May 14 23:49:22.192763 extend-filesystems[1695]: Found sda4 May 14 23:49:22.192763 extend-filesystems[1695]: Found sda6 May 14 23:49:22.192763 extend-filesystems[1695]: Found sda7 May 14 23:49:22.192763 extend-filesystems[1695]: Found sda9 May 14 23:49:22.192763 extend-filesystems[1695]: Checking size of /dev/sda9 May 14 23:49:22.192759 systemd[1]: Starting systemd-logind.service - User Login Management... May 14 23:49:22.381238 coreos-metadata[1689]: May 14 23:49:22.345 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 May 14 23:49:22.381238 coreos-metadata[1689]: May 14 23:49:22.345 INFO Fetch successful May 14 23:49:22.381238 coreos-metadata[1689]: May 14 23:49:22.345 INFO Fetching http://168.63.129.16/machine/?comp=goalstate: Attempt #1 May 14 23:49:22.381238 coreos-metadata[1689]: May 14 23:49:22.358 INFO Fetch successful May 14 23:49:22.381238 coreos-metadata[1689]: May 14 23:49:22.358 INFO Fetching http://168.63.129.16/machine/de1b214d-8511-42cc-acc8-b64054da8b3e/d7cda257%2Da51f%2D47e5%2D8d82%2Dffd5f98717e8.%5Fci%2D4230.1.1%2Dn%2D76ed3c1841?comp=config&type=sharedConfig&incarnation=1: Attempt #1 May 14 23:49:22.381238 coreos-metadata[1689]: May 14 23:49:22.358 INFO Fetch successful May 14 23:49:22.381238 coreos-metadata[1689]: May 14 23:49:22.358 INFO Fetching http://169.254.169.254/metadata/instance/compute/vmSize?api-version=2017-08-01&format=text: Attempt #1 May 14 23:49:22.381238 coreos-metadata[1689]: May 14 
23:49:22.376 INFO Fetch successful May 14 23:49:22.381677 extend-filesystems[1695]: Old size kept for /dev/sda9 May 14 23:49:22.381677 extend-filesystems[1695]: Found sr0 May 14 23:49:22.242771 dbus-daemon[1693]: [system] SELinux support is enabled May 14 23:49:22.204780 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). May 14 23:49:22.207474 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. May 14 23:49:22.476542 update_engine[1715]: I20250514 23:49:22.351178 1715 main.cc:92] Flatcar Update Engine starting May 14 23:49:22.476542 update_engine[1715]: I20250514 23:49:22.353658 1715 update_check_scheduler.cc:74] Next update check in 3m14s May 14 23:49:22.208992 systemd[1]: Starting update-engine.service - Update Engine... May 14 23:49:22.488603 jq[1718]: true May 14 23:49:22.231503 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... May 14 23:49:22.255406 systemd[1]: Started dbus.service - D-Bus System Message Bus. May 14 23:49:22.267135 systemd[1]: Started chronyd.service - NTP client/server. May 14 23:49:22.302494 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. May 14 23:49:22.489158 jq[1736]: true May 14 23:49:22.307189 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. May 14 23:49:22.308737 systemd[1]: motdgen.service: Deactivated successfully. May 14 23:49:22.497452 dbus-daemon[1693]: [system] Successfully activated service 'org.freedesktop.systemd1' May 14 23:49:22.497870 tar[1730]: linux-arm64/helm May 14 23:49:22.308941 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. May 14 23:49:22.321183 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. 
May 14 23:49:22.339855 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. May 14 23:49:22.340052 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. May 14 23:49:22.380821 systemd[1]: extend-filesystems.service: Deactivated successfully. May 14 23:49:22.386249 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. May 14 23:49:22.415819 systemd-logind[1712]: New seat seat0. May 14 23:49:22.419270 systemd-logind[1712]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard) May 14 23:49:22.425586 systemd[1]: Started systemd-logind.service - User Login Management. May 14 23:49:22.473569 (ntainerd)[1737]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR May 14 23:49:22.496138 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). May 14 23:49:22.496194 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. May 14 23:49:22.516627 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). May 14 23:49:22.516652 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. May 14 23:49:22.527574 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. May 14 23:49:22.538545 systemd[1]: Started update-engine.service - Update Engine. May 14 23:49:22.546778 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. May 14 23:49:22.553693 systemd[1]: Started locksmithd.service - Cluster reboot manager. 
May 14 23:49:22.682466 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 44 scanned by (udev-worker) (1739) May 14 23:49:22.689983 bash[1784]: Updated "/home/core/.ssh/authorized_keys" May 14 23:49:22.695811 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. May 14 23:49:22.712882 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. May 14 23:49:22.944528 sshd_keygen[1716]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 May 14 23:49:22.956532 locksmithd[1770]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" May 14 23:49:22.968317 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. May 14 23:49:22.987822 systemd[1]: Starting issuegen.service - Generate /run/issue... May 14 23:49:23.000600 systemd[1]: Starting waagent.service - Microsoft Azure Linux Agent... May 14 23:49:23.022619 systemd[1]: issuegen.service: Deactivated successfully. May 14 23:49:23.023221 systemd[1]: Finished issuegen.service - Generate /run/issue. May 14 23:49:23.040714 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... May 14 23:49:23.075717 systemd[1]: Started waagent.service - Microsoft Azure Linux Agent. May 14 23:49:23.099585 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. May 14 23:49:23.120801 systemd[1]: Started getty@tty1.service - Getty on tty1. May 14 23:49:23.142906 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. May 14 23:49:23.152172 systemd[1]: Reached target getty.target - Login Prompts. May 14 23:49:23.157857 containerd[1737]: time="2025-05-14T23:49:23.157739100Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23 May 14 23:49:23.229482 containerd[1737]: time="2025-05-14T23:49:23.229061460Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." 
type=io.containerd.snapshotter.v1 May 14 23:49:23.231872 containerd[1737]: time="2025-05-14T23:49:23.231737300Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.89-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 May 14 23:49:23.231872 containerd[1737]: time="2025-05-14T23:49:23.231788260Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 May 14 23:49:23.231872 containerd[1737]: time="2025-05-14T23:49:23.231807820Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 May 14 23:49:23.232431 containerd[1737]: time="2025-05-14T23:49:23.231986740Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 May 14 23:49:23.232431 containerd[1737]: time="2025-05-14T23:49:23.232009140Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 May 14 23:49:23.232431 containerd[1737]: time="2025-05-14T23:49:23.232071620Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 May 14 23:49:23.232431 containerd[1737]: time="2025-05-14T23:49:23.232083900Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 May 14 23:49:23.232431 containerd[1737]: time="2025-05-14T23:49:23.232292020Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 May 14 23:49:23.232431 containerd[1737]: time="2025-05-14T23:49:23.232313220Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 May 14 23:49:23.232431 containerd[1737]: time="2025-05-14T23:49:23.232325940Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 May 14 23:49:23.234473 containerd[1737]: time="2025-05-14T23:49:23.234424100Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 May 14 23:49:23.234699 containerd[1737]: time="2025-05-14T23:49:23.234607180Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 May 14 23:49:23.234872 containerd[1737]: time="2025-05-14T23:49:23.234840580Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 May 14 23:49:23.235233 containerd[1737]: time="2025-05-14T23:49:23.235033260Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 May 14 23:49:23.235233 containerd[1737]: time="2025-05-14T23:49:23.235054540Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 May 14 23:49:23.235233 containerd[1737]: time="2025-05-14T23:49:23.235135580Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." 
type=io.containerd.metadata.v1 May 14 23:49:23.235233 containerd[1737]: time="2025-05-14T23:49:23.235180140Z" level=info msg="metadata content store policy set" policy=shared May 14 23:49:23.254326 containerd[1737]: time="2025-05-14T23:49:23.254270220Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 May 14 23:49:23.254485 containerd[1737]: time="2025-05-14T23:49:23.254378020Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 May 14 23:49:23.254485 containerd[1737]: time="2025-05-14T23:49:23.254406740Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 May 14 23:49:23.254485 containerd[1737]: time="2025-05-14T23:49:23.254470020Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 May 14 23:49:23.254534 containerd[1737]: time="2025-05-14T23:49:23.254487260Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 May 14 23:49:23.254778 containerd[1737]: time="2025-05-14T23:49:23.254674180Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 May 14 23:49:23.255032 containerd[1737]: time="2025-05-14T23:49:23.254989260Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 May 14 23:49:23.255172 containerd[1737]: time="2025-05-14T23:49:23.255118700Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 May 14 23:49:23.255172 containerd[1737]: time="2025-05-14T23:49:23.255142140Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 May 14 23:49:23.255172 containerd[1737]: time="2025-05-14T23:49:23.255158060Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." 
type=io.containerd.sandbox.controller.v1 May 14 23:49:23.255172 containerd[1737]: time="2025-05-14T23:49:23.255171700Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 May 14 23:49:23.255374 containerd[1737]: time="2025-05-14T23:49:23.255183660Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 May 14 23:49:23.255374 containerd[1737]: time="2025-05-14T23:49:23.255194980Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 May 14 23:49:23.255374 containerd[1737]: time="2025-05-14T23:49:23.255208580Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 May 14 23:49:23.255374 containerd[1737]: time="2025-05-14T23:49:23.255222780Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 May 14 23:49:23.255374 containerd[1737]: time="2025-05-14T23:49:23.255235940Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 May 14 23:49:23.255374 containerd[1737]: time="2025-05-14T23:49:23.255249740Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 May 14 23:49:23.255374 containerd[1737]: time="2025-05-14T23:49:23.255262060Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 May 14 23:49:23.255374 containerd[1737]: time="2025-05-14T23:49:23.255281740Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 May 14 23:49:23.255374 containerd[1737]: time="2025-05-14T23:49:23.255296540Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." 
type=io.containerd.grpc.v1 May 14 23:49:23.255374 containerd[1737]: time="2025-05-14T23:49:23.255314340Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 May 14 23:49:23.255374 containerd[1737]: time="2025-05-14T23:49:23.255327660Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 May 14 23:49:23.255374 containerd[1737]: time="2025-05-14T23:49:23.255358260Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 May 14 23:49:23.255374 containerd[1737]: time="2025-05-14T23:49:23.255376180Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 May 14 23:49:23.256046 containerd[1737]: time="2025-05-14T23:49:23.255388900Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 May 14 23:49:23.256046 containerd[1737]: time="2025-05-14T23:49:23.255401940Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 May 14 23:49:23.256046 containerd[1737]: time="2025-05-14T23:49:23.255414900Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 May 14 23:49:23.256046 containerd[1737]: time="2025-05-14T23:49:23.255428860Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 May 14 23:49:23.256046 containerd[1737]: time="2025-05-14T23:49:23.255440460Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 May 14 23:49:23.256046 containerd[1737]: time="2025-05-14T23:49:23.255452140Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 May 14 23:49:23.256046 containerd[1737]: time="2025-05-14T23:49:23.255464580Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." 
type=io.containerd.grpc.v1 May 14 23:49:23.256046 containerd[1737]: time="2025-05-14T23:49:23.255479780Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 May 14 23:49:23.256046 containerd[1737]: time="2025-05-14T23:49:23.255501660Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 May 14 23:49:23.256046 containerd[1737]: time="2025-05-14T23:49:23.255516180Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 May 14 23:49:23.256046 containerd[1737]: time="2025-05-14T23:49:23.255527500Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 May 14 23:49:23.257200 containerd[1737]: time="2025-05-14T23:49:23.256658340Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 May 14 23:49:23.257200 containerd[1737]: time="2025-05-14T23:49:23.257051620Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 May 14 23:49:23.257200 containerd[1737]: time="2025-05-14T23:49:23.257070460Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 May 14 23:49:23.257200 containerd[1737]: time="2025-05-14T23:49:23.257083940Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 May 14 23:49:23.257200 containerd[1737]: time="2025-05-14T23:49:23.257094460Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 May 14 23:49:23.257200 containerd[1737]: time="2025-05-14T23:49:23.257112740Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." 
type=io.containerd.nri.v1 May 14 23:49:23.257200 containerd[1737]: time="2025-05-14T23:49:23.257124580Z" level=info msg="NRI interface is disabled by configuration." May 14 23:49:23.257200 containerd[1737]: time="2025-05-14T23:49:23.257135020Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 May 14 23:49:23.258320 containerd[1737]: time="2025-05-14T23:49:23.257451820Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true 
SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" May 14 23:49:23.258320 containerd[1737]: time="2025-05-14T23:49:23.257502900Z" level=info msg="Connect containerd service" May 14 23:49:23.258320 containerd[1737]: time="2025-05-14T23:49:23.257541020Z" level=info msg="using legacy CRI server" May 14 23:49:23.258320 containerd[1737]: time="2025-05-14T23:49:23.257547620Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" May 14 23:49:23.258320 containerd[1737]: time="2025-05-14T23:49:23.257679540Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" May 14 23:49:23.260465 containerd[1737]: time="2025-05-14T23:49:23.260397780Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 14 23:49:23.260587 containerd[1737]: time="2025-05-14T23:49:23.260547700Z" level=info msg="Start subscribing containerd event" May 14 
23:49:23.260617 containerd[1737]: time="2025-05-14T23:49:23.260607060Z" level=info msg="Start recovering state" May 14 23:49:23.260751 containerd[1737]: time="2025-05-14T23:49:23.260682820Z" level=info msg="Start event monitor" May 14 23:49:23.260751 containerd[1737]: time="2025-05-14T23:49:23.260700380Z" level=info msg="Start snapshots syncer" May 14 23:49:23.260751 containerd[1737]: time="2025-05-14T23:49:23.260710500Z" level=info msg="Start cni network conf syncer for default" May 14 23:49:23.260751 containerd[1737]: time="2025-05-14T23:49:23.260717660Z" level=info msg="Start streaming server" May 14 23:49:23.261170 containerd[1737]: time="2025-05-14T23:49:23.261136220Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc May 14 23:49:23.261203 containerd[1737]: time="2025-05-14T23:49:23.261184260Z" level=info msg=serving... address=/run/containerd/containerd.sock May 14 23:49:23.261356 systemd[1]: Started containerd.service - containerd container runtime. May 14 23:49:23.270954 containerd[1737]: time="2025-05-14T23:49:23.270834180Z" level=info msg="containerd successfully booted in 0.127903s" May 14 23:49:23.318665 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 14 23:49:23.400557 tar[1730]: linux-arm64/LICENSE May 14 23:49:23.400557 tar[1730]: linux-arm64/README.md May 14 23:49:23.412960 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. May 14 23:49:23.420654 systemd[1]: Reached target multi-user.target - Multi-User System. May 14 23:49:23.430435 systemd[1]: Startup finished in 690ms (kernel) + 18.490s (initrd) + 15.810s (userspace) = 34.992s. 
May 14 23:49:23.494848 (kubelet)[1872]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 14 23:49:23.798959 login[1862]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0) May 14 23:49:23.802496 login[1863]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0) May 14 23:49:23.812260 systemd[1]: Created slice user-500.slice - User Slice of UID 500. May 14 23:49:23.820656 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... May 14 23:49:23.823612 systemd-logind[1712]: New session 2 of user core. May 14 23:49:23.828869 systemd-logind[1712]: New session 1 of user core. May 14 23:49:23.837918 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. May 14 23:49:23.848897 systemd[1]: Starting user@500.service - User Manager for UID 500... May 14 23:49:23.853852 (systemd)[1887]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) May 14 23:49:23.856874 systemd-logind[1712]: New session c1 of user core. May 14 23:49:23.887754 kubelet[1872]: E0514 23:49:23.887697 1872 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 14 23:49:23.890488 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 14 23:49:23.890644 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 14 23:49:23.890961 systemd[1]: kubelet.service: Consumed 688ms CPU time, 233.7M memory peak. May 14 23:49:24.032424 systemd[1887]: Queued start job for default target default.target. May 14 23:49:24.041760 systemd[1887]: Created slice app.slice - User Application Slice. 
May 14 23:49:24.041891 systemd[1887]: Reached target paths.target - Paths. May 14 23:49:24.042022 systemd[1887]: Reached target timers.target - Timers. May 14 23:49:24.045537 systemd[1887]: Starting dbus.socket - D-Bus User Message Bus Socket... May 14 23:49:24.055930 systemd[1887]: Listening on dbus.socket - D-Bus User Message Bus Socket. May 14 23:49:24.056006 systemd[1887]: Reached target sockets.target - Sockets. May 14 23:49:24.056054 systemd[1887]: Reached target basic.target - Basic System. May 14 23:49:24.056083 systemd[1887]: Reached target default.target - Main User Target. May 14 23:49:24.056110 systemd[1887]: Startup finished in 190ms. May 14 23:49:24.056172 systemd[1]: Started user@500.service - User Manager for UID 500. May 14 23:49:24.057548 systemd[1]: Started session-1.scope - Session 1 of User core. May 14 23:49:24.058254 systemd[1]: Started session-2.scope - Session 2 of User core. May 14 23:49:24.994666 waagent[1859]: 2025-05-14T23:49:24.988548Z INFO Daemon Daemon Azure Linux Agent Version: 2.9.1.1 May 14 23:49:24.995383 waagent[1859]: 2025-05-14T23:49:24.995277Z INFO Daemon Daemon OS: flatcar 4230.1.1 May 14 23:49:25.000366 waagent[1859]: 2025-05-14T23:49:25.000265Z INFO Daemon Daemon Python: 3.11.11 May 14 23:49:25.006496 waagent[1859]: 2025-05-14T23:49:25.006405Z INFO Daemon Daemon Run daemon May 14 23:49:25.011312 waagent[1859]: 2025-05-14T23:49:25.011241Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='4230.1.1' May 14 23:49:25.021051 waagent[1859]: 2025-05-14T23:49:25.020959Z INFO Daemon Daemon Using waagent for provisioning May 14 23:49:25.027129 waagent[1859]: 2025-05-14T23:49:25.027066Z INFO Daemon Daemon Activate resource disk May 14 23:49:25.032123 waagent[1859]: 2025-05-14T23:49:25.032042Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb May 14 23:49:25.045781 waagent[1859]: 2025-05-14T23:49:25.045692Z INFO Daemon Daemon Found 
device: None May 14 23:49:25.050641 waagent[1859]: 2025-05-14T23:49:25.050565Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology May 14 23:49:25.061214 waagent[1859]: 2025-05-14T23:49:25.061137Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0 May 14 23:49:25.074536 waagent[1859]: 2025-05-14T23:49:25.074477Z INFO Daemon Daemon Clean protocol and wireserver endpoint May 14 23:49:25.081401 waagent[1859]: 2025-05-14T23:49:25.081311Z INFO Daemon Daemon Running default provisioning handler May 14 23:49:25.094584 waagent[1859]: 2025-05-14T23:49:25.094493Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 4. May 14 23:49:25.111915 waagent[1859]: 2025-05-14T23:49:25.111838Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' May 14 23:49:25.124114 waagent[1859]: 2025-05-14T23:49:25.124028Z INFO Daemon Daemon cloud-init is enabled: False May 14 23:49:25.129960 waagent[1859]: 2025-05-14T23:49:25.129869Z INFO Daemon Daemon Copying ovf-env.xml May 14 23:49:25.272900 waagent[1859]: 2025-05-14T23:49:25.271251Z INFO Daemon Daemon Successfully mounted dvd May 14 23:49:25.306839 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully. May 14 23:49:25.307966 waagent[1859]: 2025-05-14T23:49:25.307872Z INFO Daemon Daemon Detect protocol endpoint May 14 23:49:25.314213 waagent[1859]: 2025-05-14T23:49:25.314130Z INFO Daemon Daemon Clean protocol and wireserver endpoint May 14 23:49:25.321274 waagent[1859]: 2025-05-14T23:49:25.321176Z INFO Daemon Daemon WireServer endpoint is not found. 
Rerun dhcp handler May 14 23:49:25.329462 waagent[1859]: 2025-05-14T23:49:25.329364Z INFO Daemon Daemon Test for route to 168.63.129.16 May 14 23:49:25.335705 waagent[1859]: 2025-05-14T23:49:25.335615Z INFO Daemon Daemon Route to 168.63.129.16 exists May 14 23:49:25.343091 waagent[1859]: 2025-05-14T23:49:25.342987Z INFO Daemon Daemon Wire server endpoint:168.63.129.16 May 14 23:49:25.384639 waagent[1859]: 2025-05-14T23:49:25.384586Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05 May 14 23:49:25.393555 waagent[1859]: 2025-05-14T23:49:25.393522Z INFO Daemon Daemon Wire protocol version:2012-11-30 May 14 23:49:25.399682 waagent[1859]: 2025-05-14T23:49:25.399614Z INFO Daemon Daemon Server preferred version:2015-04-05 May 14 23:49:25.521538 waagent[1859]: 2025-05-14T23:49:25.521426Z INFO Daemon Daemon Initializing goal state during protocol detection May 14 23:49:25.530221 waagent[1859]: 2025-05-14T23:49:25.530104Z INFO Daemon Daemon Forcing an update of the goal state. May 14 23:49:25.549897 waagent[1859]: 2025-05-14T23:49:25.549835Z INFO Daemon Fetched a new incarnation for the WireServer goal state [incarnation 1] May 14 23:49:25.596458 waagent[1859]: 2025-05-14T23:49:25.596407Z INFO Daemon Daemon HostGAPlugin version: 1.0.8.164 May 14 23:49:25.602560 waagent[1859]: 2025-05-14T23:49:25.602507Z INFO Daemon May 14 23:49:25.605781 waagent[1859]: 2025-05-14T23:49:25.605721Z INFO Daemon Fetched new vmSettings [HostGAPlugin correlation ID: aeff04f1-2b04-4812-b8d4-0ca338ee6d16 eTag: 1426696125115916187 source: Fabric] May 14 23:49:25.618852 waagent[1859]: 2025-05-14T23:49:25.618796Z INFO Daemon The vmSettings originated via Fabric; will ignore them. 
May 14 23:49:25.627315 waagent[1859]: 2025-05-14T23:49:25.627260Z INFO Daemon May 14 23:49:25.630434 waagent[1859]: 2025-05-14T23:49:25.630374Z INFO Daemon Fetching full goal state from the WireServer [incarnation 1] May 14 23:49:25.643625 waagent[1859]: 2025-05-14T23:49:25.643580Z INFO Daemon Daemon Downloading artifacts profile blob May 14 23:49:25.823433 waagent[1859]: 2025-05-14T23:49:25.822651Z INFO Daemon Downloaded certificate {'thumbprint': '69122BAA9465EEEC2DFBB82B185A3DBAB0275FA8', 'hasPrivateKey': False} May 14 23:49:25.833360 waagent[1859]: 2025-05-14T23:49:25.833077Z INFO Daemon Downloaded certificate {'thumbprint': '10E4B6F5C61F33B910EF7062432BC5DE843F4F16', 'hasPrivateKey': True} May 14 23:49:25.843265 waagent[1859]: 2025-05-14T23:49:25.843201Z INFO Daemon Fetch goal state completed May 14 23:49:25.893207 waagent[1859]: 2025-05-14T23:49:25.893147Z INFO Daemon Daemon Starting provisioning May 14 23:49:25.899157 waagent[1859]: 2025-05-14T23:49:25.899075Z INFO Daemon Daemon Handle ovf-env.xml. May 14 23:49:25.904141 waagent[1859]: 2025-05-14T23:49:25.904077Z INFO Daemon Daemon Set hostname [ci-4230.1.1-n-76ed3c1841] May 14 23:49:25.926851 waagent[1859]: 2025-05-14T23:49:25.926767Z INFO Daemon Daemon Publish hostname [ci-4230.1.1-n-76ed3c1841] May 14 23:49:25.933674 waagent[1859]: 2025-05-14T23:49:25.933601Z INFO Daemon Daemon Examine /proc/net/route for primary interface May 14 23:49:25.941269 waagent[1859]: 2025-05-14T23:49:25.941203Z INFO Daemon Daemon Primary interface is [eth0] May 14 23:49:25.954889 systemd-networkd[1339]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 14 23:49:25.954899 systemd-networkd[1339]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
May 14 23:49:25.954931 systemd-networkd[1339]: eth0: DHCP lease lost May 14 23:49:25.956158 waagent[1859]: 2025-05-14T23:49:25.956046Z INFO Daemon Daemon Create user account if not exists May 14 23:49:25.962692 waagent[1859]: 2025-05-14T23:49:25.962616Z INFO Daemon Daemon User core already exists, skip useradd May 14 23:49:25.968737 waagent[1859]: 2025-05-14T23:49:25.968664Z INFO Daemon Daemon Configure sudoer May 14 23:49:25.974090 waagent[1859]: 2025-05-14T23:49:25.974015Z INFO Daemon Daemon Configure sshd May 14 23:49:25.978998 waagent[1859]: 2025-05-14T23:49:25.978928Z INFO Daemon Daemon Added a configuration snippet disabling SSH password-based authentication methods. It also configures SSH client probing to keep connections alive. May 14 23:49:25.993416 waagent[1859]: 2025-05-14T23:49:25.993331Z INFO Daemon Daemon Deploy ssh public key. May 14 23:49:26.012413 systemd-networkd[1339]: eth0: DHCPv4 address 10.200.20.35/24, gateway 10.200.20.1 acquired from 168.63.129.16 May 14 23:49:27.128362 waagent[1859]: 2025-05-14T23:49:27.128135Z INFO Daemon Daemon Provisioning complete May 14 23:49:27.145903 waagent[1859]: 2025-05-14T23:49:27.145844Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping May 14 23:49:27.152147 waagent[1859]: 2025-05-14T23:49:27.152073Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions. 
May 14 23:49:27.162530 waagent[1859]: 2025-05-14T23:49:27.162464Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.9.1.1 is the most current agent May 14 23:49:27.304292 waagent[1944]: 2025-05-14T23:49:27.303717Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.9.1.1) May 14 23:49:27.304292 waagent[1944]: 2025-05-14T23:49:27.303888Z INFO ExtHandler ExtHandler OS: flatcar 4230.1.1 May 14 23:49:27.304292 waagent[1944]: 2025-05-14T23:49:27.303942Z INFO ExtHandler ExtHandler Python: 3.11.11 May 14 23:49:27.502226 waagent[1944]: 2025-05-14T23:49:27.502066Z INFO ExtHandler ExtHandler Distro: flatcar-4230.1.1; OSUtil: FlatcarUtil; AgentService: waagent; Python: 3.11.11; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.20.1; May 14 23:49:27.502601 waagent[1944]: 2025-05-14T23:49:27.502555Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file May 14 23:49:27.502812 waagent[1944]: 2025-05-14T23:49:27.502773Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 May 14 23:49:27.513471 waagent[1944]: 2025-05-14T23:49:27.513379Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1] May 14 23:49:27.520371 waagent[1944]: 2025-05-14T23:49:27.520072Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.164 May 14 23:49:27.520734 waagent[1944]: 2025-05-14T23:49:27.520680Z INFO ExtHandler May 14 23:49:27.520813 waagent[1944]: 2025-05-14T23:49:27.520778Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: b448f22e-e028-47ea-a304-263aa9fb915a eTag: 1426696125115916187 source: Fabric] May 14 23:49:27.521124 waagent[1944]: 2025-05-14T23:49:27.521080Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them. 
May 14 23:49:27.521749 waagent[1944]: 2025-05-14T23:49:27.521698Z INFO ExtHandler May 14 23:49:27.521817 waagent[1944]: 2025-05-14T23:49:27.521786Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1] May 14 23:49:27.526236 waagent[1944]: 2025-05-14T23:49:27.526190Z INFO ExtHandler ExtHandler Downloading artifacts profile blob May 14 23:49:27.621150 waagent[1944]: 2025-05-14T23:49:27.621044Z INFO ExtHandler Downloaded certificate {'thumbprint': '69122BAA9465EEEC2DFBB82B185A3DBAB0275FA8', 'hasPrivateKey': False} May 14 23:49:27.621637 waagent[1944]: 2025-05-14T23:49:27.621588Z INFO ExtHandler Downloaded certificate {'thumbprint': '10E4B6F5C61F33B910EF7062432BC5DE843F4F16', 'hasPrivateKey': True} May 14 23:49:27.622059 waagent[1944]: 2025-05-14T23:49:27.622013Z INFO ExtHandler Fetch goal state completed May 14 23:49:27.638615 waagent[1944]: 2025-05-14T23:49:27.638540Z INFO ExtHandler ExtHandler WALinuxAgent-2.9.1.1 running as process 1944 May 14 23:49:27.638778 waagent[1944]: 2025-05-14T23:49:27.638736Z INFO ExtHandler ExtHandler ******** AutoUpdate.Enabled is set to False, not processing the operation ******** May 14 23:49:27.640530 waagent[1944]: 2025-05-14T23:49:27.640475Z INFO ExtHandler ExtHandler Cgroup monitoring is not supported on ['flatcar', '4230.1.1', '', 'Flatcar Container Linux by Kinvolk'] May 14 23:49:27.640928 waagent[1944]: 2025-05-14T23:49:27.640887Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules May 14 23:49:27.676023 waagent[1944]: 2025-05-14T23:49:27.675970Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service May 14 23:49:27.676237 waagent[1944]: 2025-05-14T23:49:27.676196Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup May 14 23:49:27.683039 waagent[1944]: 2025-05-14T23:49:27.682447Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not 
enabled. Adding it now May 14 23:49:27.689768 systemd[1]: Reload requested from client PID 1959 ('systemctl') (unit waagent.service)... May 14 23:49:27.689787 systemd[1]: Reloading... May 14 23:49:27.788426 zram_generator::config[2001]: No configuration found. May 14 23:49:27.901866 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 14 23:49:28.005628 systemd[1]: Reloading finished in 315 ms. May 14 23:49:28.021855 waagent[1944]: 2025-05-14T23:49:28.021040Z INFO ExtHandler ExtHandler Executing systemctl daemon-reload for setting up waagent-network-setup.service May 14 23:49:28.028071 systemd[1]: Reload requested from client PID 2052 ('systemctl') (unit waagent.service)... May 14 23:49:28.028088 systemd[1]: Reloading... May 14 23:49:28.136396 zram_generator::config[2094]: No configuration found. May 14 23:49:28.229681 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 14 23:49:28.332949 systemd[1]: Reloading finished in 304 ms. May 14 23:49:28.347329 waagent[1944]: 2025-05-14T23:49:28.347161Z INFO ExtHandler ExtHandler Successfully added and enabled the waagent-network-setup.service May 14 23:49:28.620508 waagent[1944]: 2025-05-14T23:49:28.619565Z INFO ExtHandler ExtHandler Persistent firewall rules setup successfully May 14 23:49:29.462380 waagent[1944]: 2025-05-14T23:49:29.461843Z INFO ExtHandler ExtHandler DROP rule is not available which implies no firewall rules are set yet. Environment thread will set it up. May 14 23:49:29.462674 waagent[1944]: 2025-05-14T23:49:29.462501Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. 
All three conditions must be met: configuration enabled [True], cgroups enabled [False], python supported: [True] May 14 23:49:29.463444 waagent[1944]: 2025-05-14T23:49:29.463346Z INFO ExtHandler ExtHandler Starting env monitor service. May 14 23:49:29.463938 waagent[1944]: 2025-05-14T23:49:29.463837Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. May 14 23:49:29.465001 waagent[1944]: 2025-05-14T23:49:29.464176Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file May 14 23:49:29.465001 waagent[1944]: 2025-05-14T23:49:29.464277Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 May 14 23:49:29.465001 waagent[1944]: 2025-05-14T23:49:29.464513Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. May 14 23:49:29.465001 waagent[1944]: 2025-05-14T23:49:29.464709Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: May 14 23:49:29.465001 waagent[1944]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT May 14 23:49:29.465001 waagent[1944]: eth0 00000000 0114C80A 0003 0 0 1024 00000000 0 0 0 May 14 23:49:29.465001 waagent[1944]: eth0 0014C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 May 14 23:49:29.465001 waagent[1944]: eth0 0114C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 May 14 23:49:29.465001 waagent[1944]: eth0 10813FA8 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 May 14 23:49:29.465001 waagent[1944]: eth0 FEA9FEA9 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 May 14 23:49:29.465416 waagent[1944]: 2025-05-14T23:49:29.465293Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread May 14 23:49:29.465548 waagent[1944]: 2025-05-14T23:49:29.465494Z INFO ExtHandler ExtHandler Start Extension Telemetry service. 
May 14 23:49:29.465719 waagent[1944]: 2025-05-14T23:49:29.465682Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file May 14 23:49:29.466079 waagent[1944]: 2025-05-14T23:49:29.466033Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 May 14 23:49:29.466202 waagent[1944]: 2025-05-14T23:49:29.466142Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True May 14 23:49:29.466446 waagent[1944]: 2025-05-14T23:49:29.466289Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status. May 14 23:49:29.466530 waagent[1944]: 2025-05-14T23:49:29.466487Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread May 14 23:49:29.466925 waagent[1944]: 2025-05-14T23:49:29.466739Z INFO EnvHandler ExtHandler Configure routes May 14 23:49:29.468133 waagent[1944]: 2025-05-14T23:49:29.468073Z INFO EnvHandler ExtHandler Gateway:None May 14 23:49:29.468398 waagent[1944]: 2025-05-14T23:49:29.468321Z INFO EnvHandler ExtHandler Routes:None May 14 23:49:29.478590 waagent[1944]: 2025-05-14T23:49:29.478535Z INFO ExtHandler ExtHandler May 14 23:49:29.478860 waagent[1944]: 2025-05-14T23:49:29.478812Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: 183ce20c-430d-4bfc-aefe-155e6d4415e8 correlation 257626e3-4cfe-4334-b561-4023b4e9d5ee created: 2025-05-14T23:48:01.912437Z] May 14 23:49:29.479445 waagent[1944]: 2025-05-14T23:49:29.479378Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything. 
May 14 23:49:29.480172 waagent[1944]: 2025-05-14T23:49:29.480120Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 1 ms] May 14 23:49:29.520547 waagent[1944]: 2025-05-14T23:49:29.520483Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.9.1.1 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: 1A9573AB-5874-4BF5-8285-7BB818C211DD;DroppedPackets: 0;UpdateGSErrors: 0;AutoUpdate: 0] May 14 23:49:29.539045 waagent[1944]: 2025-05-14T23:49:29.538899Z INFO MonitorHandler ExtHandler Network interfaces: May 14 23:49:29.539045 waagent[1944]: Executing ['ip', '-a', '-o', 'link']: May 14 23:49:29.539045 waagent[1944]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 May 14 23:49:29.539045 waagent[1944]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 00:0d:3a:6d:e6:9d brd ff:ff:ff:ff:ff:ff May 14 23:49:29.539045 waagent[1944]: 3: enP55698s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 00:0d:3a:6d:e6:9d brd ff:ff:ff:ff:ff:ff\ altname enP55698p0s2 May 14 23:49:29.539045 waagent[1944]: Executing ['ip', '-4', '-a', '-o', 'address']: May 14 23:49:29.539045 waagent[1944]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever May 14 23:49:29.539045 waagent[1944]: 2: eth0 inet 10.200.20.35/24 metric 1024 brd 10.200.20.255 scope global eth0\ valid_lft forever preferred_lft forever May 14 23:49:29.539045 waagent[1944]: Executing ['ip', '-6', '-a', '-o', 'address']: May 14 23:49:29.539045 waagent[1944]: 1: lo inet6 ::1/128 scope host noprefixroute \ valid_lft forever preferred_lft forever May 14 23:49:29.539045 waagent[1944]: 2: eth0 inet6 fe80::20d:3aff:fe6d:e69d/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever May 14 23:49:29.539045 waagent[1944]: 3: enP55698s1 inet6 fe80::20d:3aff:fe6d:e69d/64 scope link proto 
kernel_ll \ valid_lft forever preferred_lft forever May 14 23:49:29.581041 waagent[1944]: 2025-05-14T23:49:29.580941Z INFO EnvHandler ExtHandler Successfully added Azure fabric firewall rules. Current Firewall rules: May 14 23:49:29.581041 waagent[1944]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) May 14 23:49:29.581041 waagent[1944]: pkts bytes target prot opt in out source destination May 14 23:49:29.581041 waagent[1944]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) May 14 23:49:29.581041 waagent[1944]: pkts bytes target prot opt in out source destination May 14 23:49:29.581041 waagent[1944]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) May 14 23:49:29.581041 waagent[1944]: pkts bytes target prot opt in out source destination May 14 23:49:29.581041 waagent[1944]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 May 14 23:49:29.581041 waagent[1944]: 5 467 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 May 14 23:49:29.581041 waagent[1944]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW May 14 23:49:29.584785 waagent[1944]: 2025-05-14T23:49:29.584608Z INFO EnvHandler ExtHandler Current Firewall rules: May 14 23:49:29.584785 waagent[1944]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) May 14 23:49:29.584785 waagent[1944]: pkts bytes target prot opt in out source destination May 14 23:49:29.584785 waagent[1944]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) May 14 23:49:29.584785 waagent[1944]: pkts bytes target prot opt in out source destination May 14 23:49:29.584785 waagent[1944]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) May 14 23:49:29.584785 waagent[1944]: pkts bytes target prot opt in out source destination May 14 23:49:29.584785 waagent[1944]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 May 14 23:49:29.584785 waagent[1944]: 5 467 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 May 14 23:49:29.584785 waagent[1944]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW 
May 14 23:49:29.585043 waagent[1944]: 2025-05-14T23:49:29.584969Z INFO EnvHandler ExtHandler Set block dev timeout: sda with timeout: 300 May 14 23:49:34.141409 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. May 14 23:49:34.148542 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 14 23:49:34.510838 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 14 23:49:34.523779 (kubelet)[2186]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 14 23:49:34.567812 kubelet[2186]: E0514 23:49:34.567731 2186 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 14 23:49:34.571159 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 14 23:49:34.571318 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 14 23:49:34.571813 systemd[1]: kubelet.service: Consumed 131ms CPU time, 97.6M memory peak. May 14 23:49:44.808884 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. May 14 23:49:44.817607 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 14 23:49:45.150439 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
May 14 23:49:45.155041 (kubelet)[2201]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 14 23:49:45.193535 kubelet[2201]: E0514 23:49:45.193453 2201 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 14 23:49:45.196088 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 14 23:49:45.196291 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 14 23:49:45.196812 systemd[1]: kubelet.service: Consumed 134ms CPU time, 94.1M memory peak. May 14 23:49:45.962278 chronyd[1700]: Selected source PHC0 May 14 23:49:48.933608 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. May 14 23:49:48.935304 systemd[1]: Started sshd@0-10.200.20.35:22-10.200.16.10:48542.service - OpenSSH per-connection server daemon (10.200.16.10:48542). May 14 23:49:49.474250 sshd[2209]: Accepted publickey for core from 10.200.16.10 port 48542 ssh2: RSA SHA256:8W/Tf+7cj1biafetzZP1V2wDwegtsataOBfSdAJjdnY May 14 23:49:49.475834 sshd-session[2209]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 23:49:49.480756 systemd-logind[1712]: New session 3 of user core. May 14 23:49:49.488516 systemd[1]: Started session-3.scope - Session 3 of User core. May 14 23:49:49.856606 systemd[1]: Started sshd@1-10.200.20.35:22-10.200.16.10:48552.service - OpenSSH per-connection server daemon (10.200.16.10:48552). 
May 14 23:49:50.308227 sshd[2214]: Accepted publickey for core from 10.200.16.10 port 48552 ssh2: RSA SHA256:8W/Tf+7cj1biafetzZP1V2wDwegtsataOBfSdAJjdnY May 14 23:49:50.309576 sshd-session[2214]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 23:49:50.314541 systemd-logind[1712]: New session 4 of user core. May 14 23:49:50.323510 systemd[1]: Started session-4.scope - Session 4 of User core. May 14 23:49:50.646137 sshd[2216]: Connection closed by 10.200.16.10 port 48552 May 14 23:49:50.646938 sshd-session[2214]: pam_unix(sshd:session): session closed for user core May 14 23:49:50.650148 systemd[1]: sshd@1-10.200.20.35:22-10.200.16.10:48552.service: Deactivated successfully. May 14 23:49:50.651724 systemd[1]: session-4.scope: Deactivated successfully. May 14 23:49:50.652473 systemd-logind[1712]: Session 4 logged out. Waiting for processes to exit. May 14 23:49:50.653651 systemd-logind[1712]: Removed session 4. May 14 23:49:50.725586 systemd[1]: Started sshd@2-10.200.20.35:22-10.200.16.10:48566.service - OpenSSH per-connection server daemon (10.200.16.10:48566). May 14 23:49:51.136017 sshd[2222]: Accepted publickey for core from 10.200.16.10 port 48566 ssh2: RSA SHA256:8W/Tf+7cj1biafetzZP1V2wDwegtsataOBfSdAJjdnY May 14 23:49:51.137453 sshd-session[2222]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 23:49:51.143078 systemd-logind[1712]: New session 5 of user core. May 14 23:49:51.149531 systemd[1]: Started session-5.scope - Session 5 of User core. May 14 23:49:51.454984 sshd[2224]: Connection closed by 10.200.16.10 port 48566 May 14 23:49:51.455661 sshd-session[2222]: pam_unix(sshd:session): session closed for user core May 14 23:49:51.458979 systemd[1]: sshd@2-10.200.20.35:22-10.200.16.10:48566.service: Deactivated successfully. May 14 23:49:51.461951 systemd[1]: session-5.scope: Deactivated successfully. May 14 23:49:51.462654 systemd-logind[1712]: Session 5 logged out. 
Waiting for processes to exit. May 14 23:49:51.463449 systemd-logind[1712]: Removed session 5. May 14 23:49:51.548665 systemd[1]: Started sshd@3-10.200.20.35:22-10.200.16.10:48570.service - OpenSSH per-connection server daemon (10.200.16.10:48570). May 14 23:49:51.999615 sshd[2230]: Accepted publickey for core from 10.200.16.10 port 48570 ssh2: RSA SHA256:8W/Tf+7cj1biafetzZP1V2wDwegtsataOBfSdAJjdnY May 14 23:49:52.000929 sshd-session[2230]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 23:49:52.005134 systemd-logind[1712]: New session 6 of user core. May 14 23:49:52.014515 systemd[1]: Started session-6.scope - Session 6 of User core. May 14 23:49:52.337803 sshd[2232]: Connection closed by 10.200.16.10 port 48570 May 14 23:49:52.338391 sshd-session[2230]: pam_unix(sshd:session): session closed for user core May 14 23:49:52.341075 systemd-logind[1712]: Session 6 logged out. Waiting for processes to exit. May 14 23:49:52.342828 systemd[1]: sshd@3-10.200.20.35:22-10.200.16.10:48570.service: Deactivated successfully. May 14 23:49:52.344904 systemd[1]: session-6.scope: Deactivated successfully. May 14 23:49:52.347197 systemd-logind[1712]: Removed session 6. May 14 23:49:52.419794 systemd[1]: Started sshd@4-10.200.20.35:22-10.200.16.10:48582.service - OpenSSH per-connection server daemon (10.200.16.10:48582). May 14 23:49:52.870086 sshd[2238]: Accepted publickey for core from 10.200.16.10 port 48582 ssh2: RSA SHA256:8W/Tf+7cj1biafetzZP1V2wDwegtsataOBfSdAJjdnY May 14 23:49:52.871401 sshd-session[2238]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 23:49:52.877027 systemd-logind[1712]: New session 7 of user core. May 14 23:49:52.885596 systemd[1]: Started session-7.scope - Session 7 of User core. 
May 14 23:49:53.238700 sudo[2241]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
May 14 23:49:53.238997 sudo[2241]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
May 14 23:49:55.118606 systemd[1]: Starting docker.service - Docker Application Container Engine...
May 14 23:49:55.118743 (dockerd)[2257]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
May 14 23:49:55.308580 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
May 14 23:49:55.316539 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 14 23:49:55.844165 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 14 23:49:55.857645 (kubelet)[2270]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
May 14 23:49:55.895492 kubelet[2270]: E0514 23:49:55.895423 2270 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 14 23:49:55.898192 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 14 23:49:55.898508 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 14 23:49:55.899030 systemd[1]: kubelet.service: Consumed 127ms CPU time, 94.1M memory peak.
May 14 23:49:56.567672 dockerd[2257]: time="2025-05-14T23:49:56.567607959Z" level=info msg="Starting up"
May 14 23:49:57.130652 dockerd[2257]: time="2025-05-14T23:49:57.130602003Z" level=info msg="Loading containers: start."
May 14 23:49:57.395367 kernel: Initializing XFRM netlink socket
May 14 23:49:57.502971 systemd-networkd[1339]: docker0: Link UP
May 14 23:49:57.576446 dockerd[2257]: time="2025-05-14T23:49:57.576400640Z" level=info msg="Loading containers: done."
May 14 23:49:58.034345 dockerd[2257]: time="2025-05-14T23:49:58.034281494Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
May 14 23:49:58.034520 dockerd[2257]: time="2025-05-14T23:49:58.034432574Z" level=info msg="Docker daemon" commit=41ca978a0a5400cc24b274137efa9f25517fcc0b containerd-snapshotter=false storage-driver=overlay2 version=27.3.1
May 14 23:49:58.034598 dockerd[2257]: time="2025-05-14T23:49:58.034572654Z" level=info msg="Daemon has completed initialization"
May 14 23:49:58.244990 dockerd[2257]: time="2025-05-14T23:49:58.244876514Z" level=info msg="API listen on /run/docker.sock"
May 14 23:49:58.245452 systemd[1]: Started docker.service - Docker Application Container Engine.
May 14 23:49:59.038133 containerd[1737]: time="2025-05-14T23:49:59.037841727Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.8\""
May 14 23:50:01.171040 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1913415757.mount: Deactivated successfully.
May 14 23:50:04.878010 kernel: hv_balloon: Max. dynamic memory size: 4096 MB
May 14 23:50:06.058646 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4.
May 14 23:50:06.068580 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 14 23:50:06.161040 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 14 23:50:06.169674 (kubelet)[2494]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
May 14 23:50:06.207365 kubelet[2494]: E0514 23:50:06.207292 2494 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 14 23:50:06.210104 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 14 23:50:06.210256 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 14 23:50:06.210822 systemd[1]: kubelet.service: Consumed 128ms CPU time, 94.1M memory peak.
May 14 23:50:09.288798 update_engine[1715]: I20250514 23:50:07.449506 1715 update_attempter.cc:509] Updating boot flags...
May 14 23:50:09.497400 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 44 scanned by (udev-worker) (2515)
May 14 23:50:09.636804 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 44 scanned by (udev-worker) (2514)
May 14 23:50:09.738378 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 44 scanned by (udev-worker) (2514)
May 14 23:50:11.447316 containerd[1737]: time="2025-05-14T23:50:11.447218454Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 14 23:50:11.493960 containerd[1737]: time="2025-05-14T23:50:11.493910647Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.8: active requests=0, bytes read=25554608"
May 14 23:50:11.497539 containerd[1737]: time="2025-05-14T23:50:11.497447736Z" level=info msg="ImageCreate event name:\"sha256:ef8fb1ea7c9599dbedea6f9d5589975ebc5bf4ec72f6be6acaaec59a723a09b3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 14 23:50:11.535609 containerd[1737]: time="2025-05-14T23:50:11.535548668Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:30090db6a7d53799163ce82dae9e8ddb645fd47db93f2ec9da0cc787fd825625\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 14 23:50:11.537268 containerd[1737]: time="2025-05-14T23:50:11.537073032Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.8\" with image id \"sha256:ef8fb1ea7c9599dbedea6f9d5589975ebc5bf4ec72f6be6acaaec59a723a09b3\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.8\", repo digest \"registry.k8s.io/kube-apiserver@sha256:30090db6a7d53799163ce82dae9e8ddb645fd47db93f2ec9da0cc787fd825625\", size \"25551408\" in 12.499192065s"
May 14 23:50:11.537268 containerd[1737]: time="2025-05-14T23:50:11.537115792Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.8\" returns image reference \"sha256:ef8fb1ea7c9599dbedea6f9d5589975ebc5bf4ec72f6be6acaaec59a723a09b3\""
May 14 23:50:11.538010 containerd[1737]: time="2025-05-14T23:50:11.537875114Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.8\""
May 14 23:50:14.248610 containerd[1737]: time="2025-05-14T23:50:14.247502154Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 14 23:50:14.252472 containerd[1737]: time="2025-05-14T23:50:14.252423846Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.8: active requests=0, bytes read=22458978"
May 14 23:50:14.256193 containerd[1737]: time="2025-05-14T23:50:14.256151895Z" level=info msg="ImageCreate event name:\"sha256:ea6e6085feca75547d0422ab0536fe0d18c9ff5831de7a9d6a707c968027bb6a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 14 23:50:14.262961 containerd[1737]: time="2025-05-14T23:50:14.262923192Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:29eaddc64792a689df48506e78bbc641d063ac8bb92d2e66ae2ad05977420747\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 14 23:50:14.264134 containerd[1737]: time="2025-05-14T23:50:14.264094475Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.8\" with image id \"sha256:ea6e6085feca75547d0422ab0536fe0d18c9ff5831de7a9d6a707c968027bb6a\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.8\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:29eaddc64792a689df48506e78bbc641d063ac8bb92d2e66ae2ad05977420747\", size \"23900539\" in 2.726188761s"
May 14 23:50:14.264134 containerd[1737]: time="2025-05-14T23:50:14.264131235Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.8\" returns image reference \"sha256:ea6e6085feca75547d0422ab0536fe0d18c9ff5831de7a9d6a707c968027bb6a\""
May 14 23:50:14.264708 containerd[1737]: time="2025-05-14T23:50:14.264663116Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.8\""
May 14 23:50:15.440024 containerd[1737]: time="2025-05-14T23:50:15.439961202Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 14 23:50:15.444383 containerd[1737]: time="2025-05-14T23:50:15.444088012Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.8: active requests=0, bytes read=17125813"
May 14 23:50:15.450844 containerd[1737]: time="2025-05-14T23:50:15.450783428Z" level=info msg="ImageCreate event name:\"sha256:1d2db6ef0dd2f3e08bdfcd46afde7b755b05192841f563d8df54b807daaa7d8d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 14 23:50:15.456740 containerd[1737]: time="2025-05-14T23:50:15.456664162Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:22994a2632e81059720480b9f6bdeb133b08d58492d0b36dfd6e9768b159b22a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 14 23:50:15.457902 containerd[1737]: time="2025-05-14T23:50:15.457768165Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.8\" with image id \"sha256:1d2db6ef0dd2f3e08bdfcd46afde7b755b05192841f563d8df54b807daaa7d8d\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.8\", repo digest \"registry.k8s.io/kube-scheduler@sha256:22994a2632e81059720480b9f6bdeb133b08d58492d0b36dfd6e9768b159b22a\", size \"18567392\" in 1.193071009s"
May 14 23:50:15.457902 containerd[1737]: time="2025-05-14T23:50:15.457807965Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.8\" returns image reference \"sha256:1d2db6ef0dd2f3e08bdfcd46afde7b755b05192841f563d8df54b807daaa7d8d\""
May 14 23:50:15.458697 containerd[1737]: time="2025-05-14T23:50:15.458513727Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.8\""
May 14 23:50:16.307846 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5.
May 14 23:50:16.314578 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 14 23:50:16.424680 (kubelet)[2710]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
May 14 23:50:16.425506 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 14 23:50:16.473504 kubelet[2710]: E0514 23:50:16.473372 2710 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 14 23:50:16.476120 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 14 23:50:16.476872 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 14 23:50:16.477439 systemd[1]: kubelet.service: Consumed 120ms CPU time, 94.5M memory peak.
May 14 23:50:16.661558 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2600937234.mount: Deactivated successfully.
May 14 23:50:17.320255 containerd[1737]: time="2025-05-14T23:50:17.320195464Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 14 23:50:17.322789 containerd[1737]: time="2025-05-14T23:50:17.322743468Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.8: active requests=0, bytes read=26871917"
May 14 23:50:17.327297 containerd[1737]: time="2025-05-14T23:50:17.327243436Z" level=info msg="ImageCreate event name:\"sha256:c5361ece77e80334cd5fb082c0b678cb3244f5834ecacea1719ae6b38b465581\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 14 23:50:17.332086 containerd[1737]: time="2025-05-14T23:50:17.332018965Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:dd0c9a37670f209947b1ed880f06a2e93e1d41da78c037f52f94b13858769838\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 14 23:50:17.332827 containerd[1737]: time="2025-05-14T23:50:17.332692286Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.8\" with image id \"sha256:c5361ece77e80334cd5fb082c0b678cb3244f5834ecacea1719ae6b38b465581\", repo tag \"registry.k8s.io/kube-proxy:v1.31.8\", repo digest \"registry.k8s.io/kube-proxy@sha256:dd0c9a37670f209947b1ed880f06a2e93e1d41da78c037f52f94b13858769838\", size \"26870936\" in 1.874134559s"
May 14 23:50:17.332827 containerd[1737]: time="2025-05-14T23:50:17.332728206Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.8\" returns image reference \"sha256:c5361ece77e80334cd5fb082c0b678cb3244f5834ecacea1719ae6b38b465581\""
May 14 23:50:17.333575 containerd[1737]: time="2025-05-14T23:50:17.333321967Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\""
May 14 23:50:18.100825 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount28261109.mount: Deactivated successfully.
May 14 23:50:19.942372 containerd[1737]: time="2025-05-14T23:50:19.941990363Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 14 23:50:19.945420 containerd[1737]: time="2025-05-14T23:50:19.945355770Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=16485381"
May 14 23:50:19.949320 containerd[1737]: time="2025-05-14T23:50:19.949248697Z" level=info msg="ImageCreate event name:\"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 14 23:50:19.956126 containerd[1737]: time="2025-05-14T23:50:19.956072231Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 14 23:50:19.956962 containerd[1737]: time="2025-05-14T23:50:19.956822752Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"16482581\" in 2.623453185s"
May 14 23:50:19.956962 containerd[1737]: time="2025-05-14T23:50:19.956855632Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\""
May 14 23:50:19.956962 containerd[1737]: time="2025-05-14T23:50:19.957437553Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
May 14 23:50:22.857478 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4188583746.mount: Deactivated successfully.
May 14 23:50:23.100310 containerd[1737]: time="2025-05-14T23:50:23.099522125Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 14 23:50:23.102416 containerd[1737]: time="2025-05-14T23:50:23.102363771Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268703"
May 14 23:50:23.149364 containerd[1737]: time="2025-05-14T23:50:23.149242901Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 14 23:50:23.195322 containerd[1737]: time="2025-05-14T23:50:23.195258191Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 14 23:50:23.196431 containerd[1737]: time="2025-05-14T23:50:23.196022392Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 3.238557959s"
May 14 23:50:23.196431 containerd[1737]: time="2025-05-14T23:50:23.196057272Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\""
May 14 23:50:23.196675 containerd[1737]: time="2025-05-14T23:50:23.196638633Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\""
May 14 23:50:24.807510 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1398348364.mount: Deactivated successfully.
May 14 23:50:26.558619 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6.
May 14 23:50:26.563531 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 14 23:50:34.946126 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 14 23:50:34.950560 (kubelet)[2781]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
May 14 23:50:34.986688 kubelet[2781]: E0514 23:50:34.986629 2781 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 14 23:50:34.989147 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 14 23:50:34.989440 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 14 23:50:34.990033 systemd[1]: kubelet.service: Consumed 121ms CPU time, 92.4M memory peak.
May 14 23:50:42.119204 containerd[1737]: time="2025-05-14T23:50:42.119097712Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 14 23:50:42.121589 containerd[1737]: time="2025-05-14T23:50:42.121520755Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=66406465"
May 14 23:50:42.127253 containerd[1737]: time="2025-05-14T23:50:42.127199884Z" level=info msg="ImageCreate event name:\"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 14 23:50:42.133456 containerd[1737]: time="2025-05-14T23:50:42.133420893Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 14 23:50:42.134896 containerd[1737]: time="2025-05-14T23:50:42.134754095Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"66535646\" in 18.938083702s"
May 14 23:50:42.134896 containerd[1737]: time="2025-05-14T23:50:42.134792255Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\""
May 14 23:50:45.058563 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 7.
May 14 23:50:45.064576 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 14 23:50:46.119772 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 14 23:50:46.124871 (kubelet)[2866]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
May 14 23:50:46.184884 kubelet[2866]: E0514 23:50:46.184821 2866 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 14 23:50:46.186527 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 14 23:50:46.186663 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 14 23:50:46.187158 systemd[1]: kubelet.service: Consumed 123ms CPU time, 92.4M memory peak.
May 14 23:50:48.461358 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
May 14 23:50:48.461696 systemd[1]: kubelet.service: Consumed 123ms CPU time, 92.4M memory peak.
May 14 23:50:48.473928 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 14 23:50:48.506522 systemd[1]: Reload requested from client PID 2880 ('systemctl') (unit session-7.scope)...
May 14 23:50:48.506541 systemd[1]: Reloading...
May 14 23:50:48.628484 zram_generator::config[2927]: No configuration found.
May 14 23:50:48.747209 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 14 23:50:48.850604 systemd[1]: Reloading finished in 343 ms.
May 14 23:50:48.895506 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 14 23:50:48.900619 (kubelet)[2984]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
May 14 23:50:48.903522 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
May 14 23:50:48.904489 systemd[1]: kubelet.service: Deactivated successfully.
May 14 23:50:48.905422 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
May 14 23:50:48.905475 systemd[1]: kubelet.service: Consumed 85ms CPU time, 82.4M memory peak.
May 14 23:50:48.916975 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 14 23:50:52.104691 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 14 23:50:52.109673 (kubelet)[2996]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
May 14 23:50:52.147637 kubelet[2996]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
May 14 23:50:52.147637 kubelet[2996]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
May 14 23:50:52.147637 kubelet[2996]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
May 14 23:50:52.148062 kubelet[2996]: I0514 23:50:52.147647 2996 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
May 14 23:50:54.743215 kubelet[2996]: I0514 23:50:54.742464 2996 server.go:486] "Kubelet version" kubeletVersion="v1.31.0"
May 14 23:50:54.743215 kubelet[2996]: I0514 23:50:54.742499 2996 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
May 14 23:50:54.743215 kubelet[2996]: I0514 23:50:54.742737 2996 server.go:929] "Client rotation is on, will bootstrap in background"
May 14 23:50:54.762955 kubelet[2996]: E0514 23:50:54.762902 2996 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.200.20.35:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.20.35:6443: connect: connection refused" logger="UnhandledError"
May 14 23:50:54.764160 kubelet[2996]: I0514 23:50:54.764126 2996 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
May 14 23:50:54.775684 kubelet[2996]: E0514 23:50:54.775627 2996 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
May 14 23:50:54.775684 kubelet[2996]: I0514 23:50:54.775681 2996 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
May 14 23:50:54.779943 kubelet[2996]: I0514 23:50:54.779911 2996 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
May 14 23:50:54.780666 kubelet[2996]: I0514 23:50:54.780640 2996 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
May 14 23:50:54.780853 kubelet[2996]: I0514 23:50:54.780817 2996 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
May 14 23:50:54.781062 kubelet[2996]: I0514 23:50:54.780851 2996 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4230.1.1-n-76ed3c1841","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
May 14 23:50:54.781165 kubelet[2996]: I0514 23:50:54.781071 2996 topology_manager.go:138] "Creating topology manager with none policy"
May 14 23:50:54.781165 kubelet[2996]: I0514 23:50:54.781081 2996 container_manager_linux.go:300] "Creating device plugin manager"
May 14 23:50:54.781234 kubelet[2996]: I0514 23:50:54.781213 2996 state_mem.go:36] "Initialized new in-memory state store"
May 14 23:50:54.783184 kubelet[2996]: I0514 23:50:54.782880 2996 kubelet.go:408] "Attempting to sync node with API server"
May 14 23:50:54.783184 kubelet[2996]: I0514 23:50:54.782915 2996 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests"
May 14 23:50:54.783184 kubelet[2996]: I0514 23:50:54.782942 2996 kubelet.go:314] "Adding apiserver pod source"
May 14 23:50:54.783184 kubelet[2996]: I0514 23:50:54.782954 2996 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
May 14 23:50:54.788378 kubelet[2996]: W0514 23:50:54.788095 2996 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.200.20.35:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4230.1.1-n-76ed3c1841&limit=500&resourceVersion=0": dial tcp 10.200.20.35:6443: connect: connection refused
May 14 23:50:54.788378 kubelet[2996]: E0514 23:50:54.788170 2996 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.200.20.35:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4230.1.1-n-76ed3c1841&limit=500&resourceVersion=0\": dial tcp 10.200.20.35:6443: connect: connection refused" logger="UnhandledError"
May 14 23:50:54.788641 kubelet[2996]: W0514 23:50:54.788594 2996 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.200.20.35:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.200.20.35:6443: connect: connection refused
May 14 23:50:54.788686 kubelet[2996]: E0514 23:50:54.788645 2996 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.200.20.35:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.20.35:6443: connect: connection refused" logger="UnhandledError"
May 14 23:50:54.789407 kubelet[2996]: I0514 23:50:54.789382 2996 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1"
May 14 23:50:54.791247 kubelet[2996]: I0514 23:50:54.791150 2996 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
May 14 23:50:54.792025 kubelet[2996]: W0514 23:50:54.791839 2996 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
May 14 23:50:54.793197 kubelet[2996]: I0514 23:50:54.793177 2996 server.go:1269] "Started kubelet"
May 14 23:50:54.795295 kubelet[2996]: I0514 23:50:54.795217 2996 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
May 14 23:50:54.797007 kubelet[2996]: E0514 23:50:54.795324 2996 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.20.35:6443/api/v1/namespaces/default/events\": dial tcp 10.200.20.35:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4230.1.1-n-76ed3c1841.183f89c3ded493d9 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4230.1.1-n-76ed3c1841,UID:ci-4230.1.1-n-76ed3c1841,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4230.1.1-n-76ed3c1841,},FirstTimestamp:2025-05-14 23:50:54.793143257 +0000 UTC m=+2.680126814,LastTimestamp:2025-05-14 23:50:54.793143257 +0000 UTC m=+2.680126814,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4230.1.1-n-76ed3c1841,}"
May 14 23:50:54.798717 kubelet[2996]: I0514 23:50:54.798617 2996 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
May 14 23:50:54.800014 kubelet[2996]: I0514 23:50:54.799539 2996 server.go:460] "Adding debug handlers to kubelet server"
May 14 23:50:54.800458 kubelet[2996]: I0514 23:50:54.800397 2996 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
May 14 23:50:54.800776 kubelet[2996]: I0514 23:50:54.800663 2996 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
May 14 23:50:54.800939 kubelet[2996]: I0514 23:50:54.800911 2996 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
May 14 23:50:54.802564 kubelet[2996]: I0514 23:50:54.801694 2996 volume_manager.go:289] "Starting Kubelet Volume Manager"
May 14 23:50:54.802564 kubelet[2996]: I0514 23:50:54.801822 2996 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
May 14 23:50:54.802564 kubelet[2996]: I0514 23:50:54.801914 2996 reconciler.go:26] "Reconciler: start to sync state"
May 14 23:50:54.802564 kubelet[2996]: W0514 23:50:54.802450 2996 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.200.20.35:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.35:6443: connect: connection refused
May 14 23:50:54.802564 kubelet[2996]: E0514 23:50:54.802534 2996 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.200.20.35:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.20.35:6443: connect: connection refused" logger="UnhandledError"
May 14 23:50:54.803547 kubelet[2996]: E0514 23:50:54.803516 2996 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4230.1.1-n-76ed3c1841\" not found"
May 14 23:50:54.804047 kubelet[2996]: I0514 23:50:54.804019 2996 factory.go:221] Registration of the systemd container factory successfully
May 14 23:50:54.804137 kubelet[2996]: I0514 23:50:54.804113 2996 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
May 14 23:50:54.805292 kubelet[2996]: E0514 23:50:54.805165 2996 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.35:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230.1.1-n-76ed3c1841?timeout=10s\": dial tcp 10.200.20.35:6443: connect: connection refused" interval="200ms"
May 14 23:50:54.807186 kubelet[2996]: E0514 23:50:54.806692 2996 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
May 14 23:50:54.808374 kubelet[2996]: I0514 23:50:54.808307 2996 factory.go:221] Registration of the containerd container factory successfully
May 14 23:50:54.840162 kubelet[2996]: I0514 23:50:54.840098 2996 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
May 14 23:50:54.845739 kubelet[2996]: I0514 23:50:54.845144 2996 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
May 14 23:50:54.845739 kubelet[2996]: I0514 23:50:54.845177 2996 status_manager.go:217] "Starting to sync pod status with apiserver"
May 14 23:50:54.845739 kubelet[2996]: I0514 23:50:54.845193 2996 kubelet.go:2321] "Starting kubelet main sync loop"
May 14 23:50:54.845739 kubelet[2996]: E0514 23:50:54.845232 2996 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
May 14 23:50:54.846725 kubelet[2996]: W0514 23:50:54.846677 2996 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.200.20.35:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.35:6443: connect: connection refused
May 14 23:50:54.846860 kubelet[2996]: E0514 23:50:54.846838 2996 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.200.20.35:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.20.35:6443: connect: connection refused" logger="UnhandledError"
May 14 23:50:54.847864 kubelet[2996]: I0514 23:50:54.847846 2996 cpu_manager.go:214] "Starting CPU manager" policy="none"
May 14 23:50:54.847945 kubelet[2996]: I0514 23:50:54.847935 2996 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
May 14 23:50:54.848016 kubelet[2996]: I0514 23:50:54.848007 2996 state_mem.go:36] "Initialized new in-memory state store"
May 14 23:50:54.904603 kubelet[2996]: E0514 23:50:54.904564 2996 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4230.1.1-n-76ed3c1841\" not found"
May 14 23:50:54.945937 kubelet[2996]: E0514 23:50:54.945892 2996 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
May 14 23:50:55.005418 kubelet[2996]: E0514 23:50:55.005303 2996
kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4230.1.1-n-76ed3c1841\" not found" May 14 23:50:55.006433 kubelet[2996]: E0514 23:50:55.006386 2996 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.35:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230.1.1-n-76ed3c1841?timeout=10s\": dial tcp 10.200.20.35:6443: connect: connection refused" interval="400ms" May 14 23:50:55.105852 kubelet[2996]: E0514 23:50:55.105812 2996 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4230.1.1-n-76ed3c1841\" not found" May 14 23:50:55.146112 kubelet[2996]: E0514 23:50:55.146040 2996 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" May 14 23:50:55.206618 kubelet[2996]: E0514 23:50:55.206557 2996 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4230.1.1-n-76ed3c1841\" not found" May 14 23:50:55.307752 kubelet[2996]: E0514 23:50:55.307613 2996 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4230.1.1-n-76ed3c1841\" not found" May 14 23:50:55.407524 kubelet[2996]: E0514 23:50:55.407468 2996 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.35:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230.1.1-n-76ed3c1841?timeout=10s\": dial tcp 10.200.20.35:6443: connect: connection refused" interval="800ms" May 14 23:50:55.407710 kubelet[2996]: E0514 23:50:55.407689 2996 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4230.1.1-n-76ed3c1841\" not found" May 14 23:50:55.508029 kubelet[2996]: E0514 23:50:55.507998 2996 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4230.1.1-n-76ed3c1841\" not found" May 14 23:50:55.547165 kubelet[2996]: E0514 23:50:55.547130 2996 
kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" May 14 23:50:55.608668 kubelet[2996]: E0514 23:50:55.608637 2996 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4230.1.1-n-76ed3c1841\" not found" May 14 23:50:55.709193 kubelet[2996]: E0514 23:50:55.709154 2996 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4230.1.1-n-76ed3c1841\" not found" May 14 23:50:55.809593 kubelet[2996]: E0514 23:50:55.809563 2996 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4230.1.1-n-76ed3c1841\" not found" May 14 23:50:55.828176 kubelet[2996]: W0514 23:50:55.828071 2996 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.200.20.35:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.35:6443: connect: connection refused May 14 23:50:55.828176 kubelet[2996]: E0514 23:50:55.828118 2996 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.200.20.35:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.20.35:6443: connect: connection refused" logger="UnhandledError" May 14 23:50:55.909957 kubelet[2996]: E0514 23:50:55.909834 2996 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4230.1.1-n-76ed3c1841\" not found" May 14 23:50:56.010436 kubelet[2996]: E0514 23:50:56.010365 2996 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4230.1.1-n-76ed3c1841\" not found" May 14 23:50:56.015018 kubelet[2996]: W0514 23:50:56.014962 2996 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get 
"https://10.200.20.35:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4230.1.1-n-76ed3c1841&limit=500&resourceVersion=0": dial tcp 10.200.20.35:6443: connect: connection refused May 14 23:50:56.015085 kubelet[2996]: E0514 23:50:56.015033 2996 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.200.20.35:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4230.1.1-n-76ed3c1841&limit=500&resourceVersion=0\": dial tcp 10.200.20.35:6443: connect: connection refused" logger="UnhandledError" May 14 23:50:56.110752 kubelet[2996]: E0514 23:50:56.110713 2996 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4230.1.1-n-76ed3c1841\" not found" May 14 23:50:56.158386 kubelet[2996]: W0514 23:50:56.158287 2996 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.200.20.35:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.200.20.35:6443: connect: connection refused May 14 23:50:56.158557 kubelet[2996]: E0514 23:50:56.158394 2996 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.200.20.35:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.20.35:6443: connect: connection refused" logger="UnhandledError" May 14 23:50:56.208514 kubelet[2996]: E0514 23:50:56.208388 2996 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.35:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230.1.1-n-76ed3c1841?timeout=10s\": dial tcp 10.200.20.35:6443: connect: connection refused" interval="1.6s" May 14 23:50:56.212084 kubelet[2996]: E0514 23:50:56.211512 2996 kubelet_node_status.go:453] "Error getting the current node from lister" err="node 
\"ci-4230.1.1-n-76ed3c1841\" not found" May 14 23:50:56.229879 kubelet[2996]: W0514 23:50:56.229821 2996 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.200.20.35:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.35:6443: connect: connection refused May 14 23:50:56.229879 kubelet[2996]: E0514 23:50:56.229885 2996 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.200.20.35:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.20.35:6443: connect: connection refused" logger="UnhandledError" May 14 23:50:56.312096 kubelet[2996]: E0514 23:50:56.312057 2996 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4230.1.1-n-76ed3c1841\" not found" May 14 23:50:56.348226 kubelet[2996]: E0514 23:50:56.348199 2996 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" May 14 23:50:56.412689 kubelet[2996]: E0514 23:50:56.412657 2996 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4230.1.1-n-76ed3c1841\" not found" May 14 23:50:58.369287 kubelet[2996]: E0514 23:50:56.513103 2996 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4230.1.1-n-76ed3c1841\" not found" May 14 23:50:58.369287 kubelet[2996]: E0514 23:50:56.613584 2996 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4230.1.1-n-76ed3c1841\" not found" May 14 23:50:58.369287 kubelet[2996]: E0514 23:50:56.714059 2996 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4230.1.1-n-76ed3c1841\" not found" May 14 23:50:58.369287 kubelet[2996]: E0514 23:50:56.814279 2996 kubelet_node_status.go:453] "Error getting the current node from lister" err="node 
\"ci-4230.1.1-n-76ed3c1841\" not found" May 14 23:50:58.369287 kubelet[2996]: E0514 23:50:56.837687 2996 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.200.20.35:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.20.35:6443: connect: connection refused" logger="UnhandledError" May 14 23:50:58.369287 kubelet[2996]: E0514 23:50:56.914523 2996 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4230.1.1-n-76ed3c1841\" not found" May 14 23:50:58.369287 kubelet[2996]: E0514 23:50:57.015172 2996 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4230.1.1-n-76ed3c1841\" not found" May 14 23:50:58.369287 kubelet[2996]: E0514 23:50:57.115721 2996 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4230.1.1-n-76ed3c1841\" not found" May 14 23:50:58.369287 kubelet[2996]: E0514 23:50:57.216394 2996 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4230.1.1-n-76ed3c1841\" not found" May 14 23:50:58.369287 kubelet[2996]: E0514 23:50:57.317351 2996 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4230.1.1-n-76ed3c1841\" not found" May 14 23:50:58.369287 kubelet[2996]: E0514 23:50:57.417833 2996 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4230.1.1-n-76ed3c1841\" not found" May 14 23:50:58.369959 kubelet[2996]: E0514 23:50:57.518266 2996 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4230.1.1-n-76ed3c1841\" not found" May 14 23:50:58.369959 kubelet[2996]: E0514 23:50:57.618750 2996 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4230.1.1-n-76ed3c1841\" not found" May 14 
23:50:58.369959 kubelet[2996]: E0514 23:50:57.719307 2996 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4230.1.1-n-76ed3c1841\" not found" May 14 23:50:58.369959 kubelet[2996]: E0514 23:50:57.808780 2996 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.35:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230.1.1-n-76ed3c1841?timeout=10s\": dial tcp 10.200.20.35:6443: connect: connection refused" interval="3.2s" May 14 23:50:58.369959 kubelet[2996]: E0514 23:50:57.820096 2996 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4230.1.1-n-76ed3c1841\" not found" May 14 23:50:58.369959 kubelet[2996]: W0514 23:50:57.834274 2996 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.200.20.35:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.35:6443: connect: connection refused May 14 23:50:58.369959 kubelet[2996]: E0514 23:50:57.834459 2996 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.200.20.35:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.20.35:6443: connect: connection refused" logger="UnhandledError" May 14 23:50:58.369959 kubelet[2996]: E0514 23:50:57.921164 2996 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4230.1.1-n-76ed3c1841\" not found" May 14 23:50:58.369959 kubelet[2996]: E0514 23:50:57.949309 2996 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" May 14 23:50:58.369959 kubelet[2996]: E0514 23:50:58.021806 2996 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4230.1.1-n-76ed3c1841\" not found" May 14 23:50:58.370161 
kubelet[2996]: E0514 23:50:58.122363 2996 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4230.1.1-n-76ed3c1841\" not found" May 14 23:50:58.370161 kubelet[2996]: E0514 23:50:58.222894 2996 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4230.1.1-n-76ed3c1841\" not found" May 14 23:50:58.370161 kubelet[2996]: E0514 23:50:58.323886 2996 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4230.1.1-n-76ed3c1841\" not found" May 14 23:50:58.377996 kubelet[2996]: I0514 23:50:58.377954 2996 policy_none.go:49] "None policy: Start" May 14 23:50:58.378852 kubelet[2996]: I0514 23:50:58.378740 2996 memory_manager.go:170] "Starting memorymanager" policy="None" May 14 23:50:58.379071 kubelet[2996]: I0514 23:50:58.378938 2996 state_mem.go:35] "Initializing new in-memory state store" May 14 23:50:58.424287 kubelet[2996]: E0514 23:50:58.424236 2996 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4230.1.1-n-76ed3c1841\" not found" May 14 23:50:58.495860 kubelet[2996]: W0514 23:50:58.495787 2996 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.200.20.35:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.200.20.35:6443: connect: connection refused May 14 23:50:58.495860 kubelet[2996]: E0514 23:50:58.495835 2996 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.200.20.35:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.20.35:6443: connect: connection refused" logger="UnhandledError" May 14 23:50:58.525334 kubelet[2996]: E0514 23:50:58.525294 2996 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4230.1.1-n-76ed3c1841\" not 
found" May 14 23:50:58.626149 kubelet[2996]: E0514 23:50:58.625722 2996 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4230.1.1-n-76ed3c1841\" not found" May 14 23:50:58.726266 kubelet[2996]: E0514 23:50:58.726224 2996 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4230.1.1-n-76ed3c1841\" not found" May 14 23:50:58.826541 kubelet[2996]: E0514 23:50:58.826499 2996 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4230.1.1-n-76ed3c1841\" not found" May 14 23:50:58.918643 kubelet[2996]: W0514 23:50:58.918464 2996 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.200.20.35:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4230.1.1-n-76ed3c1841&limit=500&resourceVersion=0": dial tcp 10.200.20.35:6443: connect: connection refused May 14 23:50:58.918643 kubelet[2996]: E0514 23:50:58.918520 2996 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.200.20.35:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4230.1.1-n-76ed3c1841&limit=500&resourceVersion=0\": dial tcp 10.200.20.35:6443: connect: connection refused" logger="UnhandledError" May 14 23:50:58.924439 kubelet[2996]: W0514 23:50:58.924330 2996 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.200.20.35:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.35:6443: connect: connection refused May 14 23:50:58.924439 kubelet[2996]: E0514 23:50:58.924402 2996 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.200.20.35:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.20.35:6443: connect: connection refused" 
logger="UnhandledError" May 14 23:50:58.926635 kubelet[2996]: E0514 23:50:58.926599 2996 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4230.1.1-n-76ed3c1841\" not found" May 14 23:50:59.027119 kubelet[2996]: E0514 23:50:59.027087 2996 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4230.1.1-n-76ed3c1841\" not found" May 14 23:50:59.128013 kubelet[2996]: E0514 23:50:59.127951 2996 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4230.1.1-n-76ed3c1841\" not found" May 14 23:50:59.228638 kubelet[2996]: E0514 23:50:59.228537 2996 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4230.1.1-n-76ed3c1841\" not found" May 14 23:50:59.329639 kubelet[2996]: E0514 23:50:59.329596 2996 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4230.1.1-n-76ed3c1841\" not found" May 14 23:50:59.739626 kubelet[2996]: E0514 23:50:59.430092 2996 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4230.1.1-n-76ed3c1841\" not found" May 14 23:50:59.739626 kubelet[2996]: E0514 23:50:59.530566 2996 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4230.1.1-n-76ed3c1841\" not found" May 14 23:50:59.739626 kubelet[2996]: E0514 23:50:59.631096 2996 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4230.1.1-n-76ed3c1841\" not found" May 14 23:50:59.739626 kubelet[2996]: E0514 23:50:59.731543 2996 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4230.1.1-n-76ed3c1841\" not found" May 14 23:50:59.746696 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. May 14 23:50:59.755717 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. 
May 14 23:50:59.759015 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. May 14 23:50:59.769452 kubelet[2996]: I0514 23:50:59.769423 2996 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 14 23:50:59.769794 kubelet[2996]: I0514 23:50:59.769776 2996 eviction_manager.go:189] "Eviction manager: starting control loop" May 14 23:50:59.769896 kubelet[2996]: I0514 23:50:59.769862 2996 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 14 23:50:59.770276 kubelet[2996]: I0514 23:50:59.770258 2996 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 14 23:50:59.772621 kubelet[2996]: E0514 23:50:59.772592 2996 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4230.1.1-n-76ed3c1841\" not found" May 14 23:50:59.872612 kubelet[2996]: I0514 23:50:59.872531 2996 kubelet_node_status.go:72] "Attempting to register node" node="ci-4230.1.1-n-76ed3c1841" May 14 23:50:59.873030 kubelet[2996]: E0514 23:50:59.872997 2996 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.200.20.35:6443/api/v1/nodes\": dial tcp 10.200.20.35:6443: connect: connection refused" node="ci-4230.1.1-n-76ed3c1841" May 14 23:51:00.075350 kubelet[2996]: I0514 23:51:00.075303 2996 kubelet_node_status.go:72] "Attempting to register node" node="ci-4230.1.1-n-76ed3c1841" May 14 23:51:00.075742 kubelet[2996]: E0514 23:51:00.075709 2996 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.200.20.35:6443/api/v1/nodes\": dial tcp 10.200.20.35:6443: connect: connection refused" node="ci-4230.1.1-n-76ed3c1841" May 14 23:51:00.478176 kubelet[2996]: I0514 23:51:00.478087 2996 kubelet_node_status.go:72] "Attempting to register node" node="ci-4230.1.1-n-76ed3c1841" May 14 23:51:00.478627 
kubelet[2996]: E0514 23:51:00.478508 2996 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.200.20.35:6443/api/v1/nodes\": dial tcp 10.200.20.35:6443: connect: connection refused" node="ci-4230.1.1-n-76ed3c1841" May 14 23:51:01.010065 kubelet[2996]: E0514 23:51:01.010013 2996 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.35:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230.1.1-n-76ed3c1841?timeout=10s\": dial tcp 10.200.20.35:6443: connect: connection refused" interval="6.4s" May 14 23:51:01.158908 systemd[1]: Created slice kubepods-burstable-pod340f0de8567f3bdc4e243f1759576cbd.slice - libcontainer container kubepods-burstable-pod340f0de8567f3bdc4e243f1759576cbd.slice. May 14 23:51:01.178895 systemd[1]: Created slice kubepods-burstable-podaf031105c50d48b4cf0c7a4d359882fc.slice - libcontainer container kubepods-burstable-podaf031105c50d48b4cf0c7a4d359882fc.slice. May 14 23:51:01.192600 systemd[1]: Created slice kubepods-burstable-pod6d179735bb9072d4ee31ea5bf1dbedb4.slice - libcontainer container kubepods-burstable-pod6d179735bb9072d4ee31ea5bf1dbedb4.slice. 
May 14 23:51:01.222304 kubelet[2996]: E0514 23:51:01.222254 2996 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.200.20.35:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.20.35:6443: connect: connection refused" logger="UnhandledError" May 14 23:51:01.234576 kubelet[2996]: I0514 23:51:01.234537 2996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/340f0de8567f3bdc4e243f1759576cbd-kubeconfig\") pod \"kube-scheduler-ci-4230.1.1-n-76ed3c1841\" (UID: \"340f0de8567f3bdc4e243f1759576cbd\") " pod="kube-system/kube-scheduler-ci-4230.1.1-n-76ed3c1841" May 14 23:51:01.234658 kubelet[2996]: I0514 23:51:01.234589 2996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/af031105c50d48b4cf0c7a4d359882fc-ca-certs\") pod \"kube-apiserver-ci-4230.1.1-n-76ed3c1841\" (UID: \"af031105c50d48b4cf0c7a4d359882fc\") " pod="kube-system/kube-apiserver-ci-4230.1.1-n-76ed3c1841" May 14 23:51:01.234658 kubelet[2996]: I0514 23:51:01.234620 2996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/af031105c50d48b4cf0c7a4d359882fc-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4230.1.1-n-76ed3c1841\" (UID: \"af031105c50d48b4cf0c7a4d359882fc\") " pod="kube-system/kube-apiserver-ci-4230.1.1-n-76ed3c1841" May 14 23:51:01.234738 kubelet[2996]: I0514 23:51:01.234659 2996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/6d179735bb9072d4ee31ea5bf1dbedb4-ca-certs\") pod 
\"kube-controller-manager-ci-4230.1.1-n-76ed3c1841\" (UID: \"6d179735bb9072d4ee31ea5bf1dbedb4\") " pod="kube-system/kube-controller-manager-ci-4230.1.1-n-76ed3c1841" May 14 23:51:01.234738 kubelet[2996]: I0514 23:51:01.234692 2996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/6d179735bb9072d4ee31ea5bf1dbedb4-flexvolume-dir\") pod \"kube-controller-manager-ci-4230.1.1-n-76ed3c1841\" (UID: \"6d179735bb9072d4ee31ea5bf1dbedb4\") " pod="kube-system/kube-controller-manager-ci-4230.1.1-n-76ed3c1841" May 14 23:51:01.234738 kubelet[2996]: I0514 23:51:01.234719 2996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/6d179735bb9072d4ee31ea5bf1dbedb4-k8s-certs\") pod \"kube-controller-manager-ci-4230.1.1-n-76ed3c1841\" (UID: \"6d179735bb9072d4ee31ea5bf1dbedb4\") " pod="kube-system/kube-controller-manager-ci-4230.1.1-n-76ed3c1841" May 14 23:51:01.234810 kubelet[2996]: I0514 23:51:01.234738 2996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/af031105c50d48b4cf0c7a4d359882fc-k8s-certs\") pod \"kube-apiserver-ci-4230.1.1-n-76ed3c1841\" (UID: \"af031105c50d48b4cf0c7a4d359882fc\") " pod="kube-system/kube-apiserver-ci-4230.1.1-n-76ed3c1841" May 14 23:51:01.234810 kubelet[2996]: I0514 23:51:01.234759 2996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6d179735bb9072d4ee31ea5bf1dbedb4-kubeconfig\") pod \"kube-controller-manager-ci-4230.1.1-n-76ed3c1841\" (UID: \"6d179735bb9072d4ee31ea5bf1dbedb4\") " pod="kube-system/kube-controller-manager-ci-4230.1.1-n-76ed3c1841" May 14 23:51:01.234810 kubelet[2996]: I0514 23:51:01.234777 2996 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/6d179735bb9072d4ee31ea5bf1dbedb4-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4230.1.1-n-76ed3c1841\" (UID: \"6d179735bb9072d4ee31ea5bf1dbedb4\") " pod="kube-system/kube-controller-manager-ci-4230.1.1-n-76ed3c1841" May 14 23:51:01.281004 kubelet[2996]: I0514 23:51:01.280912 2996 kubelet_node_status.go:72] "Attempting to register node" node="ci-4230.1.1-n-76ed3c1841" May 14 23:51:01.281359 kubelet[2996]: E0514 23:51:01.281313 2996 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.200.20.35:6443/api/v1/nodes\": dial tcp 10.200.20.35:6443: connect: connection refused" node="ci-4230.1.1-n-76ed3c1841" May 14 23:51:01.477479 containerd[1737]: time="2025-05-14T23:51:01.477422121Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4230.1.1-n-76ed3c1841,Uid:340f0de8567f3bdc4e243f1759576cbd,Namespace:kube-system,Attempt:0,}" May 14 23:51:01.491116 containerd[1737]: time="2025-05-14T23:51:01.491028900Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4230.1.1-n-76ed3c1841,Uid:af031105c50d48b4cf0c7a4d359882fc,Namespace:kube-system,Attempt:0,}" May 14 23:51:01.496111 containerd[1737]: time="2025-05-14T23:51:01.495810867Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4230.1.1-n-76ed3c1841,Uid:6d179735bb9072d4ee31ea5bf1dbedb4,Namespace:kube-system,Attempt:0,}" May 14 23:51:02.051055 kubelet[2996]: W0514 23:51:02.050967 2996 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.200.20.35:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.35:6443: connect: connection refused May 14 23:51:02.051055 kubelet[2996]: E0514 23:51:02.051019 2996 reflector.go:158] "Unhandled Error" 
err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.200.20.35:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.20.35:6443: connect: connection refused" logger="UnhandledError" May 14 23:51:02.153092 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2914999388.mount: Deactivated successfully. May 14 23:51:02.183310 containerd[1737]: time="2025-05-14T23:51:02.182419031Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 14 23:51:02.188493 containerd[1737]: time="2025-05-14T23:51:02.188427120Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269173" May 14 23:51:02.202825 containerd[1737]: time="2025-05-14T23:51:02.202749340Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 14 23:51:02.207218 containerd[1737]: time="2025-05-14T23:51:02.207159826Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 14 23:51:02.209880 containerd[1737]: time="2025-05-14T23:51:02.209821510Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" May 14 23:51:02.220931 containerd[1737]: time="2025-05-14T23:51:02.220880445Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 14 23:51:02.221906 containerd[1737]: 
time="2025-05-14T23:51:02.221519246Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 743.991525ms" May 14 23:51:02.228209 containerd[1737]: time="2025-05-14T23:51:02.227400254Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 14 23:51:02.230465 containerd[1737]: time="2025-05-14T23:51:02.230416459Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" May 14 23:51:02.241004 containerd[1737]: time="2025-05-14T23:51:02.240948033Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 749.834453ms" May 14 23:51:02.242719 containerd[1737]: time="2025-05-14T23:51:02.242550316Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 746.649769ms" May 14 23:51:02.346239 kubelet[2996]: W0514 23:51:02.346193 2996 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.200.20.35:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.35:6443: connect: connection refused May 14 23:51:02.346432 kubelet[2996]: E0514 
23:51:02.346250 2996 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.200.20.35:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.20.35:6443: connect: connection refused" logger="UnhandledError" May 14 23:51:02.885508 kubelet[2996]: I0514 23:51:02.885391 2996 kubelet_node_status.go:72] "Attempting to register node" node="ci-4230.1.1-n-76ed3c1841" May 14 23:51:02.885919 kubelet[2996]: E0514 23:51:02.885882 2996 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.200.20.35:6443/api/v1/nodes\": dial tcp 10.200.20.35:6443: connect: connection refused" node="ci-4230.1.1-n-76ed3c1841" May 14 23:51:03.221084 containerd[1737]: time="2025-05-14T23:51:03.220891970Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 14 23:51:03.221084 containerd[1737]: time="2025-05-14T23:51:03.220967210Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 14 23:51:03.221954 containerd[1737]: time="2025-05-14T23:51:03.220983250Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 14 23:51:03.222490 containerd[1737]: time="2025-05-14T23:51:03.221780811Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 14 23:51:03.225827 containerd[1737]: time="2025-05-14T23:51:03.225573136Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 14 23:51:03.225827 containerd[1737]: time="2025-05-14T23:51:03.225665776Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 14 23:51:03.225827 containerd[1737]: time="2025-05-14T23:51:03.225677496Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 14 23:51:03.226577 containerd[1737]: time="2025-05-14T23:51:03.226495977Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 14 23:51:03.238978 containerd[1737]: time="2025-05-14T23:51:03.238618994Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 14 23:51:03.238978 containerd[1737]: time="2025-05-14T23:51:03.238701035Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 14 23:51:03.238978 containerd[1737]: time="2025-05-14T23:51:03.238719515Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 14 23:51:03.238978 containerd[1737]: time="2025-05-14T23:51:03.238819635Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 14 23:51:03.269554 systemd[1]: Started cri-containerd-0c5ff981ee76f10f42fcedf60f7a988a826f780bee4a11f1adda8f3008105731.scope - libcontainer container 0c5ff981ee76f10f42fcedf60f7a988a826f780bee4a11f1adda8f3008105731. May 14 23:51:03.270802 systemd[1]: Started cri-containerd-bc015c1bf2a0f40c57ce502a70fe55aa388a174ea7b97df2903c9e452d26cdc2.scope - libcontainer container bc015c1bf2a0f40c57ce502a70fe55aa388a174ea7b97df2903c9e452d26cdc2. May 14 23:51:03.275291 systemd[1]: Started cri-containerd-a75186c3fef8a8660c7320d349e261d7a03a36a385ab414c30c1fc78f600b994.scope - libcontainer container a75186c3fef8a8660c7320d349e261d7a03a36a385ab414c30c1fc78f600b994. 
May 14 23:51:03.325428 containerd[1737]: time="2025-05-14T23:51:03.325246316Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4230.1.1-n-76ed3c1841,Uid:6d179735bb9072d4ee31ea5bf1dbedb4,Namespace:kube-system,Attempt:0,} returns sandbox id \"0c5ff981ee76f10f42fcedf60f7a988a826f780bee4a11f1adda8f3008105731\"" May 14 23:51:03.332272 containerd[1737]: time="2025-05-14T23:51:03.332053846Z" level=info msg="CreateContainer within sandbox \"0c5ff981ee76f10f42fcedf60f7a988a826f780bee4a11f1adda8f3008105731\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" May 14 23:51:03.337676 containerd[1737]: time="2025-05-14T23:51:03.336747052Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4230.1.1-n-76ed3c1841,Uid:340f0de8567f3bdc4e243f1759576cbd,Namespace:kube-system,Attempt:0,} returns sandbox id \"a75186c3fef8a8660c7320d349e261d7a03a36a385ab414c30c1fc78f600b994\"" May 14 23:51:03.341325 containerd[1737]: time="2025-05-14T23:51:03.341256259Z" level=info msg="CreateContainer within sandbox \"a75186c3fef8a8660c7320d349e261d7a03a36a385ab414c30c1fc78f600b994\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" May 14 23:51:03.343682 containerd[1737]: time="2025-05-14T23:51:03.343638102Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4230.1.1-n-76ed3c1841,Uid:af031105c50d48b4cf0c7a4d359882fc,Namespace:kube-system,Attempt:0,} returns sandbox id \"bc015c1bf2a0f40c57ce502a70fe55aa388a174ea7b97df2903c9e452d26cdc2\"" May 14 23:51:03.346741 containerd[1737]: time="2025-05-14T23:51:03.346671266Z" level=info msg="CreateContainer within sandbox \"bc015c1bf2a0f40c57ce502a70fe55aa388a174ea7b97df2903c9e452d26cdc2\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" May 14 23:51:03.403570 containerd[1737]: time="2025-05-14T23:51:03.403513666Z" level=info msg="CreateContainer within sandbox 
\"0c5ff981ee76f10f42fcedf60f7a988a826f780bee4a11f1adda8f3008105731\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"de2db414d1294bdd80b00f1c57ee2131cb95fa39c38b75c55428f179f22667f6\"" May 14 23:51:03.404254 containerd[1737]: time="2025-05-14T23:51:03.404222907Z" level=info msg="StartContainer for \"de2db414d1294bdd80b00f1c57ee2131cb95fa39c38b75c55428f179f22667f6\"" May 14 23:51:03.426843 containerd[1737]: time="2025-05-14T23:51:03.426687899Z" level=info msg="CreateContainer within sandbox \"a75186c3fef8a8660c7320d349e261d7a03a36a385ab414c30c1fc78f600b994\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"9216c99f77ff61980edebc5e99f9d7f532f241c8f8c8e327b976772d6a407f4d\"" May 14 23:51:03.428653 containerd[1737]: time="2025-05-14T23:51:03.428453781Z" level=info msg="StartContainer for \"9216c99f77ff61980edebc5e99f9d7f532f241c8f8c8e327b976772d6a407f4d\"" May 14 23:51:03.429550 systemd[1]: Started cri-containerd-de2db414d1294bdd80b00f1c57ee2131cb95fa39c38b75c55428f179f22667f6.scope - libcontainer container de2db414d1294bdd80b00f1c57ee2131cb95fa39c38b75c55428f179f22667f6. May 14 23:51:03.433305 containerd[1737]: time="2025-05-14T23:51:03.433039588Z" level=info msg="CreateContainer within sandbox \"bc015c1bf2a0f40c57ce502a70fe55aa388a174ea7b97df2903c9e452d26cdc2\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"8f597b003ca917e9bad993302e61cceb073c57a4ab050f0887215a04c06f94be\"" May 14 23:51:03.437677 containerd[1737]: time="2025-05-14T23:51:03.435411631Z" level=info msg="StartContainer for \"8f597b003ca917e9bad993302e61cceb073c57a4ab050f0887215a04c06f94be\"" May 14 23:51:03.474682 systemd[1]: Started cri-containerd-9216c99f77ff61980edebc5e99f9d7f532f241c8f8c8e327b976772d6a407f4d.scope - libcontainer container 9216c99f77ff61980edebc5e99f9d7f532f241c8f8c8e327b976772d6a407f4d. 
May 14 23:51:03.485170 systemd[1]: Started cri-containerd-8f597b003ca917e9bad993302e61cceb073c57a4ab050f0887215a04c06f94be.scope - libcontainer container 8f597b003ca917e9bad993302e61cceb073c57a4ab050f0887215a04c06f94be. May 14 23:51:03.497781 containerd[1737]: time="2025-05-14T23:51:03.497724358Z" level=info msg="StartContainer for \"de2db414d1294bdd80b00f1c57ee2131cb95fa39c38b75c55428f179f22667f6\" returns successfully" May 14 23:51:03.531926 containerd[1737]: time="2025-05-14T23:51:03.531786806Z" level=info msg="StartContainer for \"9216c99f77ff61980edebc5e99f9d7f532f241c8f8c8e327b976772d6a407f4d\" returns successfully" May 14 23:51:03.557011 containerd[1737]: time="2025-05-14T23:51:03.556945762Z" level=info msg="StartContainer for \"8f597b003ca917e9bad993302e61cceb073c57a4ab050f0887215a04c06f94be\" returns successfully" May 14 23:51:06.088920 kubelet[2996]: I0514 23:51:06.088805 2996 kubelet_node_status.go:72] "Attempting to register node" node="ci-4230.1.1-n-76ed3c1841" May 14 23:51:06.116165 kubelet[2996]: I0514 23:51:06.116089 2996 kubelet_node_status.go:75] "Successfully registered node" node="ci-4230.1.1-n-76ed3c1841" May 14 23:51:06.794960 kubelet[2996]: I0514 23:51:06.794694 2996 apiserver.go:52] "Watching apiserver" May 14 23:51:06.802307 kubelet[2996]: I0514 23:51:06.802271 2996 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" May 14 23:51:08.058257 systemd[1]: Reload requested from client PID 3272 ('systemctl') (unit session-7.scope)... May 14 23:51:08.058275 systemd[1]: Reloading... May 14 23:51:08.182520 zram_generator::config[3322]: No configuration found. May 14 23:51:08.298329 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 14 23:51:08.418445 systemd[1]: Reloading finished in 359 ms. 
May 14 23:51:08.441824 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... May 14 23:51:08.457593 systemd[1]: kubelet.service: Deactivated successfully. May 14 23:51:08.457859 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 14 23:51:08.457915 systemd[1]: kubelet.service: Consumed 1.186s CPU time, 116.9M memory peak. May 14 23:51:08.463725 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 14 23:51:08.824277 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 14 23:51:08.833718 (kubelet)[3383]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 14 23:51:08.886931 kubelet[3383]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 14 23:51:08.886931 kubelet[3383]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. May 14 23:51:08.886931 kubelet[3383]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
May 14 23:51:08.886931 kubelet[3383]: I0514 23:51:08.886208 3383 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 14 23:51:08.900029 kubelet[3383]: I0514 23:51:08.899992 3383 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" May 14 23:51:08.901420 kubelet[3383]: I0514 23:51:08.900174 3383 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 14 23:51:08.901420 kubelet[3383]: I0514 23:51:08.900452 3383 server.go:929] "Client rotation is on, will bootstrap in background" May 14 23:51:08.902288 kubelet[3383]: I0514 23:51:08.902228 3383 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". May 14 23:51:08.904490 kubelet[3383]: I0514 23:51:08.904453 3383 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 14 23:51:08.915364 kubelet[3383]: E0514 23:51:08.913606 3383 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" May 14 23:51:08.915364 kubelet[3383]: I0514 23:51:08.913661 3383 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." May 14 23:51:08.918079 kubelet[3383]: I0514 23:51:08.918051 3383 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" May 14 23:51:08.918386 kubelet[3383]: I0514 23:51:08.918370 3383 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" May 14 23:51:08.918617 kubelet[3383]: I0514 23:51:08.918575 3383 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 14 23:51:08.918938 kubelet[3383]: I0514 23:51:08.918707 3383 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4230.1.1-n-76ed3c1841","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","Topolog
yManagerPolicyOptions":null,"CgroupVersion":2} May 14 23:51:08.919091 kubelet[3383]: I0514 23:51:08.919076 3383 topology_manager.go:138] "Creating topology manager with none policy" May 14 23:51:08.919148 kubelet[3383]: I0514 23:51:08.919140 3383 container_manager_linux.go:300] "Creating device plugin manager" May 14 23:51:08.919266 kubelet[3383]: I0514 23:51:08.919255 3383 state_mem.go:36] "Initialized new in-memory state store" May 14 23:51:08.919484 kubelet[3383]: I0514 23:51:08.919469 3383 kubelet.go:408] "Attempting to sync node with API server" May 14 23:51:08.919569 kubelet[3383]: I0514 23:51:08.919558 3383 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" May 14 23:51:08.919642 kubelet[3383]: I0514 23:51:08.919630 3383 kubelet.go:314] "Adding apiserver pod source" May 14 23:51:08.919711 kubelet[3383]: I0514 23:51:08.919700 3383 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 14 23:51:08.925529 kubelet[3383]: I0514 23:51:08.925470 3383 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" May 14 23:51:08.926599 kubelet[3383]: I0514 23:51:08.926573 3383 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 14 23:51:08.944835 kubelet[3383]: I0514 23:51:08.944076 3383 server.go:1269] "Started kubelet" May 14 23:51:08.951040 kubelet[3383]: I0514 23:51:08.951005 3383 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 14 23:51:08.963522 kubelet[3383]: I0514 23:51:08.963446 3383 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 May 14 23:51:08.966108 kubelet[3383]: I0514 23:51:08.966042 3383 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 14 23:51:08.966625 kubelet[3383]: I0514 23:51:08.966592 3383 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 
14 23:51:08.972544 kubelet[3383]: I0514 23:51:08.967495 3383 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 14 23:51:08.974253 kubelet[3383]: I0514 23:51:08.974215 3383 factory.go:221] Registration of the systemd container factory successfully May 14 23:51:08.974609 kubelet[3383]: I0514 23:51:08.967716 3383 volume_manager.go:289] "Starting Kubelet Volume Manager" May 14 23:51:08.975099 kubelet[3383]: E0514 23:51:08.967887 3383 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4230.1.1-n-76ed3c1841\" not found" May 14 23:51:08.975263 kubelet[3383]: I0514 23:51:08.972081 3383 server.go:460] "Adding debug handlers to kubelet server" May 14 23:51:08.975428 kubelet[3383]: I0514 23:51:08.975387 3383 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 14 23:51:08.975949 kubelet[3383]: I0514 23:51:08.967729 3383 desired_state_of_world_populator.go:146] "Desired state populator starts to run" May 14 23:51:08.978272 kubelet[3383]: I0514 23:51:08.978239 3383 reconciler.go:26] "Reconciler: start to sync state" May 14 23:51:08.981763 kubelet[3383]: I0514 23:51:08.981706 3383 factory.go:221] Registration of the containerd container factory successfully May 14 23:51:08.997073 kubelet[3383]: E0514 23:51:08.995427 3383 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 14 23:51:08.998999 kubelet[3383]: I0514 23:51:08.998948 3383 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 14 23:51:09.001261 kubelet[3383]: I0514 23:51:09.001171 3383 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" May 14 23:51:09.001261 kubelet[3383]: I0514 23:51:09.001204 3383 status_manager.go:217] "Starting to sync pod status with apiserver" May 14 23:51:09.001261 kubelet[3383]: I0514 23:51:09.001225 3383 kubelet.go:2321] "Starting kubelet main sync loop" May 14 23:51:09.001589 kubelet[3383]: E0514 23:51:09.001281 3383 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 14 23:51:09.070923 kubelet[3383]: I0514 23:51:09.070582 3383 cpu_manager.go:214] "Starting CPU manager" policy="none" May 14 23:51:09.070923 kubelet[3383]: I0514 23:51:09.070608 3383 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" May 14 23:51:09.070923 kubelet[3383]: I0514 23:51:09.070632 3383 state_mem.go:36] "Initialized new in-memory state store" May 14 23:51:09.070923 kubelet[3383]: I0514 23:51:09.070801 3383 state_mem.go:88] "Updated default CPUSet" cpuSet="" May 14 23:51:09.070923 kubelet[3383]: I0514 23:51:09.070812 3383 state_mem.go:96] "Updated CPUSet assignments" assignments={} May 14 23:51:09.070923 kubelet[3383]: I0514 23:51:09.070830 3383 policy_none.go:49] "None policy: Start" May 14 23:51:09.072310 kubelet[3383]: I0514 23:51:09.072226 3383 memory_manager.go:170] "Starting memorymanager" policy="None" May 14 23:51:09.072779 kubelet[3383]: I0514 23:51:09.072292 3383 state_mem.go:35] "Initializing new in-memory state store" May 14 23:51:09.072979 kubelet[3383]: I0514 23:51:09.072771 3383 state_mem.go:75] "Updated machine memory state" May 14 23:51:09.083489 kubelet[3383]: I0514 23:51:09.081772 3383 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 14 23:51:09.083489 kubelet[3383]: I0514 23:51:09.081964 3383 eviction_manager.go:189] "Eviction manager: starting control loop" May 14 23:51:09.083489 kubelet[3383]: I0514 23:51:09.081976 3383 container_log_manager.go:189] 
"Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 14 23:51:09.083489 kubelet[3383]: I0514 23:51:09.082407 3383 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 14 23:51:09.122007 kubelet[3383]: W0514 23:51:09.121534 3383 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] May 14 23:51:09.128321 kubelet[3383]: W0514 23:51:09.128163 3383 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] May 14 23:51:09.129120 kubelet[3383]: W0514 23:51:09.128912 3383 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] May 14 23:51:09.179669 kubelet[3383]: I0514 23:51:09.179614 3383 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/af031105c50d48b4cf0c7a4d359882fc-ca-certs\") pod \"kube-apiserver-ci-4230.1.1-n-76ed3c1841\" (UID: \"af031105c50d48b4cf0c7a4d359882fc\") " pod="kube-system/kube-apiserver-ci-4230.1.1-n-76ed3c1841" May 14 23:51:09.179669 kubelet[3383]: I0514 23:51:09.179671 3383 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/af031105c50d48b4cf0c7a4d359882fc-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4230.1.1-n-76ed3c1841\" (UID: \"af031105c50d48b4cf0c7a4d359882fc\") " pod="kube-system/kube-apiserver-ci-4230.1.1-n-76ed3c1841" May 14 23:51:09.179853 kubelet[3383]: I0514 23:51:09.179699 3383 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/6d179735bb9072d4ee31ea5bf1dbedb4-k8s-certs\") pod 
\"kube-controller-manager-ci-4230.1.1-n-76ed3c1841\" (UID: \"6d179735bb9072d4ee31ea5bf1dbedb4\") " pod="kube-system/kube-controller-manager-ci-4230.1.1-n-76ed3c1841" May 14 23:51:09.179853 kubelet[3383]: I0514 23:51:09.179717 3383 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6d179735bb9072d4ee31ea5bf1dbedb4-kubeconfig\") pod \"kube-controller-manager-ci-4230.1.1-n-76ed3c1841\" (UID: \"6d179735bb9072d4ee31ea5bf1dbedb4\") " pod="kube-system/kube-controller-manager-ci-4230.1.1-n-76ed3c1841" May 14 23:51:09.179853 kubelet[3383]: I0514 23:51:09.179734 3383 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/340f0de8567f3bdc4e243f1759576cbd-kubeconfig\") pod \"kube-scheduler-ci-4230.1.1-n-76ed3c1841\" (UID: \"340f0de8567f3bdc4e243f1759576cbd\") " pod="kube-system/kube-scheduler-ci-4230.1.1-n-76ed3c1841" May 14 23:51:09.179853 kubelet[3383]: I0514 23:51:09.179753 3383 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/af031105c50d48b4cf0c7a4d359882fc-k8s-certs\") pod \"kube-apiserver-ci-4230.1.1-n-76ed3c1841\" (UID: \"af031105c50d48b4cf0c7a4d359882fc\") " pod="kube-system/kube-apiserver-ci-4230.1.1-n-76ed3c1841" May 14 23:51:09.179853 kubelet[3383]: I0514 23:51:09.179770 3383 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/6d179735bb9072d4ee31ea5bf1dbedb4-ca-certs\") pod \"kube-controller-manager-ci-4230.1.1-n-76ed3c1841\" (UID: \"6d179735bb9072d4ee31ea5bf1dbedb4\") " pod="kube-system/kube-controller-manager-ci-4230.1.1-n-76ed3c1841" May 14 23:51:09.179973 kubelet[3383]: I0514 23:51:09.179789 3383 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/6d179735bb9072d4ee31ea5bf1dbedb4-flexvolume-dir\") pod \"kube-controller-manager-ci-4230.1.1-n-76ed3c1841\" (UID: \"6d179735bb9072d4ee31ea5bf1dbedb4\") " pod="kube-system/kube-controller-manager-ci-4230.1.1-n-76ed3c1841" May 14 23:51:09.179973 kubelet[3383]: I0514 23:51:09.179805 3383 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/6d179735bb9072d4ee31ea5bf1dbedb4-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4230.1.1-n-76ed3c1841\" (UID: \"6d179735bb9072d4ee31ea5bf1dbedb4\") " pod="kube-system/kube-controller-manager-ci-4230.1.1-n-76ed3c1841" May 14 23:51:09.201630 kubelet[3383]: I0514 23:51:09.201283 3383 kubelet_node_status.go:72] "Attempting to register node" node="ci-4230.1.1-n-76ed3c1841" May 14 23:51:09.221812 kubelet[3383]: I0514 23:51:09.221771 3383 kubelet_node_status.go:111] "Node was previously registered" node="ci-4230.1.1-n-76ed3c1841" May 14 23:51:09.222010 kubelet[3383]: I0514 23:51:09.221868 3383 kubelet_node_status.go:75] "Successfully registered node" node="ci-4230.1.1-n-76ed3c1841" May 14 23:51:09.897395 sudo[2241]: pam_unix(sudo:session): session closed for user root May 14 23:51:09.920928 kubelet[3383]: I0514 23:51:09.920652 3383 apiserver.go:52] "Watching apiserver" May 14 23:51:09.975946 kubelet[3383]: I0514 23:51:09.975887 3383 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" May 14 23:51:09.978937 sshd[2240]: Connection closed by 10.200.16.10 port 48582 May 14 23:51:09.979556 sshd-session[2238]: pam_unix(sshd:session): session closed for user core May 14 23:51:09.983486 systemd[1]: sshd@4-10.200.20.35:22-10.200.16.10:48582.service: Deactivated successfully. May 14 23:51:09.985877 systemd[1]: session-7.scope: Deactivated successfully. 
May 14 23:51:09.986729 systemd[1]: session-7.scope: Consumed 6.512s CPU time, 218.4M memory peak. May 14 23:51:09.988249 systemd-logind[1712]: Session 7 logged out. Waiting for processes to exit. May 14 23:51:09.989860 systemd-logind[1712]: Removed session 7. May 14 23:51:10.048532 kubelet[3383]: I0514 23:51:10.048151 3383 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4230.1.1-n-76ed3c1841" podStartSLOduration=1.048135159 podStartE2EDuration="1.048135159s" podCreationTimestamp="2025-05-14 23:51:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-14 23:51:10.048130199 +0000 UTC m=+1.209544419" watchObservedRunningTime="2025-05-14 23:51:10.048135159 +0000 UTC m=+1.209549379" May 14 23:51:10.083276 kubelet[3383]: I0514 23:51:10.083200 3383 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4230.1.1-n-76ed3c1841" podStartSLOduration=1.083165101 podStartE2EDuration="1.083165101s" podCreationTimestamp="2025-05-14 23:51:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-14 23:51:10.063426266 +0000 UTC m=+1.224840486" watchObservedRunningTime="2025-05-14 23:51:10.083165101 +0000 UTC m=+1.244579321" May 14 23:51:10.083605 kubelet[3383]: I0514 23:51:10.083325 3383 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4230.1.1-n-76ed3c1841" podStartSLOduration=1.083319622 podStartE2EDuration="1.083319622s" podCreationTimestamp="2025-05-14 23:51:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-14 23:51:10.083146301 +0000 UTC m=+1.244560521" watchObservedRunningTime="2025-05-14 23:51:10.083319622 +0000 UTC m=+1.244733842" May 14 
23:51:13.571600 kubelet[3383]: I0514 23:51:13.571406 3383 kuberuntime_manager.go:1633] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" May 14 23:51:13.571976 containerd[1737]: time="2025-05-14T23:51:13.571743870Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." May 14 23:51:13.572278 kubelet[3383]: I0514 23:51:13.572238 3383 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" May 14 23:51:14.294305 systemd[1]: Created slice kubepods-besteffort-pod967e4ff9_4ea5_48f0_9f66_b33d7d1950a2.slice - libcontainer container kubepods-besteffort-pod967e4ff9_4ea5_48f0_9f66_b33d7d1950a2.slice. May 14 23:51:14.313867 kubelet[3383]: I0514 23:51:14.313729 3383 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/967e4ff9-4ea5-48f0-9f66-b33d7d1950a2-kube-proxy\") pod \"kube-proxy-qh6hq\" (UID: \"967e4ff9-4ea5-48f0-9f66-b33d7d1950a2\") " pod="kube-system/kube-proxy-qh6hq" May 14 23:51:14.313867 kubelet[3383]: I0514 23:51:14.313781 3383 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/967e4ff9-4ea5-48f0-9f66-b33d7d1950a2-xtables-lock\") pod \"kube-proxy-qh6hq\" (UID: \"967e4ff9-4ea5-48f0-9f66-b33d7d1950a2\") " pod="kube-system/kube-proxy-qh6hq" May 14 23:51:14.313867 kubelet[3383]: I0514 23:51:14.313810 3383 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-52frr\" (UniqueName: \"kubernetes.io/projected/967e4ff9-4ea5-48f0-9f66-b33d7d1950a2-kube-api-access-52frr\") pod \"kube-proxy-qh6hq\" (UID: \"967e4ff9-4ea5-48f0-9f66-b33d7d1950a2\") " pod="kube-system/kube-proxy-qh6hq" May 14 23:51:14.313867 kubelet[3383]: I0514 23:51:14.313843 3383 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/967e4ff9-4ea5-48f0-9f66-b33d7d1950a2-lib-modules\") pod \"kube-proxy-qh6hq\" (UID: \"967e4ff9-4ea5-48f0-9f66-b33d7d1950a2\") " pod="kube-system/kube-proxy-qh6hq" May 14 23:51:14.332691 systemd[1]: Created slice kubepods-burstable-podcfe7e69f_85c5_4597_af04_8895f8db6478.slice - libcontainer container kubepods-burstable-podcfe7e69f_85c5_4597_af04_8895f8db6478.slice. May 14 23:51:14.414325 kubelet[3383]: I0514 23:51:14.414271 3383 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/cfe7e69f-85c5-4597-af04-8895f8db6478-xtables-lock\") pod \"kube-flannel-ds-sntb2\" (UID: \"cfe7e69f-85c5-4597-af04-8895f8db6478\") " pod="kube-flannel/kube-flannel-ds-sntb2" May 14 23:51:14.414325 kubelet[3383]: I0514 23:51:14.414323 3383 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-plugin\" (UniqueName: \"kubernetes.io/host-path/cfe7e69f-85c5-4597-af04-8895f8db6478-cni-plugin\") pod \"kube-flannel-ds-sntb2\" (UID: \"cfe7e69f-85c5-4597-af04-8895f8db6478\") " pod="kube-flannel/kube-flannel-ds-sntb2" May 14 23:51:14.414509 kubelet[3383]: I0514 23:51:14.414355 3383 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni\" (UniqueName: \"kubernetes.io/host-path/cfe7e69f-85c5-4597-af04-8895f8db6478-cni\") pod \"kube-flannel-ds-sntb2\" (UID: \"cfe7e69f-85c5-4597-af04-8895f8db6478\") " pod="kube-flannel/kube-flannel-ds-sntb2" May 14 23:51:14.414509 kubelet[3383]: I0514 23:51:14.414391 3383 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flannel-cfg\" (UniqueName: \"kubernetes.io/configmap/cfe7e69f-85c5-4597-af04-8895f8db6478-flannel-cfg\") pod \"kube-flannel-ds-sntb2\" (UID: \"cfe7e69f-85c5-4597-af04-8895f8db6478\") " pod="kube-flannel/kube-flannel-ds-sntb2" May 14 23:51:14.414509 
kubelet[3383]: I0514 23:51:14.414406 3383 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ztpxq\" (UniqueName: \"kubernetes.io/projected/cfe7e69f-85c5-4597-af04-8895f8db6478-kube-api-access-ztpxq\") pod \"kube-flannel-ds-sntb2\" (UID: \"cfe7e69f-85c5-4597-af04-8895f8db6478\") " pod="kube-flannel/kube-flannel-ds-sntb2" May 14 23:51:14.414509 kubelet[3383]: I0514 23:51:14.414425 3383 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/cfe7e69f-85c5-4597-af04-8895f8db6478-run\") pod \"kube-flannel-ds-sntb2\" (UID: \"cfe7e69f-85c5-4597-af04-8895f8db6478\") " pod="kube-flannel/kube-flannel-ds-sntb2" May 14 23:51:14.427212 kubelet[3383]: E0514 23:51:14.427099 3383 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found May 14 23:51:14.427212 kubelet[3383]: E0514 23:51:14.427138 3383 projected.go:194] Error preparing data for projected volume kube-api-access-52frr for pod kube-system/kube-proxy-qh6hq: configmap "kube-root-ca.crt" not found May 14 23:51:14.427412 kubelet[3383]: E0514 23:51:14.427213 3383 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/967e4ff9-4ea5-48f0-9f66-b33d7d1950a2-kube-api-access-52frr podName:967e4ff9-4ea5-48f0-9f66-b33d7d1950a2 nodeName:}" failed. No retries permitted until 2025-05-14 23:51:14.927189945 +0000 UTC m=+6.088604165 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-52frr" (UniqueName: "kubernetes.io/projected/967e4ff9-4ea5-48f0-9f66-b33d7d1950a2-kube-api-access-52frr") pod "kube-proxy-qh6hq" (UID: "967e4ff9-4ea5-48f0-9f66-b33d7d1950a2") : configmap "kube-root-ca.crt" not found May 14 23:51:14.638871 containerd[1737]: time="2025-05-14T23:51:14.638538772Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-flannel-ds-sntb2,Uid:cfe7e69f-85c5-4597-af04-8895f8db6478,Namespace:kube-flannel,Attempt:0,}" May 14 23:51:14.905557 containerd[1737]: time="2025-05-14T23:51:14.905264070Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 14 23:51:14.905557 containerd[1737]: time="2025-05-14T23:51:14.905369030Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 14 23:51:14.905557 containerd[1737]: time="2025-05-14T23:51:14.905381750Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 14 23:51:14.905557 containerd[1737]: time="2025-05-14T23:51:14.905482990Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 14 23:51:14.932567 systemd[1]: Started cri-containerd-24f69b56d3d128d2573f61f5f2f7de0ded51212dc6bc2a8b0f1461c10111a609.scope - libcontainer container 24f69b56d3d128d2573f61f5f2f7de0ded51212dc6bc2a8b0f1461c10111a609. 
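The `durationBeforeRetry 500ms` in the entry above is the first step of the kubelet volume manager's exponential backoff: each failed mount roughly doubles the wait before the next attempt, up to a cap. A minimal sketch of that pattern (the 500ms initial delay comes from the log; the doubling factor, cap, and function name are illustrative assumptions, not kubelet's exact constants):

```python
def backoff_delays(initial=0.5, factor=2.0, cap=120.0, attempts=6):
    """Yield capped exponential retry delays in seconds.

    The 0.5s initial delay matches the 'durationBeforeRetry 500ms' seen in
    the log; factor and cap here are illustrative, not kubelet internals.
    """
    delay = initial
    for _ in range(attempts):
        yield delay
        delay = min(delay * factor, cap)
```

With these parameters the retry schedule is 0.5s, 1s, 2s, 4s, 8s, 16s — which is why the failed `kube-api-access-52frr` mount above is simply retried half a second later and succeeds once `kube-root-ca.crt` exists.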
May 14 23:51:14.963001 containerd[1737]: time="2025-05-14T23:51:14.962936866Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-flannel-ds-sntb2,Uid:cfe7e69f-85c5-4597-af04-8895f8db6478,Namespace:kube-flannel,Attempt:0,} returns sandbox id \"24f69b56d3d128d2573f61f5f2f7de0ded51212dc6bc2a8b0f1461c10111a609\"" May 14 23:51:14.966373 containerd[1737]: time="2025-05-14T23:51:14.966209953Z" level=info msg="PullImage \"docker.io/flannel/flannel-cni-plugin:v1.1.2\"" May 14 23:51:15.207608 containerd[1737]: time="2025-05-14T23:51:15.206819559Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-qh6hq,Uid:967e4ff9-4ea5-48f0-9f66-b33d7d1950a2,Namespace:kube-system,Attempt:0,}" May 14 23:51:15.452629 containerd[1737]: time="2025-05-14T23:51:15.452389694Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 14 23:51:15.452629 containerd[1737]: time="2025-05-14T23:51:15.452461254Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 14 23:51:15.452629 containerd[1737]: time="2025-05-14T23:51:15.452473374Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 14 23:51:15.452629 containerd[1737]: time="2025-05-14T23:51:15.452574494Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 14 23:51:15.470594 systemd[1]: Started cri-containerd-2786b15b3a1fe42326ff41859fcecd94309aa6cd1866fcba481547a7408d59a4.scope - libcontainer container 2786b15b3a1fe42326ff41859fcecd94309aa6cd1866fcba481547a7408d59a4. 
May 14 23:51:15.493218 containerd[1737]: time="2025-05-14T23:51:15.493175656Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-qh6hq,Uid:967e4ff9-4ea5-48f0-9f66-b33d7d1950a2,Namespace:kube-system,Attempt:0,} returns sandbox id \"2786b15b3a1fe42326ff41859fcecd94309aa6cd1866fcba481547a7408d59a4\"" May 14 23:51:15.497784 containerd[1737]: time="2025-05-14T23:51:15.497731586Z" level=info msg="CreateContainer within sandbox \"2786b15b3a1fe42326ff41859fcecd94309aa6cd1866fcba481547a7408d59a4\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" May 14 23:51:15.945841 containerd[1737]: time="2025-05-14T23:51:15.945788450Z" level=info msg="CreateContainer within sandbox \"2786b15b3a1fe42326ff41859fcecd94309aa6cd1866fcba481547a7408d59a4\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"fd4f577903ba7bd0f588ec56f304abc833046f413594d469a8b1fe15a48e8736\"" May 14 23:51:15.946603 containerd[1737]: time="2025-05-14T23:51:15.946569011Z" level=info msg="StartContainer for \"fd4f577903ba7bd0f588ec56f304abc833046f413594d469a8b1fe15a48e8736\"" May 14 23:51:15.977629 systemd[1]: Started cri-containerd-fd4f577903ba7bd0f588ec56f304abc833046f413594d469a8b1fe15a48e8736.scope - libcontainer container fd4f577903ba7bd0f588ec56f304abc833046f413594d469a8b1fe15a48e8736. 
May 14 23:51:16.011269 containerd[1737]: time="2025-05-14T23:51:16.010922901Z" level=info msg="StartContainer for \"fd4f577903ba7bd0f588ec56f304abc833046f413594d469a8b1fe15a48e8736\" returns successfully" May 14 23:51:17.827030 kubelet[3383]: I0514 23:51:17.826807 3383 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-qh6hq" podStartSLOduration=3.8267863650000002 podStartE2EDuration="3.826786365s" podCreationTimestamp="2025-05-14 23:51:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-14 23:51:16.087177975 +0000 UTC m=+7.248592195" watchObservedRunningTime="2025-05-14 23:51:17.826786365 +0000 UTC m=+8.988200585" May 14 23:51:18.068688 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1123607253.mount: Deactivated successfully. May 14 23:51:18.590149 containerd[1737]: time="2025-05-14T23:51:18.590080225Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel-cni-plugin:v1.1.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 23:51:18.636878 containerd[1737]: time="2025-05-14T23:51:18.636573919Z" level=info msg="stop pulling image docker.io/flannel/flannel-cni-plugin:v1.1.2: active requests=0, bytes read=3673532" May 14 23:51:18.642530 containerd[1737]: time="2025-05-14T23:51:18.642470971Z" level=info msg="ImageCreate event name:\"sha256:b45062ceea496fc421523388cb91166abc7715a15c2e2cbab4e6f8c9d5dc0ab8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 23:51:18.699866 containerd[1737]: time="2025-05-14T23:51:18.699814327Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel-cni-plugin@sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 23:51:18.700956 containerd[1737]: time="2025-05-14T23:51:18.700801329Z" level=info msg="Pulled image 
\"docker.io/flannel/flannel-cni-plugin:v1.1.2\" with image id \"sha256:b45062ceea496fc421523388cb91166abc7715a15c2e2cbab4e6f8c9d5dc0ab8\", repo tag \"docker.io/flannel/flannel-cni-plugin:v1.1.2\", repo digest \"docker.io/flannel/flannel-cni-plugin@sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443\", size \"3662650\" in 3.734551856s" May 14 23:51:18.700956 containerd[1737]: time="2025-05-14T23:51:18.700848649Z" level=info msg="PullImage \"docker.io/flannel/flannel-cni-plugin:v1.1.2\" returns image reference \"sha256:b45062ceea496fc421523388cb91166abc7715a15c2e2cbab4e6f8c9d5dc0ab8\"" May 14 23:51:18.703592 containerd[1737]: time="2025-05-14T23:51:18.703541854Z" level=info msg="CreateContainer within sandbox \"24f69b56d3d128d2573f61f5f2f7de0ded51212dc6bc2a8b0f1461c10111a609\" for container &ContainerMetadata{Name:install-cni-plugin,Attempt:0,}" May 14 23:51:18.941515 containerd[1737]: time="2025-05-14T23:51:18.940961373Z" level=info msg="CreateContainer within sandbox \"24f69b56d3d128d2573f61f5f2f7de0ded51212dc6bc2a8b0f1461c10111a609\" for &ContainerMetadata{Name:install-cni-plugin,Attempt:0,} returns container id \"e6c9d49274de53cd0463ea52d2d51953b3c91c2ddcc81467af70e6127eec1a9f\"" May 14 23:51:18.942727 containerd[1737]: time="2025-05-14T23:51:18.942276136Z" level=info msg="StartContainer for \"e6c9d49274de53cd0463ea52d2d51953b3c91c2ddcc81467af70e6127eec1a9f\"" May 14 23:51:18.977598 systemd[1]: Started cri-containerd-e6c9d49274de53cd0463ea52d2d51953b3c91c2ddcc81467af70e6127eec1a9f.scope - libcontainer container e6c9d49274de53cd0463ea52d2d51953b3c91c2ddcc81467af70e6127eec1a9f. May 14 23:51:19.007718 systemd[1]: cri-containerd-e6c9d49274de53cd0463ea52d2d51953b3c91c2ddcc81467af70e6127eec1a9f.scope: Deactivated successfully. 
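From the `Pulled image` entry above one can back out the effective pull rate: 3662650 bytes over 3.734551856s is just under 1 MB/s. A quick check using the values taken verbatim from that log line:

```python
# Values copied from the 'Pulled image ... in 3.734551856s' entry above.
size_bytes = 3_662_650      # reported image size
elapsed_s = 3.734551856     # reported pull duration
rate_mb_s = size_bytes / elapsed_s / 1e6
print(f"{rate_mb_s:.2f} MB/s")
```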
May 14 23:51:19.009539 containerd[1737]: time="2025-05-14T23:51:19.008677510Z" level=info msg="StartContainer for \"e6c9d49274de53cd0463ea52d2d51953b3c91c2ddcc81467af70e6127eec1a9f\" returns successfully" May 14 23:51:19.032227 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e6c9d49274de53cd0463ea52d2d51953b3c91c2ddcc81467af70e6127eec1a9f-rootfs.mount: Deactivated successfully. May 14 23:51:21.937814 containerd[1737]: time="2025-05-14T23:51:21.937626766Z" level=info msg="shim disconnected" id=e6c9d49274de53cd0463ea52d2d51953b3c91c2ddcc81467af70e6127eec1a9f namespace=k8s.io May 14 23:51:21.937814 containerd[1737]: time="2025-05-14T23:51:21.937684766Z" level=warning msg="cleaning up after shim disconnected" id=e6c9d49274de53cd0463ea52d2d51953b3c91c2ddcc81467af70e6127eec1a9f namespace=k8s.io May 14 23:51:21.937814 containerd[1737]: time="2025-05-14T23:51:21.937692527Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 14 23:51:22.063454 containerd[1737]: time="2025-05-14T23:51:22.063093065Z" level=info msg="PullImage \"docker.io/flannel/flannel:v0.22.0\"" May 14 23:51:25.528277 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1352684964.mount: Deactivated successfully. 
May 14 23:51:29.049442 containerd[1737]: time="2025-05-14T23:51:29.048833025Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel:v0.22.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 23:51:29.052174 containerd[1737]: time="2025-05-14T23:51:29.051893510Z" level=info msg="stop pulling image docker.io/flannel/flannel:v0.22.0: active requests=0, bytes read=26874261" May 14 23:51:29.056963 containerd[1737]: time="2025-05-14T23:51:29.056867439Z" level=info msg="ImageCreate event name:\"sha256:b3d1319ea6da12d4a1dd21a923f6a71f942a7b6e2c4763b8a3cca0725fb8aadf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 23:51:29.062681 containerd[1737]: time="2025-05-14T23:51:29.062083847Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel@sha256:5f83f1243057458e27249157394e3859cf31cc075354af150d497f2ebc8b54db\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 23:51:29.063456 containerd[1737]: time="2025-05-14T23:51:29.063409930Z" level=info msg="Pulled image \"docker.io/flannel/flannel:v0.22.0\" with image id \"sha256:b3d1319ea6da12d4a1dd21a923f6a71f942a7b6e2c4763b8a3cca0725fb8aadf\", repo tag \"docker.io/flannel/flannel:v0.22.0\", repo digest \"docker.io/flannel/flannel@sha256:5f83f1243057458e27249157394e3859cf31cc075354af150d497f2ebc8b54db\", size \"26863435\" in 7.000194385s" May 14 23:51:29.063456 containerd[1737]: time="2025-05-14T23:51:29.063445050Z" level=info msg="PullImage \"docker.io/flannel/flannel:v0.22.0\" returns image reference \"sha256:b3d1319ea6da12d4a1dd21a923f6a71f942a7b6e2c4763b8a3cca0725fb8aadf\"" May 14 23:51:29.067257 containerd[1737]: time="2025-05-14T23:51:29.067198296Z" level=info msg="CreateContainer within sandbox \"24f69b56d3d128d2573f61f5f2f7de0ded51212dc6bc2a8b0f1461c10111a609\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" May 14 23:51:29.106748 containerd[1737]: time="2025-05-14T23:51:29.106689604Z" level=info msg="CreateContainer within 
sandbox \"24f69b56d3d128d2573f61f5f2f7de0ded51212dc6bc2a8b0f1461c10111a609\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"d00d017f3ebaa9b9dc5bab340f170e2c815b8bcd2a06e292a45a1f5c0f3ad0cb\"" May 14 23:51:29.107548 containerd[1737]: time="2025-05-14T23:51:29.107505686Z" level=info msg="StartContainer for \"d00d017f3ebaa9b9dc5bab340f170e2c815b8bcd2a06e292a45a1f5c0f3ad0cb\"" May 14 23:51:29.141544 systemd[1]: Started cri-containerd-d00d017f3ebaa9b9dc5bab340f170e2c815b8bcd2a06e292a45a1f5c0f3ad0cb.scope - libcontainer container d00d017f3ebaa9b9dc5bab340f170e2c815b8bcd2a06e292a45a1f5c0f3ad0cb. May 14 23:51:29.175152 systemd[1]: cri-containerd-d00d017f3ebaa9b9dc5bab340f170e2c815b8bcd2a06e292a45a1f5c0f3ad0cb.scope: Deactivated successfully. May 14 23:51:29.181657 containerd[1737]: time="2025-05-14T23:51:29.181484733Z" level=info msg="StartContainer for \"d00d017f3ebaa9b9dc5bab340f170e2c815b8bcd2a06e292a45a1f5c0f3ad0cb\" returns successfully" May 14 23:51:29.253554 kubelet[3383]: I0514 23:51:29.253514 3383 kubelet_node_status.go:488] "Fast updating node status as it just became ready" May 14 23:51:29.314410 systemd[1]: Created slice kubepods-burstable-pod5cfbe6aa_5696_4185_b4f4_15b544908d45.slice - libcontainer container kubepods-burstable-pod5cfbe6aa_5696_4185_b4f4_15b544908d45.slice. May 14 23:51:29.326107 systemd[1]: Created slice kubepods-burstable-pod18de6696_ccef_40db_a22e_5d0b1cc9ff95.slice - libcontainer container kubepods-burstable-pod18de6696_ccef_40db_a22e_5d0b1cc9ff95.slice. 
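The `Created slice` entries in this log follow a fixed naming scheme: `kubepods-`, the pod's QoS class, `-pod`, then the pod UID with dashes mapped to underscores (systemd reserves `-` as a hierarchy separator in unit names). A small sketch reproducing the names seen above (function name is my own):

```python
def pod_slice(qos_class: str, pod_uid: str) -> str:
    """Build a kubepods systemd slice name as seen in the 'Created slice'
    log entries: dashes in the pod UID become underscores."""
    return f"kubepods-{qos_class}-pod{pod_uid.replace('-', '_')}.slice"
```

For example, pod UID `cfe7e69f-85c5-4597-af04-8895f8db6478` (the kube-flannel-ds-sntb2 pod) yields `kubepods-burstable-podcfe7e69f_85c5_4597_af04_8895f8db6478.slice`, matching the slice created earlier in the log.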
May 14 23:51:29.403251 kubelet[3383]: I0514 23:51:29.403202 3383 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mpz75\" (UniqueName: \"kubernetes.io/projected/5cfbe6aa-5696-4185-b4f4-15b544908d45-kube-api-access-mpz75\") pod \"coredns-6f6b679f8f-srnlq\" (UID: \"5cfbe6aa-5696-4185-b4f4-15b544908d45\") " pod="kube-system/coredns-6f6b679f8f-srnlq" May 14 23:51:29.403251 kubelet[3383]: I0514 23:51:29.403257 3383 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5cfbe6aa-5696-4185-b4f4-15b544908d45-config-volume\") pod \"coredns-6f6b679f8f-srnlq\" (UID: \"5cfbe6aa-5696-4185-b4f4-15b544908d45\") " pod="kube-system/coredns-6f6b679f8f-srnlq" May 14 23:51:29.503905 kubelet[3383]: I0514 23:51:29.503507 3383 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/18de6696-ccef-40db-a22e-5d0b1cc9ff95-config-volume\") pod \"coredns-6f6b679f8f-4cnzp\" (UID: \"18de6696-ccef-40db-a22e-5d0b1cc9ff95\") " pod="kube-system/coredns-6f6b679f8f-4cnzp" May 14 23:51:29.503905 kubelet[3383]: I0514 23:51:29.503559 3383 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-47kpm\" (UniqueName: \"kubernetes.io/projected/18de6696-ccef-40db-a22e-5d0b1cc9ff95-kube-api-access-47kpm\") pod \"coredns-6f6b679f8f-4cnzp\" (UID: \"18de6696-ccef-40db-a22e-5d0b1cc9ff95\") " pod="kube-system/coredns-6f6b679f8f-4cnzp" May 14 23:51:29.620999 containerd[1737]: time="2025-05-14T23:51:29.620816007Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-srnlq,Uid:5cfbe6aa-5696-4185-b4f4-15b544908d45,Namespace:kube-system,Attempt:0,}" May 14 23:51:29.631402 containerd[1737]: time="2025-05-14T23:51:29.631106545Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-6f6b679f8f-4cnzp,Uid:18de6696-ccef-40db-a22e-5d0b1cc9ff95,Namespace:kube-system,Attempt:0,}" May 14 23:51:29.724666 containerd[1737]: time="2025-05-14T23:51:29.724577746Z" level=info msg="shim disconnected" id=d00d017f3ebaa9b9dc5bab340f170e2c815b8bcd2a06e292a45a1f5c0f3ad0cb namespace=k8s.io May 14 23:51:29.724666 containerd[1737]: time="2025-05-14T23:51:29.724655866Z" level=warning msg="cleaning up after shim disconnected" id=d00d017f3ebaa9b9dc5bab340f170e2c815b8bcd2a06e292a45a1f5c0f3ad0cb namespace=k8s.io May 14 23:51:29.724666 containerd[1737]: time="2025-05-14T23:51:29.724666586Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 14 23:51:29.802298 containerd[1737]: time="2025-05-14T23:51:29.802142639Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-srnlq,Uid:5cfbe6aa-5696-4185-b4f4-15b544908d45,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"657ec957897f17ecaf4c28d89d1dd83e50bbd58fd7407e7ce33c7b94e5ce8d2e\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" May 14 23:51:29.802546 kubelet[3383]: E0514 23:51:29.802444 3383 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"657ec957897f17ecaf4c28d89d1dd83e50bbd58fd7407e7ce33c7b94e5ce8d2e\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" May 14 23:51:29.802546 kubelet[3383]: E0514 23:51:29.802514 3383 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"657ec957897f17ecaf4c28d89d1dd83e50bbd58fd7407e7ce33c7b94e5ce8d2e\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-6f6b679f8f-srnlq" 
May 14 23:51:29.802546 kubelet[3383]: E0514 23:51:29.802533 3383 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"657ec957897f17ecaf4c28d89d1dd83e50bbd58fd7407e7ce33c7b94e5ce8d2e\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-6f6b679f8f-srnlq" May 14 23:51:29.802700 kubelet[3383]: E0514 23:51:29.802579 3383 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-srnlq_kube-system(5cfbe6aa-5696-4185-b4f4-15b544908d45)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-srnlq_kube-system(5cfbe6aa-5696-4185-b4f4-15b544908d45)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"657ec957897f17ecaf4c28d89d1dd83e50bbd58fd7407e7ce33c7b94e5ce8d2e\\\": plugin type=\\\"flannel\\\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory\"" pod="kube-system/coredns-6f6b679f8f-srnlq" podUID="5cfbe6aa-5696-4185-b4f4-15b544908d45" May 14 23:51:29.805739 containerd[1737]: time="2025-05-14T23:51:29.805680725Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-4cnzp,Uid:18de6696-ccef-40db-a22e-5d0b1cc9ff95,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"8c439092967ddcc09dbd291d91dba787864cf6b19da7f8fa27520617da59eff4\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" May 14 23:51:29.806161 kubelet[3383]: E0514 23:51:29.805919 3383 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8c439092967ddcc09dbd291d91dba787864cf6b19da7f8fa27520617da59eff4\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv 
failed: open /run/flannel/subnet.env: no such file or directory" May 14 23:51:29.806161 kubelet[3383]: E0514 23:51:29.805976 3383 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8c439092967ddcc09dbd291d91dba787864cf6b19da7f8fa27520617da59eff4\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-6f6b679f8f-4cnzp" May 14 23:51:29.806161 kubelet[3383]: E0514 23:51:29.805994 3383 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8c439092967ddcc09dbd291d91dba787864cf6b19da7f8fa27520617da59eff4\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-6f6b679f8f-4cnzp" May 14 23:51:29.806161 kubelet[3383]: E0514 23:51:29.806029 3383 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-4cnzp_kube-system(18de6696-ccef-40db-a22e-5d0b1cc9ff95)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-4cnzp_kube-system(18de6696-ccef-40db-a22e-5d0b1cc9ff95)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8c439092967ddcc09dbd291d91dba787864cf6b19da7f8fa27520617da59eff4\\\": plugin type=\\\"flannel\\\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory\"" pod="kube-system/coredns-6f6b679f8f-4cnzp" podUID="18de6696-ccef-40db-a22e-5d0b1cc9ff95" May 14 23:51:30.093369 containerd[1737]: time="2025-05-14T23:51:30.090543734Z" level=info msg="CreateContainer within sandbox \"24f69b56d3d128d2573f61f5f2f7de0ded51212dc6bc2a8b0f1461c10111a609\" for container &ContainerMetadata{Name:kube-flannel,Attempt:0,}" May 14 23:51:30.098222 systemd[1]: 
run-containerd-io.containerd.runtime.v2.task-k8s.io-d00d017f3ebaa9b9dc5bab340f170e2c815b8bcd2a06e292a45a1f5c0f3ad0cb-rootfs.mount: Deactivated successfully. May 14 23:51:30.131241 containerd[1737]: time="2025-05-14T23:51:30.131129364Z" level=info msg="CreateContainer within sandbox \"24f69b56d3d128d2573f61f5f2f7de0ded51212dc6bc2a8b0f1461c10111a609\" for &ContainerMetadata{Name:kube-flannel,Attempt:0,} returns container id \"df905a96e108c02883d447a97af32aeef9a2492709d3d623934f61ffbb25570c\"" May 14 23:51:30.132209 containerd[1737]: time="2025-05-14T23:51:30.132160726Z" level=info msg="StartContainer for \"df905a96e108c02883d447a97af32aeef9a2492709d3d623934f61ffbb25570c\"" May 14 23:51:30.164565 systemd[1]: Started cri-containerd-df905a96e108c02883d447a97af32aeef9a2492709d3d623934f61ffbb25570c.scope - libcontainer container df905a96e108c02883d447a97af32aeef9a2492709d3d623934f61ffbb25570c. May 14 23:51:30.194118 containerd[1737]: time="2025-05-14T23:51:30.193983431Z" level=info msg="StartContainer for \"df905a96e108c02883d447a97af32aeef9a2492709d3d623934f61ffbb25570c\" returns successfully" May 14 23:51:31.105511 kubelet[3383]: I0514 23:51:31.105433 3383 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-flannel/kube-flannel-ds-sntb2" podStartSLOduration=3.005349315 podStartE2EDuration="17.105414257s" podCreationTimestamp="2025-05-14 23:51:14 +0000 UTC" firstStartedPulling="2025-05-14 23:51:14.965145551 +0000 UTC m=+6.126559771" lastFinishedPulling="2025-05-14 23:51:29.065210533 +0000 UTC m=+20.226624713" observedRunningTime="2025-05-14 23:51:31.105132576 +0000 UTC m=+22.266546796" watchObservedRunningTime="2025-05-14 23:51:31.105414257 +0000 UTC m=+22.266828477" May 14 23:51:31.367814 systemd-networkd[1339]: flannel.1: Link UP May 14 23:51:31.367825 systemd-networkd[1339]: flannel.1: Gained carrier May 14 23:51:33.397537 systemd-networkd[1339]: flannel.1: Gained IPv6LL May 14 23:51:41.003279 containerd[1737]: 
time="2025-05-14T23:51:41.003211455Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-srnlq,Uid:5cfbe6aa-5696-4185-b4f4-15b544908d45,Namespace:kube-system,Attempt:0,}" May 14 23:51:41.199537 systemd-networkd[1339]: cni0: Link UP May 14 23:51:41.199545 systemd-networkd[1339]: cni0: Gained carrier May 14 23:51:41.202956 systemd-networkd[1339]: cni0: Lost carrier May 14 23:51:41.234144 systemd-networkd[1339]: vethaa3d9c78: Link UP May 14 23:51:41.245661 kernel: cni0: port 1(vethaa3d9c78) entered blocking state May 14 23:51:41.245755 kernel: cni0: port 1(vethaa3d9c78) entered disabled state May 14 23:51:41.249478 kernel: vethaa3d9c78: entered allmulticast mode May 14 23:51:41.253453 kernel: vethaa3d9c78: entered promiscuous mode May 14 23:51:41.257858 kernel: cni0: port 1(vethaa3d9c78) entered blocking state May 14 23:51:41.257960 kernel: cni0: port 1(vethaa3d9c78) entered forwarding state May 14 23:51:41.264384 kernel: cni0: port 1(vethaa3d9c78) entered disabled state May 14 23:51:41.279636 kernel: cni0: port 1(vethaa3d9c78) entered blocking state May 14 23:51:41.279770 kernel: cni0: port 1(vethaa3d9c78) entered forwarding state May 14 23:51:41.279956 systemd-networkd[1339]: vethaa3d9c78: Gained carrier May 14 23:51:41.280325 systemd-networkd[1339]: cni0: Gained carrier May 14 23:51:41.282580 containerd[1737]: map[string]interface {}{"cniVersion":"0.3.1", "hairpinMode":true, "ipMasq":false, "ipam":map[string]interface {}{"ranges":[][]map[string]interface {}{[]map[string]interface {}{map[string]interface {}{"subnet":"192.168.0.0/24"}}}, "routes":[]types.Route{types.Route{Dst:net.IPNet{IP:net.IP{0xc0, 0xa8, 0x0, 0x0}, Mask:net.IPMask{0xff, 0xff, 0x80, 0x0}}, GW:net.IP(nil)}}, "type":"host-local"}, "isDefaultGateway":true, "isGateway":true, "mtu":(*uint)(0x40000928e8), "name":"cbr0", "type":"bridge"} May 14 23:51:41.282580 containerd[1737]: delegateAdd: netconf sent to delegate plugin: May 14 23:51:41.302934 containerd[1737]: 
{"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],"routes":[{"dst":"192.168.0.0/17"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":1450,"name":"cbr0","type":"bridge"}time="2025-05-14T23:51:41.302297679Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 14 23:51:41.302934 containerd[1737]: time="2025-05-14T23:51:41.302430559Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 14 23:51:41.302934 containerd[1737]: time="2025-05-14T23:51:41.302447839Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 14 23:51:41.302934 containerd[1737]: time="2025-05-14T23:51:41.302536839Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 14 23:51:41.331552 systemd[1]: Started cri-containerd-4cd7770a5c09e9567e64eb98941ef43cfe4e3f139445fb34368fa7cd0b86783c.scope - libcontainer container 4cd7770a5c09e9567e64eb98941ef43cfe4e3f139445fb34368fa7cd0b86783c. 
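The earlier `loadFlannelSubnetEnv failed: open /run/flannel/subnet.env` errors cleared once the flannel daemon came up and wrote that file, which the CNI plugin reads as plain KEY=VALUE lines. A minimal parser sketch — the sample contents are illustrative values consistent with the netconf above (mtu 1450, per-node /24), not captured from the node:

```python
def load_subnet_env(text: str) -> dict:
    """Parse flannel subnet.env-style KEY=VALUE lines, skipping blanks/comments."""
    env = {}
    for line in text.splitlines():
        line = line.strip()
        if line and not line.startswith("#"):
            key, _, value = line.partition("=")
            env[key] = value
    return env

# Illustrative contents, consistent with this log's delegate netconf.
sample = """FLANNEL_NETWORK=192.168.0.0/17
FLANNEL_SUBNET=192.168.0.1/24
FLANNEL_MTU=1450
FLANNEL_IPMASQ=false
"""
```

Until a file like this exists, every sandbox CNI `add` fails exactly as seen in the coredns errors above, and the kubelet keeps retrying pod creation.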
May 14 23:51:41.362697 containerd[1737]: time="2025-05-14T23:51:41.362647381Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-srnlq,Uid:5cfbe6aa-5696-4185-b4f4-15b544908d45,Namespace:kube-system,Attempt:0,} returns sandbox id \"4cd7770a5c09e9567e64eb98941ef43cfe4e3f139445fb34368fa7cd0b86783c\"" May 14 23:51:41.368136 containerd[1737]: time="2025-05-14T23:51:41.367882510Z" level=info msg="CreateContainer within sandbox \"4cd7770a5c09e9567e64eb98941ef43cfe4e3f139445fb34368fa7cd0b86783c\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 14 23:51:41.539528 containerd[1737]: time="2025-05-14T23:51:41.539409758Z" level=info msg="CreateContainer within sandbox \"4cd7770a5c09e9567e64eb98941ef43cfe4e3f139445fb34368fa7cd0b86783c\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"16987738022756ae76f638a3c59a6f0c9374ce7db07bea4fd3d5041863254d68\"" May 14 23:51:41.540208 containerd[1737]: time="2025-05-14T23:51:41.540156880Z" level=info msg="StartContainer for \"16987738022756ae76f638a3c59a6f0c9374ce7db07bea4fd3d5041863254d68\"" May 14 23:51:41.570516 systemd[1]: Started cri-containerd-16987738022756ae76f638a3c59a6f0c9374ce7db07bea4fd3d5041863254d68.scope - libcontainer container 16987738022756ae76f638a3c59a6f0c9374ce7db07bea4fd3d5041863254d68. 
May 14 23:51:41.604984 containerd[1737]: time="2025-05-14T23:51:41.604935589Z" level=info msg="StartContainer for \"16987738022756ae76f638a3c59a6f0c9374ce7db07bea4fd3d5041863254d68\" returns successfully"
May 14 23:51:42.126433 kubelet[3383]: I0514 23:51:42.126354 3383 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-srnlq" podStartSLOduration=28.125603706 podStartE2EDuration="28.125603706s" podCreationTimestamp="2025-05-14 23:51:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-14 23:51:42.123394702 +0000 UTC m=+33.284808922" watchObservedRunningTime="2025-05-14 23:51:42.125603706 +0000 UTC m=+33.287017926"
May 14 23:51:42.357486 systemd-networkd[1339]: cni0: Gained IPv6LL
May 14 23:51:42.421492 systemd-networkd[1339]: vethaa3d9c78: Gained IPv6LL
May 14 23:51:44.002631 containerd[1737]: time="2025-05-14T23:51:44.002586948Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-4cnzp,Uid:18de6696-ccef-40db-a22e-5d0b1cc9ff95,Namespace:kube-system,Attempt:0,}"
May 14 23:51:44.262456 systemd-networkd[1339]: veth5ff7723f: Link UP
May 14 23:51:44.273713 kernel: cni0: port 2(veth5ff7723f) entered blocking state
May 14 23:51:44.273829 kernel: cni0: port 2(veth5ff7723f) entered disabled state
May 14 23:51:44.278489 kernel: veth5ff7723f: entered allmulticast mode
May 14 23:51:44.278590 kernel: veth5ff7723f: entered promiscuous mode
May 14 23:51:44.293739 kernel: cni0: port 2(veth5ff7723f) entered blocking state
May 14 23:51:44.293851 kernel: cni0: port 2(veth5ff7723f) entered forwarding state
May 14 23:51:44.293979 systemd-networkd[1339]: veth5ff7723f: Gained carrier
May 14 23:51:44.296459 containerd[1737]: map[string]interface {}{"cniVersion":"0.3.1", "hairpinMode":true, "ipMasq":false, "ipam":map[string]interface {}{"ranges":[][]map[string]interface {}{[]map[string]interface {}{map[string]interface {}{"subnet":"192.168.0.0/24"}}}, "routes":[]types.Route{types.Route{Dst:net.IPNet{IP:net.IP{0xc0, 0xa8, 0x0, 0x0}, Mask:net.IPMask{0xff, 0xff, 0x80, 0x0}}, GW:net.IP(nil)}}, "type":"host-local"}, "isDefaultGateway":true, "isGateway":true, "mtu":(*uint)(0x40000928e8), "name":"cbr0", "type":"bridge"}
May 14 23:51:44.296459 containerd[1737]: delegateAdd: netconf sent to delegate plugin:
May 14 23:51:44.356978 containerd[1737]: {"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],"routes":[{"dst":"192.168.0.0/17"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":1450,"name":"cbr0","type":"bridge"}time="2025-05-14T23:51:44.356886905Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 14 23:51:44.356978 containerd[1737]: time="2025-05-14T23:51:44.356944825Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 14 23:51:44.357383 containerd[1737]: time="2025-05-14T23:51:44.356955825Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 14 23:51:44.357383 containerd[1737]: time="2025-05-14T23:51:44.357034505Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 14 23:51:44.381639 systemd[1]: Started cri-containerd-b74c5a752be80c8a8918a6509153c0d1ac0769c880e5afb0b6021057f7cc909f.scope - libcontainer container b74c5a752be80c8a8918a6509153c0d1ac0769c880e5afb0b6021057f7cc909f.
May 14 23:51:44.413396 containerd[1737]: time="2025-05-14T23:51:44.413349440Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-4cnzp,Uid:18de6696-ccef-40db-a22e-5d0b1cc9ff95,Namespace:kube-system,Attempt:0,} returns sandbox id \"b74c5a752be80c8a8918a6509153c0d1ac0769c880e5afb0b6021057f7cc909f\""
May 14 23:51:44.420230 containerd[1737]: time="2025-05-14T23:51:44.419969651Z" level=info msg="CreateContainer within sandbox \"b74c5a752be80c8a8918a6509153c0d1ac0769c880e5afb0b6021057f7cc909f\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
May 14 23:51:45.493462 systemd-networkd[1339]: veth5ff7723f: Gained IPv6LL
May 14 23:51:45.954724 containerd[1737]: time="2025-05-14T23:51:45.954633477Z" level=info msg="CreateContainer within sandbox \"b74c5a752be80c8a8918a6509153c0d1ac0769c880e5afb0b6021057f7cc909f\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"97438d7fab1686509f9c9ceef6069ac11cbeef9797ba2f3578367461f85b1d57\""
May 14 23:51:45.955979 containerd[1737]: time="2025-05-14T23:51:45.955918879Z" level=info msg="StartContainer for \"97438d7fab1686509f9c9ceef6069ac11cbeef9797ba2f3578367461f85b1d57\""
May 14 23:51:45.986530 systemd[1]: Started cri-containerd-97438d7fab1686509f9c9ceef6069ac11cbeef9797ba2f3578367461f85b1d57.scope - libcontainer container 97438d7fab1686509f9c9ceef6069ac11cbeef9797ba2f3578367461f85b1d57.
May 14 23:51:46.017267 containerd[1737]: time="2025-05-14T23:51:46.017208582Z" level=info msg="StartContainer for \"97438d7fab1686509f9c9ceef6069ac11cbeef9797ba2f3578367461f85b1d57\" returns successfully"
May 14 23:51:46.140355 kubelet[3383]: I0514 23:51:46.138184 3383 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-4cnzp" podStartSLOduration=32.138168386 podStartE2EDuration="32.138168386s" podCreationTimestamp="2025-05-14 23:51:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-14 23:51:46.138003946 +0000 UTC m=+37.299418166" watchObservedRunningTime="2025-05-14 23:51:46.138168386 +0000 UTC m=+37.299582606"
May 14 23:52:36.485571 update_engine[1715]: I20250514 23:52:36.485428 1715 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs
May 14 23:52:36.485571 update_engine[1715]: I20250514 23:52:36.485482 1715 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs
May 14 23:52:36.485994 update_engine[1715]: I20250514 23:52:36.485662 1715 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs
May 14 23:52:36.486020 update_engine[1715]: I20250514 23:52:36.486006 1715 omaha_request_params.cc:62] Current group set to beta
May 14 23:52:36.486309 update_engine[1715]: I20250514 23:52:36.486096 1715 update_attempter.cc:499] Already updated boot flags. Skipping.
May 14 23:52:36.486309 update_engine[1715]: I20250514 23:52:36.486111 1715 update_attempter.cc:643] Scheduling an action processor start.
May 14 23:52:36.486309 update_engine[1715]: I20250514 23:52:36.486126 1715 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction
May 14 23:52:36.486309 update_engine[1715]: I20250514 23:52:36.486155 1715 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs
May 14 23:52:36.486309 update_engine[1715]: I20250514 23:52:36.486214 1715 omaha_request_action.cc:271] Posting an Omaha request to disabled
May 14 23:52:36.486309 update_engine[1715]: I20250514 23:52:36.486222 1715 omaha_request_action.cc:272] Request:
May 14 23:52:36.486309 update_engine[1715]:
May 14 23:52:36.486309 update_engine[1715]:
May 14 23:52:36.486309 update_engine[1715]:
May 14 23:52:36.486309 update_engine[1715]:
May 14 23:52:36.486309 update_engine[1715]:
May 14 23:52:36.486309 update_engine[1715]:
May 14 23:52:36.486309 update_engine[1715]:
May 14 23:52:36.486309 update_engine[1715]:
May 14 23:52:36.486309 update_engine[1715]: I20250514 23:52:36.486229 1715 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
May 14 23:52:36.486831 locksmithd[1770]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0
May 14 23:52:36.487279 update_engine[1715]: I20250514 23:52:36.487244 1715 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
May 14 23:52:36.487635 update_engine[1715]: I20250514 23:52:36.487591 1715 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
May 14 23:52:36.574773 update_engine[1715]: E20250514 23:52:36.574700 1715 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
May 14 23:52:36.574970 update_engine[1715]: I20250514 23:52:36.574815 1715 libcurl_http_fetcher.cc:283] No HTTP response, retry 1
May 14 23:52:46.442905 update_engine[1715]: I20250514 23:52:46.442826 1715 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
May 14 23:52:46.443278 update_engine[1715]: I20250514 23:52:46.443133 1715 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
May 14 23:52:46.443433 update_engine[1715]: I20250514 23:52:46.443378 1715 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
May 14 23:52:46.453205 update_engine[1715]: E20250514 23:52:46.453155 1715 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
May 14 23:52:46.453280 update_engine[1715]: I20250514 23:52:46.453235 1715 libcurl_http_fetcher.cc:283] No HTTP response, retry 2
May 14 23:52:56.442916 update_engine[1715]: I20250514 23:52:56.442799 1715 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
May 14 23:52:56.443306 update_engine[1715]: I20250514 23:52:56.443081 1715 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
May 14 23:52:56.443429 update_engine[1715]: I20250514 23:52:56.443369 1715 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
May 14 23:52:56.482739 update_engine[1715]: E20250514 23:52:56.482651 1715 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
May 14 23:52:56.482879 update_engine[1715]: I20250514 23:52:56.482769 1715 libcurl_http_fetcher.cc:283] No HTTP response, retry 3
May 14 23:53:06.443219 update_engine[1715]: I20250514 23:53:06.443116 1715 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
May 14 23:53:06.443672 update_engine[1715]: I20250514 23:53:06.443480 1715 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
May 14 23:53:06.443771 update_engine[1715]: I20250514 23:53:06.443738 1715 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
May 14 23:53:06.484467 update_engine[1715]: E20250514 23:53:06.484385 1715 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
May 14 23:53:06.484628 update_engine[1715]: I20250514 23:53:06.484508 1715 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded
May 14 23:53:06.484628 update_engine[1715]: I20250514 23:53:06.484522 1715 omaha_request_action.cc:617] Omaha request response:
May 14 23:53:06.484628 update_engine[1715]: E20250514 23:53:06.484613 1715 omaha_request_action.cc:636] Omaha request network transfer failed.
May 14 23:53:06.484693 update_engine[1715]: I20250514 23:53:06.484633 1715 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing.
May 14 23:53:06.484693 update_engine[1715]: I20250514 23:53:06.484639 1715 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
May 14 23:53:06.484693 update_engine[1715]: I20250514 23:53:06.484644 1715 update_attempter.cc:306] Processing Done.
May 14 23:53:06.484693 update_engine[1715]: E20250514 23:53:06.484661 1715 update_attempter.cc:619] Update failed.
May 14 23:53:06.484693 update_engine[1715]: I20250514 23:53:06.484666 1715 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse
May 14 23:53:06.484693 update_engine[1715]: I20250514 23:53:06.484671 1715 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse)
May 14 23:53:06.484693 update_engine[1715]: I20250514 23:53:06.484678 1715 payload_state.cc:103] Ignoring failures until we get a valid Omaha response.
May 14 23:53:06.484841 update_engine[1715]: I20250514 23:53:06.484752 1715 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction
May 14 23:53:06.484841 update_engine[1715]: I20250514 23:53:06.484776 1715 omaha_request_action.cc:271] Posting an Omaha request to disabled
May 14 23:53:06.484841 update_engine[1715]: I20250514 23:53:06.484782 1715 omaha_request_action.cc:272] Request:
May 14 23:53:06.484841 update_engine[1715]:
May 14 23:53:06.484841 update_engine[1715]:
May 14 23:53:06.484841 update_engine[1715]:
May 14 23:53:06.484841 update_engine[1715]:
May 14 23:53:06.484841 update_engine[1715]:
May 14 23:53:06.484841 update_engine[1715]:
May 14 23:53:06.484841 update_engine[1715]: I20250514 23:53:06.484788 1715 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
May 14 23:53:06.485008 update_engine[1715]: I20250514 23:53:06.484935 1715 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
May 14 23:53:06.485305 update_engine[1715]: I20250514 23:53:06.485167 1715 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
May 14 23:53:06.485391 locksmithd[1770]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0
May 14 23:53:06.493409 update_engine[1715]: E20250514 23:53:06.493355 1715 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
May 14 23:53:06.493533 update_engine[1715]: I20250514 23:53:06.493441 1715 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded
May 14 23:53:06.493533 update_engine[1715]: I20250514 23:53:06.493451 1715 omaha_request_action.cc:617] Omaha request response:
May 14 23:53:06.493533 update_engine[1715]: I20250514 23:53:06.493458 1715 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
May 14 23:53:06.493533 update_engine[1715]: I20250514 23:53:06.493463 1715 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
May 14 23:53:06.493533 update_engine[1715]: I20250514 23:53:06.493468 1715 update_attempter.cc:306] Processing Done.
May 14 23:53:06.493533 update_engine[1715]: I20250514 23:53:06.493474 1715 update_attempter.cc:310] Error event sent.
May 14 23:53:06.493533 update_engine[1715]: I20250514 23:53:06.493484 1715 update_check_scheduler.cc:74] Next update check in 47m24s
May 14 23:53:06.493834 locksmithd[1770]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0
May 14 23:53:09.209574 systemd[1]: Started sshd@5-10.200.20.35:22-10.200.16.10:37102.service - OpenSSH per-connection server daemon (10.200.16.10:37102).
May 14 23:53:09.629584 sshd[4645]: Accepted publickey for core from 10.200.16.10 port 37102 ssh2: RSA SHA256:8W/Tf+7cj1biafetzZP1V2wDwegtsataOBfSdAJjdnY
May 14 23:53:09.630940 sshd-session[4645]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 23:53:09.637941 systemd-logind[1712]: New session 8 of user core.
May 14 23:53:09.641822 systemd[1]: Started session-8.scope - Session 8 of User core.
May 14 23:53:10.054333 sshd[4647]: Connection closed by 10.200.16.10 port 37102
May 14 23:53:10.053804 sshd-session[4645]: pam_unix(sshd:session): session closed for user core
May 14 23:53:10.056804 systemd[1]: sshd@5-10.200.20.35:22-10.200.16.10:37102.service: Deactivated successfully.
May 14 23:53:10.058677 systemd[1]: session-8.scope: Deactivated successfully.
May 14 23:53:10.060605 systemd-logind[1712]: Session 8 logged out. Waiting for processes to exit.
May 14 23:53:10.061842 systemd-logind[1712]: Removed session 8.
May 14 23:53:15.133644 systemd[1]: Started sshd@6-10.200.20.35:22-10.200.16.10:37110.service - OpenSSH per-connection server daemon (10.200.16.10:37110).
May 14 23:53:15.554895 sshd[4682]: Accepted publickey for core from 10.200.16.10 port 37110 ssh2: RSA SHA256:8W/Tf+7cj1biafetzZP1V2wDwegtsataOBfSdAJjdnY
May 14 23:53:15.556168 sshd-session[4682]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 23:53:15.561390 systemd-logind[1712]: New session 9 of user core.
May 14 23:53:15.568529 systemd[1]: Started session-9.scope - Session 9 of User core.
May 14 23:53:15.917384 sshd[4684]: Connection closed by 10.200.16.10 port 37110
May 14 23:53:15.917953 sshd-session[4682]: pam_unix(sshd:session): session closed for user core
May 14 23:53:15.921968 systemd[1]: sshd@6-10.200.20.35:22-10.200.16.10:37110.service: Deactivated successfully.
May 14 23:53:15.923910 systemd[1]: session-9.scope: Deactivated successfully.
May 14 23:53:15.925003 systemd-logind[1712]: Session 9 logged out. Waiting for processes to exit.
May 14 23:53:15.926122 systemd-logind[1712]: Removed session 9.
May 14 23:53:20.997603 systemd[1]: Started sshd@7-10.200.20.35:22-10.200.16.10:37986.service - OpenSSH per-connection server daemon (10.200.16.10:37986).
May 14 23:53:21.413138 sshd[4720]: Accepted publickey for core from 10.200.16.10 port 37986 ssh2: RSA SHA256:8W/Tf+7cj1biafetzZP1V2wDwegtsataOBfSdAJjdnY
May 14 23:53:21.414653 sshd-session[4720]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 23:53:21.419433 systemd-logind[1712]: New session 10 of user core.
May 14 23:53:21.427574 systemd[1]: Started session-10.scope - Session 10 of User core.
May 14 23:53:21.799636 sshd[4722]: Connection closed by 10.200.16.10 port 37986
May 14 23:53:21.800123 sshd-session[4720]: pam_unix(sshd:session): session closed for user core
May 14 23:53:21.804046 systemd[1]: sshd@7-10.200.20.35:22-10.200.16.10:37986.service: Deactivated successfully.
May 14 23:53:21.806573 systemd[1]: session-10.scope: Deactivated successfully.
May 14 23:53:21.808026 systemd-logind[1712]: Session 10 logged out. Waiting for processes to exit.
May 14 23:53:21.809489 systemd-logind[1712]: Removed session 10.
May 14 23:53:21.882618 systemd[1]: Started sshd@8-10.200.20.35:22-10.200.16.10:37992.service - OpenSSH per-connection server daemon (10.200.16.10:37992).
May 14 23:53:22.297785 sshd[4756]: Accepted publickey for core from 10.200.16.10 port 37992 ssh2: RSA SHA256:8W/Tf+7cj1biafetzZP1V2wDwegtsataOBfSdAJjdnY
May 14 23:53:22.299806 sshd-session[4756]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 23:53:22.304885 systemd-logind[1712]: New session 11 of user core.
May 14 23:53:22.314514 systemd[1]: Started session-11.scope - Session 11 of User core.
May 14 23:53:22.711794 sshd[4758]: Connection closed by 10.200.16.10 port 37992
May 14 23:53:22.712579 sshd-session[4756]: pam_unix(sshd:session): session closed for user core
May 14 23:53:22.716211 systemd[1]: sshd@8-10.200.20.35:22-10.200.16.10:37992.service: Deactivated successfully.
May 14 23:53:22.717985 systemd[1]: session-11.scope: Deactivated successfully.
May 14 23:53:22.718735 systemd-logind[1712]: Session 11 logged out. Waiting for processes to exit.
May 14 23:53:22.719950 systemd-logind[1712]: Removed session 11.
May 14 23:53:22.787966 systemd[1]: Started sshd@9-10.200.20.35:22-10.200.16.10:37996.service - OpenSSH per-connection server daemon (10.200.16.10:37996).
May 14 23:53:23.206916 sshd[4768]: Accepted publickey for core from 10.200.16.10 port 37996 ssh2: RSA SHA256:8W/Tf+7cj1biafetzZP1V2wDwegtsataOBfSdAJjdnY
May 14 23:53:23.207818 sshd-session[4768]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 23:53:23.218450 systemd-logind[1712]: New session 12 of user core.
May 14 23:53:23.224543 systemd[1]: Started session-12.scope - Session 12 of User core.
May 14 23:53:23.591404 sshd[4770]: Connection closed by 10.200.16.10 port 37996
May 14 23:53:23.591951 sshd-session[4768]: pam_unix(sshd:session): session closed for user core
May 14 23:53:23.595509 systemd-logind[1712]: Session 12 logged out. Waiting for processes to exit.
May 14 23:53:23.595752 systemd[1]: sshd@9-10.200.20.35:22-10.200.16.10:37996.service: Deactivated successfully.
May 14 23:53:23.597877 systemd[1]: session-12.scope: Deactivated successfully.
May 14 23:53:23.599086 systemd-logind[1712]: Removed session 12.
May 14 23:53:28.675635 systemd[1]: Started sshd@10-10.200.20.35:22-10.200.16.10:56302.service - OpenSSH per-connection server daemon (10.200.16.10:56302).
May 14 23:53:29.125857 sshd[4803]: Accepted publickey for core from 10.200.16.10 port 56302 ssh2: RSA SHA256:8W/Tf+7cj1biafetzZP1V2wDwegtsataOBfSdAJjdnY
May 14 23:53:29.127225 sshd-session[4803]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 23:53:29.131424 systemd-logind[1712]: New session 13 of user core.
May 14 23:53:29.137600 systemd[1]: Started session-13.scope - Session 13 of User core.
May 14 23:53:29.510555 sshd[4805]: Connection closed by 10.200.16.10 port 56302
May 14 23:53:29.511596 sshd-session[4803]: pam_unix(sshd:session): session closed for user core
May 14 23:53:29.515363 systemd[1]: sshd@10-10.200.20.35:22-10.200.16.10:56302.service: Deactivated successfully.
May 14 23:53:29.517861 systemd[1]: session-13.scope: Deactivated successfully.
May 14 23:53:29.519125 systemd-logind[1712]: Session 13 logged out. Waiting for processes to exit.
May 14 23:53:29.520064 systemd-logind[1712]: Removed session 13.
May 14 23:53:34.597617 systemd[1]: Started sshd@11-10.200.20.35:22-10.200.16.10:56316.service - OpenSSH per-connection server daemon (10.200.16.10:56316).
May 14 23:53:35.043607 sshd[4838]: Accepted publickey for core from 10.200.16.10 port 56316 ssh2: RSA SHA256:8W/Tf+7cj1biafetzZP1V2wDwegtsataOBfSdAJjdnY
May 14 23:53:35.045550 sshd-session[4838]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 23:53:35.050553 systemd-logind[1712]: New session 14 of user core.
May 14 23:53:35.053534 systemd[1]: Started session-14.scope - Session 14 of User core.
May 14 23:53:35.439515 sshd[4840]: Connection closed by 10.200.16.10 port 56316
May 14 23:53:35.439971 sshd-session[4838]: pam_unix(sshd:session): session closed for user core
May 14 23:53:35.444333 systemd[1]: sshd@11-10.200.20.35:22-10.200.16.10:56316.service: Deactivated successfully.
May 14 23:53:35.446848 systemd[1]: session-14.scope: Deactivated successfully.
May 14 23:53:35.448097 systemd-logind[1712]: Session 14 logged out. Waiting for processes to exit.
May 14 23:53:35.449017 systemd-logind[1712]: Removed session 14.
May 14 23:53:40.523675 systemd[1]: Started sshd@12-10.200.20.35:22-10.200.16.10:36162.service - OpenSSH per-connection server daemon (10.200.16.10:36162).
May 14 23:53:40.938775 sshd[4873]: Accepted publickey for core from 10.200.16.10 port 36162 ssh2: RSA SHA256:8W/Tf+7cj1biafetzZP1V2wDwegtsataOBfSdAJjdnY
May 14 23:53:40.940395 sshd-session[4873]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 23:53:40.945330 systemd-logind[1712]: New session 15 of user core.
May 14 23:53:40.952569 systemd[1]: Started session-15.scope - Session 15 of User core.
May 14 23:53:41.310097 sshd[4875]: Connection closed by 10.200.16.10 port 36162
May 14 23:53:41.310574 sshd-session[4873]: pam_unix(sshd:session): session closed for user core
May 14 23:53:41.314007 systemd[1]: sshd@12-10.200.20.35:22-10.200.16.10:36162.service: Deactivated successfully.
May 14 23:53:41.316314 systemd[1]: session-15.scope: Deactivated successfully.
May 14 23:53:41.317982 systemd-logind[1712]: Session 15 logged out. Waiting for processes to exit.
May 14 23:53:41.318989 systemd-logind[1712]: Removed session 15.
May 14 23:53:41.403664 systemd[1]: Started sshd@13-10.200.20.35:22-10.200.16.10:36164.service - OpenSSH per-connection server daemon (10.200.16.10:36164).
May 14 23:53:41.850000 sshd[4887]: Accepted publickey for core from 10.200.16.10 port 36164 ssh2: RSA SHA256:8W/Tf+7cj1biafetzZP1V2wDwegtsataOBfSdAJjdnY
May 14 23:53:41.851649 sshd-session[4887]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 23:53:41.856800 systemd-logind[1712]: New session 16 of user core.
May 14 23:53:41.865532 systemd[1]: Started session-16.scope - Session 16 of User core.
May 14 23:53:42.299206 sshd[4895]: Connection closed by 10.200.16.10 port 36164
May 14 23:53:42.300032 sshd-session[4887]: pam_unix(sshd:session): session closed for user core
May 14 23:53:42.303941 systemd[1]: sshd@13-10.200.20.35:22-10.200.16.10:36164.service: Deactivated successfully.
May 14 23:53:42.305865 systemd[1]: session-16.scope: Deactivated successfully.
May 14 23:53:42.306739 systemd-logind[1712]: Session 16 logged out. Waiting for processes to exit.
May 14 23:53:42.308291 systemd-logind[1712]: Removed session 16.
May 14 23:53:42.381326 systemd[1]: Started sshd@14-10.200.20.35:22-10.200.16.10:36174.service - OpenSSH per-connection server daemon (10.200.16.10:36174).
May 14 23:53:42.830110 sshd[4919]: Accepted publickey for core from 10.200.16.10 port 36174 ssh2: RSA SHA256:8W/Tf+7cj1biafetzZP1V2wDwegtsataOBfSdAJjdnY
May 14 23:53:42.831428 sshd-session[4919]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 23:53:42.835988 systemd-logind[1712]: New session 17 of user core.
May 14 23:53:42.842545 systemd[1]: Started session-17.scope - Session 17 of User core.
May 14 23:53:44.816157 sshd[4921]: Connection closed by 10.200.16.10 port 36174
May 14 23:53:44.816843 sshd-session[4919]: pam_unix(sshd:session): session closed for user core
May 14 23:53:44.820245 systemd-logind[1712]: Session 17 logged out. Waiting for processes to exit.
May 14 23:53:44.820247 systemd[1]: session-17.scope: Deactivated successfully.
May 14 23:53:44.821588 systemd[1]: sshd@14-10.200.20.35:22-10.200.16.10:36174.service: Deactivated successfully.
May 14 23:53:44.825241 systemd-logind[1712]: Removed session 17.
May 14 23:53:44.891463 systemd[1]: Started sshd@15-10.200.20.35:22-10.200.16.10:36178.service - OpenSSH per-connection server daemon (10.200.16.10:36178).
May 14 23:53:45.310859 sshd[4938]: Accepted publickey for core from 10.200.16.10 port 36178 ssh2: RSA SHA256:8W/Tf+7cj1biafetzZP1V2wDwegtsataOBfSdAJjdnY
May 14 23:53:45.340924 sshd-session[4938]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 23:53:45.345654 systemd-logind[1712]: New session 18 of user core.
May 14 23:53:45.355525 systemd[1]: Started session-18.scope - Session 18 of User core.
May 14 23:53:45.812234 sshd[4940]: Connection closed by 10.200.16.10 port 36178
May 14 23:53:45.812877 sshd-session[4938]: pam_unix(sshd:session): session closed for user core
May 14 23:53:45.816171 systemd-logind[1712]: Session 18 logged out. Waiting for processes to exit.
May 14 23:53:45.817518 systemd[1]: sshd@15-10.200.20.35:22-10.200.16.10:36178.service: Deactivated successfully.
May 14 23:53:45.820066 systemd[1]: session-18.scope: Deactivated successfully.
May 14 23:53:45.821720 systemd-logind[1712]: Removed session 18.
May 14 23:53:45.910642 systemd[1]: Started sshd@16-10.200.20.35:22-10.200.16.10:36186.service - OpenSSH per-connection server daemon (10.200.16.10:36186).
May 14 23:53:46.357318 sshd[4950]: Accepted publickey for core from 10.200.16.10 port 36186 ssh2: RSA SHA256:8W/Tf+7cj1biafetzZP1V2wDwegtsataOBfSdAJjdnY
May 14 23:53:46.358651 sshd-session[4950]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 23:53:46.363253 systemd-logind[1712]: New session 19 of user core.
May 14 23:53:46.376605 systemd[1]: Started session-19.scope - Session 19 of User core.
May 14 23:53:46.739185 sshd[4954]: Connection closed by 10.200.16.10 port 36186
May 14 23:53:46.739774 sshd-session[4950]: pam_unix(sshd:session): session closed for user core
May 14 23:53:46.743689 systemd[1]: sshd@16-10.200.20.35:22-10.200.16.10:36186.service: Deactivated successfully.
May 14 23:53:46.746194 systemd[1]: session-19.scope: Deactivated successfully.
May 14 23:53:46.747690 systemd-logind[1712]: Session 19 logged out. Waiting for processes to exit.
May 14 23:53:46.749237 systemd-logind[1712]: Removed session 19.
May 14 23:53:51.820624 systemd[1]: Started sshd@17-10.200.20.35:22-10.200.16.10:45186.service - OpenSSH per-connection server daemon (10.200.16.10:45186).
May 14 23:53:52.233750 sshd[4996]: Accepted publickey for core from 10.200.16.10 port 45186 ssh2: RSA SHA256:8W/Tf+7cj1biafetzZP1V2wDwegtsataOBfSdAJjdnY
May 14 23:53:52.235141 sshd-session[4996]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 23:53:52.239861 systemd-logind[1712]: New session 20 of user core.
May 14 23:53:52.243525 systemd[1]: Started session-20.scope - Session 20 of User core.
May 14 23:53:52.596377 sshd[5013]: Connection closed by 10.200.16.10 port 45186
May 14 23:53:52.596947 sshd-session[4996]: pam_unix(sshd:session): session closed for user core
May 14 23:53:52.601164 systemd[1]: sshd@17-10.200.20.35:22-10.200.16.10:45186.service: Deactivated successfully.
May 14 23:53:52.604103 systemd[1]: session-20.scope: Deactivated successfully.
May 14 23:53:52.605017 systemd-logind[1712]: Session 20 logged out. Waiting for processes to exit.
May 14 23:53:52.605940 systemd-logind[1712]: Removed session 20.
May 14 23:53:57.684637 systemd[1]: Started sshd@18-10.200.20.35:22-10.200.16.10:45192.service - OpenSSH per-connection server daemon (10.200.16.10:45192).
May 14 23:53:58.100843 sshd[5045]: Accepted publickey for core from 10.200.16.10 port 45192 ssh2: RSA SHA256:8W/Tf+7cj1biafetzZP1V2wDwegtsataOBfSdAJjdnY
May 14 23:53:58.102233 sshd-session[5045]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 23:53:58.106297 systemd-logind[1712]: New session 21 of user core.
May 14 23:53:58.113599 systemd[1]: Started session-21.scope - Session 21 of User core.
May 14 23:53:58.462638 sshd[5047]: Connection closed by 10.200.16.10 port 45192
May 14 23:53:58.462418 sshd-session[5045]: pam_unix(sshd:session): session closed for user core
May 14 23:53:58.466547 systemd[1]: sshd@18-10.200.20.35:22-10.200.16.10:45192.service: Deactivated successfully.
May 14 23:53:58.469056 systemd[1]: session-21.scope: Deactivated successfully.
May 14 23:53:58.469877 systemd-logind[1712]: Session 21 logged out. Waiting for processes to exit.
May 14 23:53:58.470938 systemd-logind[1712]: Removed session 21.
May 14 23:54:03.556591 systemd[1]: Started sshd@19-10.200.20.35:22-10.200.16.10:34124.service - OpenSSH per-connection server daemon (10.200.16.10:34124).
May 14 23:54:04.005412 sshd[5080]: Accepted publickey for core from 10.200.16.10 port 34124 ssh2: RSA SHA256:8W/Tf+7cj1biafetzZP1V2wDwegtsataOBfSdAJjdnY
May 14 23:54:04.006771 sshd-session[5080]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 23:54:04.012749 systemd-logind[1712]: New session 22 of user core.
May 14 23:54:04.014494 systemd[1]: Started session-22.scope - Session 22 of User core.
May 14 23:54:04.400947 sshd[5082]: Connection closed by 10.200.16.10 port 34124
May 14 23:54:04.401565 sshd-session[5080]: pam_unix(sshd:session): session closed for user core
May 14 23:54:04.405674 systemd[1]: sshd@19-10.200.20.35:22-10.200.16.10:34124.service: Deactivated successfully.
May 14 23:54:04.405715 systemd-logind[1712]: Session 22 logged out. Waiting for processes to exit.
May 14 23:54:04.408204 systemd[1]: session-22.scope: Deactivated successfully.
May 14 23:54:04.409990 systemd-logind[1712]: Removed session 22.