May 14 23:49:23.390957 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
May 14 23:49:23.390981 kernel: Linux version 6.6.89-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.43 p3) 2.43.1) #1 SMP PREEMPT Wed May 14 22:22:56 -00 2025
May 14 23:49:23.390989 kernel: KASLR enabled
May 14 23:49:23.390994 kernel: earlycon: pl11 at MMIO 0x00000000effec000 (options '')
May 14 23:49:23.391001 kernel: printk: bootconsole [pl11] enabled
May 14 23:49:23.391007 kernel: efi: EFI v2.7 by EDK II
May 14 23:49:23.391013 kernel: efi: ACPI 2.0=0x3fd5f018 SMBIOS=0x3e580000 SMBIOS 3.0=0x3e560000 MEMATTR=0x3f20e698 RNG=0x3fd5f998 MEMRESERVE=0x3e477598
May 14 23:49:23.391019 kernel: random: crng init done
May 14 23:49:23.391025 kernel: secureboot: Secure boot disabled
May 14 23:49:23.391031 kernel: ACPI: Early table checksum verification disabled
May 14 23:49:23.391037 kernel: ACPI: RSDP 0x000000003FD5F018 000024 (v02 VRTUAL)
May 14 23:49:23.391042 kernel: ACPI: XSDT 0x000000003FD5FF18 00006C (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
May 14 23:49:23.391048 kernel: ACPI: FACP 0x000000003FD5FC18 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001)
May 14 23:49:23.391056 kernel: ACPI: DSDT 0x000000003FD41018 01DFCD (v02 MSFTVM DSDT01 00000001 INTL 20230628)
May 14 23:49:23.391063 kernel: ACPI: DBG2 0x000000003FD5FB18 000072 (v00 VRTUAL MICROSFT 00000001 MSFT 00000001)
May 14 23:49:23.391069 kernel: ACPI: GTDT 0x000000003FD5FD98 000060 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
May 14 23:49:23.391075 kernel: ACPI: OEM0 0x000000003FD5F098 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
May 14 23:49:23.391083 kernel: ACPI: SPCR 0x000000003FD5FA98 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
May 14 23:49:23.391089 kernel: ACPI: APIC 0x000000003FD5F818 0000FC (v04 VRTUAL MICROSFT 00000001 MSFT 00000001)
May 14 23:49:23.391095 kernel: ACPI: SRAT 0x000000003FD5F198 000234 (v03 VRTUAL MICROSFT 00000001 MSFT 00000001)
May 14 23:49:23.391101 kernel: ACPI: PPTT 0x000000003FD5F418 000120 (v01 VRTUAL MICROSFT 00000000 MSFT 00000000)
May 14 23:49:23.391107 kernel: ACPI: BGRT 0x000000003FD5FE98 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
May 14 23:49:23.391113 kernel: ACPI: SPCR: console: pl011,mmio32,0xeffec000,115200
May 14 23:49:23.391119 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff]
May 14 23:49:23.391125 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x1bfffffff]
May 14 23:49:23.391132 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1c0000000-0xfbfffffff]
May 14 23:49:23.391138 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff]
May 14 23:49:23.391144 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff]
May 14 23:49:23.391152 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff]
May 14 23:49:23.391158 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff]
May 14 23:49:23.391164 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff]
May 14 23:49:23.391170 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff]
May 14 23:49:23.391176 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff]
May 14 23:49:23.391182 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff]
May 14 23:49:23.391188 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x800000000000-0xffffffffffff]
May 14 23:49:23.391194 kernel: NUMA: NODE_DATA [mem 0x1bf7f0800-0x1bf7f5fff]
May 14 23:49:23.391200 kernel: Zone ranges:
May 14 23:49:23.391206 kernel: DMA [mem 0x0000000000000000-0x00000000ffffffff]
May 14 23:49:23.391212 kernel: DMA32 empty
May 14 23:49:23.391219 kernel: Normal [mem 0x0000000100000000-0x00000001bfffffff]
May 14 23:49:23.391229 kernel: Movable zone start for each node
May 14 23:49:23.391235 kernel: Early memory node ranges
May 14 23:49:23.391241 kernel: node 0: [mem 0x0000000000000000-0x00000000007fffff]
May 14 23:49:23.391248 kernel: node 0: [mem 0x0000000000824000-0x000000003e45ffff]
May 14 23:49:23.391254 kernel: node 0: [mem 0x000000003e460000-0x000000003e46ffff]
May 14 23:49:23.391262 kernel: node 0: [mem 0x000000003e470000-0x000000003e54ffff]
May 14 23:49:23.391269 kernel: node 0: [mem 0x000000003e550000-0x000000003e87ffff]
May 14 23:49:23.391275 kernel: node 0: [mem 0x000000003e880000-0x000000003fc7ffff]
May 14 23:49:23.391282 kernel: node 0: [mem 0x000000003fc80000-0x000000003fcfffff]
May 14 23:49:23.391288 kernel: node 0: [mem 0x000000003fd00000-0x000000003fffffff]
May 14 23:49:23.391294 kernel: node 0: [mem 0x0000000100000000-0x00000001bfffffff]
May 14 23:49:23.391301 kernel: Initmem setup node 0 [mem 0x0000000000000000-0x00000001bfffffff]
May 14 23:49:23.391308 kernel: On node 0, zone DMA: 36 pages in unavailable ranges
May 14 23:49:23.391314 kernel: psci: probing for conduit method from ACPI.
May 14 23:49:23.391320 kernel: psci: PSCIv1.1 detected in firmware.
May 14 23:49:23.391327 kernel: psci: Using standard PSCI v0.2 function IDs
May 14 23:49:23.391333 kernel: psci: MIGRATE_INFO_TYPE not supported.
May 14 23:49:23.391341 kernel: psci: SMC Calling Convention v1.4
May 14 23:49:23.391348 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x0 -> Node 0
May 14 23:49:23.391354 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1 -> Node 0
May 14 23:49:23.391361 kernel: percpu: Embedded 31 pages/cpu s86632 r8192 d32152 u126976
May 14 23:49:23.391379 kernel: pcpu-alloc: s86632 r8192 d32152 u126976 alloc=31*4096
May 14 23:49:23.391386 kernel: pcpu-alloc: [0] 0 [0] 1
May 14 23:49:23.391393 kernel: Detected PIPT I-cache on CPU0
May 14 23:49:23.391399 kernel: CPU features: detected: GIC system register CPU interface
May 14 23:49:23.391406 kernel: CPU features: detected: Hardware dirty bit management
May 14 23:49:23.391412 kernel: CPU features: detected: Spectre-BHB
May 14 23:49:23.391419 kernel: CPU features: kernel page table isolation forced ON by KASLR
May 14 23:49:23.391427 kernel: CPU features: detected: Kernel page table isolation (KPTI)
May 14 23:49:23.391434 kernel: CPU features: detected: ARM erratum 1418040
May 14 23:49:23.391440 kernel: CPU features: detected: ARM erratum 1542419 (kernel portion)
May 14 23:49:23.391447 kernel: CPU features: detected: SSBS not fully self-synchronizing
May 14 23:49:23.391453 kernel: alternatives: applying boot alternatives
May 14 23:49:23.391461 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=bfa141d6f8686d8fe96245516ecbaee60c938beef41636c397e3939a2c9a6ed9
May 14 23:49:23.391468 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
May 14 23:49:23.391474 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
May 14 23:49:23.391481 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
May 14 23:49:23.391487 kernel: Fallback order for Node 0: 0
May 14 23:49:23.391494 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1032156
May 14 23:49:23.391502 kernel: Policy zone: Normal
May 14 23:49:23.391508 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
May 14 23:49:23.391514 kernel: software IO TLB: area num 2.
May 14 23:49:23.391521 kernel: software IO TLB: mapped [mem 0x0000000036540000-0x000000003a540000] (64MB)
May 14 23:49:23.391528 kernel: Memory: 3983596K/4194160K available (10368K kernel code, 2186K rwdata, 8100K rodata, 38336K init, 897K bss, 210564K reserved, 0K cma-reserved)
May 14 23:49:23.391534 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
May 14 23:49:23.391540 kernel: rcu: Preemptible hierarchical RCU implementation.
May 14 23:49:23.391548 kernel: rcu: RCU event tracing is enabled.
May 14 23:49:23.391554 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
May 14 23:49:23.391561 kernel: Trampoline variant of Tasks RCU enabled.
May 14 23:49:23.391568 kernel: Tracing variant of Tasks RCU enabled.
May 14 23:49:23.391576 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
May 14 23:49:23.391582 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
May 14 23:49:23.391589 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
May 14 23:49:23.391595 kernel: GICv3: 960 SPIs implemented
May 14 23:49:23.391601 kernel: GICv3: 0 Extended SPIs implemented
May 14 23:49:23.391608 kernel: Root IRQ handler: gic_handle_irq
May 14 23:49:23.391614 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
May 14 23:49:23.391620 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000effee000
May 14 23:49:23.391627 kernel: ITS: No ITS available, not enabling LPIs
May 14 23:49:23.391634 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
May 14 23:49:23.391640 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 14 23:49:23.391647 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
May 14 23:49:23.391655 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
May 14 23:49:23.391662 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
May 14 23:49:23.391669 kernel: Console: colour dummy device 80x25
May 14 23:49:23.391675 kernel: printk: console [tty1] enabled
May 14 23:49:23.391682 kernel: ACPI: Core revision 20230628
May 14 23:49:23.391689 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
May 14 23:49:23.391695 kernel: pid_max: default: 32768 minimum: 301
May 14 23:49:23.391702 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
May 14 23:49:23.391709 kernel: landlock: Up and running.
May 14 23:49:23.391716 kernel: SELinux: Initializing.
May 14 23:49:23.391723 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
May 14 23:49:23.391730 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
May 14 23:49:23.391737 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
May 14 23:49:23.391743 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
May 14 23:49:23.391750 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3a8030, hints 0xe, misc 0x31e1
May 14 23:49:23.391757 kernel: Hyper-V: Host Build 10.0.22477.1619-1-0
May 14 23:49:23.391770 kernel: Hyper-V: enabling crash_kexec_post_notifiers
May 14 23:49:23.391777 kernel: rcu: Hierarchical SRCU implementation.
May 14 23:49:23.391784 kernel: rcu: Max phase no-delay instances is 400.
May 14 23:49:23.391791 kernel: Remapping and enabling EFI services.
May 14 23:49:23.391798 kernel: smp: Bringing up secondary CPUs ...
May 14 23:49:23.391806 kernel: Detected PIPT I-cache on CPU1
May 14 23:49:23.391813 kernel: GICv3: CPU1: found redistributor 1 region 1:0x00000000f000e000
May 14 23:49:23.391820 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 14 23:49:23.391827 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
May 14 23:49:23.391834 kernel: smp: Brought up 1 node, 2 CPUs
May 14 23:49:23.391842 kernel: SMP: Total of 2 processors activated.
May 14 23:49:23.391849 kernel: CPU features: detected: 32-bit EL0 Support
May 14 23:49:23.391856 kernel: CPU features: detected: Instruction cache invalidation not required for I/D coherence
May 14 23:49:23.391864 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
May 14 23:49:23.391871 kernel: CPU features: detected: CRC32 instructions
May 14 23:49:23.391878 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
May 14 23:49:23.391885 kernel: CPU features: detected: LSE atomic instructions
May 14 23:49:23.391892 kernel: CPU features: detected: Privileged Access Never
May 14 23:49:23.391899 kernel: CPU: All CPU(s) started at EL1
May 14 23:49:23.391907 kernel: alternatives: applying system-wide alternatives
May 14 23:49:23.391914 kernel: devtmpfs: initialized
May 14 23:49:23.391921 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
May 14 23:49:23.391928 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
May 14 23:49:23.391935 kernel: pinctrl core: initialized pinctrl subsystem
May 14 23:49:23.391942 kernel: SMBIOS 3.1.0 present.
May 14 23:49:23.391949 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 09/28/2024
May 14 23:49:23.391956 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
May 14 23:49:23.391963 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
May 14 23:49:23.391971 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
May 14 23:49:23.391978 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
May 14 23:49:23.391985 kernel: audit: initializing netlink subsys (disabled)
May 14 23:49:23.391992 kernel: audit: type=2000 audit(0.047:1): state=initialized audit_enabled=0 res=1
May 14 23:49:23.391999 kernel: thermal_sys: Registered thermal governor 'step_wise'
May 14 23:49:23.392006 kernel: cpuidle: using governor menu
May 14 23:49:23.392013 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
May 14 23:49:23.392020 kernel: ASID allocator initialised with 32768 entries
May 14 23:49:23.392027 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
May 14 23:49:23.392036 kernel: Serial: AMBA PL011 UART driver
May 14 23:49:23.392043 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
May 14 23:49:23.392051 kernel: Modules: 0 pages in range for non-PLT usage
May 14 23:49:23.392058 kernel: Modules: 509264 pages in range for PLT usage
May 14 23:49:23.392065 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
May 14 23:49:23.392072 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
May 14 23:49:23.392079 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
May 14 23:49:23.392086 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
May 14 23:49:23.392093 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
May 14 23:49:23.392101 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
May 14 23:49:23.392109 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
May 14 23:49:23.392115 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
May 14 23:49:23.392123 kernel: ACPI: Added _OSI(Module Device)
May 14 23:49:23.392130 kernel: ACPI: Added _OSI(Processor Device)
May 14 23:49:23.392136 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
May 14 23:49:23.392143 kernel: ACPI: Added _OSI(Processor Aggregator Device)
May 14 23:49:23.392150 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
May 14 23:49:23.392157 kernel: ACPI: Interpreter enabled
May 14 23:49:23.392166 kernel: ACPI: Using GIC for interrupt routing
May 14 23:49:23.392173 kernel: ARMH0011:00: ttyAMA0 at MMIO 0xeffec000 (irq = 12, base_baud = 0) is a SBSA
May 14 23:49:23.392180 kernel: printk: console [ttyAMA0] enabled
May 14 23:49:23.392187 kernel: printk: bootconsole [pl11] disabled
May 14 23:49:23.392194 kernel: ARMH0011:01: ttyAMA1 at MMIO 0xeffeb000 (irq = 13, base_baud = 0) is a SBSA
May 14 23:49:23.392201 kernel: iommu: Default domain type: Translated
May 14 23:49:23.392208 kernel: iommu: DMA domain TLB invalidation policy: strict mode
May 14 23:49:23.392215 kernel: efivars: Registered efivars operations
May 14 23:49:23.392221 kernel: vgaarb: loaded
May 14 23:49:23.392230 kernel: clocksource: Switched to clocksource arch_sys_counter
May 14 23:49:23.392237 kernel: VFS: Disk quotas dquot_6.6.0
May 14 23:49:23.392244 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
May 14 23:49:23.392251 kernel: pnp: PnP ACPI init
May 14 23:49:23.392258 kernel: pnp: PnP ACPI: found 0 devices
May 14 23:49:23.392265 kernel: NET: Registered PF_INET protocol family
May 14 23:49:23.392272 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
May 14 23:49:23.392280 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
May 14 23:49:23.392287 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
May 14 23:49:23.392295 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
May 14 23:49:23.392303 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
May 14 23:49:23.392310 kernel: TCP: Hash tables configured (established 32768 bind 32768)
May 14 23:49:23.392317 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
May 14 23:49:23.392324 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
May 14 23:49:23.392331 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
May 14 23:49:23.392338 kernel: PCI: CLS 0 bytes, default 64
May 14 23:49:23.392345 kernel: kvm [1]: HYP mode not available
May 14 23:49:23.392352 kernel: Initialise system trusted keyrings
May 14 23:49:23.392361 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
May 14 23:49:23.392375 kernel: Key type asymmetric registered
May 14 23:49:23.392382 kernel: Asymmetric key parser 'x509' registered
May 14 23:49:23.392389 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
May 14 23:49:23.392397 kernel: io scheduler mq-deadline registered
May 14 23:49:23.392404 kernel: io scheduler kyber registered
May 14 23:49:23.392411 kernel: io scheduler bfq registered
May 14 23:49:23.392418 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
May 14 23:49:23.392425 kernel: thunder_xcv, ver 1.0
May 14 23:49:23.392434 kernel: thunder_bgx, ver 1.0
May 14 23:49:23.392441 kernel: nicpf, ver 1.0
May 14 23:49:23.392448 kernel: nicvf, ver 1.0
May 14 23:49:23.392598 kernel: rtc-efi rtc-efi.0: registered as rtc0
May 14 23:49:23.392671 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-05-14T23:49:22 UTC (1747266562)
May 14 23:49:23.392681 kernel: efifb: probing for efifb
May 14 23:49:23.392688 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k
May 14 23:49:23.392695 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1
May 14 23:49:23.392705 kernel: efifb: scrolling: redraw
May 14 23:49:23.392712 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
May 14 23:49:23.392719 kernel: Console: switching to colour frame buffer device 128x48
May 14 23:49:23.392726 kernel: fb0: EFI VGA frame buffer device
May 14 23:49:23.392732 kernel: SMCCC: SOC_ID: ARCH_SOC_ID not implemented, skipping ....
May 14 23:49:23.392740 kernel: hid: raw HID events driver (C) Jiri Kosina
May 14 23:49:23.392747 kernel: No ACPI PMU IRQ for CPU0
May 14 23:49:23.392753 kernel: No ACPI PMU IRQ for CPU1
May 14 23:49:23.392761 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 1 counters available
May 14 23:49:23.392769 kernel: watchdog: Delayed init of the lockup detector failed: -19
May 14 23:49:23.392776 kernel: watchdog: Hard watchdog permanently disabled
May 14 23:49:23.392783 kernel: NET: Registered PF_INET6 protocol family
May 14 23:49:23.392790 kernel: Segment Routing with IPv6
May 14 23:49:23.392797 kernel: In-situ OAM (IOAM) with IPv6
May 14 23:49:23.392804 kernel: NET: Registered PF_PACKET protocol family
May 14 23:49:23.392811 kernel: Key type dns_resolver registered
May 14 23:49:23.392818 kernel: registered taskstats version 1
May 14 23:49:23.392825 kernel: Loading compiled-in X.509 certificates
May 14 23:49:23.392833 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.89-flatcar: cdb7ce3984a1665183e8a6ab3419833bc5e4e7f4'
May 14 23:49:23.392840 kernel: Key type .fscrypt registered
May 14 23:49:23.392847 kernel: Key type fscrypt-provisioning registered
May 14 23:49:23.392854 kernel: ima: No TPM chip found, activating TPM-bypass!
May 14 23:49:23.392861 kernel: ima: Allocated hash algorithm: sha1
May 14 23:49:23.392868 kernel: ima: No architecture policies found
May 14 23:49:23.392875 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
May 14 23:49:23.392882 kernel: clk: Disabling unused clocks
May 14 23:49:23.392889 kernel: Freeing unused kernel memory: 38336K
May 14 23:49:23.392897 kernel: Run /init as init process
May 14 23:49:23.392904 kernel: with arguments:
May 14 23:49:23.392910 kernel: /init
May 14 23:49:23.392917 kernel: with environment:
May 14 23:49:23.392924 kernel: HOME=/
May 14 23:49:23.392931 kernel: TERM=linux
May 14 23:49:23.392937 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
May 14 23:49:23.392945 systemd[1]: Successfully made /usr/ read-only.
May 14 23:49:23.392957 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
May 14 23:49:23.392965 systemd[1]: Detected virtualization microsoft.
May 14 23:49:23.392972 systemd[1]: Detected architecture arm64.
May 14 23:49:23.392980 systemd[1]: Running in initrd.
May 14 23:49:23.392987 systemd[1]: No hostname configured, using default hostname.
May 14 23:49:23.392995 systemd[1]: Hostname set to .
May 14 23:49:23.393002 systemd[1]: Initializing machine ID from random generator.
May 14 23:49:23.393010 systemd[1]: Queued start job for default target initrd.target.
May 14 23:49:23.393019 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 14 23:49:23.393026 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 14 23:49:23.393035 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
May 14 23:49:23.393043 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
May 14 23:49:23.393050 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
May 14 23:49:23.393059 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
May 14 23:49:23.393067 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
May 14 23:49:23.393076 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
May 14 23:49:23.393084 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 14 23:49:23.393091 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
May 14 23:49:23.393099 systemd[1]: Reached target paths.target - Path Units.
May 14 23:49:23.393107 systemd[1]: Reached target slices.target - Slice Units.
May 14 23:49:23.393114 systemd[1]: Reached target swap.target - Swaps.
May 14 23:49:23.393122 systemd[1]: Reached target timers.target - Timer Units.
May 14 23:49:23.393129 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
May 14 23:49:23.393139 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
May 14 23:49:23.393146 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
May 14 23:49:23.393154 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
May 14 23:49:23.393161 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
May 14 23:49:23.393169 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
May 14 23:49:23.393176 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
May 14 23:49:23.393184 systemd[1]: Reached target sockets.target - Socket Units.
May 14 23:49:23.393192 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
May 14 23:49:23.393199 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
May 14 23:49:23.393209 systemd[1]: Finished network-cleanup.service - Network Cleanup.
May 14 23:49:23.393216 systemd[1]: Starting systemd-fsck-usr.service...
May 14 23:49:23.393224 systemd[1]: Starting systemd-journald.service - Journal Service...
May 14 23:49:23.393246 systemd-journald[217]: Collecting audit messages is disabled.
May 14 23:49:23.393267 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
May 14 23:49:23.393276 systemd-journald[217]: Journal started
May 14 23:49:23.393294 systemd-journald[217]: Runtime Journal (/run/log/journal/2ed93ebe87614821ad88d74531b4d393) is 8M, max 78.5M, 70.5M free.
May 14 23:49:23.414194 systemd-modules-load[219]: Inserted module 'overlay'
May 14 23:49:23.420976 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 14 23:49:23.441419 systemd[1]: Started systemd-journald.service - Journal Service.
May 14 23:49:23.441477 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
May 14 23:49:23.464218 systemd-modules-load[219]: Inserted module 'br_netfilter'
May 14 23:49:23.470139 kernel: Bridge firewalling registered
May 14 23:49:23.465743 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
May 14 23:49:23.477820 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
May 14 23:49:23.492849 systemd[1]: Finished systemd-fsck-usr.service.
May 14 23:49:23.512183 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
May 14 23:49:23.519557 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 14 23:49:23.541809 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
May 14 23:49:23.561559 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
May 14 23:49:23.583677 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
May 14 23:49:23.600226 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
May 14 23:49:23.610596 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 14 23:49:23.626407 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
May 14 23:49:23.641752 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
May 14 23:49:23.657676 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
May 14 23:49:23.685618 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
May 14 23:49:23.701586 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
May 14 23:49:23.721247 dracut-cmdline[252]: dracut-dracut-053
May 14 23:49:23.721247 dracut-cmdline[252]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=bfa141d6f8686d8fe96245516ecbaee60c938beef41636c397e3939a2c9a6ed9
May 14 23:49:23.725956 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
May 14 23:49:23.787817 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 14 23:49:23.823754 systemd-resolved[259]: Positive Trust Anchors:
May 14 23:49:23.828325 kernel: SCSI subsystem initialized
May 14 23:49:23.823772 systemd-resolved[259]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
May 14 23:49:23.823804 systemd-resolved[259]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
May 14 23:49:23.897638 kernel: Loading iSCSI transport class v2.0-870.
May 14 23:49:23.897663 kernel: iscsi: registered transport (tcp)
May 14 23:49:23.826010 systemd-resolved[259]: Defaulting to hostname 'linux'.
May 14 23:49:23.836417 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
May 14 23:49:23.846856 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
May 14 23:49:23.931175 kernel: iscsi: registered transport (qla4xxx)
May 14 23:49:23.931202 kernel: QLogic iSCSI HBA Driver
May 14 23:49:23.970949 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
May 14 23:49:23.985622 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
May 14 23:49:24.019085 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
May 14 23:49:24.019134 kernel: device-mapper: uevent: version 1.0.3
May 14 23:49:24.026385 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
May 14 23:49:24.075400 kernel: raid6: neonx8 gen() 15755 MB/s
May 14 23:49:24.095407 kernel: raid6: neonx4 gen() 15818 MB/s
May 14 23:49:24.115395 kernel: raid6: neonx2 gen() 13212 MB/s
May 14 23:49:24.136412 kernel: raid6: neonx1 gen() 10476 MB/s
May 14 23:49:24.156377 kernel: raid6: int64x8 gen() 6767 MB/s
May 14 23:49:24.176381 kernel: raid6: int64x4 gen() 7341 MB/s
May 14 23:49:24.198408 kernel: raid6: int64x2 gen() 6099 MB/s
May 14 23:49:24.221979 kernel: raid6: int64x1 gen() 5053 MB/s
May 14 23:49:24.222045 kernel: raid6: using algorithm neonx4 gen() 15818 MB/s
May 14 23:49:24.246300 kernel: raid6: .... xor() 12439 MB/s, rmw enabled
May 14 23:49:24.246399 kernel: raid6: using neon recovery algorithm
May 14 23:49:24.256381 kernel: xor: measuring software checksum speed
May 14 23:49:24.260377 kernel: 8regs : 20088 MB/sec
May 14 23:49:24.267720 kernel: 32regs : 20312 MB/sec
May 14 23:49:24.267742 kernel: arm64_neon : 27851 MB/sec
May 14 23:49:24.272396 kernel: xor: using function: arm64_neon (27851 MB/sec)
May 14 23:49:24.324393 kernel: Btrfs loaded, zoned=no, fsverity=no
May 14 23:49:24.335890 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
May 14 23:49:24.352551 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 14 23:49:24.378615 systemd-udevd[440]: Using default interface naming scheme 'v255'.
May 14 23:49:24.384349 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 14 23:49:24.407565 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
May 14 23:49:24.427414 dracut-pre-trigger[451]: rd.md=0: removing MD RAID activation
May 14 23:49:24.462482 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
May 14 23:49:24.485662 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
May 14 23:49:24.527984 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
May 14 23:49:24.552687 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
May 14 23:49:24.590965 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
May 14 23:49:24.600656 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
May 14 23:49:24.615654 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
May 14 23:49:24.633317 systemd[1]: Reached target remote-fs.target - Remote File Systems.
May 14 23:49:24.669387 kernel: hv_vmbus: Vmbus version:5.3
May 14 23:49:24.676547 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
May 14 23:49:24.728225 kernel: hv_vmbus: registering driver hyperv_keyboard
May 14 23:49:24.728248 kernel: hv_vmbus: registering driver hid_hyperv
May 14 23:49:24.728257 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input0
May 14 23:49:24.728267 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/MSFT1000:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input1
May 14 23:49:24.728277 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on
May 14 23:49:24.726757 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
May 14 23:49:24.751508 kernel: hv_vmbus: registering driver hv_netvsc
May 14 23:49:24.751532 kernel: pps_core: LinuxPPS API ver. 1 registered
May 14 23:49:24.726922 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 14 23:49:24.773797 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
May 14 23:49:24.773828 kernel: hv_vmbus: registering driver hv_storvsc
May 14 23:49:24.746089 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
May 14 23:49:24.812455 kernel: PTP clock support registered
May 14 23:49:24.812478 kernel: hv_utils: Registering HyperV Utility Driver
May 14 23:49:24.791397 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 14 23:49:24.830114 kernel: hv_vmbus: registering driver hv_utils
May 14 23:49:24.830145 kernel: hv_utils: Heartbeat IC version 3.0
May 14 23:49:24.791621 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
May 14 23:49:24.675638 kernel: hv_utils: Shutdown IC version 3.2
May 14 23:49:24.682960 kernel: scsi host0: storvsc_host_t
May 14 23:49:24.683116 kernel: hv_utils: TimeSync IC version 4.0
May 14 23:49:24.683128 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5
May 14 23:49:24.683147 systemd-journald[217]: Time jumped backwards, rotating.
May 14 23:49:24.683182 kernel: scsi host1: storvsc_host_t
May 14 23:49:24.825667 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
May 14 23:49:24.668850 systemd-resolved[259]: Clock change detected. Flushing caches.
May 14 23:49:24.687196 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 14 23:49:24.701453 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
May 14 23:49:24.741328 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 0
May 14 23:49:24.702003 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
May 14 23:49:24.743513 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 14 23:49:24.747410 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
May 14 23:49:24.782852 kernel: sr 0:0:0:2: [sr0] scsi-1 drive
May 14 23:49:24.783048 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
May 14 23:49:24.770788 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
May 14 23:49:24.794629 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0
May 14 23:49:24.796589 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 14 23:49:24.829584 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB)
May 14 23:49:24.829830 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks
May 14 23:49:24.829959 kernel: sd 0:0:0:0: [sda] Write Protect is off
May 14 23:49:24.830104 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00
May 14 23:49:24.830202 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA
May 14 23:49:24.844319 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 14 23:49:24.866812 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
May 14 23:49:24.866832 kernel: sd 0:0:0:0: [sda] Attached SCSI disk
May 14 23:49:24.868651 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
May 14 23:49:24.901363 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 14 23:49:25.293815 kernel: hv_netvsc 002248bb-8dbf-0022-48bb-8dbf002248bb eth0: VF slot 1 added
May 14 23:49:25.310461 kernel: hv_vmbus: registering driver hv_pci
May 14 23:49:25.310511 kernel: hv_pci e852d122-4e7a-4c12-93ca-6f37a53296a2: PCI VMBus probing: Using version 0x10004
May 14 23:49:25.519887 kernel: hv_pci e852d122-4e7a-4c12-93ca-6f37a53296a2: PCI host bridge to bus 4e7a:00
May 14 23:49:25.520332 kernel: pci_bus 4e7a:00: root bus resource [mem 0xfc0000000-0xfc00fffff window]
May 14 23:49:25.520473 kernel: pci_bus 4e7a:00: No busn resource found for root bus, will use [bus 00-ff]
May 14 23:49:25.534401 kernel: pci 4e7a:00:02.0: [15b3:1018] type 00 class 0x020000
May 14 23:49:25.546486 kernel: pci 4e7a:00:02.0: reg 0x10: [mem 0xfc0000000-0xfc00fffff 64bit pref]
May 14 23:49:25.552530 kernel: pci 4e7a:00:02.0: enabling Extended Tags
May 14 23:49:25.573426 kernel: pci 4e7a:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at 4e7a:00:02.0 (capable of 126.016 Gb/s with 8.0 GT/s PCIe x16 link)
May 14 23:49:25.586036 kernel: pci_bus 4e7a:00: busn_res: [bus 00-ff] end is updated to 00
May 14 23:49:25.586252 kernel: pci 4e7a:00:02.0: BAR 0: assigned [mem 0xfc0000000-0xfc00fffff 64bit pref]
May 14 23:49:25.629237 kernel: mlx5_core 4e7a:00:02.0: enabling device (0000 -> 0002)
May 14 23:49:25.640842 kernel: mlx5_core 4e7a:00:02.0: firmware version: 16.31.2424
May 14 23:49:25.739778 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_Disk EFI-SYSTEM.
May 14 23:49:25.774612 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by (udev-worker) (491)
May 14 23:49:25.774636 kernel: BTRFS: device fsid 369506fd-904a-45c2-a4ab-2d03e7866799 devid 1 transid 44 /dev/sda3 scanned by (udev-worker) (490)
May 14 23:49:25.796942 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM.
May 14 23:49:25.828714 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_Disk ROOT.
May 14 23:49:25.843307 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_Disk USR-A.
May 14 23:49:25.851576 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_Disk USR-A.
May 14 23:49:25.880577 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
May 14 23:49:25.909393 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
May 14 23:49:25.918369 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
May 14 23:49:26.088956 kernel: hv_netvsc 002248bb-8dbf-0022-48bb-8dbf002248bb eth0: VF registering: eth1
May 14 23:49:26.089166 kernel: mlx5_core 4e7a:00:02.0 eth1: joined to eth0
May 14 23:49:26.099551 kernel: mlx5_core 4e7a:00:02.0: MLX5E: StrdRq(1) RqSz(8) StrdSz(2048) RxCqeCmprss(0 basic)
May 14 23:49:26.113411 kernel: mlx5_core 4e7a:00:02.0 enP20090s1: renamed from eth1
May 14 23:49:26.928159 disk-uuid[606]: The operation has completed successfully.
May 14 23:49:26.933788 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
May 14 23:49:26.995090 systemd[1]: disk-uuid.service: Deactivated successfully.
May 14 23:49:26.998549 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
May 14 23:49:27.045491 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
May 14 23:49:27.060259 sh[696]: Success
May 14 23:49:27.090378 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
May 14 23:49:27.299449 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
May 14 23:49:27.325495 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
May 14 23:49:27.337573 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
May 14 23:49:27.367410 kernel: BTRFS info (device dm-0): first mount of filesystem 369506fd-904a-45c2-a4ab-2d03e7866799
May 14 23:49:27.367456 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
May 14 23:49:27.375573 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
May 14 23:49:27.381078 kernel: BTRFS info (device dm-0): disabling log replay at mount time
May 14 23:49:27.387011 kernel: BTRFS info (device dm-0): using free space tree
May 14 23:49:27.717299 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
May 14 23:49:27.723240 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
May 14 23:49:27.740631 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
May 14 23:49:27.749566 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
May 14 23:49:27.790388 kernel: BTRFS info (device sda6): first mount of filesystem 02f9d4a0-2ee9-4834-b15d-b55399b9ff01
May 14 23:49:27.801594 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
May 14 23:49:27.801635 kernel: BTRFS info (device sda6): using free space tree
May 14 23:49:27.830381 kernel: BTRFS info (device sda6): auto enabling async discard
May 14 23:49:27.841379 kernel: BTRFS info (device sda6): last unmount of filesystem 02f9d4a0-2ee9-4834-b15d-b55399b9ff01
May 14 23:49:27.845480 systemd[1]: Finished ignition-setup.service - Ignition (setup).
May 14 23:49:27.859623 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
May 14 23:49:27.887632 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
May 14 23:49:27.897585 systemd[1]: Starting systemd-networkd.service - Network Configuration...
May 14 23:49:27.942272 systemd-networkd[877]: lo: Link UP
May 14 23:49:27.942286 systemd-networkd[877]: lo: Gained carrier
May 14 23:49:27.944379 systemd-networkd[877]: Enumeration completed
May 14 23:49:27.944501 systemd[1]: Started systemd-networkd.service - Network Configuration.
May 14 23:49:27.953273 systemd-networkd[877]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 14 23:49:27.953277 systemd-networkd[877]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
May 14 23:49:27.953945 systemd[1]: Reached target network.target - Network.
May 14 23:49:28.048367 kernel: mlx5_core 4e7a:00:02.0 enP20090s1: Link up
May 14 23:49:28.133077 kernel: hv_netvsc 002248bb-8dbf-0022-48bb-8dbf002248bb eth0: Data path switched to VF: enP20090s1
May 14 23:49:28.132122 systemd-networkd[877]: enP20090s1: Link UP
May 14 23:49:28.132199 systemd-networkd[877]: eth0: Link UP
May 14 23:49:28.132314 systemd-networkd[877]: eth0: Gained carrier
May 14 23:49:28.132321 systemd-networkd[877]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 14 23:49:28.147942 systemd-networkd[877]: enP20090s1: Gained carrier
May 14 23:49:28.173388 systemd-networkd[877]: eth0: DHCPv4 address 10.200.20.40/24, gateway 10.200.20.1 acquired from 168.63.129.16
May 14 23:49:28.764062 ignition[876]: Ignition 2.20.0
May 14 23:49:28.764075 ignition[876]: Stage: fetch-offline
May 14 23:49:28.764115 ignition[876]: no configs at "/usr/lib/ignition/base.d"
May 14 23:49:28.771615 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
May 14 23:49:28.764124 ignition[876]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
May 14 23:49:28.764230 ignition[876]: parsed url from cmdline: ""
May 14 23:49:28.764234 ignition[876]: no config URL provided
May 14 23:49:28.764239 ignition[876]: reading system config file "/usr/lib/ignition/user.ign"
May 14 23:49:28.799526 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
May 14 23:49:28.764246 ignition[876]: no config at "/usr/lib/ignition/user.ign"
May 14 23:49:28.764252 ignition[876]: failed to fetch config: resource requires networking
May 14 23:49:28.764466 ignition[876]: Ignition finished successfully
May 14 23:49:28.815335 ignition[890]: Ignition 2.20.0
May 14 23:49:28.815358 ignition[890]: Stage: fetch
May 14 23:49:28.815524 ignition[890]: no configs at "/usr/lib/ignition/base.d"
May 14 23:49:28.815533 ignition[890]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
May 14 23:49:28.815620 ignition[890]: parsed url from cmdline: ""
May 14 23:49:28.815624 ignition[890]: no config URL provided
May 14 23:49:28.815628 ignition[890]: reading system config file "/usr/lib/ignition/user.ign"
May 14 23:49:28.815635 ignition[890]: no config at "/usr/lib/ignition/user.ign"
May 14 23:49:28.815660 ignition[890]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1
May 14 23:49:28.917499 ignition[890]: GET result: OK
May 14 23:49:28.917588 ignition[890]: config has been read from IMDS userdata
May 14 23:49:28.917628 ignition[890]: parsing config with SHA512: 745711f649006a361cd260978ed98307cf6a1309d53b77c3ae143d6bbcba58f2aabe3c92a5d6021530d67aa96d174d363c16893cff48e9a6445abb7cfa7aa76f
May 14 23:49:28.922377 unknown[890]: fetched base config from "system"
May 14 23:49:28.927957 ignition[890]: fetch: fetch complete
May 14 23:49:28.922388 unknown[890]: fetched base config from "system"
May 14 23:49:28.927964 ignition[890]: fetch: fetch passed
May 14 23:49:28.922393 unknown[890]: fetched user config from "azure"
May 14 23:49:28.928043 ignition[890]: Ignition finished successfully
May 14 23:49:28.930104 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
May 14 23:49:28.961618 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
May 14 23:49:28.996146 ignition[896]: Ignition 2.20.0
May 14 23:49:28.996158 ignition[896]: Stage: kargs
May 14 23:49:28.996374 ignition[896]: no configs at "/usr/lib/ignition/base.d"
May 14 23:49:29.005520 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
May 14 23:49:28.996385 ignition[896]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
May 14 23:49:29.021539 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
May 14 23:49:28.997498 ignition[896]: kargs: kargs passed
May 14 23:49:28.997573 ignition[896]: Ignition finished successfully
May 14 23:49:29.055542 systemd[1]: Finished ignition-disks.service - Ignition (disks).
May 14 23:49:29.048092 ignition[902]: Ignition 2.20.0
May 14 23:49:29.068649 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
May 14 23:49:29.048101 ignition[902]: Stage: disks
May 14 23:49:29.082571 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
May 14 23:49:29.048529 ignition[902]: no configs at "/usr/lib/ignition/base.d"
May 14 23:49:29.094412 systemd[1]: Reached target local-fs.target - Local File Systems.
May 14 23:49:29.048541 ignition[902]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
May 14 23:49:29.108451 systemd[1]: Reached target sysinit.target - System Initialization.
May 14 23:49:29.051408 ignition[902]: disks: disks passed
May 14 23:49:29.121097 systemd[1]: Reached target basic.target - Basic System.
May 14 23:49:29.051524 ignition[902]: Ignition finished successfully
May 14 23:49:29.149679 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
May 14 23:49:29.178975 systemd-networkd[877]: eth0: Gained IPv6LL
May 14 23:49:29.228064 systemd-fsck[910]: ROOT: clean, 14/7326000 files, 477710/7359488 blocks
May 14 23:49:29.236036 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
May 14 23:49:29.269590 systemd[1]: Mounting sysroot.mount - /sysroot...
May 14 23:49:29.334391 kernel: EXT4-fs (sda9): mounted filesystem 737cda88-7069-47ce-b2bc-d891099a68fb r/w with ordered data mode. Quota mode: none.
May 14 23:49:29.335556 systemd[1]: Mounted sysroot.mount - /sysroot.
May 14 23:49:29.341760 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
May 14 23:49:29.365449 systemd-networkd[877]: enP20090s1: Gained IPv6LL
May 14 23:49:29.388462 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
May 14 23:49:29.396543 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
May 14 23:49:29.410778 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
May 14 23:49:29.443998 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
May 14 23:49:29.444051 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
May 14 23:49:29.492117 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/sda6 scanned by mount (921)
May 14 23:49:29.492144 kernel: BTRFS info (device sda6): first mount of filesystem 02f9d4a0-2ee9-4834-b15d-b55399b9ff01
May 14 23:49:29.492155 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
May 14 23:49:29.453066 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
May 14 23:49:29.502380 kernel: BTRFS info (device sda6): using free space tree
May 14 23:49:29.511920 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
May 14 23:49:29.530484 kernel: BTRFS info (device sda6): auto enabling async discard
May 14 23:49:29.524903 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
May 14 23:49:29.999526 coreos-metadata[923]: May 14 23:49:29.999 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1
May 14 23:49:30.010176 coreos-metadata[923]: May 14 23:49:30.010 INFO Fetch successful
May 14 23:49:30.016034 coreos-metadata[923]: May 14 23:49:30.015 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1
May 14 23:49:30.028961 coreos-metadata[923]: May 14 23:49:30.027 INFO Fetch successful
May 14 23:49:30.042245 coreos-metadata[923]: May 14 23:49:30.042 INFO wrote hostname ci-4230.1.1-n-bf7705109b to /sysroot/etc/hostname
May 14 23:49:30.055545 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
May 14 23:49:30.212943 initrd-setup-root[951]: cut: /sysroot/etc/passwd: No such file or directory
May 14 23:49:30.259382 initrd-setup-root[958]: cut: /sysroot/etc/group: No such file or directory
May 14 23:49:30.271669 initrd-setup-root[965]: cut: /sysroot/etc/shadow: No such file or directory
May 14 23:49:30.282856 initrd-setup-root[972]: cut: /sysroot/etc/gshadow: No such file or directory
May 14 23:49:31.427987 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
May 14 23:49:31.444564 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
May 14 23:49:31.453896 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
May 14 23:49:31.483359 kernel: BTRFS info (device sda6): last unmount of filesystem 02f9d4a0-2ee9-4834-b15d-b55399b9ff01
May 14 23:49:31.489082 systemd[1]: sysroot-oem.mount: Deactivated successfully.
May 14 23:49:31.515227 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
May 14 23:49:31.529464 ignition[1043]: INFO : Ignition 2.20.0
May 14 23:49:31.529464 ignition[1043]: INFO : Stage: mount
May 14 23:49:31.529464 ignition[1043]: INFO : no configs at "/usr/lib/ignition/base.d"
May 14 23:49:31.529464 ignition[1043]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
May 14 23:49:31.560531 ignition[1043]: INFO : mount: mount passed
May 14 23:49:31.560531 ignition[1043]: INFO : Ignition finished successfully
May 14 23:49:31.540681 systemd[1]: Finished ignition-mount.service - Ignition (mount).
May 14 23:49:31.567475 systemd[1]: Starting ignition-files.service - Ignition (files)...
May 14 23:49:31.593684 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
May 14 23:49:31.624385 kernel: BTRFS: device label OEM devid 1 transid 17 /dev/sda6 scanned by mount (1051)
May 14 23:49:31.640118 kernel: BTRFS info (device sda6): first mount of filesystem 02f9d4a0-2ee9-4834-b15d-b55399b9ff01
May 14 23:49:31.640178 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
May 14 23:49:31.645063 kernel: BTRFS info (device sda6): using free space tree
May 14 23:49:31.652370 kernel: BTRFS info (device sda6): auto enabling async discard
May 14 23:49:31.654423 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
May 14 23:49:31.687210 ignition[1068]: INFO : Ignition 2.20.0
May 14 23:49:31.687210 ignition[1068]: INFO : Stage: files
May 14 23:49:31.696038 ignition[1068]: INFO : no configs at "/usr/lib/ignition/base.d"
May 14 23:49:31.696038 ignition[1068]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
May 14 23:49:31.696038 ignition[1068]: DEBUG : files: compiled without relabeling support, skipping
May 14 23:49:31.696038 ignition[1068]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
May 14 23:49:31.696038 ignition[1068]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
May 14 23:49:31.746790 ignition[1068]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
May 14 23:49:31.755650 ignition[1068]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
May 14 23:49:31.755650 ignition[1068]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
May 14 23:49:31.755650 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz"
May 14 23:49:31.755650 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-arm64.tar.gz: attempt #1
May 14 23:49:31.747291 unknown[1068]: wrote ssh authorized keys file for user: core
May 14 23:49:31.856788 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
May 14 23:49:32.026847 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz"
May 14 23:49:32.026847 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
May 14 23:49:32.049794 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1
May 14 23:49:32.341908 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
May 14 23:49:32.421651 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
May 14 23:49:32.432386 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
May 14 23:49:32.432386 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
May 14 23:49:32.432386 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
May 14 23:49:32.432386 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
May 14 23:49:32.432386 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
May 14 23:49:32.432386 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
May 14 23:49:32.432386 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
May 14 23:49:32.432386 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
May 14 23:49:32.432386 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
May 14 23:49:32.432386 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
May 14 23:49:32.432386 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-arm64.raw"
May 14 23:49:32.432386 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-arm64.raw"
May 14 23:49:32.432386 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-arm64.raw"
May 14 23:49:32.432386 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.32.0-arm64.raw: attempt #1
May 14 23:49:32.813014 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
May 14 23:49:33.015316 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-arm64.raw"
May 14 23:49:33.015316 ignition[1068]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
May 14 23:49:33.039834 ignition[1068]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
May 14 23:49:33.039834 ignition[1068]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
May 14 23:49:33.039834 ignition[1068]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
May 14 23:49:33.039834 ignition[1068]: INFO : files: op(e): [started] setting preset to enabled for "prepare-helm.service"
May 14 23:49:33.039834 ignition[1068]: INFO : files: op(e): [finished] setting preset to enabled for "prepare-helm.service"
May 14 23:49:33.039834 ignition[1068]: INFO : files: createResultFile: createFiles: op(f): [started] writing file "/sysroot/etc/.ignition-result.json"
May 14 23:49:33.039834 ignition[1068]: INFO : files: createResultFile: createFiles: op(f): [finished] writing file "/sysroot/etc/.ignition-result.json"
May 14 23:49:33.039834 ignition[1068]: INFO : files: files passed
May 14 23:49:33.039834 ignition[1068]: INFO : Ignition finished successfully
May 14 23:49:33.032589 systemd[1]: Finished ignition-files.service - Ignition (files).
May 14 23:49:33.080659 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
May 14 23:49:33.099634 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
May 14 23:49:33.124780 systemd[1]: ignition-quench.service: Deactivated successfully.
May 14 23:49:33.205329 initrd-setup-root-after-ignition[1099]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
May 14 23:49:33.124879 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
May 14 23:49:33.231924 initrd-setup-root-after-ignition[1095]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory May 14 23:49:33.231924 initrd-setup-root-after-ignition[1095]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory May 14 23:49:33.158233 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. May 14 23:49:33.170412 systemd[1]: Reached target ignition-complete.target - Ignition Complete. May 14 23:49:33.205622 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... May 14 23:49:33.275454 systemd[1]: initrd-parse-etc.service: Deactivated successfully. May 14 23:49:33.275612 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. May 14 23:49:33.288676 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. May 14 23:49:33.309796 systemd[1]: Reached target initrd.target - Initrd Default Target. May 14 23:49:33.334472 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. May 14 23:49:33.357560 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... May 14 23:49:33.385030 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. May 14 23:49:33.403587 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... May 14 23:49:33.428918 systemd[1]: initrd-cleanup.service: Deactivated successfully. May 14 23:49:33.429052 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. May 14 23:49:33.445556 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. May 14 23:49:33.460201 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. May 14 23:49:33.475066 systemd[1]: Stopped target timers.target - Timer Units. May 14 23:49:33.488397 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. May 14 23:49:33.488483 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. May 14 23:49:33.509214 systemd[1]: Stopped target initrd.target - Initrd Default Target. May 14 23:49:33.517361 systemd[1]: Stopped target basic.target - Basic System. May 14 23:49:33.530514 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. May 14 23:49:33.543641 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. May 14 23:49:33.556906 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. May 14 23:49:33.571647 systemd[1]: Stopped target remote-fs.target - Remote File Systems. May 14 23:49:33.588603 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. May 14 23:49:33.605350 systemd[1]: Stopped target sysinit.target - System Initialization. May 14 23:49:33.621121 systemd[1]: Stopped target local-fs.target - Local File Systems. May 14 23:49:33.635116 systemd[1]: Stopped target swap.target - Swaps. May 14 23:49:33.646442 systemd[1]: dracut-pre-mount.service: Deactivated successfully. May 14 23:49:33.646543 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. May 14 23:49:33.665044 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. May 14 23:49:33.672541 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 14 23:49:33.692500 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. 
May 14 23:49:33.692610 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 14 23:49:33.710792 systemd[1]: dracut-initqueue.service: Deactivated successfully. May 14 23:49:33.710912 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. May 14 23:49:33.737003 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. May 14 23:49:33.737138 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. May 14 23:49:33.754313 systemd[1]: ignition-files.service: Deactivated successfully. May 14 23:49:33.754447 systemd[1]: Stopped ignition-files.service - Ignition (files). May 14 23:49:33.771933 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. May 14 23:49:33.772016 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. May 14 23:49:33.860089 ignition[1121]: INFO : Ignition 2.20.0 May 14 23:49:33.860089 ignition[1121]: INFO : Stage: umount May 14 23:49:33.860089 ignition[1121]: INFO : no configs at "/usr/lib/ignition/base.d" May 14 23:49:33.860089 ignition[1121]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" May 14 23:49:33.860089 ignition[1121]: INFO : umount: umount passed May 14 23:49:33.860089 ignition[1121]: INFO : Ignition finished successfully May 14 23:49:33.813574 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... May 14 23:49:33.834945 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... May 14 23:49:33.854431 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. May 14 23:49:33.854532 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. May 14 23:49:33.874502 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. May 14 23:49:33.874584 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. May 14 23:49:33.893748 systemd[1]: sysroot-boot.mount: Deactivated successfully. May 14 23:49:33.894457 systemd[1]: ignition-mount.service: Deactivated successfully. May 14 23:49:33.894581 systemd[1]: Stopped ignition-mount.service - Ignition (mount). May 14 23:49:33.909139 systemd[1]: ignition-disks.service: Deactivated successfully. May 14 23:49:33.909268 systemd[1]: Stopped ignition-disks.service - Ignition (disks). May 14 23:49:33.920316 systemd[1]: ignition-kargs.service: Deactivated successfully. May 14 23:49:33.920442 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). May 14 23:49:33.932717 systemd[1]: ignition-fetch.service: Deactivated successfully. May 14 23:49:33.932781 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). May 14 23:49:33.945484 systemd[1]: Stopped target network.target - Network. May 14 23:49:33.956437 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. May 14 23:49:33.956546 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). May 14 23:49:33.970404 systemd[1]: Stopped target paths.target - Path Units. May 14 23:49:33.983125 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. May 14 23:49:33.986378 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 14 23:49:34.007415 systemd[1]: Stopped target slices.target - Slice Units. May 14 23:49:34.022623 systemd[1]: Stopped target sockets.target - Socket Units. May 14 23:49:34.035985 systemd[1]: iscsid.socket: Deactivated successfully. 
May 14 23:49:34.036048 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. May 14 23:49:34.048821 systemd[1]: iscsiuio.socket: Deactivated successfully. May 14 23:49:34.048864 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. May 14 23:49:34.062815 systemd[1]: ignition-setup.service: Deactivated successfully. May 14 23:49:34.062893 systemd[1]: Stopped ignition-setup.service - Ignition (setup). May 14 23:49:34.075668 systemd[1]: ignition-setup-pre.service: Deactivated successfully. May 14 23:49:34.075725 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. May 14 23:49:34.089056 systemd[1]: Stopping systemd-networkd.service - Network Configuration... May 14 23:49:34.103001 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... May 14 23:49:34.129086 systemd[1]: systemd-resolved.service: Deactivated successfully. May 14 23:49:34.129267 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. May 14 23:49:34.145077 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. May 14 23:49:34.145536 systemd[1]: systemd-networkd.service: Deactivated successfully. May 14 23:49:34.145676 systemd[1]: Stopped systemd-networkd.service - Network Configuration. May 14 23:49:34.172396 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. May 14 23:49:34.173680 systemd[1]: systemd-networkd.socket: Deactivated successfully. May 14 23:49:34.442435 kernel: hv_netvsc 002248bb-8dbf-0022-48bb-8dbf002248bb eth0: Data path switched from VF: enP20090s1 May 14 23:49:34.173766 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. May 14 23:49:34.214572 systemd[1]: Stopping network-cleanup.service - Network Cleanup... May 14 23:49:34.226928 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. May 14 23:49:34.227020 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. May 14 23:49:34.241346 systemd[1]: systemd-sysctl.service: Deactivated successfully. May 14 23:49:34.241443 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. May 14 23:49:34.260410 systemd[1]: systemd-modules-load.service: Deactivated successfully. May 14 23:49:34.260481 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. May 14 23:49:34.268004 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. May 14 23:49:34.268102 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. May 14 23:49:34.288365 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... May 14 23:49:34.301585 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. May 14 23:49:34.301672 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. May 14 23:49:34.348355 systemd[1]: systemd-udevd.service: Deactivated successfully. May 14 23:49:34.348550 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. May 14 23:49:34.363986 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. May 14 23:49:34.364041 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. May 14 23:49:34.377632 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. May 14 23:49:34.377673 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. 
May 14 23:49:34.392612 systemd[1]: dracut-pre-udev.service: Deactivated successfully. May 14 23:49:34.392695 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. May 14 23:49:34.422259 systemd[1]: dracut-cmdline.service: Deactivated successfully. May 14 23:49:34.422336 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. May 14 23:49:34.442524 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. May 14 23:49:34.442604 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 14 23:49:34.481614 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... May 14 23:49:34.506401 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. May 14 23:49:34.506494 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 14 23:49:34.526605 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. May 14 23:49:34.526667 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. May 14 23:49:34.535053 systemd[1]: kmod-static-nodes.service: Deactivated successfully. May 14 23:49:34.535113 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. May 14 23:49:34.549920 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 14 23:49:34.549980 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. May 14 23:49:34.571200 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. May 14 23:49:34.571281 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. May 14 23:49:34.571694 systemd[1]: network-cleanup.service: Deactivated successfully. May 14 23:49:34.571814 systemd[1]: Stopped network-cleanup.service - Network Cleanup. May 14 23:49:34.585012 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. May 14 23:49:34.585116 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. May 14 23:49:36.561601 systemd[1]: sysroot-boot.service: Deactivated successfully. May 14 23:49:36.561721 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. May 14 23:49:36.568072 systemd[1]: Reached target initrd-switch-root.target - Switch Root. May 14 23:49:36.580632 systemd[1]: initrd-setup-root.service: Deactivated successfully. May 14 23:49:36.580730 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. May 14 23:49:36.614604 systemd[1]: Starting initrd-switch-root.service - Switch Root... May 14 23:49:36.629013 systemd[1]: Switching root. May 14 23:49:36.717551 systemd-journald[217]: Journal stopped May 14 23:49:40.995737 systemd-journald[217]: Received SIGTERM from PID 1 (systemd). 
May 14 23:49:40.995781 kernel: SELinux: policy capability network_peer_controls=1 May 14 23:49:40.995792 kernel: SELinux: policy capability open_perms=1 May 14 23:49:40.995805 kernel: SELinux: policy capability extended_socket_class=1 May 14 23:49:40.995814 kernel: SELinux: policy capability always_check_network=0 May 14 23:49:40.995821 kernel: SELinux: policy capability cgroup_seclabel=1 May 14 23:49:40.995830 kernel: SELinux: policy capability nnp_nosuid_transition=1 May 14 23:49:40.995839 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 May 14 23:49:40.995846 kernel: SELinux: policy capability ioctl_skip_cloexec=0 May 14 23:49:40.995855 kernel: audit: type=1403 audit(1747266577.736:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 May 14 23:49:40.995866 systemd[1]: Successfully loaded SELinux policy in 115.236ms. May 14 23:49:40.995876 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 12.011ms. May 14 23:49:40.995886 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) May 14 23:49:40.995895 systemd[1]: Detected virtualization microsoft. May 14 23:49:40.995905 systemd[1]: Detected architecture arm64. May 14 23:49:40.995916 systemd[1]: Detected first boot. May 14 23:49:40.995925 systemd[1]: Hostname set to <ci-4230.1.1-n-bf7705109b>. May 14 23:49:40.995934 systemd[1]: Initializing machine ID from random generator. May 14 23:49:40.995943 zram_generator::config[1165]: No configuration found. May 14 23:49:40.995952 kernel: NET: Registered PF_VSOCK protocol family May 14 23:49:40.995963 systemd[1]: Populated /etc with preset unit settings. May 14 23:49:40.995975 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. May 14 23:49:40.995984 systemd[1]: initrd-switch-root.service: Deactivated successfully. May 14 23:49:40.995993 systemd[1]: Stopped initrd-switch-root.service - Switch Root. May 14 23:49:40.996002 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. May 14 23:49:40.996012 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. May 14 23:49:40.996021 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. May 14 23:49:40.996031 systemd[1]: Created slice system-getty.slice - Slice /system/getty. May 14 23:49:40.996040 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. May 14 23:49:40.996051 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. May 14 23:49:40.996060 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. May 14 23:49:40.996069 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. May 14 23:49:40.996079 systemd[1]: Created slice user.slice - User and Session Slice. May 14 23:49:40.996088 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 14 23:49:40.996097 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 14 23:49:40.996107 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. May 14 23:49:40.996116 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
May 14 23:49:40.996127 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. May 14 23:49:40.996137 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... May 14 23:49:40.996146 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... May 14 23:49:40.996158 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 14 23:49:40.996169 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. May 14 23:49:40.996178 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. May 14 23:49:40.996188 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. May 14 23:49:40.996200 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. May 14 23:49:40.996209 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. May 14 23:49:40.996219 systemd[1]: Reached target remote-fs.target - Remote File Systems. May 14 23:49:40.996228 systemd[1]: Reached target slices.target - Slice Units. May 14 23:49:40.996238 systemd[1]: Reached target swap.target - Swaps. May 14 23:49:40.996247 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. May 14 23:49:40.996257 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. May 14 23:49:40.996266 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. May 14 23:49:40.996278 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. May 14 23:49:40.996287 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. May 14 23:49:40.996297 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. May 14 23:49:40.996307 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. May 14 23:49:40.996317 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... May 14 23:49:40.996328 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... May 14 23:49:40.998381 systemd[1]: Mounting media.mount - External Media Directory... May 14 23:49:40.998435 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... May 14 23:49:40.998447 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... May 14 23:49:40.998457 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... May 14 23:49:40.998469 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). May 14 23:49:40.998482 systemd[1]: Reached target machines.target - Containers. May 14 23:49:40.998492 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... May 14 23:49:40.998502 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 14 23:49:40.998519 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... May 14 23:49:40.998529 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... May 14 23:49:40.998539 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 14 23:49:40.998548 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... May 14 23:49:40.998558 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... 
May 14 23:49:40.998568 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... May 14 23:49:40.998578 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 14 23:49:40.998588 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). May 14 23:49:40.998600 systemd[1]: systemd-fsck-root.service: Deactivated successfully. May 14 23:49:40.998610 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. May 14 23:49:40.998620 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. May 14 23:49:40.998629 systemd[1]: Stopped systemd-fsck-usr.service. May 14 23:49:40.998640 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). May 14 23:49:40.998650 kernel: loop: module loaded May 14 23:49:40.998659 kernel: fuse: init (API version 7.39) May 14 23:49:40.998668 systemd[1]: Starting systemd-journald.service - Journal Service... May 14 23:49:40.998680 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... May 14 23:49:40.998689 kernel: ACPI: bus type drm_connector registered May 14 23:49:40.998700 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... May 14 23:49:40.998710 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... May 14 23:49:40.998763 systemd-journald[1262]: Collecting audit messages is disabled. May 14 23:49:40.998790 systemd-journald[1262]: Journal started May 14 23:49:40.998811 systemd-journald[1262]: Runtime Journal (/run/log/journal/bdb75fc04ef44cfda0ea00703bb3c604) is 8M, max 78.5M, 70.5M free. May 14 23:49:39.989214 systemd[1]: Queued start job for default target multi-user.target. May 14 23:49:40.003856 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. May 14 23:49:40.004387 systemd[1]: systemd-journald.service: Deactivated successfully. May 14 23:49:40.004879 systemd[1]: systemd-journald.service: Consumed 3.954s CPU time. May 14 23:49:41.017177 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... May 14 23:49:41.049928 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... May 14 23:49:41.060371 systemd[1]: verity-setup.service: Deactivated successfully. May 14 23:49:41.060459 systemd[1]: Stopped verity-setup.service. May 14 23:49:41.083369 systemd[1]: Started systemd-journald.service - Journal Service. May 14 23:49:41.081123 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. May 14 23:49:41.087553 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. May 14 23:49:41.094265 systemd[1]: Mounted media.mount - External Media Directory. May 14 23:49:41.100158 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. May 14 23:49:41.107568 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. May 14 23:49:41.116037 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. May 14 23:49:41.123511 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. May 14 23:49:41.132167 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. May 14 23:49:41.141044 systemd[1]: modprobe@configfs.service: Deactivated successfully. 
May 14 23:49:41.141246 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. May 14 23:49:41.148931 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 14 23:49:41.149118 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 14 23:49:41.156515 systemd[1]: modprobe@drm.service: Deactivated successfully. May 14 23:49:41.156699 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. May 14 23:49:41.163549 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 14 23:49:41.163732 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 14 23:49:41.171916 systemd[1]: modprobe@fuse.service: Deactivated successfully. May 14 23:49:41.172110 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. May 14 23:49:41.179244 systemd[1]: modprobe@loop.service: Deactivated successfully. May 14 23:49:41.179453 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 14 23:49:41.186803 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. May 14 23:49:41.194310 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. May 14 23:49:41.202313 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. May 14 23:49:41.210182 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. May 14 23:49:41.218425 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. May 14 23:49:41.238086 systemd[1]: Reached target network-pre.target - Preparation for Network. May 14 23:49:41.255464 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... May 14 23:49:41.263615 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... May 14 23:49:41.271527 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). May 14 23:49:41.271586 systemd[1]: Reached target local-fs.target - Local File Systems. May 14 23:49:41.279800 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. May 14 23:49:41.289865 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... May 14 23:49:41.298633 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... May 14 23:49:41.304880 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 14 23:49:41.321336 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... May 14 23:49:41.329077 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... May 14 23:49:41.335787 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 14 23:49:41.337074 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... May 14 23:49:41.343578 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. May 14 23:49:41.345417 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... May 14 23:49:41.355702 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... 
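The modprobe@configfs, dm_mod, drm, efi_pstore, fuse and loop services above are all instances of systemd's modprobe@.service template, one module load per instance name. Trimmed and quoted from memory (not from this host), the template is approximately:

    [Unit]
    Description=Load Kernel Module %i
    DefaultDependencies=no
    Before=sysinit.target
    ConditionCapability=CAP_SYS_MODULE

    [Service]
    Type=oneshot
    ExecStart=-/sbin/modprobe -abq %I

The leading '-' on ExecStart tells systemd to ignore a non-zero exit, so a missing module degrades to a no-op instead of failing early boot.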
May 14 23:49:41.367740 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... May 14 23:49:41.393692 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... May 14 23:49:41.409654 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. May 14 23:49:41.434555 systemd-journald[1262]: Time spent on flushing to /var/log/journal/bdb75fc04ef44cfda0ea00703bb3c604 is 89.514ms for 920 entries. May 14 23:49:41.434555 systemd-journald[1262]: System Journal (/var/log/journal/bdb75fc04ef44cfda0ea00703bb3c604) is 11.8M, max 2.6G, 2.6G free. May 14 23:49:41.575228 kernel: loop0: detected capacity change from 0 to 201592 May 14 23:49:41.575274 systemd-journald[1262]: Received client request to flush runtime journal. May 14 23:49:41.575303 systemd-journald[1262]: /var/log/journal/bdb75fc04ef44cfda0ea00703bb3c604/system.journal: Realtime clock jumped backwards relative to last journal entry, rotating. May 14 23:49:41.575325 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher May 14 23:49:41.575364 systemd-journald[1262]: Rotating system journal. May 14 23:49:41.575421 kernel: loop1: detected capacity change from 0 to 113512 May 14 23:49:41.423753 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. May 14 23:49:41.434807 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. May 14 23:49:41.453322 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. May 14 23:49:41.469166 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. May 14 23:49:41.506725 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... May 14 23:49:41.514972 udevadm[1308]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. May 14 23:49:41.565549 systemd-tmpfiles[1307]: ACLs are not supported, ignoring. May 14 23:49:41.565561 systemd-tmpfiles[1307]: ACLs are not supported, ignoring. May 14 23:49:41.567088 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. May 14 23:49:41.576102 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. May 14 23:49:41.584667 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. May 14 23:49:41.608557 systemd[1]: Starting systemd-sysusers.service - Create System Users... May 14 23:49:41.636588 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. May 14 23:49:41.638151 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. May 14 23:49:41.920177 systemd[1]: Finished systemd-sysusers.service - Create System Users. May 14 23:49:41.933627 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... May 14 23:49:41.955986 systemd-tmpfiles[1327]: ACLs are not supported, ignoring. May 14 23:49:41.956006 systemd-tmpfiles[1327]: ACLs are not supported, ignoring. May 14 23:49:41.960754 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 14 23:49:41.979365 kernel: loop2: detected capacity change from 0 to 28720 May 14 23:49:42.374488 kernel: loop3: detected capacity change from 0 to 123192 May 14 23:49:42.612459 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. 
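The journal flush above moves the runtime journal in /run/log/journal (8M used of a 78.5M cap here, sized relative to RAM) into the persistent system journal under /var/log/journal once the root filesystem is writable; the "Realtime clock jumped backwards" rotation is journald preserving entry ordering across the clock step. Both caps are tunable in journald.conf, for example (illustrative values, not this host's settings):

    [Journal]
    Storage=persistent
    RuntimeMaxUse=64M
    SystemMaxUse=2G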
May 14 23:49:42.624548 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... May 14 23:49:42.660366 systemd-udevd[1333]: Using default interface naming scheme 'v255'. May 14 23:49:42.732388 kernel: loop4: detected capacity change from 0 to 201592 May 14 23:49:42.745363 kernel: loop5: detected capacity change from 0 to 113512 May 14 23:49:42.755391 kernel: loop6: detected capacity change from 0 to 28720 May 14 23:49:42.765390 kernel: loop7: detected capacity change from 0 to 123192 May 14 23:49:42.770469 (sd-merge)[1335]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-azure'. May 14 23:49:42.772298 (sd-merge)[1335]: Merged extensions into '/usr'. May 14 23:49:42.776087 systemd[1]: Reload requested from client PID 1305 ('systemd-sysext') (unit systemd-sysext.service)... May 14 23:49:42.776282 systemd[1]: Reloading... May 14 23:49:42.941811 zram_generator::config[1385]: No configuration found. May 14 23:49:43.082711 kernel: mousedev: PS/2 mouse device common for all mice May 14 23:49:43.155462 kernel: hv_vmbus: registering driver hv_balloon May 14 23:49:43.155569 kernel: hv_vmbus: registering driver hyperv_fb May 14 23:49:43.155588 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0 May 14 23:49:43.164375 kernel: hyperv_fb: Synthvid Version major 3, minor 5 May 14 23:49:43.164511 kernel: hv_balloon: Memory hot add disabled on ARM64 May 14 23:49:43.179493 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608 May 14 23:49:43.191460 kernel: Console: switching to colour dummy device 80x25 May 14 23:49:43.186260 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 14 23:49:43.200787 kernel: Console: switching to colour frame buffer device 128x48 May 14 23:49:43.305555 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 44 scanned by (udev-worker) (1349) May 14 23:49:43.308113 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. May 14 23:49:43.308326 systemd[1]: Reloading finished in 531 ms. May 14 23:49:43.328637 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. May 14 23:49:43.343800 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. May 14 23:49:43.380821 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. May 14 23:49:43.399741 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM. May 14 23:49:43.416883 systemd[1]: Starting ensure-sysext.service... May 14 23:49:43.424705 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... May 14 23:49:43.434621 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... May 14 23:49:43.445757 systemd[1]: Starting systemd-networkd.service - Network Configuration... May 14 23:49:43.454634 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... May 14 23:49:43.464653 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 14 23:49:43.482690 lvm[1518]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. 
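(sd-merge) above is systemd-sysext attaching the four extension images as loop devices (loop4-loop7) and overlaying their /usr and /opt trees, after which systemd reloads to pick up the merged units. An image is only merged if it carries an extension-release file whose fields match the host; for a Flatcar image that is approximately (ID=_any is also accepted):

    # usr/lib/extension-release.d/extension-release.kubernetes, inside the image
    ID=flatcar
    SYSEXT_LEVEL=1.0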
May 14 23:49:43.483278 systemd[1]: Reload requested from client PID 1517 ('systemctl') (unit ensure-sysext.service)... May 14 23:49:43.483298 systemd[1]: Reloading... May 14 23:49:43.508490 systemd-tmpfiles[1521]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. May 14 23:49:43.509469 systemd-tmpfiles[1521]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. May 14 23:49:43.511906 systemd-tmpfiles[1521]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. May 14 23:49:43.512142 systemd-tmpfiles[1521]: ACLs are not supported, ignoring. May 14 23:49:43.512184 systemd-tmpfiles[1521]: ACLs are not supported, ignoring. May 14 23:49:43.576419 zram_generator::config[1563]: No configuration found. May 14 23:49:43.595697 systemd-tmpfiles[1521]: Detected autofs mount point /boot during canonicalization of boot. May 14 23:49:43.595709 systemd-tmpfiles[1521]: Skipping /boot May 14 23:49:43.605922 systemd-tmpfiles[1521]: Detected autofs mount point /boot during canonicalization of boot. May 14 23:49:43.606082 systemd-tmpfiles[1521]: Skipping /boot May 14 23:49:43.703389 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 14 23:49:43.803028 systemd[1]: Reloading finished in 319 ms. May 14 23:49:43.829121 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. May 14 23:49:43.838251 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. May 14 23:49:43.847267 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. May 14 23:49:43.867266 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. May 14 23:49:43.882692 systemd[1]: Starting audit-rules.service - Load Audit Rules... May 14 23:49:43.909660 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... May 14 23:49:43.920511 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... May 14 23:49:43.934402 lvm[1621]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. May 14 23:49:43.936995 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... May 14 23:49:43.957476 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... May 14 23:49:43.971446 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... May 14 23:49:43.994512 systemd[1]: Starting systemd-userdbd.service - User Database Manager... May 14 23:49:44.010406 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 14 23:49:44.020170 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. May 14 23:49:44.029295 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. May 14 23:49:44.051504 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. May 14 23:49:44.069967 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 14 23:49:44.082481 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 14 23:49:44.094795 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... 
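The "Duplicate line for path ..., ignoring" warnings above mean two tmpfiles.d fragments declare the same path; systemd-tmpfiles keeps the first definition it parses and skips the rest. Fragments are one-line records in the standard column format, e.g. (illustrative lines, not the exact shipped ones):

    # Type  Path               Mode  User  Group  Age  Argument
    d       /root              0700  root  root   -    -
    d       /var/lib/systemd   0755  root  root   -    -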
May 14 23:49:44.111723 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 14 23:49:44.124998 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 14 23:49:44.125177 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). May 14 23:49:44.130295 systemd[1]: Started systemd-userdbd.service - User Database Manager. May 14 23:49:44.145832 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 14 23:49:44.147508 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 14 23:49:44.153236 augenrules[1658]: No rules May 14 23:49:44.163250 systemd[1]: audit-rules.service: Deactivated successfully. May 14 23:49:44.163595 systemd[1]: Finished audit-rules.service - Load Audit Rules. May 14 23:49:44.172321 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 14 23:49:44.172591 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 14 23:49:44.182976 systemd[1]: modprobe@loop.service: Deactivated successfully. May 14 23:49:44.183179 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 14 23:49:44.195504 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 14 23:49:44.195751 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. May 14 23:49:44.209785 systemd[1]: Starting audit-rules.service - Load Audit Rules... May 14 23:49:44.221181 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 14 23:49:44.227700 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 14 23:49:44.239458 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... May 14 23:49:44.253451 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 14 23:49:44.262159 augenrules[1667]: /sbin/augenrules: No change May 14 23:49:44.266932 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 14 23:49:44.275433 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 14 23:49:44.275603 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). May 14 23:49:44.275762 systemd[1]: Reached target time-set.target - System Time Set. May 14 23:49:44.284991 augenrules[1688]: No rules May 14 23:49:44.285061 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 14 23:49:44.285331 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 14 23:49:44.294701 systemd[1]: audit-rules.service: Deactivated successfully. May 14 23:49:44.294970 systemd[1]: Finished audit-rules.service - Load Audit Rules. May 14 23:49:44.300803 systemd-resolved[1623]: Positive Trust Anchors: May 14 23:49:44.301203 systemd-resolved[1623]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d May 14 23:49:44.301243 systemd-resolved[1623]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test May 14 23:49:44.303035 systemd[1]: modprobe@drm.service: Deactivated successfully. May 14 23:49:44.303237 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. May 14 23:49:44.312958 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 14 23:49:44.313409 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 14 23:49:44.320298 systemd-networkd[1520]: lo: Link UP May 14 23:49:44.320306 systemd-networkd[1520]: lo: Gained carrier May 14 23:49:44.322276 systemd[1]: modprobe@loop.service: Deactivated successfully. May 14 23:49:44.322812 systemd-networkd[1520]: Enumeration completed May 14 23:49:44.323168 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 14 23:49:44.323185 systemd-networkd[1520]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 14 23:49:44.323189 systemd-networkd[1520]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. May 14 23:49:44.331963 systemd[1]: Started systemd-networkd.service - Network Configuration. May 14 23:49:44.343289 systemd[1]: Finished ensure-sysext.service. May 14 23:49:44.352029 systemd-resolved[1623]: Using system hostname 'ci-4230.1.1-n-bf7705109b'. May 14 23:49:44.361591 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... May 14 23:49:44.371620 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... May 14 23:49:44.383246 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 14 23:49:44.383309 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. May 14 23:49:44.407373 kernel: mlx5_core 4e7a:00:02.0 enP20090s1: Link up May 14 23:49:44.456380 kernel: hv_netvsc 002248bb-8dbf-0022-48bb-8dbf002248bb eth0: Data path switched to VF: enP20090s1 May 14 23:49:44.458836 systemd-networkd[1520]: enP20090s1: Link UP May 14 23:49:44.458995 systemd-networkd[1520]: eth0: Link UP May 14 23:49:44.458999 systemd-networkd[1520]: eth0: Gained carrier May 14 23:49:44.459019 systemd-networkd[1520]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 14 23:49:44.460782 systemd[1]: Started systemd-resolved.service - Network Name Resolution. May 14 23:49:44.469435 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. May 14 23:49:44.470763 systemd-networkd[1520]: enP20090s1: Gained carrier May 14 23:49:44.479332 systemd[1]: Reached target network.target - Network. 
May 14 23:49:44.485450 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. May 14 23:49:44.496438 systemd-networkd[1520]: eth0: DHCPv4 address 10.200.20.40/24, gateway 10.200.20.1 acquired from 168.63.129.16 May 14 23:49:44.695262 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. May 14 23:49:44.703128 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). May 14 23:49:46.005463 systemd-networkd[1520]: enP20090s1: Gained IPv6LL May 14 23:49:46.069516 systemd-networkd[1520]: eth0: Gained IPv6LL May 14 23:49:46.072083 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. May 14 23:49:46.079920 systemd[1]: Reached target network-online.target - Network is Online. May 14 23:49:47.467374 ldconfig[1300]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. May 14 23:49:47.482000 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. May 14 23:49:47.494538 systemd[1]: Starting systemd-update-done.service - Update is Completed... May 14 23:49:47.504907 systemd[1]: Finished systemd-update-done.service - Update is Completed. May 14 23:49:47.512504 systemd[1]: Reached target sysinit.target - System Initialization. May 14 23:49:47.519131 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. May 14 23:49:47.526850 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. May 14 23:49:47.534773 systemd[1]: Started logrotate.timer - Daily rotation of log files. May 14 23:49:47.540853 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. May 14 23:49:47.548106 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. May 14 23:49:47.555321 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). May 14 23:49:47.555374 systemd[1]: Reached target paths.target - Path Units. May 14 23:49:47.560495 systemd[1]: Reached target timers.target - Timer Units. May 14 23:49:47.567586 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. May 14 23:49:47.575772 systemd[1]: Starting docker.socket - Docker Socket for the API... May 14 23:49:47.583745 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). May 14 23:49:47.591121 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). May 14 23:49:47.598376 systemd[1]: Reached target ssh-access.target - SSH Access Available. May 14 23:49:47.607982 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. May 14 23:49:47.614780 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. May 14 23:49:47.622394 systemd[1]: Listening on docker.socket - Docker Socket for the API. May 14 23:49:47.629426 systemd[1]: Reached target sockets.target - Socket Units. May 14 23:49:47.635312 systemd[1]: Reached target basic.target - Basic System. May 14 23:49:47.641399 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. 
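The "found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name" messages come from a catch-all .network unit that matches any interface and enables DHCP, which is how eth0 acquires 10.200.20.40/24 from Azure's wireserver/DHCP address 168.63.129.16 above. A sketch of such a unit (assumed shape; the shipped file may differ):

    [Match]
    Name=*

    [Network]
    DHCP=yes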
May 14 23:49:47.641427 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. May 14 23:49:47.655504 systemd[1]: Starting chronyd.service - NTP client/server... May 14 23:49:47.665054 systemd[1]: Starting containerd.service - containerd container runtime... May 14 23:49:47.677629 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... May 14 23:49:47.693579 systemd[1]: Starting dbus.service - D-Bus System Message Bus... May 14 23:49:47.702964 (chronyd)[1710]: chronyd.service: Referenced but unset environment variable evaluates to an empty string: OPTIONS May 14 23:49:47.705541 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... May 14 23:49:47.713602 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... May 14 23:49:47.715597 jq[1717]: false May 14 23:49:47.721911 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). May 14 23:49:47.721959 systemd[1]: hv_fcopy_daemon.service - Hyper-V FCOPY daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_fcopy). May 14 23:49:47.730087 chronyd[1721]: chronyd version 4.6.1 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER -SIGND +ASYNCDNS +NTS +SECHASH +IPV6 -DEBUG) May 14 23:49:47.733034 KVP[1719]: KVP starting; pid is:1719 May 14 23:49:47.730873 systemd[1]: Started hv_kvp_daemon.service - Hyper-V KVP daemon. May 14 23:49:47.740901 KVP[1719]: KVP LIC Version: 3.1 May 14 23:49:47.742605 kernel: hv_utils: KVP IC version 4.0 May 14 23:49:47.742475 systemd[1]: hv_vss_daemon.service - Hyper-V VSS daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_vss). May 14 23:49:47.744568 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 14 23:49:47.754535 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... May 14 23:49:47.762566 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... May 14 23:49:47.771407 chronyd[1721]: Timezone right/UTC failed leap second check, ignoring May 14 23:49:47.772397 chronyd[1721]: Loaded seccomp filter (level 2) May 14 23:49:47.773553 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... May 14 23:49:47.784567 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... May 14 23:49:47.789504 extend-filesystems[1718]: Found loop4 May 14 23:49:47.802908 extend-filesystems[1718]: Found loop5 May 14 23:49:47.802908 extend-filesystems[1718]: Found loop6 May 14 23:49:47.802908 extend-filesystems[1718]: Found loop7 May 14 23:49:47.802908 extend-filesystems[1718]: Found sda May 14 23:49:47.802908 extend-filesystems[1718]: Found sda1 May 14 23:49:47.802908 extend-filesystems[1718]: Found sda2 May 14 23:49:47.802908 extend-filesystems[1718]: Found sda3 May 14 23:49:47.802908 extend-filesystems[1718]: Found usr May 14 23:49:47.802908 extend-filesystems[1718]: Found sda4 May 14 23:49:47.802908 extend-filesystems[1718]: Found sda6 May 14 23:49:47.802908 extend-filesystems[1718]: Found sda7 May 14 23:49:47.802908 extend-filesystems[1718]: Found sda9 May 14 23:49:47.802908 extend-filesystems[1718]: Checking size of /dev/sda9 May 14 23:49:47.882581 dbus-daemon[1713]: [system] SELinux support is enabled May 14 23:49:47.804631 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... 
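chronyd's "Timezone right/UTC failed leap second check, ignoring" line is emitted when the config carries a leapsectz directive but the right/UTC tzdata cannot be validated, in which case chronyd falls back to ordinary leap-second handling. The relevant directives, as a sketch rather than the shipped config:

    leapsectz right/UTC
    makestep 1.0 3
    rtcsync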
May 14 23:49:47.962253 extend-filesystems[1718]: Old size kept for /dev/sda9 May 14 23:49:47.962253 extend-filesystems[1718]: Found sr0 May 14 23:49:47.832943 systemd[1]: Starting systemd-logind.service - User Login Management... May 14 23:49:48.034331 coreos-metadata[1712]: May 14 23:49:48.007 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 May 14 23:49:48.034331 coreos-metadata[1712]: May 14 23:49:48.023 INFO Fetch successful May 14 23:49:48.034331 coreos-metadata[1712]: May 14 23:49:48.023 INFO Fetching http://168.63.129.16/machine/?comp=goalstate: Attempt #1 May 14 23:49:48.034331 coreos-metadata[1712]: May 14 23:49:48.030 INFO Fetch successful May 14 23:49:48.034331 coreos-metadata[1712]: May 14 23:49:48.030 INFO Fetching http://168.63.129.16/machine/a05bcc81-744a-4c69-89d0-d6c3ad3f953b/09d3cf8e%2Df539%2D40c5%2D854b%2D7d1144d88b30.%5Fci%2D4230.1.1%2Dn%2Dbf7705109b?comp=config&type=sharedConfig&incarnation=1: Attempt #1 May 14 23:49:47.849861 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). May 14 23:49:47.854549 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. May 14 23:49:48.035015 update_engine[1744]: I20250514 23:49:47.966523 1744 main.cc:92] Flatcar Update Engine starting May 14 23:49:48.035015 update_engine[1744]: I20250514 23:49:47.978590 1744 update_check_scheduler.cc:74] Next update check in 4m53s May 14 23:49:47.859688 systemd[1]: Starting update-engine.service - Update Engine... May 14 23:49:48.047627 jq[1751]: true May 14 23:49:48.047906 coreos-metadata[1712]: May 14 23:49:48.035 INFO Fetch successful May 14 23:49:48.047906 coreos-metadata[1712]: May 14 23:49:48.035 INFO Fetching http://169.254.169.254/metadata/instance/compute/vmSize?api-version=2017-08-01&format=text: Attempt #1 May 14 23:49:47.893292 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... May 14 23:49:47.908042 systemd[1]: Started dbus.service - D-Bus System Message Bus. May 14 23:49:47.919649 systemd[1]: Started chronyd.service - NTP client/server. May 14 23:49:47.940038 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. May 14 23:49:47.940261 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. May 14 23:49:47.940602 systemd[1]: extend-filesystems.service: Deactivated successfully. May 14 23:49:47.940784 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. May 14 23:49:47.967585 systemd[1]: motdgen.service: Deactivated successfully. May 14 23:49:47.967799 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. May 14 23:49:47.979409 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. May 14 23:49:47.997051 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. May 14 23:49:47.997283 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. May 14 23:49:48.032757 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). May 14 23:49:48.032785 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. 
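coreos-metadata above walks both standard Azure endpoints: the wireserver at 168.63.129.16 (versions, goalstate, sharedConfig) and the instance metadata service at 169.254.169.254. IMDS requires the Metadata header on every request, so the vmSize fetch above corresponds to roughly:

    curl -H "Metadata: true" \
      "http://169.254.169.254/metadata/instance/compute/vmSize?api-version=2017-08-01&format=text"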
May 14 23:49:48.057631 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). May 14 23:49:48.057667 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. May 14 23:49:48.073068 coreos-metadata[1712]: May 14 23:49:48.067 INFO Fetch successful May 14 23:49:48.087863 systemd[1]: Started update-engine.service - Update Engine. May 14 23:49:48.094637 systemd-logind[1736]: New seat seat0. May 14 23:49:48.095688 (ntainerd)[1766]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR May 14 23:49:48.102304 jq[1765]: true May 14 23:49:48.099609 systemd[1]: Started locksmithd.service - Cluster reboot manager. May 14 23:49:48.105564 systemd-logind[1736]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard) May 14 23:49:48.117716 systemd[1]: Started systemd-logind.service - User Login Management. May 14 23:49:48.146818 tar[1764]: linux-arm64/LICENSE May 14 23:49:48.146818 tar[1764]: linux-arm64/helm May 14 23:49:48.200329 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. May 14 23:49:48.212223 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. May 14 23:49:48.267581 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 44 scanned by (udev-worker) (1768) May 14 23:49:48.373226 bash[1820]: Updated "/home/core/.ssh/authorized_keys" May 14 23:49:48.374615 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. May 14 23:49:48.389264 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. May 14 23:49:48.641807 locksmithd[1787]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" May 14 23:49:48.892834 sshd_keygen[1749]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 May 14 23:49:48.907781 tar[1764]: linux-arm64/README.md May 14 23:49:48.928130 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. May 14 23:49:48.944304 containerd[1766]: time="2025-05-14T23:49:48.944173380Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23 May 14 23:49:48.947980 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. May 14 23:49:48.966965 systemd[1]: Starting issuegen.service - Generate /run/issue... May 14 23:49:48.987638 systemd[1]: Starting waagent.service - Microsoft Azure Linux Agent... May 14 23:49:49.000192 systemd[1]: issuegen.service: Deactivated successfully. May 14 23:49:49.000465 systemd[1]: Finished issuegen.service - Generate /run/issue. May 14 23:49:49.013163 containerd[1766]: time="2025-05-14T23:49:49.013092180Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 May 14 23:49:49.018725 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... May 14 23:49:49.019792 containerd[1766]: time="2025-05-14T23:49:49.019739180Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." 
error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.89-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 May 14 23:49:49.020501 containerd[1766]: time="2025-05-14T23:49:49.020470940Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 May 14 23:49:49.024809 containerd[1766]: time="2025-05-14T23:49:49.021030780Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 May 14 23:49:49.024809 containerd[1766]: time="2025-05-14T23:49:49.021258300Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 May 14 23:49:49.024809 containerd[1766]: time="2025-05-14T23:49:49.021294340Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 May 14 23:49:49.024809 containerd[1766]: time="2025-05-14T23:49:49.021393980Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 May 14 23:49:49.024809 containerd[1766]: time="2025-05-14T23:49:49.021408060Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 May 14 23:49:49.024809 containerd[1766]: time="2025-05-14T23:49:49.021662460Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 May 14 23:49:49.024809 containerd[1766]: time="2025-05-14T23:49:49.021678180Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 May 14 23:49:49.024809 containerd[1766]: time="2025-05-14T23:49:49.021694300Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 May 14 23:49:49.024809 containerd[1766]: time="2025-05-14T23:49:49.021703300Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 May 14 23:49:49.024809 containerd[1766]: time="2025-05-14T23:49:49.021778420Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 May 14 23:49:49.024809 containerd[1766]: time="2025-05-14T23:49:49.021979380Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 May 14 23:49:49.025193 containerd[1766]: time="2025-05-14T23:49:49.022143300Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 May 14 23:49:49.025193 containerd[1766]: time="2025-05-14T23:49:49.022157380Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 May 14 23:49:49.025193 containerd[1766]: time="2025-05-14T23:49:49.022233540Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." 
type=io.containerd.metadata.v1 May 14 23:49:49.025193 containerd[1766]: time="2025-05-14T23:49:49.022276780Z" level=info msg="metadata content store policy set" policy=shared May 14 23:49:49.044386 containerd[1766]: time="2025-05-14T23:49:49.043466500Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 May 14 23:49:49.044386 containerd[1766]: time="2025-05-14T23:49:49.043547540Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 May 14 23:49:49.044386 containerd[1766]: time="2025-05-14T23:49:49.043565860Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 May 14 23:49:49.044386 containerd[1766]: time="2025-05-14T23:49:49.043585460Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 May 14 23:49:49.044386 containerd[1766]: time="2025-05-14T23:49:49.043603660Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 May 14 23:49:49.044386 containerd[1766]: time="2025-05-14T23:49:49.043814180Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 May 14 23:49:49.044386 containerd[1766]: time="2025-05-14T23:49:49.044069980Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 May 14 23:49:49.044386 containerd[1766]: time="2025-05-14T23:49:49.044178620Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 May 14 23:49:49.044386 containerd[1766]: time="2025-05-14T23:49:49.044202900Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 May 14 23:49:49.044386 containerd[1766]: time="2025-05-14T23:49:49.044220540Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 May 14 23:49:49.044386 containerd[1766]: time="2025-05-14T23:49:49.044235300Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 May 14 23:49:49.044386 containerd[1766]: time="2025-05-14T23:49:49.044248860Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 May 14 23:49:49.044386 containerd[1766]: time="2025-05-14T23:49:49.044261420Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 May 14 23:49:49.044386 containerd[1766]: time="2025-05-14T23:49:49.044276900Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 May 14 23:49:49.044762 containerd[1766]: time="2025-05-14T23:49:49.044294780Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 May 14 23:49:49.044762 containerd[1766]: time="2025-05-14T23:49:49.044309500Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 May 14 23:49:49.044762 containerd[1766]: time="2025-05-14T23:49:49.044322260Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 May 14 23:49:49.044762 containerd[1766]: time="2025-05-14T23:49:49.044336140Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." 
type=io.containerd.service.v1 May 14 23:49:49.044762 containerd[1766]: time="2025-05-14T23:49:49.044383660Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 May 14 23:49:49.044762 containerd[1766]: time="2025-05-14T23:49:49.044405540Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 May 14 23:49:49.044762 containerd[1766]: time="2025-05-14T23:49:49.044419780Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 May 14 23:49:49.044762 containerd[1766]: time="2025-05-14T23:49:49.044433460Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 May 14 23:49:49.044762 containerd[1766]: time="2025-05-14T23:49:49.044445940Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 May 14 23:49:49.044762 containerd[1766]: time="2025-05-14T23:49:49.044461100Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 May 14 23:49:49.044762 containerd[1766]: time="2025-05-14T23:49:49.044474340Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 May 14 23:49:49.044762 containerd[1766]: time="2025-05-14T23:49:49.044488980Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 May 14 23:49:49.044762 containerd[1766]: time="2025-05-14T23:49:49.044504380Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 May 14 23:49:49.044762 containerd[1766]: time="2025-05-14T23:49:49.044519700Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 May 14 23:49:49.045013 containerd[1766]: time="2025-05-14T23:49:49.044531900Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 May 14 23:49:49.045013 containerd[1766]: time="2025-05-14T23:49:49.044543540Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 May 14 23:49:49.045013 containerd[1766]: time="2025-05-14T23:49:49.044560340Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 May 14 23:49:49.045013 containerd[1766]: time="2025-05-14T23:49:49.044577580Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 May 14 23:49:49.045013 containerd[1766]: time="2025-05-14T23:49:49.044600860Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 May 14 23:49:49.045013 containerd[1766]: time="2025-05-14T23:49:49.044617340Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 May 14 23:49:49.045013 containerd[1766]: time="2025-05-14T23:49:49.044629060Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 May 14 23:49:49.045013 containerd[1766]: time="2025-05-14T23:49:49.044685460Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 May 14 23:49:49.045013 containerd[1766]: time="2025-05-14T23:49:49.044707980Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." 
error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 May 14 23:49:49.045013 containerd[1766]: time="2025-05-14T23:49:49.044719500Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 May 14 23:49:49.045013 containerd[1766]: time="2025-05-14T23:49:49.044731580Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 May 14 23:49:49.045013 containerd[1766]: time="2025-05-14T23:49:49.044745020Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 May 14 23:49:49.045013 containerd[1766]: time="2025-05-14T23:49:49.044757820Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 May 14 23:49:49.045013 containerd[1766]: time="2025-05-14T23:49:49.044768660Z" level=info msg="NRI interface is disabled by configuration." May 14 23:49:49.045248 containerd[1766]: time="2025-05-14T23:49:49.044779820Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 May 14 23:49:49.045269 containerd[1766]: time="2025-05-14T23:49:49.045084740Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false 
IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" May 14 23:49:49.045269 containerd[1766]: time="2025-05-14T23:49:49.045134620Z" level=info msg="Connect containerd service" May 14 23:49:49.045269 containerd[1766]: time="2025-05-14T23:49:49.045181620Z" level=info msg="using legacy CRI server" May 14 23:49:49.045269 containerd[1766]: time="2025-05-14T23:49:49.045188900Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" May 14 23:49:49.045510 containerd[1766]: time="2025-05-14T23:49:49.045327700Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" May 14 23:49:49.052648 containerd[1766]: time="2025-05-14T23:49:49.049679980Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 14 23:49:49.052648 containerd[1766]: time="2025-05-14T23:49:49.050027820Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc May 14 23:49:49.052648 containerd[1766]: time="2025-05-14T23:49:49.050066780Z" level=info msg=serving... address=/run/containerd/containerd.sock May 14 23:49:49.052648 containerd[1766]: time="2025-05-14T23:49:49.050117620Z" level=info msg="Start subscribing containerd event" May 14 23:49:49.052648 containerd[1766]: time="2025-05-14T23:49:49.050165180Z" level=info msg="Start recovering state" May 14 23:49:49.052648 containerd[1766]: time="2025-05-14T23:49:49.050240700Z" level=info msg="Start event monitor" May 14 23:49:49.052648 containerd[1766]: time="2025-05-14T23:49:49.050252780Z" level=info msg="Start snapshots syncer" May 14 23:49:49.052648 containerd[1766]: time="2025-05-14T23:49:49.050262820Z" level=info msg="Start cni network conf syncer for default" May 14 23:49:49.052648 containerd[1766]: time="2025-05-14T23:49:49.050272340Z" level=info msg="Start streaming server" May 14 23:49:49.052648 containerd[1766]: time="2025-05-14T23:49:49.050336020Z" level=info msg="containerd successfully booted in 0.108732s" May 14 23:49:49.045949 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. May 14 23:49:49.055278 systemd[1]: Started containerd.service - containerd container runtime. May 14 23:49:49.072713 systemd[1]: Started waagent.service - Microsoft Azure Linux Agent. May 14 23:49:49.094480 systemd[1]: Started getty@tty1.service - Getty on tty1. May 14 23:49:49.107860 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. May 14 23:49:49.114996 systemd[1]: Reached target getty.target - Login Prompts. May 14 23:49:49.169598 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 14 23:49:49.178028 systemd[1]: Reached target multi-user.target - Multi-User System. May 14 23:49:49.178374 (kubelet)[1904]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 14 23:49:49.189504 systemd[1]: Startup finished in 739ms (kernel) + 15.037s (initrd) + 11.566s (userspace) = 27.343s. 
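Annotation: containerd's CRI plugin logged "no network config found in /etc/cni/net.d: cni plugin not initialized" above; that is expected before a CNI plugin has dropped a config file in. A minimal sketch of the same presence check, assuming the conventional *.conf/*.conflist file naming:

```python
from pathlib import Path

CNI_CONF_DIR = Path("/etc/cni/net.d")

# The CRI plugin watches this directory; pod networking stays down until a
# network plugin (flannel, calico, ...) writes a config file here.
configs = sorted(CNI_CONF_DIR.glob("*.conf")) + sorted(CNI_CONF_DIR.glob("*.conflist"))
if not configs:
    print(f"no network config found in {CNI_CONF_DIR}: cni plugin not initialized")
else:
    for c in configs:
        print("found CNI config:", c)
```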
May 14 23:49:49.541468 login[1897]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0) May 14 23:49:49.545583 login[1898]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0) May 14 23:49:49.555921 systemd[1]: Created slice user-500.slice - User Slice of UID 500. May 14 23:49:49.562683 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... May 14 23:49:49.567735 systemd-logind[1736]: New session 2 of user core. May 14 23:49:49.579675 systemd-logind[1736]: New session 1 of user core. May 14 23:49:49.588838 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. May 14 23:49:49.595275 kubelet[1904]: E0514 23:49:49.595184 1904 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 14 23:49:49.599868 systemd[1]: Starting user@500.service - User Manager for UID 500... May 14 23:49:49.600135 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 14 23:49:49.600263 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 14 23:49:49.600636 systemd[1]: kubelet.service: Consumed 744ms CPU time, 249.9M memory peak. May 14 23:49:49.605865 (systemd)[1917]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) May 14 23:49:49.608828 systemd-logind[1736]: New session c1 of user core. May 14 23:49:49.809406 systemd[1917]: Queued start job for default target default.target. May 14 23:49:49.818427 systemd[1917]: Created slice app.slice - User Application Slice. May 14 23:49:49.818461 systemd[1917]: Reached target paths.target - Paths. May 14 23:49:49.818509 systemd[1917]: Reached target timers.target - Timers. May 14 23:49:49.820027 systemd[1917]: Starting dbus.socket - D-Bus User Message Bus Socket... May 14 23:49:49.834589 systemd[1917]: Listening on dbus.socket - D-Bus User Message Bus Socket. May 14 23:49:49.834769 systemd[1917]: Reached target sockets.target - Sockets. May 14 23:49:49.834837 systemd[1917]: Reached target basic.target - Basic System. May 14 23:49:49.834874 systemd[1917]: Reached target default.target - Main User Target. May 14 23:49:49.834918 systemd[1917]: Startup finished in 217ms. May 14 23:49:49.835126 systemd[1]: Started user@500.service - User Manager for UID 500. May 14 23:49:49.842550 systemd[1]: Started session-1.scope - Session 1 of User core. May 14 23:49:49.843427 systemd[1]: Started session-2.scope - Session 2 of User core. 
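Annotation: the kubelet exit above is the classic pre-join failure: /var/lib/kubelet/config.yaml is normally written by "kubeadm init" or "kubeadm join", so on a node that has not joined a cluster yet, kubelet exits with status 1 and systemd keeps restarting it (the restart counter climbs later in this log). A sketch reproducing the check, path taken from the error message:

```python
import os
import sys

KUBELET_CONFIG = "/var/lib/kubelet/config.yaml"

# kubeadm writes this file during init/join; until then kubelet fails fast,
# which is exactly the run.go:72 error repeated throughout this log.
if not os.path.exists(KUBELET_CONFIG):
    sys.exit(f"failed to load Kubelet config file {KUBELET_CONFIG}: "
             "no such file or directory")
print("kubelet config present, size:", os.path.getsize(KUBELET_CONFIG))
```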
May 14 23:49:51.131947 waagent[1895]: 2025-05-14T23:49:51.131834Z INFO Daemon Daemon Azure Linux Agent Version: 2.9.1.1 May 14 23:49:51.138326 waagent[1895]: 2025-05-14T23:49:51.138238Z INFO Daemon Daemon OS: flatcar 4230.1.1 May 14 23:49:51.143304 waagent[1895]: 2025-05-14T23:49:51.143232Z INFO Daemon Daemon Python: 3.11.11 May 14 23:49:51.148463 waagent[1895]: 2025-05-14T23:49:51.148229Z INFO Daemon Daemon Run daemon May 14 23:49:51.152712 waagent[1895]: 2025-05-14T23:49:51.152647Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='4230.1.1' May 14 23:49:51.162494 waagent[1895]: 2025-05-14T23:49:51.162413Z INFO Daemon Daemon Using waagent for provisioning May 14 23:49:51.168530 waagent[1895]: 2025-05-14T23:49:51.168473Z INFO Daemon Daemon Activate resource disk May 14 23:49:51.173985 waagent[1895]: 2025-05-14T23:49:51.173915Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb May 14 23:49:51.187937 waagent[1895]: 2025-05-14T23:49:51.187840Z INFO Daemon Daemon Found device: None May 14 23:49:51.193163 waagent[1895]: 2025-05-14T23:49:51.193088Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology May 14 23:49:51.202709 waagent[1895]: 2025-05-14T23:49:51.202641Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0 May 14 23:49:51.215801 waagent[1895]: 2025-05-14T23:49:51.215734Z INFO Daemon Daemon Clean protocol and wireserver endpoint May 14 23:49:51.222211 waagent[1895]: 2025-05-14T23:49:51.222127Z INFO Daemon Daemon Running default provisioning handler May 14 23:49:51.234877 waagent[1895]: 2025-05-14T23:49:51.234778Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 4. May 14 23:49:51.252016 waagent[1895]: 2025-05-14T23:49:51.251863Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' May 14 23:49:51.264313 waagent[1895]: 2025-05-14T23:49:51.264232Z INFO Daemon Daemon cloud-init is enabled: False May 14 23:49:51.270430 waagent[1895]: 2025-05-14T23:49:51.270352Z INFO Daemon Daemon Copying ovf-env.xml May 14 23:49:51.374152 waagent[1895]: 2025-05-14T23:49:51.374024Z INFO Daemon Daemon Successfully mounted dvd May 14 23:49:51.406782 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully. May 14 23:49:51.409132 waagent[1895]: 2025-05-14T23:49:51.409037Z INFO Daemon Daemon Detect protocol endpoint May 14 23:49:51.414841 waagent[1895]: 2025-05-14T23:49:51.414747Z INFO Daemon Daemon Clean protocol and wireserver endpoint May 14 23:49:51.421298 waagent[1895]: 2025-05-14T23:49:51.421210Z INFO Daemon Daemon WireServer endpoint is not found. 
Rerun dhcp handler May 14 23:49:51.432388 waagent[1895]: 2025-05-14T23:49:51.428668Z INFO Daemon Daemon Test for route to 168.63.129.16 May 14 23:49:51.434569 waagent[1895]: 2025-05-14T23:49:51.434499Z INFO Daemon Daemon Route to 168.63.129.16 exists May 14 23:49:51.440131 waagent[1895]: 2025-05-14T23:49:51.440056Z INFO Daemon Daemon Wire server endpoint:168.63.129.16 May 14 23:49:51.475802 waagent[1895]: 2025-05-14T23:49:51.475724Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05 May 14 23:49:51.483478 waagent[1895]: 2025-05-14T23:49:51.483440Z INFO Daemon Daemon Wire protocol version:2012-11-30 May 14 23:49:51.489686 waagent[1895]: 2025-05-14T23:49:51.489611Z INFO Daemon Daemon Server preferred version:2015-04-05 May 14 23:49:51.637942 waagent[1895]: 2025-05-14T23:49:51.637814Z INFO Daemon Daemon Initializing goal state during protocol detection May 14 23:49:51.645164 waagent[1895]: 2025-05-14T23:49:51.645079Z INFO Daemon Daemon Forcing an update of the goal state. May 14 23:49:51.663695 waagent[1895]: 2025-05-14T23:49:51.663580Z INFO Daemon Fetched a new incarnation for the WireServer goal state [incarnation 1] May 14 23:49:51.706235 waagent[1895]: 2025-05-14T23:49:51.706179Z INFO Daemon Daemon HostGAPlugin version: 1.0.8.164 May 14 23:49:51.712914 waagent[1895]: 2025-05-14T23:49:51.712853Z INFO Daemon May 14 23:49:51.716017 waagent[1895]: 2025-05-14T23:49:51.715947Z INFO Daemon Fetched new vmSettings [HostGAPlugin correlation ID: f3578646-6cdf-40f3-830f-eb80c8675f24 eTag: 7615677764565373818 source: Fabric] May 14 23:49:51.728240 waagent[1895]: 2025-05-14T23:49:51.728181Z INFO Daemon The vmSettings originated via Fabric; will ignore them. May 14 23:49:51.735909 waagent[1895]: 2025-05-14T23:49:51.735853Z INFO Daemon May 14 23:49:51.738985 waagent[1895]: 2025-05-14T23:49:51.738925Z INFO Daemon Fetching full goal state from the WireServer [incarnation 1] May 14 23:49:51.751051 waagent[1895]: 2025-05-14T23:49:51.751005Z INFO Daemon Daemon Downloading artifacts profile blob May 14 23:49:51.856859 waagent[1895]: 2025-05-14T23:49:51.856748Z INFO Daemon Downloaded certificate {'thumbprint': '921134207FE2954E0119EB0D6079A3C92A5E2683', 'hasPrivateKey': True} May 14 23:49:51.867498 waagent[1895]: 2025-05-14T23:49:51.867438Z INFO Daemon Downloaded certificate {'thumbprint': '24A6FA95971BDDD46B9F5F8A2D1089EAE22103AB', 'hasPrivateKey': False} May 14 23:49:51.877961 waagent[1895]: 2025-05-14T23:49:51.877898Z INFO Daemon Fetch goal state completed May 14 23:49:51.890703 waagent[1895]: 2025-05-14T23:49:51.890646Z INFO Daemon Daemon Starting provisioning May 14 23:49:51.896131 waagent[1895]: 2025-05-14T23:49:51.896060Z INFO Daemon Daemon Handle ovf-env.xml. May 14 23:49:51.901400 waagent[1895]: 2025-05-14T23:49:51.901288Z INFO Daemon Daemon Set hostname [ci-4230.1.1-n-bf7705109b] May 14 23:49:51.924382 waagent[1895]: 2025-05-14T23:49:51.923740Z INFO Daemon Daemon Publish hostname [ci-4230.1.1-n-bf7705109b] May 14 23:49:51.931671 waagent[1895]: 2025-05-14T23:49:51.931594Z INFO Daemon Daemon Examine /proc/net/route for primary interface May 14 23:49:51.938334 waagent[1895]: 2025-05-14T23:49:51.938262Z INFO Daemon Daemon Primary interface is [eth0] May 14 23:49:51.951429 systemd-networkd[1520]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 14 23:49:51.951437 systemd-networkd[1520]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
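Annotation: waagent negotiated wire protocol version 2012-11-30 above and then fetched the goal state from the WireServer. A minimal sketch of that GET, assuming the documented requirement that the negotiated version be sent in an "x-ms-version" header:

```python
import urllib.request

WIRESERVER = "168.63.129.16"  # endpoint confirmed in the log above

# The WireServer returns an XML goal state carrying the incarnation number
# and container/role configuration referenced in the waagent entries above.
req = urllib.request.Request(
    f"http://{WIRESERVER}/machine/?comp=goalstate",
    headers={"x-ms-version": "2012-11-30"},
)
with urllib.request.urlopen(req, timeout=10) as resp:
    xml = resp.read().decode()
print(xml[:200])
```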
May 14 23:49:51.951466 systemd-networkd[1520]: eth0: DHCP lease lost May 14 23:49:51.952611 waagent[1895]: 2025-05-14T23:49:51.952518Z INFO Daemon Daemon Create user account if not exists May 14 23:49:51.958544 waagent[1895]: 2025-05-14T23:49:51.958467Z INFO Daemon Daemon User core already exists, skip useradd May 14 23:49:51.964744 waagent[1895]: 2025-05-14T23:49:51.964675Z INFO Daemon Daemon Configure sudoer May 14 23:49:51.969931 waagent[1895]: 2025-05-14T23:49:51.969808Z INFO Daemon Daemon Configure sshd May 14 23:49:51.974747 waagent[1895]: 2025-05-14T23:49:51.974669Z INFO Daemon Daemon Added a configuration snippet disabling SSH password-based authentication methods. It also configures SSH client probing to keep connections alive. May 14 23:49:51.988220 waagent[1895]: 2025-05-14T23:49:51.988146Z INFO Daemon Daemon Deploy ssh public key. May 14 23:49:51.999432 systemd-networkd[1520]: eth0: DHCPv4 address 10.200.20.40/24, gateway 10.200.20.1 acquired from 168.63.129.16 May 14 23:49:53.133381 waagent[1895]: 2025-05-14T23:49:53.129334Z INFO Daemon Daemon Provisioning complete May 14 23:49:53.149310 waagent[1895]: 2025-05-14T23:49:53.149250Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping May 14 23:49:53.156027 waagent[1895]: 2025-05-14T23:49:53.155945Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions. May 14 23:49:53.166628 waagent[1895]: 2025-05-14T23:49:53.166553Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.9.1.1 is the most current agent May 14 23:49:53.320225 waagent[1974]: 2025-05-14T23:49:53.320124Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.9.1.1) May 14 23:49:53.320625 waagent[1974]: 2025-05-14T23:49:53.320352Z INFO ExtHandler ExtHandler OS: flatcar 4230.1.1 May 14 23:49:53.320625 waagent[1974]: 2025-05-14T23:49:53.320425Z INFO ExtHandler ExtHandler Python: 3.11.11 May 14 23:49:53.475447 waagent[1974]: 2025-05-14T23:49:53.473126Z INFO ExtHandler ExtHandler Distro: flatcar-4230.1.1; OSUtil: FlatcarUtil; AgentService: waagent; Python: 3.11.11; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.20.1; May 14 23:49:53.475447 waagent[1974]: 2025-05-14T23:49:53.473445Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file May 14 23:49:53.475447 waagent[1974]: 2025-05-14T23:49:53.473520Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 May 14 23:49:53.492873 waagent[1974]: 2025-05-14T23:49:53.492769Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1] May 14 23:49:53.509155 waagent[1974]: 2025-05-14T23:49:53.509097Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.164 May 14 23:49:53.509977 waagent[1974]: 2025-05-14T23:49:53.509927Z INFO ExtHandler May 14 23:49:53.510158 waagent[1974]: 2025-05-14T23:49:53.510124Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: ab0c5800-9016-479b-a503-5afb99dba5e1 eTag: 7615677764565373818 source: Fabric] May 14 23:49:53.510613 waagent[1974]: 2025-05-14T23:49:53.510568Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them. 
May 14 23:49:53.511325 waagent[1974]: 2025-05-14T23:49:53.511278Z INFO ExtHandler May 14 23:49:53.511537 waagent[1974]: 2025-05-14T23:49:53.511500Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1] May 14 23:49:53.516117 waagent[1974]: 2025-05-14T23:49:53.516064Z INFO ExtHandler ExtHandler Downloading artifacts profile blob May 14 23:49:53.674967 waagent[1974]: 2025-05-14T23:49:53.674865Z INFO ExtHandler Downloaded certificate {'thumbprint': '921134207FE2954E0119EB0D6079A3C92A5E2683', 'hasPrivateKey': True} May 14 23:49:53.675727 waagent[1974]: 2025-05-14T23:49:53.675647Z INFO ExtHandler Downloaded certificate {'thumbprint': '24A6FA95971BDDD46B9F5F8A2D1089EAE22103AB', 'hasPrivateKey': False} May 14 23:49:53.676479 waagent[1974]: 2025-05-14T23:49:53.676429Z INFO ExtHandler Fetch goal state completed May 14 23:49:53.706437 waagent[1974]: 2025-05-14T23:49:53.706330Z INFO ExtHandler ExtHandler WALinuxAgent-2.9.1.1 running as process 1974 May 14 23:49:53.706802 waagent[1974]: 2025-05-14T23:49:53.706760Z INFO ExtHandler ExtHandler ******** AutoUpdate.Enabled is set to False, not processing the operation ******** May 14 23:49:53.708780 waagent[1974]: 2025-05-14T23:49:53.708716Z INFO ExtHandler ExtHandler Cgroup monitoring is not supported on ['flatcar', '4230.1.1', '', 'Flatcar Container Linux by Kinvolk'] May 14 23:49:53.709387 waagent[1974]: 2025-05-14T23:49:53.709315Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules May 14 23:49:54.368658 waagent[1974]: 2025-05-14T23:49:54.368544Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service May 14 23:49:54.646089 waagent[1974]: 2025-05-14T23:49:54.645620Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup May 14 23:49:54.652442 waagent[1974]: 2025-05-14T23:49:54.652385Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now May 14 23:49:54.660220 systemd[1]: Reload requested from client PID 1992 ('systemctl') (unit waagent.service)... May 14 23:49:54.660243 systemd[1]: Reloading... May 14 23:49:54.781463 zram_generator::config[2040]: No configuration found. May 14 23:49:54.874395 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 14 23:49:54.980081 systemd[1]: Reloading finished in 319 ms. May 14 23:49:54.992869 waagent[1974]: 2025-05-14T23:49:54.992336Z INFO ExtHandler ExtHandler Executing systemctl daemon-reload for setting up waagent-network-setup.service May 14 23:49:55.000904 systemd[1]: Reload requested from client PID 2085 ('systemctl') (unit waagent.service)... May 14 23:49:55.000920 systemd[1]: Reloading... May 14 23:49:55.120389 zram_generator::config[2124]: No configuration found. May 14 23:49:55.258922 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 14 23:49:55.370300 systemd[1]: Reloading finished in 369 ms. 
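Annotation: the two "Reload requested ... Reloading finished" passes above are waagent enabling its waagent-network-setup.service and then running systemctl daemon-reload so systemd picks the unit up. A rough sketch of that sequence via subprocess, hedged as an approximation of what the agent actually executes:

```python
import subprocess

UNIT = "waagent-network-setup.service"

# Mirrors the logged sequence: check enablement, enable the unit, then
# daemon-reload (each reload appears as "Reloading..." in the journal above).
subprocess.run(["systemctl", "is-enabled", UNIT], check=False)
subprocess.run(["systemctl", "enable", UNIT], check=True)
subprocess.run(["systemctl", "daemon-reload"], check=True)
```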
May 14 23:49:55.386411 waagent[1974]: 2025-05-14T23:49:55.385106Z INFO ExtHandler ExtHandler Successfully added and enabled the waagent-network-setup.service May 14 23:49:55.386411 waagent[1974]: 2025-05-14T23:49:55.385417Z INFO ExtHandler ExtHandler Persistent firewall rules setup successfully May 14 23:49:55.656597 waagent[1974]: 2025-05-14T23:49:55.656492Z INFO ExtHandler ExtHandler DROP rule is not available which implies no firewall rules are set yet. Environment thread will set it up. May 14 23:49:55.657296 waagent[1974]: 2025-05-14T23:49:55.657202Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: configuration enabled [True], cgroups enabled [False], python supported: [True] May 14 23:49:55.658336 waagent[1974]: 2025-05-14T23:49:55.658220Z INFO ExtHandler ExtHandler Starting env monitor service. May 14 23:49:55.658946 waagent[1974]: 2025-05-14T23:49:55.658793Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. May 14 23:49:55.660141 waagent[1974]: 2025-05-14T23:49:55.659183Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file May 14 23:49:55.660141 waagent[1974]: 2025-05-14T23:49:55.659292Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 May 14 23:49:55.660141 waagent[1974]: 2025-05-14T23:49:55.659560Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. May 14 23:49:55.660141 waagent[1974]: 2025-05-14T23:49:55.659772Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: May 14 23:49:55.660141 waagent[1974]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT May 14 23:49:55.660141 waagent[1974]: eth0 00000000 0114C80A 0003 0 0 1024 00000000 0 0 0 May 14 23:49:55.660141 waagent[1974]: eth0 0014C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 May 14 23:49:55.660141 waagent[1974]: eth0 0114C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 May 14 23:49:55.660141 waagent[1974]: eth0 10813FA8 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 May 14 23:49:55.660141 waagent[1974]: eth0 FEA9FEA9 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 May 14 23:49:55.660763 waagent[1974]: 2025-05-14T23:49:55.660538Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread May 14 23:49:55.660831 waagent[1974]: 2025-05-14T23:49:55.660790Z INFO ExtHandler ExtHandler Start Extension Telemetry service. May 14 23:49:55.661406 waagent[1974]: 2025-05-14T23:49:55.661313Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True May 14 23:49:55.661604 waagent[1974]: 2025-05-14T23:49:55.661545Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status. 
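Annotation: the MonitorHandler routing-table dump above prints /proc/net/route raw, where IPv4 addresses are little-endian hex; the gateway field 0114C80A decodes to 10.200.20.1, matching the DHCP lease logged earlier. A small decoder sketch:

```python
import socket
import struct

def hex_ip(h: str) -> str:
    # /proc/net/route stores IPv4 addresses as little-endian hex,
    # so "0114C80A" decodes to 10.200.20.1 (the gateway in the dump above).
    return socket.inet_ntoa(struct.pack("<I", int(h, 16)))

with open("/proc/net/route") as f:
    next(f)  # skip the header row: Iface Destination Gateway ...
    for line in f:
        iface, dest, gw, *_ = line.split()
        print(f"{iface}: {hex_ip(dest)} via {hex_ip(gw)}")
```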
May 14 23:49:55.662303 waagent[1974]: 2025-05-14T23:49:55.662272Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread May 14 23:49:55.662390 waagent[1974]: 2025-05-14T23:49:55.662141Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file May 14 23:49:55.662541 waagent[1974]: 2025-05-14T23:49:55.662498Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 May 14 23:49:55.666287 waagent[1974]: 2025-05-14T23:49:55.666211Z INFO EnvHandler ExtHandler Configure routes May 14 23:49:55.669086 waagent[1974]: 2025-05-14T23:49:55.669004Z INFO EnvHandler ExtHandler Gateway:None May 14 23:49:55.669940 waagent[1974]: 2025-05-14T23:49:55.669875Z INFO ExtHandler ExtHandler May 14 23:49:55.670260 waagent[1974]: 2025-05-14T23:49:55.670194Z INFO EnvHandler ExtHandler Routes:None May 14 23:49:55.670861 waagent[1974]: 2025-05-14T23:49:55.670778Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: 2a42c0e3-a50f-46b2-be98-8ffcd4237169 correlation a5c809c8-c643-4bd7-a14b-ce8acb8f19ac created: 2025-05-14T23:48:34.506264Z] May 14 23:49:55.672607 waagent[1974]: 2025-05-14T23:49:55.672533Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything. May 14 23:49:55.673927 waagent[1974]: 2025-05-14T23:49:55.673843Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 3 ms] May 14 23:49:55.738924 waagent[1974]: 2025-05-14T23:49:55.738857Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.9.1.1 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: F23E4543-60AB-491A-BBB8-E0186BACAA48;DroppedPackets: 0;UpdateGSErrors: 0;AutoUpdate: 0] May 14 23:49:55.791717 waagent[1974]: 2025-05-14T23:49:55.791616Z INFO EnvHandler ExtHandler Successfully added Azure fabric firewall rules. 
Current Firewall rules: May 14 23:49:55.791717 waagent[1974]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) May 14 23:49:55.791717 waagent[1974]: pkts bytes target prot opt in out source destination May 14 23:49:55.791717 waagent[1974]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) May 14 23:49:55.791717 waagent[1974]: pkts bytes target prot opt in out source destination May 14 23:49:55.791717 waagent[1974]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) May 14 23:49:55.791717 waagent[1974]: pkts bytes target prot opt in out source destination May 14 23:49:55.791717 waagent[1974]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 May 14 23:49:55.791717 waagent[1974]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 May 14 23:49:55.791717 waagent[1974]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW May 14 23:49:55.795432 waagent[1974]: 2025-05-14T23:49:55.795315Z INFO EnvHandler ExtHandler Current Firewall rules: May 14 23:49:55.795432 waagent[1974]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) May 14 23:49:55.795432 waagent[1974]: pkts bytes target prot opt in out source destination May 14 23:49:55.795432 waagent[1974]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) May 14 23:49:55.795432 waagent[1974]: pkts bytes target prot opt in out source destination May 14 23:49:55.795432 waagent[1974]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) May 14 23:49:55.795432 waagent[1974]: pkts bytes target prot opt in out source destination May 14 23:49:55.795432 waagent[1974]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 May 14 23:49:55.795432 waagent[1974]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 May 14 23:49:55.795432 waagent[1974]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW May 14 23:49:55.795734 waagent[1974]: 2025-05-14T23:49:55.795701Z INFO EnvHandler ExtHandler Set block dev timeout: sda with timeout: 300 May 14 23:49:55.805374 waagent[1974]: 2025-05-14T23:49:55.804838Z INFO MonitorHandler ExtHandler Network interfaces: May 14 23:49:55.805374 waagent[1974]: Executing ['ip', '-a', '-o', 'link']: May 14 23:49:55.805374 waagent[1974]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 May 14 23:49:55.805374 waagent[1974]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 00:22:48:bb:8d:bf brd ff:ff:ff:ff:ff:ff May 14 23:49:55.805374 waagent[1974]: 3: enP20090s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 00:22:48:bb:8d:bf brd ff:ff:ff:ff:ff:ff\ altname enP20090p0s2 May 14 23:49:55.805374 waagent[1974]: Executing ['ip', '-4', '-a', '-o', 'address']: May 14 23:49:55.805374 waagent[1974]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever May 14 23:49:55.805374 waagent[1974]: 2: eth0 inet 10.200.20.40/24 metric 1024 brd 10.200.20.255 scope global eth0\ valid_lft forever preferred_lft forever May 14 23:49:55.805374 waagent[1974]: Executing ['ip', '-6', '-a', '-o', 'address']: May 14 23:49:55.805374 waagent[1974]: 1: lo inet6 ::1/128 scope host noprefixroute \ valid_lft forever preferred_lft forever May 14 23:49:55.805374 waagent[1974]: 2: eth0 inet6 fe80::222:48ff:febb:8dbf/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever May 14 23:49:55.805374 waagent[1974]: 3: enP20090s1 inet6 fe80::222:48ff:febb:8dbf/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever May 14 
23:49:59.613728 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. May 14 23:49:59.619592 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 14 23:49:59.744165 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 14 23:49:59.748715 (kubelet)[2217]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 14 23:49:59.843538 kubelet[2217]: E0514 23:49:59.843473 2217 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 14 23:49:59.846983 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 14 23:49:59.847137 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 14 23:49:59.847992 systemd[1]: kubelet.service: Consumed 148ms CPU time, 102.6M memory peak. May 14 23:50:09.864072 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. May 14 23:50:09.873612 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 14 23:50:10.335285 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 14 23:50:10.353833 (kubelet)[2232]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 14 23:50:10.395242 kubelet[2232]: E0514 23:50:10.395194 2232 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 14 23:50:10.398123 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 14 23:50:10.398267 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 14 23:50:10.398794 systemd[1]: kubelet.service: Consumed 140ms CPU time, 102.2M memory peak. May 14 23:50:11.568932 chronyd[1721]: Selected source PHC0 May 14 23:50:20.613829 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. May 14 23:50:20.623527 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 14 23:50:20.757228 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 14 23:50:20.760573 (kubelet)[2248]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 14 23:50:20.811804 kubelet[2248]: E0514 23:50:20.811747 2248 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 14 23:50:20.813794 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 14 23:50:20.813916 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 14 23:50:20.814575 systemd[1]: kubelet.service: Consumed 123ms CPU time, 100M memory peak. May 14 23:50:21.504098 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. 
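Annotation: the EnvHandler firewall dump a little earlier shows the three OUTPUT rules waagent installs to protect the WireServer at 168.63.129.16: allow tcp/53, allow traffic owned by uid 0, and drop other INVALID/NEW connections. A sketch of approximately equivalent iptables invocations (an approximation inferred from the dumped rules, not waagent's literal code path):

```python
import subprocess

WIRESERVER = "168.63.129.16"

# Approximate equivalents of the three OUTPUT rules in the dump above.
RULES = [
    ["-A", "OUTPUT", "-d", WIRESERVER, "-p", "tcp", "--dport", "53", "-j", "ACCEPT"],
    ["-A", "OUTPUT", "-d", WIRESERVER, "-p", "tcp",
     "-m", "owner", "--uid-owner", "0", "-j", "ACCEPT"],
    ["-A", "OUTPUT", "-d", WIRESERVER, "-p", "tcp",
     "-m", "conntrack", "--ctstate", "INVALID,NEW", "-j", "DROP"],
]
for rule in RULES:
    subprocess.run(["iptables", "-w"] + rule, check=True)  # -w waits for the xtables lock
```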
May 14 23:50:21.505945 systemd[1]: Started sshd@0-10.200.20.40:22-10.200.16.10:47600.service - OpenSSH per-connection server daemon (10.200.16.10:47600). May 14 23:50:22.091154 sshd[2257]: Accepted publickey for core from 10.200.16.10 port 47600 ssh2: RSA SHA256:8W/Tf+7cj1biafetzZP1V2wDwegtsataOBfSdAJjdnY May 14 23:50:22.092441 sshd-session[2257]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 23:50:22.097402 systemd-logind[1736]: New session 3 of user core. May 14 23:50:22.102575 systemd[1]: Started session-3.scope - Session 3 of User core. May 14 23:50:22.496566 systemd[1]: Started sshd@1-10.200.20.40:22-10.200.16.10:47610.service - OpenSSH per-connection server daemon (10.200.16.10:47610). May 14 23:50:22.940731 sshd[2262]: Accepted publickey for core from 10.200.16.10 port 47610 ssh2: RSA SHA256:8W/Tf+7cj1biafetzZP1V2wDwegtsataOBfSdAJjdnY May 14 23:50:22.941999 sshd-session[2262]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 23:50:22.947394 systemd-logind[1736]: New session 4 of user core. May 14 23:50:22.952557 systemd[1]: Started session-4.scope - Session 4 of User core. May 14 23:50:23.262440 sshd[2264]: Connection closed by 10.200.16.10 port 47610 May 14 23:50:23.262942 sshd-session[2262]: pam_unix(sshd:session): session closed for user core May 14 23:50:23.266929 systemd[1]: sshd@1-10.200.20.40:22-10.200.16.10:47610.service: Deactivated successfully. May 14 23:50:23.275949 systemd[1]: session-4.scope: Deactivated successfully. May 14 23:50:23.276943 systemd-logind[1736]: Session 4 logged out. Waiting for processes to exit. May 14 23:50:23.277754 systemd-logind[1736]: Removed session 4. May 14 23:50:23.348616 systemd[1]: Started sshd@2-10.200.20.40:22-10.200.16.10:47612.service - OpenSSH per-connection server daemon (10.200.16.10:47612). May 14 23:50:23.794566 sshd[2270]: Accepted publickey for core from 10.200.16.10 port 47612 ssh2: RSA SHA256:8W/Tf+7cj1biafetzZP1V2wDwegtsataOBfSdAJjdnY May 14 23:50:23.795866 sshd-session[2270]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 23:50:23.800549 systemd-logind[1736]: New session 5 of user core. May 14 23:50:23.807585 systemd[1]: Started session-5.scope - Session 5 of User core. May 14 23:50:24.112226 sshd[2272]: Connection closed by 10.200.16.10 port 47612 May 14 23:50:24.115860 sshd-session[2270]: pam_unix(sshd:session): session closed for user core May 14 23:50:24.119585 systemd[1]: sshd@2-10.200.20.40:22-10.200.16.10:47612.service: Deactivated successfully. May 14 23:50:24.121132 systemd[1]: session-5.scope: Deactivated successfully. May 14 23:50:24.123327 systemd-logind[1736]: Session 5 logged out. Waiting for processes to exit. May 14 23:50:24.124418 systemd-logind[1736]: Removed session 5. May 14 23:50:24.209608 systemd[1]: Started sshd@3-10.200.20.40:22-10.200.16.10:47620.service - OpenSSH per-connection server daemon (10.200.16.10:47620). May 14 23:50:24.664136 sshd[2278]: Accepted publickey for core from 10.200.16.10 port 47620 ssh2: RSA SHA256:8W/Tf+7cj1biafetzZP1V2wDwegtsataOBfSdAJjdnY May 14 23:50:24.665437 sshd-session[2278]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 23:50:24.670709 systemd-logind[1736]: New session 6 of user core. May 14 23:50:24.680629 systemd[1]: Started session-6.scope - Session 6 of User core. 
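Annotation: sshd logs each accepted key as "SHA256:8W/Tf+..."; OpenSSH SHA256 fingerprints are the unpadded base64 of the SHA-256 digest of the raw key blob. A sketch recomputing the fingerprint from the authorized_keys file that update-ssh-keys wrote earlier in this log:

```python
import base64
import hashlib

# Path taken from the "Updated /home/core/.ssh/authorized_keys" entry above.
with open("/home/core/.ssh/authorized_keys") as f:
    key_b64 = f.readline().split()[1]  # line format: "ssh-rsa <base64-blob> comment"

blob = base64.b64decode(key_b64)
digest = hashlib.sha256(blob).digest()
# OpenSSH prints fingerprints as unpadded base64 with a "SHA256:" prefix.
print("SHA256:" + base64.b64encode(digest).decode().rstrip("="))
```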
May 14 23:50:25.002512 sshd[2280]: Connection closed by 10.200.16.10 port 47620 May 14 23:50:25.003113 sshd-session[2278]: pam_unix(sshd:session): session closed for user core May 14 23:50:25.006401 systemd[1]: sshd@3-10.200.20.40:22-10.200.16.10:47620.service: Deactivated successfully. May 14 23:50:25.008128 systemd[1]: session-6.scope: Deactivated successfully. May 14 23:50:25.009796 systemd-logind[1736]: Session 6 logged out. Waiting for processes to exit. May 14 23:50:25.011046 systemd-logind[1736]: Removed session 6. May 14 23:50:25.094739 systemd[1]: Started sshd@4-10.200.20.40:22-10.200.16.10:47630.service - OpenSSH per-connection server daemon (10.200.16.10:47630). May 14 23:50:25.549736 sshd[2286]: Accepted publickey for core from 10.200.16.10 port 47630 ssh2: RSA SHA256:8W/Tf+7cj1biafetzZP1V2wDwegtsataOBfSdAJjdnY May 14 23:50:25.551009 sshd-session[2286]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 23:50:25.555415 systemd-logind[1736]: New session 7 of user core. May 14 23:50:25.560487 systemd[1]: Started session-7.scope - Session 7 of User core. May 14 23:50:25.951112 sudo[2289]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 May 14 23:50:25.951787 sudo[2289]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 14 23:50:25.963244 sudo[2289]: pam_unix(sudo:session): session closed for user root May 14 23:50:26.045407 sshd[2288]: Connection closed by 10.200.16.10 port 47630 May 14 23:50:26.045233 sshd-session[2286]: pam_unix(sshd:session): session closed for user core May 14 23:50:26.049214 systemd-logind[1736]: Session 7 logged out. Waiting for processes to exit. May 14 23:50:26.050309 systemd[1]: sshd@4-10.200.20.40:22-10.200.16.10:47630.service: Deactivated successfully. May 14 23:50:26.052783 systemd[1]: session-7.scope: Deactivated successfully. May 14 23:50:26.054031 systemd-logind[1736]: Removed session 7. May 14 23:50:26.131631 systemd[1]: Started sshd@5-10.200.20.40:22-10.200.16.10:47644.service - OpenSSH per-connection server daemon (10.200.16.10:47644). May 14 23:50:26.583891 sshd[2295]: Accepted publickey for core from 10.200.16.10 port 47644 ssh2: RSA SHA256:8W/Tf+7cj1biafetzZP1V2wDwegtsataOBfSdAJjdnY May 14 23:50:26.585258 sshd-session[2295]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 23:50:26.591362 systemd-logind[1736]: New session 8 of user core. May 14 23:50:26.598545 systemd[1]: Started session-8.scope - Session 8 of User core. May 14 23:50:26.835886 sudo[2299]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules May 14 23:50:26.836677 sudo[2299]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 14 23:50:26.840071 sudo[2299]: pam_unix(sudo:session): session closed for user root May 14 23:50:26.844785 sudo[2298]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules May 14 23:50:26.845035 sudo[2298]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 14 23:50:26.859782 systemd[1]: Starting audit-rules.service - Load Audit Rules... May 14 23:50:26.881513 augenrules[2321]: No rules May 14 23:50:26.882933 systemd[1]: audit-rules.service: Deactivated successfully. May 14 23:50:26.883138 systemd[1]: Finished audit-rules.service - Load Audit Rules. 
May 14 23:50:26.884439 sudo[2298]: pam_unix(sudo:session): session closed for user root May 14 23:50:26.954222 sshd[2297]: Connection closed by 10.200.16.10 port 47644 May 14 23:50:26.954778 sshd-session[2295]: pam_unix(sshd:session): session closed for user core May 14 23:50:26.958252 systemd-logind[1736]: Session 8 logged out. Waiting for processes to exit. May 14 23:50:26.958660 systemd[1]: sshd@5-10.200.20.40:22-10.200.16.10:47644.service: Deactivated successfully. May 14 23:50:26.960240 systemd[1]: session-8.scope: Deactivated successfully. May 14 23:50:26.961921 systemd-logind[1736]: Removed session 8. May 14 23:50:27.037581 systemd[1]: Started sshd@6-10.200.20.40:22-10.200.16.10:47648.service - OpenSSH per-connection server daemon (10.200.16.10:47648). May 14 23:50:27.503180 sshd[2330]: Accepted publickey for core from 10.200.16.10 port 47648 ssh2: RSA SHA256:8W/Tf+7cj1biafetzZP1V2wDwegtsataOBfSdAJjdnY May 14 23:50:27.504488 sshd-session[2330]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 23:50:27.508454 systemd-logind[1736]: New session 9 of user core. May 14 23:50:27.517511 systemd[1]: Started session-9.scope - Session 9 of User core. May 14 23:50:27.760264 sudo[2333]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh May 14 23:50:27.760564 sudo[2333]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 14 23:50:29.442759 (dockerd)[2350]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU May 14 23:50:29.443398 systemd[1]: Starting docker.service - Docker Application Container Engine... May 14 23:50:30.562191 dockerd[2350]: time="2025-05-14T23:50:30.562133485Z" level=info msg="Starting up" May 14 23:50:30.811529 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport2447435848-merged.mount: Deactivated successfully. May 14 23:50:30.863609 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. May 14 23:50:30.868540 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 14 23:50:31.307093 kernel: hv_balloon: Max. dynamic memory size: 4096 MB May 14 23:50:31.314973 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 14 23:50:31.327625 (kubelet)[2376]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 14 23:50:31.333177 dockerd[2350]: time="2025-05-14T23:50:31.333138625Z" level=info msg="Loading containers: start." May 14 23:50:31.367471 kubelet[2376]: E0514 23:50:31.365495 2376 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 14 23:50:31.368484 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 14 23:50:31.368848 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 14 23:50:31.370485 systemd[1]: kubelet.service: Consumed 121ms CPU time, 99.7M memory peak. May 14 23:50:31.555368 kernel: Initializing XFRM netlink socket May 14 23:50:31.695078 systemd-networkd[1520]: docker0: Link UP May 14 23:50:31.732528 dockerd[2350]: time="2025-05-14T23:50:31.732491786Z" level=info msg="Loading containers: done." 
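Annotation: dockerd reports "API listen on /run/docker.sock" just below; the API is plain HTTP over a unix socket, so it can be poked without any client library. A minimal sketch using the socket path from the log:

```python
import socket

# dockerd logged "API listen on /run/docker.sock"; speak raw HTTP over it.
s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
s.connect("/run/docker.sock")
s.sendall(b"GET /version HTTP/1.0\r\nHost: docker\r\n\r\n")

resp = b""
while chunk := s.recv(4096):
    resp += chunk
s.close()
print(resp.decode(errors="replace"))  # headers plus JSON; Version should read 27.3.1
```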
May 14 23:50:31.753508 dockerd[2350]: time="2025-05-14T23:50:31.753424323Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 May 14 23:50:31.753671 dockerd[2350]: time="2025-05-14T23:50:31.753527643Z" level=info msg="Docker daemon" commit=41ca978a0a5400cc24b274137efa9f25517fcc0b containerd-snapshotter=false storage-driver=overlay2 version=27.3.1 May 14 23:50:31.753671 dockerd[2350]: time="2025-05-14T23:50:31.753644403Z" level=info msg="Daemon has completed initialization" May 14 23:50:31.803702 dockerd[2350]: time="2025-05-14T23:50:31.803611244Z" level=info msg="API listen on /run/docker.sock" May 14 23:50:31.804078 systemd[1]: Started docker.service - Docker Application Container Engine. May 14 23:50:32.680330 containerd[1766]: time="2025-05-14T23:50:32.680279148Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.4\"" May 14 23:50:32.847444 update_engine[1744]: I20250514 23:50:32.847374 1744 update_attempter.cc:509] Updating boot flags... May 14 23:50:32.948602 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 44 scanned by (udev-worker) (2564) May 14 23:50:33.500934 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount589074858.mount: Deactivated successfully. May 14 23:50:36.191407 containerd[1766]: time="2025-05-14T23:50:36.190605698Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 23:50:36.194725 containerd[1766]: time="2025-05-14T23:50:36.194672981Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.4: active requests=0, bytes read=26233118" May 14 23:50:36.197818 containerd[1766]: time="2025-05-14T23:50:36.197769344Z" level=info msg="ImageCreate event name:\"sha256:ab579d62aa850c7d0eca948aad11fcf813743e3b6c9742241c32cb4f1638968b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 23:50:36.203726 containerd[1766]: time="2025-05-14T23:50:36.203673989Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:631c6cc78b2862be4fed7df3384a643ef7297eebadae22e8ef9cbe2e19b6386f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 23:50:36.205413 containerd[1766]: time="2025-05-14T23:50:36.205037110Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.4\" with image id \"sha256:ab579d62aa850c7d0eca948aad11fcf813743e3b6c9742241c32cb4f1638968b\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.4\", repo digest \"registry.k8s.io/kube-apiserver@sha256:631c6cc78b2862be4fed7df3384a643ef7297eebadae22e8ef9cbe2e19b6386f\", size \"26229918\" in 3.524716202s" May 14 23:50:36.205413 containerd[1766]: time="2025-05-14T23:50:36.205072390Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.4\" returns image reference \"sha256:ab579d62aa850c7d0eca948aad11fcf813743e3b6c9742241c32cb4f1638968b\"" May 14 23:50:36.206169 containerd[1766]: time="2025-05-14T23:50:36.206140391Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.4\"" May 14 23:50:38.627392 containerd[1766]: time="2025-05-14T23:50:38.626734171Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 23:50:38.631399 containerd[1766]: time="2025-05-14T23:50:38.631168175Z" level=info msg="stop pulling image 
registry.k8s.io/kube-controller-manager:v1.32.4: active requests=0, bytes read=22529571" May 14 23:50:38.635095 containerd[1766]: time="2025-05-14T23:50:38.635044498Z" level=info msg="ImageCreate event name:\"sha256:79534fade29d07745acc698bbf598b0604a9ea1fd7917822c816a74fc0b55965\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 23:50:38.642123 containerd[1766]: time="2025-05-14T23:50:38.642068144Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:25e29187ea66f0ff9b9a00114849c3a30b649005c900a8b2a69e3f3fa56448fb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 23:50:38.643273 containerd[1766]: time="2025-05-14T23:50:38.643147425Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.4\" with image id \"sha256:79534fade29d07745acc698bbf598b0604a9ea1fd7917822c816a74fc0b55965\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.4\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:25e29187ea66f0ff9b9a00114849c3a30b649005c900a8b2a69e3f3fa56448fb\", size \"23971132\" in 2.436969954s" May 14 23:50:38.643273 containerd[1766]: time="2025-05-14T23:50:38.643178865Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.4\" returns image reference \"sha256:79534fade29d07745acc698bbf598b0604a9ea1fd7917822c816a74fc0b55965\"" May 14 23:50:38.643916 containerd[1766]: time="2025-05-14T23:50:38.643748586Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.4\"" May 14 23:50:40.516480 containerd[1766]: time="2025-05-14T23:50:40.516423521Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 23:50:40.519442 containerd[1766]: time="2025-05-14T23:50:40.519380124Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.4: active requests=0, bytes read=17482173" May 14 23:50:40.524925 containerd[1766]: time="2025-05-14T23:50:40.524874448Z" level=info msg="ImageCreate event name:\"sha256:730fbc2590716b8202fcdd928a813b847575ebf03911a059979257cd6cbb8245\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 23:50:40.531003 containerd[1766]: time="2025-05-14T23:50:40.530946694Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:09c55f8dac59a4b8e5e354140f5a4bdd6fa9bd95c42d6bcba6782ed37c31b5a2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 23:50:40.532403 containerd[1766]: time="2025-05-14T23:50:40.532073255Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.4\" with image id \"sha256:730fbc2590716b8202fcdd928a813b847575ebf03911a059979257cd6cbb8245\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.4\", repo digest \"registry.k8s.io/kube-scheduler@sha256:09c55f8dac59a4b8e5e354140f5a4bdd6fa9bd95c42d6bcba6782ed37c31b5a2\", size \"18923752\" in 1.888294669s" May 14 23:50:40.532403 containerd[1766]: time="2025-05-14T23:50:40.532108895Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.4\" returns image reference \"sha256:730fbc2590716b8202fcdd928a813b847575ebf03911a059979257cd6cbb8245\"" May 14 23:50:40.532918 containerd[1766]: time="2025-05-14T23:50:40.532743575Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.4\"" May 14 23:50:41.613644 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. May 14 23:50:41.621532 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
May 14 23:50:43.371796 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 14 23:50:43.379621 (kubelet)[2682]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 14 23:50:43.414595 kubelet[2682]: E0514 23:50:43.414518 2682 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 14 23:50:43.417021 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 14 23:50:43.417159 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 14 23:50:43.417688 systemd[1]: kubelet.service: Consumed 123ms CPU time, 100.3M memory peak. May 14 23:50:46.578301 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3457018137.mount: Deactivated successfully. May 14 23:50:46.951727 containerd[1766]: time="2025-05-14T23:50:46.951670806Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 23:50:46.955366 containerd[1766]: time="2025-05-14T23:50:46.955193889Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.4: active requests=0, bytes read=27370351" May 14 23:50:46.959495 containerd[1766]: time="2025-05-14T23:50:46.959463052Z" level=info msg="ImageCreate event name:\"sha256:62c496efa595c8eb7d098e43430b2b94ad66812214759a7ea9daaaa1ed901fc7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 23:50:46.965445 containerd[1766]: time="2025-05-14T23:50:46.965376817Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:152638222ecf265eb8e5352e3c50e8fc520994e8ffcff1ee1490c975f7fc2b36\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 23:50:46.966221 containerd[1766]: time="2025-05-14T23:50:46.966088298Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.4\" with image id \"sha256:62c496efa595c8eb7d098e43430b2b94ad66812214759a7ea9daaaa1ed901fc7\", repo tag \"registry.k8s.io/kube-proxy:v1.32.4\", repo digest \"registry.k8s.io/kube-proxy@sha256:152638222ecf265eb8e5352e3c50e8fc520994e8ffcff1ee1490c975f7fc2b36\", size \"27369370\" in 6.433307043s" May 14 23:50:46.966221 containerd[1766]: time="2025-05-14T23:50:46.966117738Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.4\" returns image reference \"sha256:62c496efa595c8eb7d098e43430b2b94ad66812214759a7ea9daaaa1ed901fc7\"" May 14 23:50:46.966892 containerd[1766]: time="2025-05-14T23:50:46.966684498Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" May 14 23:50:48.076151 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2830621027.mount: Deactivated successfully. 
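[Editor's note] The kubelet.service failure loop recorded above (and repeated below) has a single cause: /var/lib/kubelet/config.yaml does not exist yet, because this node has not been joined to a cluster. On a kubeadm-managed node that file is written by `kubeadm init` or `kubeadm join`; purely as an illustrative sketch (contents assumed, not taken from this host), a minimal stand-in would be:

    # Sketch only: in practice kubeadm writes this file during init/join.
    mkdir -p /var/lib/kubelet
    cat <<'EOF' > /var/lib/kubelet/config.yaml
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    cgroupDriver: systemd                    # matches "CgroupDriver":"systemd" in the node config dump later in the log
    staticPodPath: /etc/kubernetes/manifests # matches the "Adding static pod path" entry the kubelet logs once it starts
    EOF
    systemctl restart kubelet.service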
May 14 23:50:51.323928 containerd[1766]: time="2025-05-14T23:50:51.323867364Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 23:50:51.327944 containerd[1766]: time="2025-05-14T23:50:51.327887087Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=16951622" May 14 23:50:51.331736 containerd[1766]: time="2025-05-14T23:50:51.331684050Z" level=info msg="ImageCreate event name:\"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 23:50:51.338081 containerd[1766]: time="2025-05-14T23:50:51.338008216Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 23:50:51.339332 containerd[1766]: time="2025-05-14T23:50:51.339285977Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"16948420\" in 4.372568879s" May 14 23:50:51.339332 containerd[1766]: time="2025-05-14T23:50:51.339329577Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\"" May 14 23:50:51.340184 containerd[1766]: time="2025-05-14T23:50:51.339838657Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" May 14 23:50:51.911942 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2176196400.mount: Deactivated successfully. 
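[Editor's note] Each "Pulled image" entry above records both the floating tag and the content-addressed repo digest, so the pull can be reproduced or pre-warmed independently of the kubelet. A sketch using the CRI client and the image names from the log (assumes crictl is pointed at the containerd socket):

    # Reproduce the pulls containerd performed above:
    crictl pull registry.k8s.io/coredns/coredns:v1.11.3
    crictl pull registry.k8s.io/pause:3.10
    crictl images --digests   # lists the sha256 repo digests seen in the log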
May 14 23:50:51.939394 containerd[1766]: time="2025-05-14T23:50:51.938655907Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 23:50:51.942648 containerd[1766]: time="2025-05-14T23:50:51.942349670Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268703" May 14 23:50:51.946524 containerd[1766]: time="2025-05-14T23:50:51.946476114Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 23:50:51.952636 containerd[1766]: time="2025-05-14T23:50:51.952556439Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 23:50:51.953699 containerd[1766]: time="2025-05-14T23:50:51.953448239Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 613.572062ms" May 14 23:50:51.953699 containerd[1766]: time="2025-05-14T23:50:51.953479479Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" May 14 23:50:51.954244 containerd[1766]: time="2025-05-14T23:50:51.954218960Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" May 14 23:50:52.679621 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3093069549.mount: Deactivated successfully. May 14 23:50:53.613753 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6. May 14 23:50:53.619589 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 14 23:50:53.714819 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 14 23:50:53.728848 (kubelet)[2777]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 14 23:50:56.238494 kubelet[2777]: E0514 23:50:53.986152 2777 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 14 23:50:53.987752 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 14 23:50:53.987872 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 14 23:50:53.988176 systemd[1]: kubelet.service: Consumed 129ms CPU time, 99.7M memory peak. May 14 23:51:04.113831 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 7. May 14 23:51:04.123569 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 14 23:51:06.638849 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
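[Editor's note] The "Scheduled restart job, restart counter is at N" lines are systemd's Restart= machinery re-queuing the failed unit after its RestartSec delay. A hypothetical drop-in showing the two options involved (standard systemd unit syntax; the values are illustrative, not Flatcar's shipped defaults):

    # Illustrative drop-in; not this host's actual configuration.
    mkdir -p /etc/systemd/system/kubelet.service.d
    cat <<'EOF' > /etc/systemd/system/kubelet.service.d/10-restart.conf
    [Service]
    Restart=always
    RestartSec=10
    EOF
    systemctl daemon-reload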
May 14 23:51:06.642464 (kubelet)[2828]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 14 23:51:06.680564 kubelet[2828]: E0514 23:51:06.680490 2828 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 14 23:51:06.682505 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 14 23:51:06.682632 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 14 23:51:06.683053 systemd[1]: kubelet.service: Consumed 124ms CPU time, 102.2M memory peak. May 14 23:51:10.372382 containerd[1766]: time="2025-05-14T23:51:10.372314240Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 23:51:10.375088 containerd[1766]: time="2025-05-14T23:51:10.375055122Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=67812469" May 14 23:51:10.379018 containerd[1766]: time="2025-05-14T23:51:10.378971485Z" level=info msg="ImageCreate event name:\"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 23:51:10.384177 containerd[1766]: time="2025-05-14T23:51:10.384126650Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 23:51:10.385640 containerd[1766]: time="2025-05-14T23:51:10.385513931Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"67941650\" in 18.431263651s" May 14 23:51:10.385640 containerd[1766]: time="2025-05-14T23:51:10.385545251Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\"" May 14 23:51:16.665196 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 14 23:51:16.665368 systemd[1]: kubelet.service: Consumed 124ms CPU time, 102.2M memory peak. May 14 23:51:16.675564 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 14 23:51:16.703326 systemd[1]: Reload requested from client PID 2864 ('systemctl') (unit session-9.scope)... May 14 23:51:16.703508 systemd[1]: Reloading... May 14 23:51:16.824529 zram_generator::config[2917]: No configuration found. May 14 23:51:16.923481 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 14 23:51:17.026725 systemd[1]: Reloading finished in 322 ms. May 14 23:51:17.081554 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
May 14 23:51:17.086598 (kubelet)[2969]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 14 23:51:17.089586 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... May 14 23:51:17.090745 systemd[1]: kubelet.service: Deactivated successfully. May 14 23:51:17.091003 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 14 23:51:17.091060 systemd[1]: kubelet.service: Consumed 83ms CPU time, 91.4M memory peak. May 14 23:51:17.099599 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 14 23:51:26.079803 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 14 23:51:26.083628 (kubelet)[2981]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 14 23:51:26.121385 kubelet[2981]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 14 23:51:26.121724 kubelet[2981]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. May 14 23:51:26.121768 kubelet[2981]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 14 23:51:26.121924 kubelet[2981]: I0514 23:51:26.121893 2981 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 14 23:51:26.677312 kubelet[2981]: I0514 23:51:26.677273 2981 server.go:520] "Kubelet version" kubeletVersion="v1.32.0" May 14 23:51:26.678382 kubelet[2981]: I0514 23:51:26.677491 2981 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 14 23:51:26.678382 kubelet[2981]: I0514 23:51:26.677773 2981 server.go:954] "Client rotation is on, will bootstrap in background" May 14 23:51:26.695472 kubelet[2981]: E0514 23:51:26.695425 2981 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.200.20.40:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.20.40:6443: connect: connection refused" logger="UnhandledError" May 14 23:51:26.697328 kubelet[2981]: I0514 23:51:26.697282 2981 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 14 23:51:26.704712 kubelet[2981]: E0514 23:51:26.704682 2981 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" May 14 23:51:26.704865 kubelet[2981]: I0514 23:51:26.704851 2981 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." May 14 23:51:26.708079 kubelet[2981]: I0514 23:51:26.708055 2981 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" May 14 23:51:26.709377 kubelet[2981]: I0514 23:51:26.708963 2981 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 14 23:51:26.709377 kubelet[2981]: I0514 23:51:26.709007 2981 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4230.1.1-n-bf7705109b","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} May 14 23:51:26.709377 kubelet[2981]: I0514 23:51:26.709183 2981 topology_manager.go:138] "Creating topology manager with none policy" May 14 23:51:26.709377 kubelet[2981]: I0514 23:51:26.709192 2981 container_manager_linux.go:304] "Creating device plugin manager" May 14 23:51:26.709587 kubelet[2981]: I0514 23:51:26.709336 2981 state_mem.go:36] "Initialized new in-memory state store" May 14 23:51:26.712050 kubelet[2981]: I0514 23:51:26.712025 2981 kubelet.go:446] "Attempting to sync node with API server" May 14 23:51:26.712099 kubelet[2981]: I0514 23:51:26.712055 2981 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" May 14 23:51:26.712099 kubelet[2981]: I0514 23:51:26.712076 2981 kubelet.go:352] "Adding apiserver pod source" May 14 23:51:26.712099 kubelet[2981]: I0514 23:51:26.712088 2981 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 14 23:51:26.718245 kubelet[2981]: W0514 23:51:26.716863 2981 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.200.20.40:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4230.1.1-n-bf7705109b&limit=500&resourceVersion=0": dial tcp 10.200.20.40:6443: connect: connection refused May 14 23:51:26.718245 kubelet[2981]: E0514 23:51:26.716934 2981 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.200.20.40:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4230.1.1-n-bf7705109b&limit=500&resourceVersion=0\": dial tcp 10.200.20.40:6443: connect: connection refused" logger="UnhandledError" May 14 23:51:26.718245 
kubelet[2981]: I0514 23:51:26.717055 2981 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" May 14 23:51:26.718245 kubelet[2981]: I0514 23:51:26.717618 2981 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 14 23:51:26.718245 kubelet[2981]: W0514 23:51:26.717681 2981 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. May 14 23:51:26.719072 kubelet[2981]: I0514 23:51:26.719041 2981 watchdog_linux.go:99] "Systemd watchdog is not enabled" May 14 23:51:26.719125 kubelet[2981]: I0514 23:51:26.719079 2981 server.go:1287] "Started kubelet" May 14 23:51:26.722908 kubelet[2981]: I0514 23:51:26.722880 2981 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 14 23:51:26.723543 kubelet[2981]: W0514 23:51:26.723507 2981 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.200.20.40:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.200.20.40:6443: connect: connection refused May 14 23:51:26.723659 kubelet[2981]: E0514 23:51:26.723642 2981 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.200.20.40:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.20.40:6443: connect: connection refused" logger="UnhandledError" May 14 23:51:26.723868 kubelet[2981]: E0514 23:51:26.723761 2981 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.20.40:6443/api/v1/namespaces/default/events\": dial tcp 10.200.20.40:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4230.1.1-n-bf7705109b.183f89cb4dc369c2 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4230.1.1-n-bf7705109b,UID:ci-4230.1.1-n-bf7705109b,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4230.1.1-n-bf7705109b,},FirstTimestamp:2025-05-14 23:51:26.719060418 +0000 UTC m=+0.632384233,LastTimestamp:2025-05-14 23:51:26.719060418 +0000 UTC m=+0.632384233,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4230.1.1-n-bf7705109b,}" May 14 23:51:26.724767 kubelet[2981]: I0514 23:51:26.724739 2981 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 May 14 23:51:26.725683 kubelet[2981]: I0514 23:51:26.725663 2981 server.go:490] "Adding debug handlers to kubelet server" May 14 23:51:26.726661 kubelet[2981]: I0514 23:51:26.726608 2981 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 14 23:51:26.726948 kubelet[2981]: I0514 23:51:26.726931 2981 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 14 23:51:26.727213 kubelet[2981]: I0514 23:51:26.727195 2981 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 14 23:51:26.727970 kubelet[2981]: I0514 23:51:26.727936 2981 volume_manager.go:297] "Starting Kubelet Volume Manager" May 14 23:51:26.728439 kubelet[2981]: E0514 23:51:26.728422 
2981 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ci-4230.1.1-n-bf7705109b\" not found" May 14 23:51:26.728822 kubelet[2981]: E0514 23:51:26.728789 2981 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.40:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230.1.1-n-bf7705109b?timeout=10s\": dial tcp 10.200.20.40:6443: connect: connection refused" interval="200ms" May 14 23:51:26.729651 kubelet[2981]: I0514 23:51:26.729610 2981 factory.go:221] Registration of the systemd container factory successfully May 14 23:51:26.729718 kubelet[2981]: I0514 23:51:26.729705 2981 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 14 23:51:26.730464 kubelet[2981]: I0514 23:51:26.730441 2981 desired_state_of_world_populator.go:149] "Desired state populator starts to run" May 14 23:51:26.731497 kubelet[2981]: W0514 23:51:26.731449 2981 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.200.20.40:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.40:6443: connect: connection refused May 14 23:51:26.731572 kubelet[2981]: E0514 23:51:26.731495 2981 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.200.20.40:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.20.40:6443: connect: connection refused" logger="UnhandledError" May 14 23:51:26.731630 kubelet[2981]: I0514 23:51:26.731608 2981 factory.go:221] Registration of the containerd container factory successfully May 14 23:51:26.732007 kubelet[2981]: I0514 23:51:26.731991 2981 reconciler.go:26] "Reconciler: start to sync state" May 14 23:51:26.752978 kubelet[2981]: E0514 23:51:26.752945 2981 kubelet.go:1561] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 14 23:51:26.759013 kubelet[2981]: I0514 23:51:26.758974 2981 cpu_manager.go:221] "Starting CPU manager" policy="none" May 14 23:51:26.759013 kubelet[2981]: I0514 23:51:26.758993 2981 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" May 14 23:51:26.759013 kubelet[2981]: I0514 23:51:26.759011 2981 state_mem.go:36] "Initialized new in-memory state store" May 14 23:51:26.829531 kubelet[2981]: E0514 23:51:26.829485 2981 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ci-4230.1.1-n-bf7705109b\" not found" May 14 23:51:29.410639 kubelet[2981]: E0514 23:51:26.930045 2981 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ci-4230.1.1-n-bf7705109b\" not found" May 14 23:51:29.410639 kubelet[2981]: E0514 23:51:26.930576 2981 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.40:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230.1.1-n-bf7705109b?timeout=10s\": dial tcp 10.200.20.40:6443: connect: connection refused" interval="400ms" May 14 23:51:29.410639 kubelet[2981]: E0514 23:51:27.030951 2981 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ci-4230.1.1-n-bf7705109b\" not found" May 14 23:51:29.410639 kubelet[2981]: E0514 23:51:27.131175 2981 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ci-4230.1.1-n-bf7705109b\" not found" May 14 23:51:29.410639 kubelet[2981]: E0514 23:51:27.231966 2981 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ci-4230.1.1-n-bf7705109b\" not found" May 14 23:51:29.410639 kubelet[2981]: E0514 23:51:27.331793 2981 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.40:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230.1.1-n-bf7705109b?timeout=10s\": dial tcp 10.200.20.40:6443: connect: connection refused" interval="800ms" May 14 23:51:29.410639 kubelet[2981]: E0514 23:51:27.332931 2981 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ci-4230.1.1-n-bf7705109b\" not found" May 14 23:51:29.410639 kubelet[2981]: E0514 23:51:27.433586 2981 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ci-4230.1.1-n-bf7705109b\" not found" May 14 23:51:29.410639 kubelet[2981]: E0514 23:51:27.534235 2981 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ci-4230.1.1-n-bf7705109b\" not found" May 14 23:51:29.410639 kubelet[2981]: W0514 23:51:27.559907 2981 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.200.20.40:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4230.1.1-n-bf7705109b&limit=500&resourceVersion=0": dial tcp 10.200.20.40:6443: connect: connection refused May 14 23:51:29.411106 kubelet[2981]: E0514 23:51:27.559961 2981 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.200.20.40:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4230.1.1-n-bf7705109b&limit=500&resourceVersion=0\": dial tcp 10.200.20.40:6443: connect: connection refused" logger="UnhandledError" May 14 23:51:29.411106 kubelet[2981]: W0514 23:51:27.625671 2981 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get 
"https://10.200.20.40:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.40:6443: connect: connection refused May 14 23:51:29.411106 kubelet[2981]: E0514 23:51:27.625744 2981 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.200.20.40:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.20.40:6443: connect: connection refused" logger="UnhandledError" May 14 23:51:29.411106 kubelet[2981]: E0514 23:51:27.635291 2981 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ci-4230.1.1-n-bf7705109b\" not found" May 14 23:51:29.411106 kubelet[2981]: E0514 23:51:27.735880 2981 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ci-4230.1.1-n-bf7705109b\" not found" May 14 23:51:29.411106 kubelet[2981]: E0514 23:51:27.836695 2981 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ci-4230.1.1-n-bf7705109b\" not found" May 14 23:51:29.411106 kubelet[2981]: W0514 23:51:27.857452 2981 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.200.20.40:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.200.20.40:6443: connect: connection refused May 14 23:51:29.411245 kubelet[2981]: E0514 23:51:27.857506 2981 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.200.20.40:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.20.40:6443: connect: connection refused" logger="UnhandledError" May 14 23:51:29.411245 kubelet[2981]: E0514 23:51:27.937007 2981 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ci-4230.1.1-n-bf7705109b\" not found" May 14 23:51:29.411245 kubelet[2981]: E0514 23:51:28.037675 2981 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ci-4230.1.1-n-bf7705109b\" not found" May 14 23:51:29.411245 kubelet[2981]: E0514 23:51:28.132653 2981 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.40:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230.1.1-n-bf7705109b?timeout=10s\": dial tcp 10.200.20.40:6443: connect: connection refused" interval="1.6s" May 14 23:51:29.411245 kubelet[2981]: E0514 23:51:28.137864 2981 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ci-4230.1.1-n-bf7705109b\" not found" May 14 23:51:29.411245 kubelet[2981]: E0514 23:51:28.238809 2981 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ci-4230.1.1-n-bf7705109b\" not found" May 14 23:51:29.411245 kubelet[2981]: E0514 23:51:28.339560 2981 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ci-4230.1.1-n-bf7705109b\" not found" May 14 23:51:29.411245 kubelet[2981]: E0514 23:51:28.440125 2981 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ci-4230.1.1-n-bf7705109b\" not found" May 14 23:51:29.411245 kubelet[2981]: E0514 23:51:28.540749 2981 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ci-4230.1.1-n-bf7705109b\" not found" May 14 23:51:29.411245 kubelet[2981]: E0514 23:51:28.641420 2981 
kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ci-4230.1.1-n-bf7705109b\" not found" May 14 23:51:29.411245 kubelet[2981]: E0514 23:51:28.741857 2981 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ci-4230.1.1-n-bf7705109b\" not found" May 14 23:51:29.411491 kubelet[2981]: E0514 23:51:28.760805 2981 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.200.20.40:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.20.40:6443: connect: connection refused" logger="UnhandledError" May 14 23:51:29.411491 kubelet[2981]: E0514 23:51:28.842382 2981 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ci-4230.1.1-n-bf7705109b\" not found" May 14 23:51:29.411491 kubelet[2981]: E0514 23:51:28.942982 2981 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ci-4230.1.1-n-bf7705109b\" not found" May 14 23:51:29.411491 kubelet[2981]: E0514 23:51:29.043637 2981 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ci-4230.1.1-n-bf7705109b\" not found" May 14 23:51:29.411491 kubelet[2981]: E0514 23:51:29.144277 2981 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ci-4230.1.1-n-bf7705109b\" not found" May 14 23:51:29.411491 kubelet[2981]: E0514 23:51:29.244383 2981 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ci-4230.1.1-n-bf7705109b\" not found" May 14 23:51:29.411491 kubelet[2981]: E0514 23:51:29.344967 2981 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ci-4230.1.1-n-bf7705109b\" not found" May 14 23:51:29.411491 kubelet[2981]: I0514 23:51:29.386484 2981 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 14 23:51:29.411491 kubelet[2981]: I0514 23:51:29.387466 2981 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" May 14 23:51:29.411491 kubelet[2981]: I0514 23:51:29.387484 2981 status_manager.go:227] "Starting to sync pod status with apiserver" May 14 23:51:29.411491 kubelet[2981]: I0514 23:51:29.387505 2981 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
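[Editor's note] Every "dial tcp 10.200.20.40:6443: connect: connection refused" above is expected at this point in the boot: this kubelet is bringing up a control-plane node, so the API server it keeps polling is the very static pod it has not yet started. Until the kube-apiserver container is running, a manual probe of the endpoint fails the same way (illustrative check; the address is taken from the log):

    # Expected to fail with 'connection refused' until the static
    # kube-apiserver pod created below is up:
    curl -k https://10.200.20.40:6443/healthz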
May 14 23:51:29.411491 kubelet[2981]: I0514 23:51:29.387511 2981 kubelet.go:2388] "Starting kubelet main sync loop" May 14 23:51:29.411708 kubelet[2981]: E0514 23:51:29.387551 2981 kubelet.go:2412] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 14 23:51:29.411708 kubelet[2981]: W0514 23:51:29.388939 2981 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.200.20.40:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.40:6443: connect: connection refused May 14 23:51:29.411708 kubelet[2981]: E0514 23:51:29.389091 2981 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.200.20.40:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.20.40:6443: connect: connection refused" logger="UnhandledError" May 14 23:51:29.445562 kubelet[2981]: E0514 23:51:29.445518 2981 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ci-4230.1.1-n-bf7705109b\" not found" May 14 23:51:29.487845 kubelet[2981]: E0514 23:51:29.487809 2981 kubelet.go:2412] "Skipping pod synchronization" err="container runtime status check may not have completed yet" May 14 23:51:29.803385 kubelet[2981]: E0514 23:51:29.546420 2981 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ci-4230.1.1-n-bf7705109b\" not found" May 14 23:51:29.803385 kubelet[2981]: W0514 23:51:29.579151 2981 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.200.20.40:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4230.1.1-n-bf7705109b&limit=500&resourceVersion=0": dial tcp 10.200.20.40:6443: connect: connection refused May 14 23:51:29.803385 kubelet[2981]: E0514 23:51:29.579192 2981 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.200.20.40:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4230.1.1-n-bf7705109b&limit=500&resourceVersion=0\": dial tcp 10.200.20.40:6443: connect: connection refused" logger="UnhandledError" May 14 23:51:29.803385 kubelet[2981]: E0514 23:51:29.646783 2981 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ci-4230.1.1-n-bf7705109b\" not found" May 14 23:51:29.803385 kubelet[2981]: E0514 23:51:29.688098 2981 kubelet.go:2412] "Skipping pod synchronization" err="container runtime status check may not have completed yet" May 14 23:51:29.803385 kubelet[2981]: E0514 23:51:29.733834 2981 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.40:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230.1.1-n-bf7705109b?timeout=10s\": dial tcp 10.200.20.40:6443: connect: connection refused" interval="3.2s" May 14 23:51:29.803385 kubelet[2981]: E0514 23:51:29.746873 2981 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ci-4230.1.1-n-bf7705109b\" not found" May 14 23:51:29.804772 kubelet[2981]: I0514 23:51:29.804738 2981 policy_none.go:49] "None policy: Start" May 14 23:51:29.804835 kubelet[2981]: I0514 23:51:29.804783 2981 memory_manager.go:186] "Starting memorymanager" policy="None" May 14 23:51:29.804835 kubelet[2981]: I0514 23:51:29.804797 2981 
state_mem.go:35] "Initializing new in-memory state store" May 14 23:51:29.815274 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. May 14 23:51:29.825936 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. May 14 23:51:29.829308 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. May 14 23:51:29.841095 kubelet[2981]: I0514 23:51:29.841073 2981 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 14 23:51:29.841384 kubelet[2981]: I0514 23:51:29.841365 2981 eviction_manager.go:189] "Eviction manager: starting control loop" May 14 23:51:29.841495 kubelet[2981]: I0514 23:51:29.841461 2981 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 14 23:51:29.842172 kubelet[2981]: I0514 23:51:29.841851 2981 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 14 23:51:29.843696 kubelet[2981]: E0514 23:51:29.843633 2981 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" May 14 23:51:29.843810 kubelet[2981]: E0514 23:51:29.843708 2981 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4230.1.1-n-bf7705109b\" not found" May 14 23:51:29.930097 kubelet[2981]: W0514 23:51:29.930017 2981 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.200.20.40:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.200.20.40:6443: connect: connection refused May 14 23:51:29.930097 kubelet[2981]: E0514 23:51:29.930063 2981 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.200.20.40:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.20.40:6443: connect: connection refused" logger="UnhandledError" May 14 23:51:29.944217 kubelet[2981]: I0514 23:51:29.943865 2981 kubelet_node_status.go:76] "Attempting to register node" node="ci-4230.1.1-n-bf7705109b" May 14 23:51:29.944217 kubelet[2981]: E0514 23:51:29.944154 2981 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://10.200.20.40:6443/api/v1/nodes\": dial tcp 10.200.20.40:6443: connect: connection refused" node="ci-4230.1.1-n-bf7705109b" May 14 23:51:30.098234 systemd[1]: Created slice kubepods-burstable-pod0e14ce8ad3505f004d01153be26bc530.slice - libcontainer container kubepods-burstable-pod0e14ce8ad3505f004d01153be26bc530.slice. May 14 23:51:30.107272 kubelet[2981]: E0514 23:51:30.107018 2981 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230.1.1-n-bf7705109b\" not found" node="ci-4230.1.1-n-bf7705109b" May 14 23:51:30.110546 systemd[1]: Created slice kubepods-burstable-poddf64489caac4f3f3cb02c79d02682d3d.slice - libcontainer container kubepods-burstable-poddf64489caac4f3f3cb02c79d02682d3d.slice. 
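[Editor's note] The kubepods-burstable-pod*.slice cgroups created here correspond to the three control-plane static pods whose manifests the kubelet found under /etc/kubernetes/manifests (the "Adding static pod path" entry earlier); the pod UIDs in the slice names match the Uid fields in the RunPodSandbox calls below. As a heavily abbreviated sketch of such a manifest (values hypothetical apart from the image tag pulled earlier; real manifests carry many more flags, mounts, and probes):

    # Sketch of /etc/kubernetes/manifests/kube-apiserver.yaml
    cat <<'EOF' > /etc/kubernetes/manifests/kube-apiserver.yaml
    apiVersion: v1
    kind: Pod
    metadata:
      name: kube-apiserver
      namespace: kube-system
    spec:
      hostNetwork: true
      priorityClassName: system-node-critical
      containers:
      - name: kube-apiserver
        image: registry.k8s.io/kube-apiserver:v1.32.4
        command:
        - kube-apiserver
        - --advertise-address=10.200.20.40   # assumed from the node address in the log
    EOF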
May 14 23:51:30.112715 kubelet[2981]: E0514 23:51:30.112678 2981 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230.1.1-n-bf7705109b\" not found" node="ci-4230.1.1-n-bf7705109b" May 14 23:51:30.115447 systemd[1]: Created slice kubepods-burstable-podfc7c8e517ca2b20b56152164c2f011ba.slice - libcontainer container kubepods-burstable-podfc7c8e517ca2b20b56152164c2f011ba.slice. May 14 23:51:30.117033 kubelet[2981]: E0514 23:51:30.117002 2981 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230.1.1-n-bf7705109b\" not found" node="ci-4230.1.1-n-bf7705109b" May 14 23:51:30.147186 kubelet[2981]: I0514 23:51:30.146880 2981 kubelet_node_status.go:76] "Attempting to register node" node="ci-4230.1.1-n-bf7705109b" May 14 23:51:30.147299 kubelet[2981]: E0514 23:51:30.147245 2981 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://10.200.20.40:6443/api/v1/nodes\": dial tcp 10.200.20.40:6443: connect: connection refused" node="ci-4230.1.1-n-bf7705109b" May 14 23:51:30.151522 kubelet[2981]: I0514 23:51:30.151438 2981 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/0e14ce8ad3505f004d01153be26bc530-k8s-certs\") pod \"kube-apiserver-ci-4230.1.1-n-bf7705109b\" (UID: \"0e14ce8ad3505f004d01153be26bc530\") " pod="kube-system/kube-apiserver-ci-4230.1.1-n-bf7705109b" May 14 23:51:30.151522 kubelet[2981]: I0514 23:51:30.151477 2981 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/df64489caac4f3f3cb02c79d02682d3d-k8s-certs\") pod \"kube-controller-manager-ci-4230.1.1-n-bf7705109b\" (UID: \"df64489caac4f3f3cb02c79d02682d3d\") " pod="kube-system/kube-controller-manager-ci-4230.1.1-n-bf7705109b" May 14 23:51:30.151522 kubelet[2981]: I0514 23:51:30.151497 2981 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/df64489caac4f3f3cb02c79d02682d3d-kubeconfig\") pod \"kube-controller-manager-ci-4230.1.1-n-bf7705109b\" (UID: \"df64489caac4f3f3cb02c79d02682d3d\") " pod="kube-system/kube-controller-manager-ci-4230.1.1-n-bf7705109b" May 14 23:51:30.151522 kubelet[2981]: I0514 23:51:30.151515 2981 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/df64489caac4f3f3cb02c79d02682d3d-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4230.1.1-n-bf7705109b\" (UID: \"df64489caac4f3f3cb02c79d02682d3d\") " pod="kube-system/kube-controller-manager-ci-4230.1.1-n-bf7705109b" May 14 23:51:30.151522 kubelet[2981]: I0514 23:51:30.151533 2981 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/fc7c8e517ca2b20b56152164c2f011ba-kubeconfig\") pod \"kube-scheduler-ci-4230.1.1-n-bf7705109b\" (UID: \"fc7c8e517ca2b20b56152164c2f011ba\") " pod="kube-system/kube-scheduler-ci-4230.1.1-n-bf7705109b" May 14 23:51:30.151767 kubelet[2981]: I0514 23:51:30.151548 2981 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/0e14ce8ad3505f004d01153be26bc530-ca-certs\") pod 
\"kube-apiserver-ci-4230.1.1-n-bf7705109b\" (UID: \"0e14ce8ad3505f004d01153be26bc530\") " pod="kube-system/kube-apiserver-ci-4230.1.1-n-bf7705109b" May 14 23:51:30.151767 kubelet[2981]: I0514 23:51:30.151564 2981 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/0e14ce8ad3505f004d01153be26bc530-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4230.1.1-n-bf7705109b\" (UID: \"0e14ce8ad3505f004d01153be26bc530\") " pod="kube-system/kube-apiserver-ci-4230.1.1-n-bf7705109b" May 14 23:51:30.151767 kubelet[2981]: I0514 23:51:30.151592 2981 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/df64489caac4f3f3cb02c79d02682d3d-ca-certs\") pod \"kube-controller-manager-ci-4230.1.1-n-bf7705109b\" (UID: \"df64489caac4f3f3cb02c79d02682d3d\") " pod="kube-system/kube-controller-manager-ci-4230.1.1-n-bf7705109b" May 14 23:51:30.151767 kubelet[2981]: I0514 23:51:30.151611 2981 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/df64489caac4f3f3cb02c79d02682d3d-flexvolume-dir\") pod \"kube-controller-manager-ci-4230.1.1-n-bf7705109b\" (UID: \"df64489caac4f3f3cb02c79d02682d3d\") " pod="kube-system/kube-controller-manager-ci-4230.1.1-n-bf7705109b" May 14 23:51:30.390286 kubelet[2981]: W0514 23:51:30.390239 2981 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.200.20.40:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.40:6443: connect: connection refused May 14 23:51:30.390568 kubelet[2981]: E0514 23:51:30.390488 2981 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.200.20.40:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.20.40:6443: connect: connection refused" logger="UnhandledError" May 14 23:51:30.408622 containerd[1766]: time="2025-05-14T23:51:30.408581926Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4230.1.1-n-bf7705109b,Uid:0e14ce8ad3505f004d01153be26bc530,Namespace:kube-system,Attempt:0,}" May 14 23:51:30.413630 containerd[1766]: time="2025-05-14T23:51:30.413593330Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4230.1.1-n-bf7705109b,Uid:df64489caac4f3f3cb02c79d02682d3d,Namespace:kube-system,Attempt:0,}" May 14 23:51:30.418624 containerd[1766]: time="2025-05-14T23:51:30.418419134Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4230.1.1-n-bf7705109b,Uid:fc7c8e517ca2b20b56152164c2f011ba,Namespace:kube-system,Attempt:0,}" May 14 23:51:30.549408 kubelet[2981]: I0514 23:51:30.549373 2981 kubelet_node_status.go:76] "Attempting to register node" node="ci-4230.1.1-n-bf7705109b" May 14 23:51:30.549923 kubelet[2981]: E0514 23:51:30.549756 2981 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://10.200.20.40:6443/api/v1/nodes\": dial tcp 10.200.20.40:6443: connect: connection refused" node="ci-4230.1.1-n-bf7705109b" May 14 23:51:30.756000 kubelet[2981]: W0514 23:51:30.755272 2981 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get 
"https://10.200.20.40:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.40:6443: connect: connection refused May 14 23:51:30.756000 kubelet[2981]: E0514 23:51:30.755318 2981 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.200.20.40:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.20.40:6443: connect: connection refused" logger="UnhandledError" May 14 23:51:31.129996 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2950350317.mount: Deactivated successfully. May 14 23:51:31.166870 containerd[1766]: time="2025-05-14T23:51:31.166815140Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 14 23:51:31.185614 containerd[1766]: time="2025-05-14T23:51:31.185565395Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269173" May 14 23:51:31.190370 containerd[1766]: time="2025-05-14T23:51:31.189847359Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 14 23:51:31.194875 containerd[1766]: time="2025-05-14T23:51:31.194084722Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 14 23:51:31.203036 containerd[1766]: time="2025-05-14T23:51:31.202971570Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" May 14 23:51:31.207256 containerd[1766]: time="2025-05-14T23:51:31.206494332Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 14 23:51:31.211695 containerd[1766]: time="2025-05-14T23:51:31.211660857Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 14 23:51:31.212591 containerd[1766]: time="2025-05-14T23:51:31.212539017Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 803.876771ms" May 14 23:51:31.215923 containerd[1766]: time="2025-05-14T23:51:31.215849140Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" May 14 23:51:31.220196 containerd[1766]: time="2025-05-14T23:51:31.220150783Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 801.675449ms" May 14 23:51:31.239071 containerd[1766]: time="2025-05-14T23:51:31.239036239Z" level=info 
msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 825.378229ms" May 14 23:51:31.366379 kubelet[2981]: I0514 23:51:31.366321 2981 kubelet_node_status.go:76] "Attempting to register node" node="ci-4230.1.1-n-bf7705109b" May 14 23:51:31.366796 kubelet[2981]: E0514 23:51:31.366759 2981 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://10.200.20.40:6443/api/v1/nodes\": dial tcp 10.200.20.40:6443: connect: connection refused" node="ci-4230.1.1-n-bf7705109b" May 14 23:51:31.993722 containerd[1766]: time="2025-05-14T23:51:31.993257371Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 14 23:51:31.993722 containerd[1766]: time="2025-05-14T23:51:31.993559971Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 14 23:51:31.993722 containerd[1766]: time="2025-05-14T23:51:31.993577211Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 14 23:51:31.993722 containerd[1766]: time="2025-05-14T23:51:31.993676291Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 14 23:51:32.000371 containerd[1766]: time="2025-05-14T23:51:32.000267576Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 14 23:51:32.001629 containerd[1766]: time="2025-05-14T23:51:32.001404257Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 14 23:51:32.001629 containerd[1766]: time="2025-05-14T23:51:32.001424937Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 14 23:51:32.002308 containerd[1766]: time="2025-05-14T23:51:32.001581617Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 14 23:51:32.002908 containerd[1766]: time="2025-05-14T23:51:32.002593978Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 14 23:51:32.002908 containerd[1766]: time="2025-05-14T23:51:32.002644538Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 14 23:51:32.002908 containerd[1766]: time="2025-05-14T23:51:32.002656058Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 14 23:51:32.002908 containerd[1766]: time="2025-05-14T23:51:32.002726458Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 14 23:51:32.022604 systemd[1]: Started cri-containerd-a70bb6dc2d0f87daf7fb9d810f294e805cd009457c877edf4a0fc4eb7d8b9d34.scope - libcontainer container a70bb6dc2d0f87daf7fb9d810f294e805cd009457c877edf4a0fc4eb7d8b9d34. 
May 14 23:51:32.028231 systemd[1]: Started cri-containerd-a23a9139779912001423be3ddc7ff7a7a030c7eb97968f32978c3d181f81040f.scope - libcontainer container a23a9139779912001423be3ddc7ff7a7a030c7eb97968f32978c3d181f81040f. May 14 23:51:32.030114 systemd[1]: Started cri-containerd-ac06762c801f28ec2aec007d2854fb879678a1832c0ec02e221529c02eb05dc8.scope - libcontainer container ac06762c801f28ec2aec007d2854fb879678a1832c0ec02e221529c02eb05dc8. May 14 23:51:32.076057 containerd[1766]: time="2025-05-14T23:51:32.075838238Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4230.1.1-n-bf7705109b,Uid:df64489caac4f3f3cb02c79d02682d3d,Namespace:kube-system,Attempt:0,} returns sandbox id \"a70bb6dc2d0f87daf7fb9d810f294e805cd009457c877edf4a0fc4eb7d8b9d34\"" May 14 23:51:32.080753 containerd[1766]: time="2025-05-14T23:51:32.080323802Z" level=info msg="CreateContainer within sandbox \"a70bb6dc2d0f87daf7fb9d810f294e805cd009457c877edf4a0fc4eb7d8b9d34\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" May 14 23:51:32.089763 containerd[1766]: time="2025-05-14T23:51:32.089621689Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4230.1.1-n-bf7705109b,Uid:0e14ce8ad3505f004d01153be26bc530,Namespace:kube-system,Attempt:0,} returns sandbox id \"ac06762c801f28ec2aec007d2854fb879678a1832c0ec02e221529c02eb05dc8\"" May 14 23:51:32.092050 containerd[1766]: time="2025-05-14T23:51:32.091976531Z" level=info msg="CreateContainer within sandbox \"ac06762c801f28ec2aec007d2854fb879678a1832c0ec02e221529c02eb05dc8\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" May 14 23:51:32.093921 containerd[1766]: time="2025-05-14T23:51:32.093883013Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4230.1.1-n-bf7705109b,Uid:fc7c8e517ca2b20b56152164c2f011ba,Namespace:kube-system,Attempt:0,} returns sandbox id \"a23a9139779912001423be3ddc7ff7a7a030c7eb97968f32978c3d181f81040f\"" May 14 23:51:32.097099 containerd[1766]: time="2025-05-14T23:51:32.097074895Z" level=info msg="CreateContainer within sandbox \"a23a9139779912001423be3ddc7ff7a7a030c7eb97968f32978c3d181f81040f\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" May 14 23:51:32.148163 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3206668729.mount: Deactivated successfully. May 14 23:51:32.152104 containerd[1766]: time="2025-05-14T23:51:32.152067540Z" level=info msg="CreateContainer within sandbox \"a70bb6dc2d0f87daf7fb9d810f294e805cd009457c877edf4a0fc4eb7d8b9d34\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"fd1964ca01a8895226ff04083f95ceba413de9539f6806197ac52529d00813fc\"" May 14 23:51:32.152997 containerd[1766]: time="2025-05-14T23:51:32.152964221Z" level=info msg="StartContainer for \"fd1964ca01a8895226ff04083f95ceba413de9539f6806197ac52529d00813fc\"" May 14 23:51:32.159169 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4178669394.mount: Deactivated successfully. 
May 14 23:51:32.172842 containerd[1766]: time="2025-05-14T23:51:32.172726197Z" level=info msg="CreateContainer within sandbox \"ac06762c801f28ec2aec007d2854fb879678a1832c0ec02e221529c02eb05dc8\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"e3f353995b436e07d9a391539d0f6ec7938f0c0f4d7d46166b43726aa4b99df7\"" May 14 23:51:32.174440 containerd[1766]: time="2025-05-14T23:51:32.173404797Z" level=info msg="StartContainer for \"e3f353995b436e07d9a391539d0f6ec7938f0c0f4d7d46166b43726aa4b99df7\"" May 14 23:51:32.181594 systemd[1]: Started cri-containerd-fd1964ca01a8895226ff04083f95ceba413de9539f6806197ac52529d00813fc.scope - libcontainer container fd1964ca01a8895226ff04083f95ceba413de9539f6806197ac52529d00813fc. May 14 23:51:32.184521 containerd[1766]: time="2025-05-14T23:51:32.184478286Z" level=info msg="CreateContainer within sandbox \"a23a9139779912001423be3ddc7ff7a7a030c7eb97968f32978c3d181f81040f\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"9eb41ffb91910d6ef247e4f8b8d3a84fa2344228e71a0adb601b48e1f47b4c44\"" May 14 23:51:32.187926 containerd[1766]: time="2025-05-14T23:51:32.187614169Z" level=info msg="StartContainer for \"9eb41ffb91910d6ef247e4f8b8d3a84fa2344228e71a0adb601b48e1f47b4c44\"" May 14 23:51:32.210505 systemd[1]: Started cri-containerd-e3f353995b436e07d9a391539d0f6ec7938f0c0f4d7d46166b43726aa4b99df7.scope - libcontainer container e3f353995b436e07d9a391539d0f6ec7938f0c0f4d7d46166b43726aa4b99df7. May 14 23:51:32.229501 systemd[1]: Started cri-containerd-9eb41ffb91910d6ef247e4f8b8d3a84fa2344228e71a0adb601b48e1f47b4c44.scope - libcontainer container 9eb41ffb91910d6ef247e4f8b8d3a84fa2344228e71a0adb601b48e1f47b4c44. May 14 23:51:32.241071 containerd[1766]: time="2025-05-14T23:51:32.240849012Z" level=info msg="StartContainer for \"fd1964ca01a8895226ff04083f95ceba413de9539f6806197ac52529d00813fc\" returns successfully" May 14 23:51:32.295789 containerd[1766]: time="2025-05-14T23:51:32.295332136Z" level=info msg="StartContainer for \"9eb41ffb91910d6ef247e4f8b8d3a84fa2344228e71a0adb601b48e1f47b4c44\" returns successfully" May 14 23:51:32.296926 containerd[1766]: time="2025-05-14T23:51:32.295332416Z" level=info msg="StartContainer for \"e3f353995b436e07d9a391539d0f6ec7938f0c0f4d7d46166b43726aa4b99df7\" returns successfully" May 14 23:51:32.400462 kubelet[2981]: E0514 23:51:32.400169 2981 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230.1.1-n-bf7705109b\" not found" node="ci-4230.1.1-n-bf7705109b" May 14 23:51:32.406492 kubelet[2981]: E0514 23:51:32.405613 2981 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230.1.1-n-bf7705109b\" not found" node="ci-4230.1.1-n-bf7705109b" May 14 23:51:32.408323 kubelet[2981]: E0514 23:51:32.408203 2981 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230.1.1-n-bf7705109b\" not found" node="ci-4230.1.1-n-bf7705109b" May 14 23:51:32.969869 kubelet[2981]: I0514 23:51:32.969097 2981 kubelet_node_status.go:76] "Attempting to register node" node="ci-4230.1.1-n-bf7705109b" May 14 23:51:33.409359 kubelet[2981]: E0514 23:51:33.408940 2981 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230.1.1-n-bf7705109b\" not found" node="ci-4230.1.1-n-bf7705109b" May 14 23:51:33.411389 kubelet[2981]: E0514 23:51:33.410587 2981 kubelet.go:3196] "No need to 
create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230.1.1-n-bf7705109b\" not found" node="ci-4230.1.1-n-bf7705109b" May 14 23:51:34.077244 kubelet[2981]: E0514 23:51:34.077205 2981 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4230.1.1-n-bf7705109b\" not found" node="ci-4230.1.1-n-bf7705109b" May 14 23:51:34.159317 kubelet[2981]: I0514 23:51:34.158975 2981 kubelet_node_status.go:79] "Successfully registered node" node="ci-4230.1.1-n-bf7705109b" May 14 23:51:34.159582 kubelet[2981]: E0514 23:51:34.159566 2981 kubelet_node_status.go:549] "Error updating node status, will retry" err="error getting node \"ci-4230.1.1-n-bf7705109b\": node \"ci-4230.1.1-n-bf7705109b\" not found" May 14 23:51:34.190592 kubelet[2981]: E0514 23:51:34.190553 2981 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ci-4230.1.1-n-bf7705109b\" not found" May 14 23:51:34.290874 kubelet[2981]: E0514 23:51:34.290834 2981 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ci-4230.1.1-n-bf7705109b\" not found" May 14 23:51:34.391567 kubelet[2981]: E0514 23:51:34.391530 2981 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ci-4230.1.1-n-bf7705109b\" not found" May 14 23:51:34.410751 kubelet[2981]: E0514 23:51:34.410701 2981 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230.1.1-n-bf7705109b\" not found" node="ci-4230.1.1-n-bf7705109b" May 14 23:51:34.492504 kubelet[2981]: E0514 23:51:34.492464 2981 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ci-4230.1.1-n-bf7705109b\" not found" May 14 23:51:34.593033 kubelet[2981]: E0514 23:51:34.592985 2981 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ci-4230.1.1-n-bf7705109b\" not found" May 14 23:51:34.693619 kubelet[2981]: E0514 23:51:34.693510 2981 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ci-4230.1.1-n-bf7705109b\" not found" May 14 23:51:34.794369 kubelet[2981]: E0514 23:51:34.794323 2981 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ci-4230.1.1-n-bf7705109b\" not found" May 14 23:51:34.886863 kubelet[2981]: E0514 23:51:34.886468 2981 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230.1.1-n-bf7705109b\" not found" node="ci-4230.1.1-n-bf7705109b" May 14 23:51:34.895270 kubelet[2981]: E0514 23:51:34.895245 2981 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ci-4230.1.1-n-bf7705109b\" not found" May 14 23:51:34.996515 kubelet[2981]: E0514 23:51:34.996352 2981 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ci-4230.1.1-n-bf7705109b\" not found" May 14 23:51:35.097489 kubelet[2981]: E0514 23:51:35.097429 2981 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ci-4230.1.1-n-bf7705109b\" not found" May 14 23:51:35.198838 kubelet[2981]: E0514 23:51:35.198795 2981 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ci-4230.1.1-n-bf7705109b\" not found" May 14 23:51:35.299983 kubelet[2981]: E0514 23:51:35.299855 2981 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ci-4230.1.1-n-bf7705109b\" not found" May 14 23:51:35.400487 
kubelet[2981]: E0514 23:51:35.400447 2981 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ci-4230.1.1-n-bf7705109b\" not found" May 14 23:51:35.502249 kubelet[2981]: E0514 23:51:35.501513 2981 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ci-4230.1.1-n-bf7705109b\" not found" May 14 23:51:35.602331 kubelet[2981]: E0514 23:51:35.602071 2981 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ci-4230.1.1-n-bf7705109b\" not found" May 14 23:51:35.703121 kubelet[2981]: E0514 23:51:35.703077 2981 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ci-4230.1.1-n-bf7705109b\" not found" May 14 23:51:35.829275 kubelet[2981]: I0514 23:51:35.829222 2981 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4230.1.1-n-bf7705109b" May 14 23:51:35.842555 kubelet[2981]: W0514 23:51:35.842207 2981 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] May 14 23:51:35.842555 kubelet[2981]: I0514 23:51:35.842336 2981 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4230.1.1-n-bf7705109b" May 14 23:51:35.849789 kubelet[2981]: W0514 23:51:35.849768 2981 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] May 14 23:51:35.850019 kubelet[2981]: I0514 23:51:35.850003 2981 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4230.1.1-n-bf7705109b" May 14 23:51:35.859509 kubelet[2981]: W0514 23:51:35.859415 2981 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] May 14 23:51:36.025956 systemd[1]: Reload requested from client PID 3256 ('systemctl') (unit session-9.scope)... May 14 23:51:36.025970 systemd[1]: Reloading... May 14 23:51:36.139374 zram_generator::config[3303]: No configuration found. May 14 23:51:36.241646 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 14 23:51:36.327947 kubelet[2981]: I0514 23:51:36.326637 2981 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4230.1.1-n-bf7705109b" May 14 23:51:36.335398 kubelet[2981]: W0514 23:51:36.335370 2981 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] May 14 23:51:36.335983 kubelet[2981]: E0514 23:51:36.335588 2981 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4230.1.1-n-bf7705109b\" already exists" pod="kube-system/kube-scheduler-ci-4230.1.1-n-bf7705109b" May 14 23:51:36.358974 systemd[1]: Reloading finished in 332 ms. May 14 23:51:36.392217 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... May 14 23:51:36.392889 kubelet[2981]: I0514 23:51:36.392577 2981 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 14 23:51:36.404767 systemd[1]: kubelet.service: Deactivated successfully. 
May 14 23:51:36.406399 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 14 23:51:36.406458 systemd[1]: kubelet.service: Consumed 961ms CPU time, 122.2M memory peak. May 14 23:51:36.411580 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 14 23:51:36.519379 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 14 23:51:36.524107 (kubelet)[3367]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 14 23:51:36.567843 kubelet[3367]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 14 23:51:36.567843 kubelet[3367]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. May 14 23:51:36.567843 kubelet[3367]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 14 23:51:36.570162 kubelet[3367]: I0514 23:51:36.568885 3367 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 14 23:51:36.580738 kubelet[3367]: I0514 23:51:36.580708 3367 server.go:520] "Kubelet version" kubeletVersion="v1.32.0" May 14 23:51:36.581118 kubelet[3367]: I0514 23:51:36.581102 3367 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 14 23:51:36.581519 kubelet[3367]: I0514 23:51:36.581501 3367 server.go:954] "Client rotation is on, will bootstrap in background" May 14 23:51:36.582877 kubelet[3367]: I0514 23:51:36.582857 3367 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". May 14 23:51:36.586059 kubelet[3367]: I0514 23:51:36.586017 3367 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 14 23:51:36.590362 kubelet[3367]: E0514 23:51:36.590314 3367 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" May 14 23:51:36.590433 kubelet[3367]: I0514 23:51:36.590426 3367 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." May 14 23:51:36.593801 kubelet[3367]: I0514 23:51:36.593770 3367 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" May 14 23:51:36.593982 kubelet[3367]: I0514 23:51:36.593945 3367 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 14 23:51:36.594151 kubelet[3367]: I0514 23:51:36.593976 3367 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4230.1.1-n-bf7705109b","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} May 14 23:51:36.594227 kubelet[3367]: I0514 23:51:36.594158 3367 topology_manager.go:138] "Creating topology manager with none policy" May 14 23:51:36.594227 kubelet[3367]: I0514 23:51:36.594167 3367 container_manager_linux.go:304] "Creating device plugin manager" May 14 23:51:36.594227 kubelet[3367]: I0514 23:51:36.594208 3367 state_mem.go:36] "Initialized new in-memory state store" May 14 23:51:36.596416 kubelet[3367]: I0514 23:51:36.594447 3367 kubelet.go:446] "Attempting to sync node with API server" May 14 23:51:36.596416 kubelet[3367]: I0514 23:51:36.594478 3367 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" May 14 23:51:36.596416 kubelet[3367]: I0514 23:51:36.594499 3367 kubelet.go:352] "Adding apiserver pod source" May 14 23:51:36.596416 kubelet[3367]: I0514 23:51:36.594509 3367 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 14 23:51:36.602604 kubelet[3367]: I0514 23:51:36.602580 3367 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" May 14 23:51:36.603238 kubelet[3367]: I0514 23:51:36.603221 3367 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 14 23:51:36.603836 kubelet[3367]: I0514 23:51:36.603814 3367 watchdog_linux.go:99] "Systemd watchdog is not enabled" May 14 23:51:36.603940 kubelet[3367]: I0514 23:51:36.603930 3367 server.go:1287] "Started kubelet" May 14 23:51:36.607506 kubelet[3367]: I0514 23:51:36.607488 3367 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 14 23:51:36.608583 kubelet[3367]: I0514 23:51:36.608537 3367 server.go:169] 
"Starting to listen" address="0.0.0.0" port=10250 May 14 23:51:36.610081 kubelet[3367]: I0514 23:51:36.609820 3367 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 14 23:51:36.610730 kubelet[3367]: I0514 23:51:36.610702 3367 volume_manager.go:297] "Starting Kubelet Volume Manager" May 14 23:51:36.610823 kubelet[3367]: E0514 23:51:36.610801 3367 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ci-4230.1.1-n-bf7705109b\" not found" May 14 23:51:36.611996 kubelet[3367]: I0514 23:51:36.611973 3367 desired_state_of_world_populator.go:149] "Desired state populator starts to run" May 14 23:51:36.612130 kubelet[3367]: I0514 23:51:36.612112 3367 reconciler.go:26] "Reconciler: start to sync state" May 14 23:51:36.617906 kubelet[3367]: I0514 23:51:36.617867 3367 server.go:490] "Adding debug handlers to kubelet server" May 14 23:51:36.618815 kubelet[3367]: I0514 23:51:36.618759 3367 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 14 23:51:36.619025 kubelet[3367]: I0514 23:51:36.619005 3367 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 14 23:51:36.624031 kubelet[3367]: I0514 23:51:36.622966 3367 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 14 23:51:36.624121 kubelet[3367]: I0514 23:51:36.624040 3367 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" May 14 23:51:36.624121 kubelet[3367]: I0514 23:51:36.624059 3367 status_manager.go:227] "Starting to sync pod status with apiserver" May 14 23:51:36.624121 kubelet[3367]: I0514 23:51:36.624097 3367 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
May 14 23:51:36.624121 kubelet[3367]: I0514 23:51:36.624104 3367 kubelet.go:2388] "Starting kubelet main sync loop" May 14 23:51:36.624204 kubelet[3367]: E0514 23:51:36.624173 3367 kubelet.go:2412] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 14 23:51:36.629602 kubelet[3367]: I0514 23:51:36.628042 3367 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 14 23:51:36.634254 kubelet[3367]: I0514 23:51:36.634231 3367 factory.go:221] Registration of the containerd container factory successfully May 14 23:51:36.634422 kubelet[3367]: I0514 23:51:36.634411 3367 factory.go:221] Registration of the systemd container factory successfully May 14 23:51:36.698133 kubelet[3367]: I0514 23:51:36.698037 3367 cpu_manager.go:221] "Starting CPU manager" policy="none" May 14 23:51:36.698284 kubelet[3367]: I0514 23:51:36.698271 3367 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" May 14 23:51:36.698412 kubelet[3367]: I0514 23:51:36.698402 3367 state_mem.go:36] "Initialized new in-memory state store" May 14 23:51:36.698717 kubelet[3367]: I0514 23:51:36.698694 3367 state_mem.go:88] "Updated default CPUSet" cpuSet="" May 14 23:51:36.698812 kubelet[3367]: I0514 23:51:36.698787 3367 state_mem.go:96] "Updated CPUSet assignments" assignments={} May 14 23:51:36.698864 kubelet[3367]: I0514 23:51:36.698856 3367 policy_none.go:49] "None policy: Start" May 14 23:51:36.698909 kubelet[3367]: I0514 23:51:36.698902 3367 memory_manager.go:186] "Starting memorymanager" policy="None" May 14 23:51:36.698961 kubelet[3367]: I0514 23:51:36.698953 3367 state_mem.go:35] "Initializing new in-memory state store" May 14 23:51:36.699111 kubelet[3367]: I0514 23:51:36.699099 3367 state_mem.go:75] "Updated machine memory state" May 14 23:51:36.703253 kubelet[3367]: I0514 23:51:36.703073 3367 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 14 23:51:36.703702 kubelet[3367]: I0514 23:51:36.703610 3367 eviction_manager.go:189] "Eviction manager: starting control loop" May 14 23:51:36.703888 kubelet[3367]: I0514 23:51:36.703780 3367 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 14 23:51:36.704485 kubelet[3367]: I0514 23:51:36.704169 3367 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 14 23:51:36.706932 kubelet[3367]: E0514 23:51:36.706912 3367 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" May 14 23:51:36.725020 kubelet[3367]: I0514 23:51:36.724939 3367 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4230.1.1-n-bf7705109b" May 14 23:51:36.725521 kubelet[3367]: I0514 23:51:36.725285 3367 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4230.1.1-n-bf7705109b" May 14 23:51:36.725521 kubelet[3367]: I0514 23:51:36.724948 3367 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4230.1.1-n-bf7705109b" May 14 23:51:36.736601 kubelet[3367]: W0514 23:51:36.736557 3367 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] May 14 23:51:36.736726 kubelet[3367]: E0514 23:51:36.736621 3367 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4230.1.1-n-bf7705109b\" already exists" pod="kube-system/kube-scheduler-ci-4230.1.1-n-bf7705109b" May 14 23:51:36.737643 kubelet[3367]: W0514 23:51:36.737447 3367 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] May 14 23:51:36.737643 kubelet[3367]: W0514 23:51:36.737550 3367 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] May 14 23:51:36.737643 kubelet[3367]: E0514 23:51:36.737557 3367 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4230.1.1-n-bf7705109b\" already exists" pod="kube-system/kube-controller-manager-ci-4230.1.1-n-bf7705109b" May 14 23:51:36.737643 kubelet[3367]: E0514 23:51:36.737594 3367 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4230.1.1-n-bf7705109b\" already exists" pod="kube-system/kube-apiserver-ci-4230.1.1-n-bf7705109b" May 14 23:51:36.810372 kubelet[3367]: I0514 23:51:36.809824 3367 kubelet_node_status.go:76] "Attempting to register node" node="ci-4230.1.1-n-bf7705109b" May 14 23:51:36.822984 kubelet[3367]: I0514 23:51:36.822945 3367 kubelet_node_status.go:125] "Node was previously registered" node="ci-4230.1.1-n-bf7705109b" May 14 23:51:36.823117 kubelet[3367]: I0514 23:51:36.823037 3367 kubelet_node_status.go:79] "Successfully registered node" node="ci-4230.1.1-n-bf7705109b" May 14 23:51:36.913242 kubelet[3367]: I0514 23:51:36.913134 3367 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/0e14ce8ad3505f004d01153be26bc530-ca-certs\") pod \"kube-apiserver-ci-4230.1.1-n-bf7705109b\" (UID: \"0e14ce8ad3505f004d01153be26bc530\") " pod="kube-system/kube-apiserver-ci-4230.1.1-n-bf7705109b" May 14 23:51:36.913242 kubelet[3367]: I0514 23:51:36.913207 3367 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/0e14ce8ad3505f004d01153be26bc530-k8s-certs\") pod \"kube-apiserver-ci-4230.1.1-n-bf7705109b\" (UID: \"0e14ce8ad3505f004d01153be26bc530\") " pod="kube-system/kube-apiserver-ci-4230.1.1-n-bf7705109b" May 14 23:51:36.913242 kubelet[3367]: I0514 23:51:36.913227 3367 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/df64489caac4f3f3cb02c79d02682d3d-ca-certs\") 
pod \"kube-controller-manager-ci-4230.1.1-n-bf7705109b\" (UID: \"df64489caac4f3f3cb02c79d02682d3d\") " pod="kube-system/kube-controller-manager-ci-4230.1.1-n-bf7705109b" May 14 23:51:36.913242 kubelet[3367]: I0514 23:51:36.913252 3367 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/df64489caac4f3f3cb02c79d02682d3d-k8s-certs\") pod \"kube-controller-manager-ci-4230.1.1-n-bf7705109b\" (UID: \"df64489caac4f3f3cb02c79d02682d3d\") " pod="kube-system/kube-controller-manager-ci-4230.1.1-n-bf7705109b" May 14 23:51:36.913584 kubelet[3367]: I0514 23:51:36.913270 3367 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/fc7c8e517ca2b20b56152164c2f011ba-kubeconfig\") pod \"kube-scheduler-ci-4230.1.1-n-bf7705109b\" (UID: \"fc7c8e517ca2b20b56152164c2f011ba\") " pod="kube-system/kube-scheduler-ci-4230.1.1-n-bf7705109b" May 14 23:51:36.913584 kubelet[3367]: I0514 23:51:36.913288 3367 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/0e14ce8ad3505f004d01153be26bc530-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4230.1.1-n-bf7705109b\" (UID: \"0e14ce8ad3505f004d01153be26bc530\") " pod="kube-system/kube-apiserver-ci-4230.1.1-n-bf7705109b" May 14 23:51:36.913584 kubelet[3367]: I0514 23:51:36.913305 3367 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/df64489caac4f3f3cb02c79d02682d3d-flexvolume-dir\") pod \"kube-controller-manager-ci-4230.1.1-n-bf7705109b\" (UID: \"df64489caac4f3f3cb02c79d02682d3d\") " pod="kube-system/kube-controller-manager-ci-4230.1.1-n-bf7705109b" May 14 23:51:36.913584 kubelet[3367]: I0514 23:51:36.913319 3367 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/df64489caac4f3f3cb02c79d02682d3d-kubeconfig\") pod \"kube-controller-manager-ci-4230.1.1-n-bf7705109b\" (UID: \"df64489caac4f3f3cb02c79d02682d3d\") " pod="kube-system/kube-controller-manager-ci-4230.1.1-n-bf7705109b" May 14 23:51:36.913584 kubelet[3367]: I0514 23:51:36.913361 3367 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/df64489caac4f3f3cb02c79d02682d3d-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4230.1.1-n-bf7705109b\" (UID: \"df64489caac4f3f3cb02c79d02682d3d\") " pod="kube-system/kube-controller-manager-ci-4230.1.1-n-bf7705109b" May 14 23:51:37.596431 kubelet[3367]: I0514 23:51:37.596392 3367 apiserver.go:52] "Watching apiserver" May 14 23:51:37.612439 kubelet[3367]: I0514 23:51:37.612396 3367 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" May 14 23:51:39.916462 kubelet[3367]: I0514 23:51:37.736422 3367 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4230.1.1-n-bf7705109b" podStartSLOduration=2.736403523 podStartE2EDuration="2.736403523s" podCreationTimestamp="2025-05-14 23:51:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-14 23:51:37.735914923 +0000 UTC m=+1.207896944" 
watchObservedRunningTime="2025-05-14 23:51:37.736403523 +0000 UTC m=+1.208385544" May 14 23:51:39.916462 kubelet[3367]: I0514 23:51:37.736554 3367 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4230.1.1-n-bf7705109b" podStartSLOduration=2.736550244 podStartE2EDuration="2.736550244s" podCreationTimestamp="2025-05-14 23:51:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-14 23:51:37.718578229 +0000 UTC m=+1.190560250" watchObservedRunningTime="2025-05-14 23:51:37.736550244 +0000 UTC m=+1.208532265" May 14 23:51:39.916462 kubelet[3367]: I0514 23:51:37.760499 3367 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4230.1.1-n-bf7705109b" podStartSLOduration=2.760480303 podStartE2EDuration="2.760480303s" podCreationTimestamp="2025-05-14 23:51:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-14 23:51:37.746234091 +0000 UTC m=+1.218216112" watchObservedRunningTime="2025-05-14 23:51:37.760480303 +0000 UTC m=+1.232462324" May 14 23:51:39.979887 sudo[3400]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin May 14 23:51:39.980188 sudo[3400]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) May 14 23:51:40.336357 kubelet[3367]: I0514 23:51:40.336141 3367 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" May 14 23:51:40.337072 containerd[1766]: time="2025-05-14T23:51:40.337031605Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." May 14 23:51:40.338673 kubelet[3367]: I0514 23:51:40.337699 3367 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" May 14 23:51:40.459375 sudo[3400]: pam_unix(sudo:session): session closed for user root May 14 23:51:41.366863 systemd[1]: Created slice kubepods-besteffort-podd99d4af3_09ad_4719_9737_4fea5a5c5dc0.slice - libcontainer container kubepods-besteffort-podd99d4af3_09ad_4719_9737_4fea5a5c5dc0.slice. May 14 23:51:41.382942 systemd[1]: Created slice kubepods-burstable-pod9802231f_c231_4831_96a2_3393633b9c5c.slice - libcontainer container kubepods-burstable-pod9802231f_c231_4831_96a2_3393633b9c5c.slice. 
May 14 23:51:41.386433 kubelet[3367]: W0514 23:51:41.385462 3367 reflector.go:569] object-"kube-system"/"cilium-clustermesh": failed to list *v1.Secret: secrets "cilium-clustermesh" is forbidden: User "system:node:ci-4230.1.1-n-bf7705109b" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4230.1.1-n-bf7705109b' and this object May 14 23:51:41.386433 kubelet[3367]: E0514 23:51:41.385499 3367 reflector.go:166] "Unhandled Error" err="object-\"kube-system\"/\"cilium-clustermesh\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"cilium-clustermesh\" is forbidden: User \"system:node:ci-4230.1.1-n-bf7705109b\" cannot list resource \"secrets\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-4230.1.1-n-bf7705109b' and this object" logger="UnhandledError" May 14 23:51:41.386433 kubelet[3367]: W0514 23:51:41.385538 3367 reflector.go:569] object-"kube-system"/"hubble-server-certs": failed to list *v1.Secret: secrets "hubble-server-certs" is forbidden: User "system:node:ci-4230.1.1-n-bf7705109b" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4230.1.1-n-bf7705109b' and this object May 14 23:51:41.386433 kubelet[3367]: E0514 23:51:41.385550 3367 reflector.go:166] "Unhandled Error" err="object-\"kube-system\"/\"hubble-server-certs\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"hubble-server-certs\" is forbidden: User \"system:node:ci-4230.1.1-n-bf7705109b\" cannot list resource \"secrets\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-4230.1.1-n-bf7705109b' and this object" logger="UnhandledError" May 14 23:51:41.386433 kubelet[3367]: W0514 23:51:41.385955 3367 reflector.go:569] object-"kube-system"/"cilium-config": failed to list *v1.ConfigMap: configmaps "cilium-config" is forbidden: User "system:node:ci-4230.1.1-n-bf7705109b" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4230.1.1-n-bf7705109b' and this object May 14 23:51:41.386787 kubelet[3367]: E0514 23:51:41.385981 3367 reflector.go:166] "Unhandled Error" err="object-\"kube-system\"/\"cilium-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"cilium-config\" is forbidden: User \"system:node:ci-4230.1.1-n-bf7705109b\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-4230.1.1-n-bf7705109b' and this object" logger="UnhandledError" May 14 23:51:41.442767 kubelet[3367]: I0514 23:51:41.442272 3367 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d99d4af3-09ad-4719-9737-4fea5a5c5dc0-lib-modules\") pod \"kube-proxy-x254c\" (UID: \"d99d4af3-09ad-4719-9737-4fea5a5c5dc0\") " pod="kube-system/kube-proxy-x254c" May 14 23:51:41.442767 kubelet[3367]: I0514 23:51:41.442320 3367 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/9802231f-c231-4831-96a2-3393633b9c5c-cilium-config-path\") pod \"cilium-qvqk7\" (UID: \"9802231f-c231-4831-96a2-3393633b9c5c\") " pod="kube-system/cilium-qvqk7" May 14 23:51:41.442767 kubelet[3367]: I0514 23:51:41.442356 3367 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/9802231f-c231-4831-96a2-3393633b9c5c-host-proc-sys-kernel\") pod \"cilium-qvqk7\" (UID: \"9802231f-c231-4831-96a2-3393633b9c5c\") " pod="kube-system/cilium-qvqk7" May 14 23:51:41.442767 kubelet[3367]: I0514 23:51:41.442373 3367 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/9802231f-c231-4831-96a2-3393633b9c5c-cilium-run\") pod \"cilium-qvqk7\" (UID: \"9802231f-c231-4831-96a2-3393633b9c5c\") " pod="kube-system/cilium-qvqk7" May 14 23:51:41.442767 kubelet[3367]: I0514 23:51:41.442410 3367 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/9802231f-c231-4831-96a2-3393633b9c5c-bpf-maps\") pod \"cilium-qvqk7\" (UID: \"9802231f-c231-4831-96a2-3393633b9c5c\") " pod="kube-system/cilium-qvqk7" May 14 23:51:41.442767 kubelet[3367]: I0514 23:51:41.442427 3367 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/9802231f-c231-4831-96a2-3393633b9c5c-etc-cni-netd\") pod \"cilium-qvqk7\" (UID: \"9802231f-c231-4831-96a2-3393633b9c5c\") " pod="kube-system/cilium-qvqk7" May 14 23:51:41.443009 kubelet[3367]: I0514 23:51:41.442443 3367 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/9802231f-c231-4831-96a2-3393633b9c5c-clustermesh-secrets\") pod \"cilium-qvqk7\" (UID: \"9802231f-c231-4831-96a2-3393633b9c5c\") " pod="kube-system/cilium-qvqk7" May 14 23:51:41.443009 kubelet[3367]: I0514 23:51:41.442458 3367 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/9802231f-c231-4831-96a2-3393633b9c5c-hubble-tls\") pod \"cilium-qvqk7\" (UID: \"9802231f-c231-4831-96a2-3393633b9c5c\") " pod="kube-system/cilium-qvqk7" May 14 23:51:41.443009 kubelet[3367]: I0514 23:51:41.442472 3367 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/9802231f-c231-4831-96a2-3393633b9c5c-cni-path\") pod \"cilium-qvqk7\" (UID: \"9802231f-c231-4831-96a2-3393633b9c5c\") " pod="kube-system/cilium-qvqk7" May 14 23:51:41.443009 kubelet[3367]: I0514 23:51:41.442486 3367 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/9802231f-c231-4831-96a2-3393633b9c5c-hostproc\") pod \"cilium-qvqk7\" (UID: \"9802231f-c231-4831-96a2-3393633b9c5c\") " pod="kube-system/cilium-qvqk7" May 14 23:51:41.443009 kubelet[3367]: I0514 23:51:41.442507 3367 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/9802231f-c231-4831-96a2-3393633b9c5c-host-proc-sys-net\") pod \"cilium-qvqk7\" (UID: \"9802231f-c231-4831-96a2-3393633b9c5c\") " pod="kube-system/cilium-qvqk7" May 14 23:51:41.443009 kubelet[3367]: I0514 23:51:41.442526 3367 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hwgvt\" (UniqueName: \"kubernetes.io/projected/d99d4af3-09ad-4719-9737-4fea5a5c5dc0-kube-api-access-hwgvt\") pod 
\"kube-proxy-x254c\" (UID: \"d99d4af3-09ad-4719-9737-4fea5a5c5dc0\") " pod="kube-system/kube-proxy-x254c" May 14 23:51:41.443127 kubelet[3367]: I0514 23:51:41.442543 3367 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/9802231f-c231-4831-96a2-3393633b9c5c-cilium-cgroup\") pod \"cilium-qvqk7\" (UID: \"9802231f-c231-4831-96a2-3393633b9c5c\") " pod="kube-system/cilium-qvqk7" May 14 23:51:41.443127 kubelet[3367]: I0514 23:51:41.442558 3367 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/d99d4af3-09ad-4719-9737-4fea5a5c5dc0-kube-proxy\") pod \"kube-proxy-x254c\" (UID: \"d99d4af3-09ad-4719-9737-4fea5a5c5dc0\") " pod="kube-system/kube-proxy-x254c" May 14 23:51:41.443127 kubelet[3367]: I0514 23:51:41.442574 3367 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d99d4af3-09ad-4719-9737-4fea5a5c5dc0-xtables-lock\") pod \"kube-proxy-x254c\" (UID: \"d99d4af3-09ad-4719-9737-4fea5a5c5dc0\") " pod="kube-system/kube-proxy-x254c" May 14 23:51:41.443127 kubelet[3367]: I0514 23:51:41.442588 3367 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9802231f-c231-4831-96a2-3393633b9c5c-lib-modules\") pod \"cilium-qvqk7\" (UID: \"9802231f-c231-4831-96a2-3393633b9c5c\") " pod="kube-system/cilium-qvqk7" May 14 23:51:41.443127 kubelet[3367]: I0514 23:51:41.442603 3367 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9802231f-c231-4831-96a2-3393633b9c5c-xtables-lock\") pod \"cilium-qvqk7\" (UID: \"9802231f-c231-4831-96a2-3393633b9c5c\") " pod="kube-system/cilium-qvqk7" May 14 23:51:41.443127 kubelet[3367]: I0514 23:51:41.442618 3367 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mf4p7\" (UniqueName: \"kubernetes.io/projected/9802231f-c231-4831-96a2-3393633b9c5c-kube-api-access-mf4p7\") pod \"cilium-qvqk7\" (UID: \"9802231f-c231-4831-96a2-3393633b9c5c\") " pod="kube-system/cilium-qvqk7" May 14 23:51:41.556512 systemd[1]: Created slice kubepods-besteffort-pod72a538d3_817d_4308_b25c_cfb0d11e33b8.slice - libcontainer container kubepods-besteffort-pod72a538d3_817d_4308_b25c_cfb0d11e33b8.slice. 
May 14 23:51:41.644453 kubelet[3367]: I0514 23:51:41.643778 3367 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/72a538d3-817d-4308-b25c-cfb0d11e33b8-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-7t7vc\" (UID: \"72a538d3-817d-4308-b25c-cfb0d11e33b8\") " pod="kube-system/cilium-operator-6c4d7847fc-7t7vc" May 14 23:51:41.644453 kubelet[3367]: I0514 23:51:41.643837 3367 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t77x2\" (UniqueName: \"kubernetes.io/projected/72a538d3-817d-4308-b25c-cfb0d11e33b8-kube-api-access-t77x2\") pod \"cilium-operator-6c4d7847fc-7t7vc\" (UID: \"72a538d3-817d-4308-b25c-cfb0d11e33b8\") " pod="kube-system/cilium-operator-6c4d7847fc-7t7vc" May 14 23:51:41.679276 containerd[1766]: time="2025-05-14T23:51:41.679196988Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-x254c,Uid:d99d4af3-09ad-4719-9737-4fea5a5c5dc0,Namespace:kube-system,Attempt:0,}" May 14 23:51:42.544583 kubelet[3367]: E0514 23:51:42.544528 3367 configmap.go:193] Couldn't get configMap kube-system/cilium-config: failed to sync configmap cache: timed out waiting for the condition May 14 23:51:42.544955 kubelet[3367]: E0514 23:51:42.544637 3367 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/9802231f-c231-4831-96a2-3393633b9c5c-cilium-config-path podName:9802231f-c231-4831-96a2-3393633b9c5c nodeName:}" failed. No retries permitted until 2025-05-14 23:51:43.04461598 +0000 UTC m=+6.516597961 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cilium-config-path" (UniqueName: "kubernetes.io/configmap/9802231f-c231-4831-96a2-3393633b9c5c-cilium-config-path") pod "cilium-qvqk7" (UID: "9802231f-c231-4831-96a2-3393633b9c5c") : failed to sync configmap cache: timed out waiting for the condition May 14 23:51:42.544955 kubelet[3367]: E0514 23:51:42.544542 3367 projected.go:263] Couldn't get secret kube-system/hubble-server-certs: failed to sync secret cache: timed out waiting for the condition May 14 23:51:42.544955 kubelet[3367]: E0514 23:51:42.544897 3367 projected.go:194] Error preparing data for projected volume hubble-tls for pod kube-system/cilium-qvqk7: failed to sync secret cache: timed out waiting for the condition May 14 23:51:42.544955 kubelet[3367]: E0514 23:51:42.544932 3367 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9802231f-c231-4831-96a2-3393633b9c5c-hubble-tls podName:9802231f-c231-4831-96a2-3393633b9c5c nodeName:}" failed. No retries permitted until 2025-05-14 23:51:43.04492334 +0000 UTC m=+6.516905361 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "hubble-tls" (UniqueName: "kubernetes.io/projected/9802231f-c231-4831-96a2-3393633b9c5c-hubble-tls") pod "cilium-qvqk7" (UID: "9802231f-c231-4831-96a2-3393633b9c5c") : failed to sync secret cache: timed out waiting for the condition May 14 23:51:42.759888 containerd[1766]: time="2025-05-14T23:51:42.759835277Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-7t7vc,Uid:72a538d3-817d-4308-b25c-cfb0d11e33b8,Namespace:kube-system,Attempt:0,}" May 14 23:51:43.189057 containerd[1766]: time="2025-05-14T23:51:43.189011350Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-qvqk7,Uid:9802231f-c231-4831-96a2-3393633b9c5c,Namespace:kube-system,Attempt:0,}" May 14 23:51:46.633520 containerd[1766]: time="2025-05-14T23:51:46.633285102Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 14 23:51:46.634510 containerd[1766]: time="2025-05-14T23:51:46.633563342Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 14 23:51:46.634510 containerd[1766]: time="2025-05-14T23:51:46.633582662Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 14 23:51:46.634510 containerd[1766]: time="2025-05-14T23:51:46.634064103Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 14 23:51:46.641530 containerd[1766]: time="2025-05-14T23:51:46.640794508Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 14 23:51:46.641530 containerd[1766]: time="2025-05-14T23:51:46.640865148Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 14 23:51:46.641530 containerd[1766]: time="2025-05-14T23:51:46.640880148Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 14 23:51:46.641530 containerd[1766]: time="2025-05-14T23:51:46.641175108Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 14 23:51:46.665497 systemd[1]: Started cri-containerd-b26aeea3ac38574f86e77342d058b6679eb2263d5e7115a928e5a2be3e704ae5.scope - libcontainer container b26aeea3ac38574f86e77342d058b6679eb2263d5e7115a928e5a2be3e704ae5. May 14 23:51:46.669971 systemd[1]: Started cri-containerd-c7753dd860abdee2004f1ad68487d060eb6046cdb631a047905d5445d40459a4.scope - libcontainer container c7753dd860abdee2004f1ad68487d060eb6046cdb631a047905d5445d40459a4. May 14 23:51:46.681987 containerd[1766]: time="2025-05-14T23:51:46.681678382Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 14 23:51:46.683923 containerd[1766]: time="2025-05-14T23:51:46.683384383Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 14 23:51:46.683923 containerd[1766]: time="2025-05-14T23:51:46.683407623Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 14 23:51:46.683923 containerd[1766]: time="2025-05-14T23:51:46.683503703Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 14 23:51:46.699134 containerd[1766]: time="2025-05-14T23:51:46.699100796Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-qvqk7,Uid:9802231f-c231-4831-96a2-3393633b9c5c,Namespace:kube-system,Attempt:0,} returns sandbox id \"b26aeea3ac38574f86e77342d058b6679eb2263d5e7115a928e5a2be3e704ae5\"" May 14 23:51:46.702366 containerd[1766]: time="2025-05-14T23:51:46.702000559Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" May 14 23:51:46.723510 systemd[1]: Started cri-containerd-365300bde33653ce9f3d5ceec2ac07f13b9c9f848206e0046f357e3e6e6f0835.scope - libcontainer container 365300bde33653ce9f3d5ceec2ac07f13b9c9f848206e0046f357e3e6e6f0835. May 14 23:51:46.730008 containerd[1766]: time="2025-05-14T23:51:46.729967101Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-x254c,Uid:d99d4af3-09ad-4719-9737-4fea5a5c5dc0,Namespace:kube-system,Attempt:0,} returns sandbox id \"c7753dd860abdee2004f1ad68487d060eb6046cdb631a047905d5445d40459a4\"" May 14 23:51:46.736288 containerd[1766]: time="2025-05-14T23:51:46.736242307Z" level=info msg="CreateContainer within sandbox \"c7753dd860abdee2004f1ad68487d060eb6046cdb631a047905d5445d40459a4\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" May 14 23:51:46.764575 containerd[1766]: time="2025-05-14T23:51:46.764520690Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-7t7vc,Uid:72a538d3-817d-4308-b25c-cfb0d11e33b8,Namespace:kube-system,Attempt:0,} returns sandbox id \"365300bde33653ce9f3d5ceec2ac07f13b9c9f848206e0046f357e3e6e6f0835\"" May 14 23:51:47.008108 containerd[1766]: time="2025-05-14T23:51:47.007805010Z" level=info msg="CreateContainer within sandbox \"c7753dd860abdee2004f1ad68487d060eb6046cdb631a047905d5445d40459a4\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"36469e6f45b7fac83619c33d1822f6f45955da328dc4c7bc7bc0f82c942393b5\"" May 14 23:51:47.008814 containerd[1766]: time="2025-05-14T23:51:47.008614371Z" level=info msg="StartContainer for \"36469e6f45b7fac83619c33d1822f6f45955da328dc4c7bc7bc0f82c942393b5\"" May 14 23:51:47.036503 systemd[1]: Started cri-containerd-36469e6f45b7fac83619c33d1822f6f45955da328dc4c7bc7bc0f82c942393b5.scope - libcontainer container 36469e6f45b7fac83619c33d1822f6f45955da328dc4c7bc7bc0f82c942393b5. May 14 23:51:47.070369 containerd[1766]: time="2025-05-14T23:51:47.070084861Z" level=info msg="StartContainer for \"36469e6f45b7fac83619c33d1822f6f45955da328dc4c7bc7bc0f82c942393b5\" returns successfully" May 14 23:51:47.616543 systemd[1]: run-containerd-runc-k8s.io-b26aeea3ac38574f86e77342d058b6679eb2263d5e7115a928e5a2be3e704ae5-runc.UxcXr1.mount: Deactivated successfully. May 14 23:51:47.616910 systemd[1]: run-containerd-runc-k8s.io-c7753dd860abdee2004f1ad68487d060eb6046cdb631a047905d5445d40459a4-runc.trmNMh.mount: Deactivated successfully. 
May 14 23:51:49.449846 kubelet[3367]: I0514 23:51:49.449058 3367 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-x254c" podStartSLOduration=8.449037634 podStartE2EDuration="8.449037634s" podCreationTimestamp="2025-05-14 23:51:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-14 23:51:47.715990632 +0000 UTC m=+11.187972653" watchObservedRunningTime="2025-05-14 23:51:49.449037634 +0000 UTC m=+12.921019655" May 14 23:51:57.481630 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2509661025.mount: Deactivated successfully. May 14 23:52:02.904392 containerd[1766]: time="2025-05-14T23:52:02.903585829Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 23:52:10.110126 containerd[1766]: time="2025-05-14T23:52:10.061755524Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157646710" May 14 23:52:10.413410 containerd[1766]: time="2025-05-14T23:52:10.411606728Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 23:52:10.418374 containerd[1766]: time="2025-05-14T23:52:10.418315373Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 23.716276814s" May 14 23:52:10.418374 containerd[1766]: time="2025-05-14T23:52:10.418376574Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" May 14 23:52:10.420058 containerd[1766]: time="2025-05-14T23:52:10.420030415Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" May 14 23:52:10.422297 containerd[1766]: time="2025-05-14T23:52:10.421583816Z" level=info msg="CreateContainer within sandbox \"b26aeea3ac38574f86e77342d058b6679eb2263d5e7115a928e5a2be3e704ae5\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" May 14 23:52:10.620209 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount964706235.mount: Deactivated successfully. 
May 14 23:52:10.711822 containerd[1766]: time="2025-05-14T23:52:10.711707772Z" level=info msg="CreateContainer within sandbox \"b26aeea3ac38574f86e77342d058b6679eb2263d5e7115a928e5a2be3e704ae5\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"09feaaad4064e5884366bfd6d094900bf95844b00577c6aaf22a1c94447d8e26\"" May 14 23:52:10.713568 containerd[1766]: time="2025-05-14T23:52:10.713530933Z" level=info msg="StartContainer for \"09feaaad4064e5884366bfd6d094900bf95844b00577c6aaf22a1c94447d8e26\"" May 14 23:52:10.739515 systemd[1]: Started cri-containerd-09feaaad4064e5884366bfd6d094900bf95844b00577c6aaf22a1c94447d8e26.scope - libcontainer container 09feaaad4064e5884366bfd6d094900bf95844b00577c6aaf22a1c94447d8e26. May 14 23:52:10.775129 systemd[1]: cri-containerd-09feaaad4064e5884366bfd6d094900bf95844b00577c6aaf22a1c94447d8e26.scope: Deactivated successfully. May 14 23:52:10.803836 containerd[1766]: time="2025-05-14T23:52:10.803600886Z" level=info msg="StartContainer for \"09feaaad4064e5884366bfd6d094900bf95844b00577c6aaf22a1c94447d8e26\" returns successfully" May 14 23:52:11.616162 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-09feaaad4064e5884366bfd6d094900bf95844b00577c6aaf22a1c94447d8e26-rootfs.mount: Deactivated successfully. May 14 23:52:16.709229 containerd[1766]: time="2025-05-14T23:52:16.709174705Z" level=info msg="shim disconnected" id=09feaaad4064e5884366bfd6d094900bf95844b00577c6aaf22a1c94447d8e26 namespace=k8s.io May 14 23:52:16.709813 containerd[1766]: time="2025-05-14T23:52:16.709375185Z" level=warning msg="cleaning up after shim disconnected" id=09feaaad4064e5884366bfd6d094900bf95844b00577c6aaf22a1c94447d8e26 namespace=k8s.io May 14 23:52:16.709813 containerd[1766]: time="2025-05-14T23:52:16.709389705Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 14 23:52:16.765563 containerd[1766]: time="2025-05-14T23:52:16.765399671Z" level=info msg="CreateContainer within sandbox \"b26aeea3ac38574f86e77342d058b6679eb2263d5e7115a928e5a2be3e704ae5\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" May 14 23:52:16.870481 containerd[1766]: time="2025-05-14T23:52:16.870432797Z" level=info msg="CreateContainer within sandbox \"b26aeea3ac38574f86e77342d058b6679eb2263d5e7115a928e5a2be3e704ae5\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"600e528c9e65240849b732a91fc04b203b09a9ed56c8bfd5ba1f63a58089c40a\"" May 14 23:52:16.871523 containerd[1766]: time="2025-05-14T23:52:16.871497678Z" level=info msg="StartContainer for \"600e528c9e65240849b732a91fc04b203b09a9ed56c8bfd5ba1f63a58089c40a\"" May 14 23:52:16.899060 systemd[1]: Started cri-containerd-600e528c9e65240849b732a91fc04b203b09a9ed56c8bfd5ba1f63a58089c40a.scope - libcontainer container 600e528c9e65240849b732a91fc04b203b09a9ed56c8bfd5ba1f63a58089c40a. May 14 23:52:16.933189 containerd[1766]: time="2025-05-14T23:52:16.933139848Z" level=info msg="StartContainer for \"600e528c9e65240849b732a91fc04b203b09a9ed56c8bfd5ba1f63a58089c40a\" returns successfully" May 14 23:52:16.942727 systemd[1]: systemd-sysctl.service: Deactivated successfully. May 14 23:52:16.942936 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. May 14 23:52:16.943452 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... May 14 23:52:16.952812 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... May 14 23:52:16.954904 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. 
May 14 23:52:16.956058 systemd[1]: cri-containerd-600e528c9e65240849b732a91fc04b203b09a9ed56c8bfd5ba1f63a58089c40a.scope: Deactivated successfully.
May 14 23:52:16.963887 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
May 14 23:52:16.993072 containerd[1766]: time="2025-05-14T23:52:16.993019097Z" level=info msg="shim disconnected" id=600e528c9e65240849b732a91fc04b203b09a9ed56c8bfd5ba1f63a58089c40a namespace=k8s.io
May 14 23:52:16.993295 containerd[1766]: time="2025-05-14T23:52:16.993077137Z" level=warning msg="cleaning up after shim disconnected" id=600e528c9e65240849b732a91fc04b203b09a9ed56c8bfd5ba1f63a58089c40a namespace=k8s.io
May 14 23:52:16.993295 containerd[1766]: time="2025-05-14T23:52:16.993100017Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 14 23:52:17.002559 containerd[1766]: time="2025-05-14T23:52:17.002481785Z" level=warning msg="cleanup warnings time=\"2025-05-14T23:52:17Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
May 14 23:52:17.770018 containerd[1766]: time="2025-05-14T23:52:17.769811851Z" level=info msg="CreateContainer within sandbox \"b26aeea3ac38574f86e77342d058b6679eb2263d5e7115a928e5a2be3e704ae5\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
May 14 23:52:17.831054 containerd[1766]: time="2025-05-14T23:52:17.831005381Z" level=info msg="CreateContainer within sandbox \"b26aeea3ac38574f86e77342d058b6679eb2263d5e7115a928e5a2be3e704ae5\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"5fed1aa469e342100b3de26b55ba3aa224af32d476942b7ac7a034ab46fd8d38\""
May 14 23:52:17.834047 containerd[1766]: time="2025-05-14T23:52:17.833732983Z" level=info msg="StartContainer for \"5fed1aa469e342100b3de26b55ba3aa224af32d476942b7ac7a034ab46fd8d38\""
May 14 23:52:17.849766 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-600e528c9e65240849b732a91fc04b203b09a9ed56c8bfd5ba1f63a58089c40a-rootfs.mount: Deactivated successfully.
May 14 23:52:17.870020 systemd[1]: run-containerd-runc-k8s.io-5fed1aa469e342100b3de26b55ba3aa224af32d476942b7ac7a034ab46fd8d38-runc.INc6yj.mount: Deactivated successfully.
May 14 23:52:17.881508 systemd[1]: Started cri-containerd-5fed1aa469e342100b3de26b55ba3aa224af32d476942b7ac7a034ab46fd8d38.scope - libcontainer container 5fed1aa469e342100b3de26b55ba3aa224af32d476942b7ac7a034ab46fd8d38.
May 14 23:52:17.925005 systemd[1]: cri-containerd-5fed1aa469e342100b3de26b55ba3aa224af32d476942b7ac7a034ab46fd8d38.scope: Deactivated successfully.
May 14 23:52:17.927402 containerd[1766]: time="2025-05-14T23:52:17.927158300Z" level=info msg="StartContainer for \"5fed1aa469e342100b3de26b55ba3aa224af32d476942b7ac7a034ab46fd8d38\" returns successfully"
May 14 23:52:17.950532 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5fed1aa469e342100b3de26b55ba3aa224af32d476942b7ac7a034ab46fd8d38-rootfs.mount: Deactivated successfully.
May 14 23:52:18.034721 containerd[1766]: time="2025-05-14T23:52:18.034574707Z" level=info msg="shim disconnected" id=5fed1aa469e342100b3de26b55ba3aa224af32d476942b7ac7a034ab46fd8d38 namespace=k8s.io
May 14 23:52:18.034721 containerd[1766]: time="2025-05-14T23:52:18.034631867Z" level=warning msg="cleaning up after shim disconnected" id=5fed1aa469e342100b3de26b55ba3aa224af32d476942b7ac7a034ab46fd8d38 namespace=k8s.io
May 14 23:52:18.034721 containerd[1766]: time="2025-05-14T23:52:18.034639867Z" level=info msg="cleaning up dead shim" namespace=k8s.io
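
The shim disconnected / cleaning up records around each init container are containerd's routine teardown chatter; the records worth alerting on are the level=warning ones, such as the failed to remove runc container ... exit status 255 line above. A throwaway filter for pulling warning-and-above containerd records out of a one-record-per-line dump like this one (Python; the regex is mine, not anything containerd ships):

import re, sys

# Keep containerd records logged at level=warning or level=error,
# e.g. the runc "exit status 255" cleanup warning seen above.
PAT = re.compile(r'containerd\[\d+\]:.*level=(warning|error)')

for line in sys.stdin:
    if PAT.search(line):
        print(line.rstrip())

Fed something like a journalctl dump on stdin (unit names vary by image), it drops the info-level bulk and keeps only the cleanup warnings.
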
May 14 23:52:18.300446 containerd[1766]: time="2025-05-14T23:52:18.298495043Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 14 23:52:18.305369 containerd[1766]: time="2025-05-14T23:52:18.305291048Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17135306"
May 14 23:52:18.311450 containerd[1766]: time="2025-05-14T23:52:18.311420093Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 14 23:52:18.312843 containerd[1766]: time="2025-05-14T23:52:18.312814575Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 7.890714678s"
May 14 23:52:18.312946 containerd[1766]: time="2025-05-14T23:52:18.312928655Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\""
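
The two Pulled image records (157636062 bytes in 23.716276814s for the agent image above, 17128551 bytes in 7.890714678s for the operator) make the pull throughput easy to eyeball: roughly 6.3 and 2.1 MiB/s. A back-of-envelope check (Python; figures hand-copied from the log):

# Compressed size in bytes and wall-clock pull time, taken from the
# two "Pulled image ... in <duration>" records in this log.
pulls = {
    "cilium":           (157_636_062, 23.716276814),
    "operator-generic": (17_128_551, 7.890714678),
}
for name, (size, secs) in pulls.items():
    print(f"{name}: {size / secs / 2**20:.1f} MiB/s")
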
May 14 23:52:18.315560 containerd[1766]: time="2025-05-14T23:52:18.315517097Z" level=info msg="CreateContainer within sandbox \"365300bde33653ce9f3d5ceec2ac07f13b9c9f848206e0046f357e3e6e6f0835\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
May 14 23:52:18.377626 containerd[1766]: time="2025-05-14T23:52:18.377564347Z" level=info msg="CreateContainer within sandbox \"365300bde33653ce9f3d5ceec2ac07f13b9c9f848206e0046f357e3e6e6f0835\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"15ea7793051fc1860e07e097921228d31b7ca5ffd655be6a18d78efc5a47439d\""
May 14 23:52:18.378194 containerd[1766]: time="2025-05-14T23:52:18.378042388Z" level=info msg="StartContainer for \"15ea7793051fc1860e07e097921228d31b7ca5ffd655be6a18d78efc5a47439d\""
May 14 23:52:18.400500 systemd[1]: Started cri-containerd-15ea7793051fc1860e07e097921228d31b7ca5ffd655be6a18d78efc5a47439d.scope - libcontainer container 15ea7793051fc1860e07e097921228d31b7ca5ffd655be6a18d78efc5a47439d.
May 14 23:52:18.428112 containerd[1766]: time="2025-05-14T23:52:18.428060589Z" level=info msg="StartContainer for \"15ea7793051fc1860e07e097921228d31b7ca5ffd655be6a18d78efc5a47439d\" returns successfully"
May 14 23:52:18.779370 containerd[1766]: time="2025-05-14T23:52:18.778421635Z" level=info msg="CreateContainer within sandbox \"b26aeea3ac38574f86e77342d058b6679eb2263d5e7115a928e5a2be3e704ae5\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
May 14 23:52:18.795732 kubelet[3367]: I0514 23:52:18.795661 3367 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-7t7vc" podStartSLOduration=6.248312405 podStartE2EDuration="37.795640609s" podCreationTimestamp="2025-05-14 23:51:41 +0000 UTC" firstStartedPulling="2025-05-14 23:51:46.766328491 +0000 UTC m=+10.238310472" lastFinishedPulling="2025-05-14 23:52:18.313656655 +0000 UTC m=+41.785638676" observedRunningTime="2025-05-14 23:52:18.789907644 +0000 UTC m=+42.261889745" watchObservedRunningTime="2025-05-14 23:52:18.795640609 +0000 UTC m=+42.267622630"
May 14 23:52:18.815940 containerd[1766]: time="2025-05-14T23:52:18.815789105Z" level=info msg="CreateContainer within sandbox \"b26aeea3ac38574f86e77342d058b6679eb2263d5e7115a928e5a2be3e704ae5\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"c38a5813a0515fbe36ab3903b8ae4d130f5fabaf37ee3c9ca161d5fa7a9b58ee\""
May 14 23:52:18.816560 containerd[1766]: time="2025-05-14T23:52:18.816523466Z" level=info msg="StartContainer for \"c38a5813a0515fbe36ab3903b8ae4d130f5fabaf37ee3c9ca161d5fa7a9b58ee\""
May 14 23:52:18.861517 systemd[1]: Started cri-containerd-c38a5813a0515fbe36ab3903b8ae4d130f5fabaf37ee3c9ca161d5fa7a9b58ee.scope - libcontainer container c38a5813a0515fbe36ab3903b8ae4d130f5fabaf37ee3c9ca161d5fa7a9b58ee.
May 14 23:52:18.920412 systemd[1]: cri-containerd-c38a5813a0515fbe36ab3903b8ae4d130f5fabaf37ee3c9ca161d5fa7a9b58ee.scope: Deactivated successfully.
May 14 23:52:18.922905 containerd[1766]: time="2025-05-14T23:52:18.922731233Z" level=info msg="StartContainer for \"c38a5813a0515fbe36ab3903b8ae4d130f5fabaf37ee3c9ca161d5fa7a9b58ee\" returns successfully"
May 14 23:52:18.952582 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c38a5813a0515fbe36ab3903b8ae4d130f5fabaf37ee3c9ca161d5fa7a9b58ee-rootfs.mount: Deactivated successfully.
May 14 23:52:19.176013 containerd[1766]: time="2025-05-14T23:52:19.175929079Z" level=info msg="shim disconnected" id=c38a5813a0515fbe36ab3903b8ae4d130f5fabaf37ee3c9ca161d5fa7a9b58ee namespace=k8s.io
May 14 23:52:19.176013 containerd[1766]: time="2025-05-14T23:52:19.175980679Z" level=warning msg="cleaning up after shim disconnected" id=c38a5813a0515fbe36ab3903b8ae4d130f5fabaf37ee3c9ca161d5fa7a9b58ee namespace=k8s.io
May 14 23:52:19.176013 containerd[1766]: time="2025-05-14T23:52:19.175988599Z" level=info msg="cleaning up dead shim" namespace=k8s.io
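
The cilium-operator startup record above also shows how the tracker's figures fit together: podStartE2EDuration (37.795640609s) lines up with watchObservedRunningTime minus podCreationTimestamp, and podStartSLOduration (6.248312405s) is that figure minus the image pull window. A sketch reproducing the arithmetic (Python; timestamps trimmed to microseconds because datetime cannot carry kubelet's nanosecond precision):

from datetime import datetime

def ts(s: str) -> datetime:
    # kubelet prints e.g. "2025-05-14 23:52:18.795640609 +0000 UTC";
    # keep only microseconds, which is enough to match the logged values.
    s = s.replace(" +0000 UTC", "")
    if "." in s:
        head, frac = s.split(".")
        return datetime.strptime(f"{head}.{frac[:6]}", "%Y-%m-%d %H:%M:%S.%f")
    return datetime.strptime(s, "%Y-%m-%d %H:%M:%S")

created  = ts("2025-05-14 23:51:41 +0000 UTC")                # podCreationTimestamp
observed = ts("2025-05-14 23:52:18.795640609 +0000 UTC")      # watchObservedRunningTime
pull_beg = ts("2025-05-14 23:51:46.766328491 +0000 UTC")      # firstStartedPulling
pull_end = ts("2025-05-14 23:52:18.313656655 +0000 UTC")      # lastFinishedPulling

e2e = observed - created                 # ~37.795641s, the E2E duration
slo = e2e - (pull_end - pull_beg)        # ~6.248312s, the SLO duration
print(e2e.total_seconds(), slo.total_seconds())
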
May 14 23:52:19.782424 containerd[1766]: time="2025-05-14T23:52:19.782317774Z" level=info msg="CreateContainer within sandbox \"b26aeea3ac38574f86e77342d058b6679eb2263d5e7115a928e5a2be3e704ae5\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
May 14 23:52:19.825280 containerd[1766]: time="2025-05-14T23:52:19.825231689Z" level=info msg="CreateContainer within sandbox \"b26aeea3ac38574f86e77342d058b6679eb2263d5e7115a928e5a2be3e704ae5\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"e1a31b236578ab464f8e914578262a5560715a1b61669190cb4619cd7e27d9ab\""
May 14 23:52:19.826286 containerd[1766]: time="2025-05-14T23:52:19.826037370Z" level=info msg="StartContainer for \"e1a31b236578ab464f8e914578262a5560715a1b61669190cb4619cd7e27d9ab\""
May 14 23:52:19.862944 systemd[1]: run-containerd-runc-k8s.io-e1a31b236578ab464f8e914578262a5560715a1b61669190cb4619cd7e27d9ab-runc.f08YKE.mount: Deactivated successfully.
May 14 23:52:19.873498 systemd[1]: Started cri-containerd-e1a31b236578ab464f8e914578262a5560715a1b61669190cb4619cd7e27d9ab.scope - libcontainer container e1a31b236578ab464f8e914578262a5560715a1b61669190cb4619cd7e27d9ab.
May 14 23:52:19.909371 containerd[1766]: time="2025-05-14T23:52:19.908826957Z" level=info msg="StartContainer for \"e1a31b236578ab464f8e914578262a5560715a1b61669190cb4619cd7e27d9ab\" returns successfully"
May 14 23:52:19.937169 systemd[1]: run-containerd-runc-k8s.io-e1a31b236578ab464f8e914578262a5560715a1b61669190cb4619cd7e27d9ab-runc.eHahiW.mount: Deactivated successfully.
May 14 23:52:20.072618 kubelet[3367]: I0514 23:52:20.072500 3367 kubelet_node_status.go:502] "Fast updating node status as it just became ready"
May 14 23:52:20.122829 systemd[1]: Created slice kubepods-burstable-pod8dc09ba1_61f9_438e_a7f2_97050519c6db.slice - libcontainer container kubepods-burstable-pod8dc09ba1_61f9_438e_a7f2_97050519c6db.slice.
May 14 23:52:20.131130 systemd[1]: Created slice kubepods-burstable-pod70bebcee_78ca_420f_9e41_1e34f052c25e.slice - libcontainer container kubepods-burstable-pod70bebcee_78ca_420f_9e41_1e34f052c25e.slice.
May 14 23:52:20.189541 kubelet[3367]: I0514 23:52:20.189500 3367 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8dc09ba1-61f9-438e-a7f2-97050519c6db-config-volume\") pod \"coredns-668d6bf9bc-kdh7p\" (UID: \"8dc09ba1-61f9-438e-a7f2-97050519c6db\") " pod="kube-system/coredns-668d6bf9bc-kdh7p"
May 14 23:52:20.189541 kubelet[3367]: I0514 23:52:20.189543 3367 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vtrjt\" (UniqueName: \"kubernetes.io/projected/8dc09ba1-61f9-438e-a7f2-97050519c6db-kube-api-access-vtrjt\") pod \"coredns-668d6bf9bc-kdh7p\" (UID: \"8dc09ba1-61f9-438e-a7f2-97050519c6db\") " pod="kube-system/coredns-668d6bf9bc-kdh7p"
May 14 23:52:20.189706 kubelet[3367]: I0514 23:52:20.189564 3367 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x55pc\" (UniqueName: \"kubernetes.io/projected/70bebcee-78ca-420f-9e41-1e34f052c25e-kube-api-access-x55pc\") pod \"coredns-668d6bf9bc-pbnhp\" (UID: \"70bebcee-78ca-420f-9e41-1e34f052c25e\") " pod="kube-system/coredns-668d6bf9bc-pbnhp"
May 14 23:52:20.189706 kubelet[3367]: I0514 23:52:20.189588 3367 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/70bebcee-78ca-420f-9e41-1e34f052c25e-config-volume\") pod \"coredns-668d6bf9bc-pbnhp\" (UID: \"70bebcee-78ca-420f-9e41-1e34f052c25e\") " pod="kube-system/coredns-668d6bf9bc-pbnhp"
May 14 23:52:20.428419 containerd[1766]: time="2025-05-14T23:52:20.428082699Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-kdh7p,Uid:8dc09ba1-61f9-438e-a7f2-97050519c6db,Namespace:kube-system,Attempt:0,}"
May 14 23:52:20.435666 containerd[1766]: time="2025-05-14T23:52:20.435330185Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-pbnhp,Uid:70bebcee-78ca-420f-9e41-1e34f052c25e,Namespace:kube-system,Attempt:0,}"
May 14 23:52:20.801986 kubelet[3367]: I0514 23:52:20.801720 3367 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-qvqk7" podStartSLOduration=16.082920425 podStartE2EDuration="39.801425202s" podCreationTimestamp="2025-05-14 23:51:41 +0000 UTC" firstStartedPulling="2025-05-14 23:51:46.701398758 +0000 UTC m=+10.173380779" lastFinishedPulling="2025-05-14 23:52:10.419903535 +0000 UTC m=+33.891885556" observedRunningTime="2025-05-14 23:52:20.800677761 +0000 UTC m=+44.272659862" watchObservedRunningTime="2025-05-14 23:52:20.801425202 +0000 UTC m=+44.273407223"
May 14 23:52:22.106198 systemd-networkd[1520]: cilium_host: Link UP
May 14 23:52:22.106304 systemd-networkd[1520]: cilium_net: Link UP
May 14 23:52:22.106307 systemd-networkd[1520]: cilium_net: Gained carrier
May 14 23:52:22.106526 systemd-networkd[1520]: cilium_host: Gained carrier
May 14 23:52:22.318891 systemd-networkd[1520]: cilium_vxlan: Link UP
May 14 23:52:22.318897 systemd-networkd[1520]: cilium_vxlan: Gained carrier
May 14 23:52:22.517474 systemd-networkd[1520]: cilium_net: Gained IPv6LL
May 14 23:52:22.621477 kernel: NET: Registered PF_ALG protocol family
May 14 23:52:22.869527 systemd-networkd[1520]: cilium_host: Gained IPv6LL
May 14 23:52:23.324467 systemd-networkd[1520]: lxc_health: Link UP
May 14 23:52:23.332611 systemd-networkd[1520]: lxc_health: Gained carrier
May 14 23:52:23.541835 kernel: eth0: renamed from tmp94928
May 14 23:52:23.546617 systemd-networkd[1520]: lxcae15a39b2e22: Link UP
May 14 23:52:23.548540 systemd-networkd[1520]: lxcae15a39b2e22: Gained carrier
May 14 23:52:23.562407 systemd-networkd[1520]: lxc8aa07fc5fd70: Link UP
May 14 23:52:23.570516 kernel: eth0: renamed from tmpf4978
May 14 23:52:23.577731 systemd-networkd[1520]: lxc8aa07fc5fd70: Gained carrier
May 14 23:52:24.341488 systemd-networkd[1520]: cilium_vxlan: Gained IPv6LL
May 14 23:52:24.598575 systemd-networkd[1520]: lxc8aa07fc5fd70: Gained IPv6LL
May 14 23:52:24.918531 systemd-networkd[1520]: lxcae15a39b2e22: Gained IPv6LL
May 14 23:52:24.981604 systemd-networkd[1520]: lxc_health: Gained IPv6LL
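
The systemd-networkd burst above is the usual order for a Cilium datapath coming up: each link appears (Link UP), gets carrier, and finally gains an IPv6 link-local address. A small tally of those transitions per interface, reading a one-record-per-line dump on stdin (Python; the event list is just what appears in this log):

import re, sys
from collections import defaultdict

# Interface names like cilium_host, cilium_vxlan, lxc8aa07fc5fd70 ...
PAT = re.compile(
    r'systemd-networkd\[\d+\]: ([\w.-]+): '
    r'(Link UP|Link DOWN|Gained carrier|Lost carrier|Gained IPv6LL)'
)

events = defaultdict(list)
for line in sys.stdin:
    if (m := PAT.search(line)):
        events[m.group(1)].append(m.group(2))

for iface, seq in events.items():
    print(f"{iface}: {' -> '.join(seq)}")
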
May 14 23:52:27.248373 containerd[1766]: time="2025-05-14T23:52:27.247930718Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 14 23:52:27.248373 containerd[1766]: time="2025-05-14T23:52:27.248060118Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 14 23:52:27.248373 containerd[1766]: time="2025-05-14T23:52:27.248093278Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 14 23:52:27.248373 containerd[1766]: time="2025-05-14T23:52:27.248221678Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 14 23:52:27.274543 systemd[1]: Started cri-containerd-f4978fa103eb6be4162f88581480ddd79133672eb48b93470258071171b65d30.scope - libcontainer container f4978fa103eb6be4162f88581480ddd79133672eb48b93470258071171b65d30.
May 14 23:52:27.280692 containerd[1766]: time="2025-05-14T23:52:27.279474343Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 14 23:52:27.280692 containerd[1766]: time="2025-05-14T23:52:27.279684583Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 14 23:52:27.280692 containerd[1766]: time="2025-05-14T23:52:27.279704303Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 14 23:52:27.281801 containerd[1766]: time="2025-05-14T23:52:27.280651584Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 14 23:52:27.309187 systemd[1]: run-containerd-runc-k8s.io-9492899e9414d25ed77f1b1b778ad507e1aedfc412e39670cf4a8bfd7cdbb5ef-runc.DkSodM.mount: Deactivated successfully.
May 14 23:52:27.318232 systemd[1]: Started cri-containerd-9492899e9414d25ed77f1b1b778ad507e1aedfc412e39670cf4a8bfd7cdbb5ef.scope - libcontainer container 9492899e9414d25ed77f1b1b778ad507e1aedfc412e39670cf4a8bfd7cdbb5ef.
May 14 23:52:27.334076 containerd[1766]: time="2025-05-14T23:52:27.333885547Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-pbnhp,Uid:70bebcee-78ca-420f-9e41-1e34f052c25e,Namespace:kube-system,Attempt:0,} returns sandbox id \"f4978fa103eb6be4162f88581480ddd79133672eb48b93470258071171b65d30\""
May 14 23:52:27.342660 containerd[1766]: time="2025-05-14T23:52:27.342594674Z" level=info msg="CreateContainer within sandbox \"f4978fa103eb6be4162f88581480ddd79133672eb48b93470258071171b65d30\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
May 14 23:52:27.370030 containerd[1766]: time="2025-05-14T23:52:27.369976257Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-kdh7p,Uid:8dc09ba1-61f9-438e-a7f2-97050519c6db,Namespace:kube-system,Attempt:0,} returns sandbox id \"9492899e9414d25ed77f1b1b778ad507e1aedfc412e39670cf4a8bfd7cdbb5ef\""
May 14 23:52:27.373925 containerd[1766]: time="2025-05-14T23:52:27.373767340Z" level=info msg="CreateContainer within sandbox \"9492899e9414d25ed77f1b1b778ad507e1aedfc412e39670cf4a8bfd7cdbb5ef\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
May 14 23:52:27.387324 containerd[1766]: time="2025-05-14T23:52:27.387279951Z" level=info msg="CreateContainer within sandbox \"f4978fa103eb6be4162f88581480ddd79133672eb48b93470258071171b65d30\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"a076f0d6f9d39d1601b9be8753cbddc077e8266b1dab37c4dfd43776903bdcc7\""
May 14 23:52:27.388782 containerd[1766]: time="2025-05-14T23:52:27.388560992Z" level=info msg="StartContainer for \"a076f0d6f9d39d1601b9be8753cbddc077e8266b1dab37c4dfd43776903bdcc7\""
May 14 23:52:27.417562 systemd[1]: Started cri-containerd-a076f0d6f9d39d1601b9be8753cbddc077e8266b1dab37c4dfd43776903bdcc7.scope - libcontainer container a076f0d6f9d39d1601b9be8753cbddc077e8266b1dab37c4dfd43776903bdcc7.
May 14 23:52:27.424201 containerd[1766]: time="2025-05-14T23:52:27.423588300Z" level=info msg="CreateContainer within sandbox \"9492899e9414d25ed77f1b1b778ad507e1aedfc412e39670cf4a8bfd7cdbb5ef\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"5c9c5accd525f44b422fbea9ef2b77bef104049a077cba163566cd1945d21809\""
May 14 23:52:27.424600 containerd[1766]: time="2025-05-14T23:52:27.424557901Z" level=info msg="StartContainer for \"5c9c5accd525f44b422fbea9ef2b77bef104049a077cba163566cd1945d21809\""
May 14 23:52:27.462352 systemd[1]: Started cri-containerd-5c9c5accd525f44b422fbea9ef2b77bef104049a077cba163566cd1945d21809.scope - libcontainer container 5c9c5accd525f44b422fbea9ef2b77bef104049a077cba163566cd1945d21809.
May 14 23:52:27.471520 containerd[1766]: time="2025-05-14T23:52:27.471469579Z" level=info msg="StartContainer for \"a076f0d6f9d39d1601b9be8753cbddc077e8266b1dab37c4dfd43776903bdcc7\" returns successfully"
May 14 23:52:27.495747 containerd[1766]: time="2025-05-14T23:52:27.495690439Z" level=info msg="StartContainer for \"5c9c5accd525f44b422fbea9ef2b77bef104049a077cba163566cd1945d21809\" returns successfully"
May 14 23:52:27.836450 kubelet[3367]: I0514 23:52:27.835865 3367 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-pbnhp" podStartSLOduration=46.835844396 podStartE2EDuration="46.835844396s" podCreationTimestamp="2025-05-14 23:51:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-14 23:52:27.814852379 +0000 UTC m=+51.286834440" watchObservedRunningTime="2025-05-14 23:52:27.835844396 +0000 UTC m=+51.307826417"
May 14 23:52:27.869196 kubelet[3367]: I0514 23:52:27.868335 3367 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-kdh7p" podStartSLOduration=46.868315663 podStartE2EDuration="46.868315663s" podCreationTimestamp="2025-05-14 23:51:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-14 23:52:27.836648117 +0000 UTC m=+51.308630138" watchObservedRunningTime="2025-05-14 23:52:27.868315663 +0000 UTC m=+51.340297684"
May 14 23:52:30.778315 sudo[2333]: pam_unix(sudo:session): session closed for user root
May 14 23:52:30.861402 sshd[2332]: Connection closed by 10.200.16.10 port 47648
May 14 23:52:30.861967 sshd-session[2330]: pam_unix(sshd:session): session closed for user core
May 14 23:52:30.865278 systemd[1]: sshd@6-10.200.20.40:22-10.200.16.10:47648.service: Deactivated successfully.
May 14 23:52:30.867284 systemd[1]: session-9.scope: Deactivated successfully.
May 14 23:52:30.867858 systemd[1]: session-9.scope: Consumed 7.996s CPU time, 260.5M memory peak.
May 14 23:52:30.869069 systemd-logind[1736]: Session 9 logged out. Waiting for processes to exit.
May 14 23:52:30.869895 systemd-logind[1736]: Removed session 9.
May 14 23:54:34.315119 systemd[1]: Started sshd@7-10.200.20.40:22-10.200.16.10:42288.service - OpenSSH per-connection server daemon (10.200.16.10:42288).
May 14 23:54:34.734833 sshd[4862]: Accepted publickey for core from 10.200.16.10 port 42288 ssh2: RSA SHA256:8W/Tf+7cj1biafetzZP1V2wDwegtsataOBfSdAJjdnY
May 14 23:54:34.736258 sshd-session[4862]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 23:54:34.740955 systemd-logind[1736]: New session 10 of user core.
May 14 23:54:34.745522 systemd[1]: Started session-10.scope - Session 10 of User core.
May 14 23:54:35.252312 sshd[4864]: Connection closed by 10.200.16.10 port 42288
May 14 23:54:35.252903 sshd-session[4862]: pam_unix(sshd:session): session closed for user core
May 14 23:54:35.256111 systemd[1]: sshd@7-10.200.20.40:22-10.200.16.10:42288.service: Deactivated successfully.
May 14 23:54:35.258130 systemd[1]: session-10.scope: Deactivated successfully.
May 14 23:54:35.259052 systemd-logind[1736]: Session 10 logged out. Waiting for processes to exit.
May 14 23:54:35.260031 systemd-logind[1736]: Removed session 10.
May 14 23:54:40.354618 systemd[1]: Started sshd@8-10.200.20.40:22-10.200.16.10:39050.service - OpenSSH per-connection server daemon (10.200.16.10:39050).
May 14 23:54:40.804127 sshd[4899]: Accepted publickey for core from 10.200.16.10 port 39050 ssh2: RSA SHA256:8W/Tf+7cj1biafetzZP1V2wDwegtsataOBfSdAJjdnY
May 14 23:54:40.805448 sshd-session[4899]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 23:54:40.810555 systemd-logind[1736]: New session 11 of user core.
May 14 23:54:40.820505 systemd[1]: Started session-11.scope - Session 11 of User core.
May 14 23:54:40.847247 update_engine[1744]: I20250514 23:54:40.847189 1744 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs
May 14 23:54:40.847247 update_engine[1744]: I20250514 23:54:40.847240 1744 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs
May 14 23:54:40.847630 update_engine[1744]: I20250514 23:54:40.847429 1744 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs
May 14 23:54:40.849347 update_engine[1744]: I20250514 23:54:40.847786 1744 omaha_request_params.cc:62] Current group set to beta
May 14 23:54:40.849347 update_engine[1744]: I20250514 23:54:40.847887 1744 update_attempter.cc:499] Already updated boot flags. Skipping.
May 14 23:54:40.849347 update_engine[1744]: I20250514 23:54:40.847896 1744 update_attempter.cc:643] Scheduling an action processor start.
May 14 23:54:40.849347 update_engine[1744]: I20250514 23:54:40.847910 1744 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction
May 14 23:54:40.849347 update_engine[1744]: I20250514 23:54:40.847935 1744 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs
May 14 23:54:40.849347 update_engine[1744]: I20250514 23:54:40.847980 1744 omaha_request_action.cc:271] Posting an Omaha request to disabled
May 14 23:54:40.849347 update_engine[1744]: I20250514 23:54:40.847987 1744 omaha_request_action.cc:272] Request:
May 14 23:54:40.849347 update_engine[1744]:
May 14 23:54:40.849347 update_engine[1744]:
May 14 23:54:40.849347 update_engine[1744]:
May 14 23:54:40.849347 update_engine[1744]:
May 14 23:54:40.849347 update_engine[1744]:
May 14 23:54:40.849347 update_engine[1744]:
May 14 23:54:40.849347 update_engine[1744]:
May 14 23:54:40.849347 update_engine[1744]:
May 14 23:54:40.849347 update_engine[1744]: I20250514 23:54:40.847993 1744 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
May 14 23:54:40.849347 update_engine[1744]: I20250514 23:54:40.848990 1744 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
May 14 23:54:40.849347 update_engine[1744]: I20250514 23:54:40.849318 1744 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
May 14 23:54:40.849795 locksmithd[1787]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0
May 14 23:54:40.952079 update_engine[1744]: E20250514 23:54:40.952016 1744 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
May 14 23:54:40.952211 update_engine[1744]: I20250514 23:54:40.952123 1744 libcurl_http_fetcher.cc:283] No HTTP response, retry 1
May 14 23:54:41.190793 sshd[4901]: Connection closed by 10.200.16.10 port 39050
May 14 23:54:41.191437 sshd-session[4899]: pam_unix(sshd:session): session closed for user core
May 14 23:54:41.194748 systemd-logind[1736]: Session 11 logged out. Waiting for processes to exit.
May 14 23:54:41.195444 systemd[1]: sshd@8-10.200.20.40:22-10.200.16.10:39050.service: Deactivated successfully.
May 14 23:54:41.198284 systemd[1]: session-11.scope: Deactivated successfully.
May 14 23:54:41.199611 systemd-logind[1736]: Removed session 11.
May 14 23:54:46.280636 systemd[1]: Started sshd@9-10.200.20.40:22-10.200.16.10:39066.service - OpenSSH per-connection server daemon (10.200.16.10:39066).
May 14 23:54:46.731374 sshd[4915]: Accepted publickey for core from 10.200.16.10 port 39066 ssh2: RSA SHA256:8W/Tf+7cj1biafetzZP1V2wDwegtsataOBfSdAJjdnY
May 14 23:54:46.733114 sshd-session[4915]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 23:54:46.738244 systemd-logind[1736]: New session 12 of user core.
May 14 23:54:46.746557 systemd[1]: Started session-12.scope - Session 12 of User core.
May 14 23:54:47.121904 sshd[4920]: Connection closed by 10.200.16.10 port 39066
May 14 23:54:47.122507 sshd-session[4915]: pam_unix(sshd:session): session closed for user core
May 14 23:54:47.125583 systemd-logind[1736]: Session 12 logged out. Waiting for processes to exit.
May 14 23:54:47.125825 systemd[1]: sshd@9-10.200.20.40:22-10.200.16.10:39066.service: Deactivated successfully.
May 14 23:54:47.128259 systemd[1]: session-12.scope: Deactivated successfully.
May 14 23:54:47.130179 systemd-logind[1736]: Removed session 12.
May 14 23:54:50.849402 update_engine[1744]: I20250514 23:54:50.848815 1744 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
May 14 23:54:50.849402 update_engine[1744]: I20250514 23:54:50.849053 1744 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
May 14 23:54:50.849402 update_engine[1744]: I20250514 23:54:50.849303 1744 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
May 14 23:54:50.886566 update_engine[1744]: E20250514 23:54:50.886504 1744 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
May 14 23:54:50.886701 update_engine[1744]: I20250514 23:54:50.886590 1744 libcurl_http_fetcher.cc:283] No HTTP response, retry 2
May 14 23:54:52.204181 systemd[1]: Started sshd@10-10.200.20.40:22-10.200.16.10:40416.service - OpenSSH per-connection server daemon (10.200.16.10:40416).
May 14 23:54:52.628130 sshd[4935]: Accepted publickey for core from 10.200.16.10 port 40416 ssh2: RSA SHA256:8W/Tf+7cj1biafetzZP1V2wDwegtsataOBfSdAJjdnY
May 14 23:54:52.629534 sshd-session[4935]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 23:54:52.634902 systemd-logind[1736]: New session 13 of user core.
May 14 23:54:52.641489 systemd[1]: Started session-13.scope - Session 13 of User core.
May 14 23:54:53.004800 sshd[4937]: Connection closed by 10.200.16.10 port 40416
May 14 23:54:53.005534 sshd-session[4935]: pam_unix(sshd:session): session closed for user core
May 14 23:54:53.008960 systemd[1]: sshd@10-10.200.20.40:22-10.200.16.10:40416.service: Deactivated successfully.
May 14 23:54:53.011195 systemd[1]: session-13.scope: Deactivated successfully.
May 14 23:54:53.012246 systemd-logind[1736]: Session 13 logged out. Waiting for processes to exit.
May 14 23:54:53.013228 systemd-logind[1736]: Removed session 13.
May 14 23:54:53.088609 systemd[1]: Started sshd@11-10.200.20.40:22-10.200.16.10:40424.service - OpenSSH per-connection server daemon (10.200.16.10:40424).
May 14 23:54:53.507217 sshd[4950]: Accepted publickey for core from 10.200.16.10 port 40424 ssh2: RSA SHA256:8W/Tf+7cj1biafetzZP1V2wDwegtsataOBfSdAJjdnY
May 14 23:54:53.508551 sshd-session[4950]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 23:54:53.513697 systemd-logind[1736]: New session 14 of user core.
May 14 23:54:53.519495 systemd[1]: Started session-14.scope - Session 14 of User core.
May 14 23:54:53.917569 sshd[4952]: Connection closed by 10.200.16.10 port 40424
May 14 23:54:53.916681 sshd-session[4950]: pam_unix(sshd:session): session closed for user core
May 14 23:54:53.919618 systemd-logind[1736]: Session 14 logged out. Waiting for processes to exit.
May 14 23:54:53.920073 systemd[1]: sshd@11-10.200.20.40:22-10.200.16.10:40424.service: Deactivated successfully.
May 14 23:54:53.922486 systemd[1]: session-14.scope: Deactivated successfully.
May 14 23:54:53.924724 systemd-logind[1736]: Removed session 14.
May 14 23:54:54.006589 systemd[1]: Started sshd@12-10.200.20.40:22-10.200.16.10:40432.service - OpenSSH per-connection server daemon (10.200.16.10:40432).
May 14 23:54:54.457250 sshd[4962]: Accepted publickey for core from 10.200.16.10 port 40432 ssh2: RSA SHA256:8W/Tf+7cj1biafetzZP1V2wDwegtsataOBfSdAJjdnY
May 14 23:54:54.458580 sshd-session[4962]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 23:54:54.462815 systemd-logind[1736]: New session 15 of user core.
May 14 23:54:54.469515 systemd[1]: Started session-15.scope - Session 15 of User core.
May 14 23:54:54.847456 sshd[4964]: Connection closed by 10.200.16.10 port 40432
May 14 23:54:54.847260 sshd-session[4962]: pam_unix(sshd:session): session closed for user core
May 14 23:54:54.850881 systemd[1]: sshd@12-10.200.20.40:22-10.200.16.10:40432.service: Deactivated successfully.
May 14 23:54:54.852922 systemd[1]: session-15.scope: Deactivated successfully.
May 14 23:54:54.853713 systemd-logind[1736]: Session 15 logged out. Waiting for processes to exit.
May 14 23:54:54.854694 systemd-logind[1736]: Removed session 15.
May 14 23:54:59.927597 systemd[1]: Started sshd@13-10.200.20.40:22-10.200.16.10:48238.service - OpenSSH per-connection server daemon (10.200.16.10:48238).
May 14 23:55:00.352008 sshd[4979]: Accepted publickey for core from 10.200.16.10 port 48238 ssh2: RSA SHA256:8W/Tf+7cj1biafetzZP1V2wDwegtsataOBfSdAJjdnY
May 14 23:55:00.353375 sshd-session[4979]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 23:55:00.357455 systemd-logind[1736]: New session 16 of user core.
May 14 23:55:00.371484 systemd[1]: Started session-16.scope - Session 16 of User core.
May 14 23:55:00.727418 sshd[4981]: Connection closed by 10.200.16.10 port 48238
May 14 23:55:00.727983 sshd-session[4979]: pam_unix(sshd:session): session closed for user core
May 14 23:55:00.732146 systemd[1]: sshd@13-10.200.20.40:22-10.200.16.10:48238.service: Deactivated successfully.
May 14 23:55:00.732243 systemd-logind[1736]: Session 16 logged out. Waiting for processes to exit.
May 14 23:55:00.734121 systemd[1]: session-16.scope: Deactivated successfully.
May 14 23:55:00.735480 systemd-logind[1736]: Removed session 16.
May 14 23:55:00.846698 update_engine[1744]: I20250514 23:55:00.846628 1744 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
May 14 23:55:00.847050 update_engine[1744]: I20250514 23:55:00.846863 1744 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
May 14 23:55:00.847159 update_engine[1744]: I20250514 23:55:00.847124 1744 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
May 14 23:55:00.857584 update_engine[1744]: E20250514 23:55:00.857538 1744 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
May 14 23:55:00.857652 update_engine[1744]: I20250514 23:55:00.857609 1744 libcurl_http_fetcher.cc:283] No HTTP response, retry 3
May 14 23:55:05.818595 systemd[1]: Started sshd@14-10.200.20.40:22-10.200.16.10:48240.service - OpenSSH per-connection server daemon (10.200.16.10:48240).
May 14 23:55:06.266889 sshd[4993]: Accepted publickey for core from 10.200.16.10 port 48240 ssh2: RSA SHA256:8W/Tf+7cj1biafetzZP1V2wDwegtsataOBfSdAJjdnY
May 14 23:55:06.268198 sshd-session[4993]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 23:55:06.272979 systemd-logind[1736]: New session 17 of user core.
May 14 23:55:06.279491 systemd[1]: Started session-17.scope - Session 17 of User core.
May 14 23:55:06.660353 sshd[4995]: Connection closed by 10.200.16.10 port 48240
May 14 23:55:06.660941 sshd-session[4993]: pam_unix(sshd:session): session closed for user core
May 14 23:55:06.664589 systemd[1]: sshd@14-10.200.20.40:22-10.200.16.10:48240.service: Deactivated successfully.
May 14 23:55:06.666563 systemd[1]: session-17.scope: Deactivated successfully.
May 14 23:55:06.667320 systemd-logind[1736]: Session 17 logged out. Waiting for processes to exit.
May 14 23:55:06.668446 systemd-logind[1736]: Removed session 17.
May 14 23:55:06.751604 systemd[1]: Started sshd@15-10.200.20.40:22-10.200.16.10:48244.service - OpenSSH per-connection server daemon (10.200.16.10:48244).
May 14 23:55:07.201694 sshd[5007]: Accepted publickey for core from 10.200.16.10 port 48244 ssh2: RSA SHA256:8W/Tf+7cj1biafetzZP1V2wDwegtsataOBfSdAJjdnY
May 14 23:55:07.203054 sshd-session[5007]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 23:55:07.207878 systemd-logind[1736]: New session 18 of user core.
May 14 23:55:07.218504 systemd[1]: Started session-18.scope - Session 18 of User core.
May 14 23:55:07.616649 sshd[5009]: Connection closed by 10.200.16.10 port 48244
May 14 23:55:07.617193 sshd-session[5007]: pam_unix(sshd:session): session closed for user core
May 14 23:55:07.620666 systemd[1]: sshd@15-10.200.20.40:22-10.200.16.10:48244.service: Deactivated successfully.
May 14 23:55:07.622715 systemd[1]: session-18.scope: Deactivated successfully.
May 14 23:55:07.623962 systemd-logind[1736]: Session 18 logged out. Waiting for processes to exit.
May 14 23:55:07.625086 systemd-logind[1736]: Removed session 18.
May 14 23:55:07.706767 systemd[1]: Started sshd@16-10.200.20.40:22-10.200.16.10:48256.service - OpenSSH per-connection server daemon (10.200.16.10:48256).
May 14 23:55:08.162718 sshd[5019]: Accepted publickey for core from 10.200.16.10 port 48256 ssh2: RSA SHA256:8W/Tf+7cj1biafetzZP1V2wDwegtsataOBfSdAJjdnY
May 14 23:55:08.163973 sshd-session[5019]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 23:55:08.167968 systemd-logind[1736]: New session 19 of user core.
May 14 23:55:08.182501 systemd[1]: Started session-19.scope - Session 19 of User core.
May 14 23:55:09.163185 sshd[5021]: Connection closed by 10.200.16.10 port 48256
May 14 23:55:09.163822 sshd-session[5019]: pam_unix(sshd:session): session closed for user core
May 14 23:55:09.167098 systemd[1]: sshd@16-10.200.20.40:22-10.200.16.10:48256.service: Deactivated successfully.
May 14 23:55:09.169427 systemd[1]: session-19.scope: Deactivated successfully.
May 14 23:55:09.170466 systemd-logind[1736]: Session 19 logged out. Waiting for processes to exit.
May 14 23:55:09.171404 systemd-logind[1736]: Removed session 19.
May 14 23:55:09.249640 systemd[1]: Started sshd@17-10.200.20.40:22-10.200.16.10:45874.service - OpenSSH per-connection server daemon (10.200.16.10:45874).
May 14 23:55:09.704756 sshd[5038]: Accepted publickey for core from 10.200.16.10 port 45874 ssh2: RSA SHA256:8W/Tf+7cj1biafetzZP1V2wDwegtsataOBfSdAJjdnY
May 14 23:55:09.705380 sshd-session[5038]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 23:55:09.711078 systemd-logind[1736]: New session 20 of user core.
May 14 23:55:09.716502 systemd[1]: Started session-20.scope - Session 20 of User core.
May 14 23:55:10.199589 sshd[5040]: Connection closed by 10.200.16.10 port 45874
May 14 23:55:10.200013 sshd-session[5038]: pam_unix(sshd:session): session closed for user core
May 14 23:55:10.203970 systemd[1]: sshd@17-10.200.20.40:22-10.200.16.10:45874.service: Deactivated successfully.
May 14 23:55:10.206224 systemd[1]: session-20.scope: Deactivated successfully.
May 14 23:55:10.207064 systemd-logind[1736]: Session 20 logged out. Waiting for processes to exit.
May 14 23:55:10.208233 systemd-logind[1736]: Removed session 20.
May 14 23:55:10.282070 systemd[1]: Started sshd@18-10.200.20.40:22-10.200.16.10:45876.service - OpenSSH per-connection server daemon (10.200.16.10:45876).
May 14 23:55:10.742970 sshd[5050]: Accepted publickey for core from 10.200.16.10 port 45876 ssh2: RSA SHA256:8W/Tf+7cj1biafetzZP1V2wDwegtsataOBfSdAJjdnY
May 14 23:55:10.744687 sshd-session[5050]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 23:55:10.750003 systemd-logind[1736]: New session 21 of user core.
May 14 23:55:10.756495 systemd[1]: Started session-21.scope - Session 21 of User core.
May 14 23:55:10.847105 update_engine[1744]: I20250514 23:55:10.847016 1744 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
May 14 23:55:10.847489 update_engine[1744]: I20250514 23:55:10.847262 1744 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
May 14 23:55:10.847565 update_engine[1744]: I20250514 23:55:10.847533 1744 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
May 14 23:55:10.894216 update_engine[1744]: E20250514 23:55:10.894160 1744 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
May 14 23:55:10.894374 update_engine[1744]: I20250514 23:55:10.894251 1744 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded
May 14 23:55:10.894374 update_engine[1744]: I20250514 23:55:10.894263 1744 omaha_request_action.cc:617] Omaha request response:
May 14 23:55:10.894432 update_engine[1744]: E20250514 23:55:10.894374 1744 omaha_request_action.cc:636] Omaha request network transfer failed.
May 14 23:55:10.894432 update_engine[1744]: I20250514 23:55:10.894395 1744 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing.
May 14 23:55:10.894432 update_engine[1744]: I20250514 23:55:10.894402 1744 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
May 14 23:55:10.894432 update_engine[1744]: I20250514 23:55:10.894407 1744 update_attempter.cc:306] Processing Done.
May 14 23:55:10.894432 update_engine[1744]: E20250514 23:55:10.894421 1744 update_attempter.cc:619] Update failed.
May 14 23:55:10.894432 update_engine[1744]: I20250514 23:55:10.894428 1744 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse
May 14 23:55:10.894432 update_engine[1744]: I20250514 23:55:10.894433 1744 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse)
May 14 23:55:10.894432 update_engine[1744]: I20250514 23:55:10.894438 1744 payload_state.cc:103] Ignoring failures until we get a valid Omaha response.
May 14 23:55:10.894561 update_engine[1744]: I20250514 23:55:10.894502 1744 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction
May 14 23:55:10.894561 update_engine[1744]: I20250514 23:55:10.894524 1744 omaha_request_action.cc:271] Posting an Omaha request to disabled
May 14 23:55:10.894561 update_engine[1744]: I20250514 23:55:10.894529 1744 omaha_request_action.cc:272] Request:
May 14 23:55:10.894561 update_engine[1744]:
May 14 23:55:10.894561 update_engine[1744]:
May 14 23:55:10.894561 update_engine[1744]:
May 14 23:55:10.894561 update_engine[1744]:
May 14 23:55:10.894561 update_engine[1744]:
May 14 23:55:10.894561 update_engine[1744]:
May 14 23:55:10.894561 update_engine[1744]: I20250514 23:55:10.894536 1744 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
May 14 23:55:10.894747 update_engine[1744]: I20250514 23:55:10.894672 1744 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
May 14 23:55:10.894945 update_engine[1744]: I20250514 23:55:10.894892 1744 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
May 14 23:55:10.895089 locksmithd[1787]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0
May 14 23:55:10.966233 update_engine[1744]: E20250514 23:55:10.966163 1744 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
May 14 23:55:10.966394 update_engine[1744]: I20250514 23:55:10.966274 1744 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded
May 14 23:55:10.966394 update_engine[1744]: I20250514 23:55:10.966291 1744 omaha_request_action.cc:617] Omaha request response:
May 14 23:55:10.966394 update_engine[1744]: I20250514 23:55:10.966303 1744 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
May 14 23:55:10.966394 update_engine[1744]: I20250514 23:55:10.966311 1744 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
May 14 23:55:10.966394 update_engine[1744]: I20250514 23:55:10.966320 1744 update_attempter.cc:306] Processing Done.
May 14 23:55:10.966394 update_engine[1744]: I20250514 23:55:10.966327 1744 update_attempter.cc:310] Error event sent.
May 14 23:55:10.966394 update_engine[1744]: I20250514 23:55:10.966372 1744 update_check_scheduler.cc:74] Next update check in 44m36s
May 14 23:55:10.966708 locksmithd[1787]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0
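
The update_engine trace that ends here is one complete failed Omaha cycle: the fetcher posts to the literal host disabled (this image's update endpoint is pointed at a non-resolving name), retries three times at roughly ten-second intervals, converts the failure to kActionCodeOmahaErrorInHTTPResponse, sends a single error event, and schedules the next periodic check. A toy version of that control flow (Python; the constants are read off this log, not taken from the update_engine sources):

import time

def fetch(url):
    # Stand-in for libcurl_http_fetcher; on this host every attempt fails.
    raise OSError(f"Could not resolve host: {url}")

def check_for_update(url="disabled", retries=3, wait_s=10):
    for attempt in range(retries + 1):       # one initial try plus 3 retries
        try:
            return fetch(url)
        except OSError:
            if attempt == retries:
                print("Omaha request network transfer failed; "
                      "ignoring failures until we get a valid Omaha response")
                return None
            print(f"No HTTP response, retry {attempt + 1}")
            time.sleep(wait_s)

check_for_update()   # afterwards the scheduler fires again (here, in 44m36s)
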
May 14 23:55:11.130625 sshd[5052]: Connection closed by 10.200.16.10 port 45876
May 14 23:55:11.131373 sshd-session[5050]: pam_unix(sshd:session): session closed for user core
May 14 23:55:11.134903 systemd[1]: sshd@18-10.200.20.40:22-10.200.16.10:45876.service: Deactivated successfully.
May 14 23:55:11.137432 systemd[1]: session-21.scope: Deactivated successfully.
May 14 23:55:11.138593 systemd-logind[1736]: Session 21 logged out. Waiting for processes to exit.
May 14 23:55:11.139598 systemd-logind[1736]: Removed session 21.
May 14 23:55:16.224588 systemd[1]: Started sshd@19-10.200.20.40:22-10.200.16.10:45880.service - OpenSSH per-connection server daemon (10.200.16.10:45880).
May 14 23:55:16.675815 sshd[5066]: Accepted publickey for core from 10.200.16.10 port 45880 ssh2: RSA SHA256:8W/Tf+7cj1biafetzZP1V2wDwegtsataOBfSdAJjdnY
May 14 23:55:16.677093 sshd-session[5066]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 23:55:16.681191 systemd-logind[1736]: New session 22 of user core.
May 14 23:55:16.693073 systemd[1]: Started session-22.scope - Session 22 of User core.
May 14 23:55:17.061030 sshd[5068]: Connection closed by 10.200.16.10 port 45880
May 14 23:55:17.060807 sshd-session[5066]: pam_unix(sshd:session): session closed for user core
May 14 23:55:17.064447 systemd[1]: sshd@19-10.200.20.40:22-10.200.16.10:45880.service: Deactivated successfully.
May 14 23:55:17.066743 systemd[1]: session-22.scope: Deactivated successfully.
May 14 23:55:17.067679 systemd-logind[1736]: Session 22 logged out. Waiting for processes to exit.
May 14 23:55:17.069086 systemd-logind[1736]: Removed session 22.
May 14 23:55:22.146566 systemd[1]: Started sshd@20-10.200.20.40:22-10.200.16.10:59430.service - OpenSSH per-connection server daemon (10.200.16.10:59430).
May 14 23:55:22.591685 sshd[5082]: Accepted publickey for core from 10.200.16.10 port 59430 ssh2: RSA SHA256:8W/Tf+7cj1biafetzZP1V2wDwegtsataOBfSdAJjdnY
May 14 23:55:22.593483 sshd-session[5082]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 23:55:22.598172 systemd-logind[1736]: New session 23 of user core.
May 14 23:55:22.605486 systemd[1]: Started session-23.scope - Session 23 of User core.
May 14 23:55:22.976385 sshd[5084]: Connection closed by 10.200.16.10 port 59430
May 14 23:55:22.976917 sshd-session[5082]: pam_unix(sshd:session): session closed for user core
May 14 23:55:22.980103 systemd[1]: sshd@20-10.200.20.40:22-10.200.16.10:59430.service: Deactivated successfully.
May 14 23:55:22.982623 systemd[1]: session-23.scope: Deactivated successfully.
May 14 23:55:22.983787 systemd-logind[1736]: Session 23 logged out. Waiting for processes to exit.
May 14 23:55:22.985100 systemd-logind[1736]: Removed session 23.
May 14 23:55:28.055583 systemd[1]: Started sshd@21-10.200.20.40:22-10.200.16.10:59436.service - OpenSSH per-connection server daemon (10.200.16.10:59436).
May 14 23:55:28.472235 sshd[5096]: Accepted publickey for core from 10.200.16.10 port 59436 ssh2: RSA SHA256:8W/Tf+7cj1biafetzZP1V2wDwegtsataOBfSdAJjdnY
May 14 23:55:28.473540 sshd-session[5096]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 23:55:28.478011 systemd-logind[1736]: New session 24 of user core.
May 14 23:55:28.485494 systemd[1]: Started session-24.scope - Session 24 of User core.
May 14 23:55:28.850577 sshd[5098]: Connection closed by 10.200.16.10 port 59436
May 14 23:55:28.851240 sshd-session[5096]: pam_unix(sshd:session): session closed for user core
May 14 23:55:28.854697 systemd[1]: sshd@21-10.200.20.40:22-10.200.16.10:59436.service: Deactivated successfully.
May 14 23:55:28.856836 systemd[1]: session-24.scope: Deactivated successfully.
May 14 23:55:28.857757 systemd-logind[1736]: Session 24 logged out. Waiting for processes to exit.
May 14 23:55:28.859038 systemd-logind[1736]: Removed session 24.
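
From sshd@7 onward the log settles into a steady cadence: a per-connection service starts, pam opens a session scope, and within a second or so the client disconnects and logind removes the session. Pairing the New session / Removed session records gives each session's lifetime; a sketch that does so from a one-record-per-line dump on stdin (Python; assumes the single-day, microsecond-precision journal prefixes seen here):

import re, sys
from datetime import datetime

TS   = re.compile(r'^May 14 (\d{2}:\d{2}:\d{2}\.\d{6})')
NEW  = re.compile(r'New session (\d+) of user (\w+)')
GONE = re.compile(r'Removed session (\d+)\.')

opened = {}
for line in sys.stdin:
    t = TS.match(line)
    if not t:
        continue
    when = datetime.strptime(t.group(1), "%H:%M:%S.%f")
    if (m := NEW.search(line)):
        opened[m.group(1)] = (m.group(2), when)
    elif (m := GONE.search(line)) and m.group(1) in opened:
        user, start = opened.pop(m.group(1))
        print(f"session {m.group(1)} ({user}): "
              f"{(when - start).total_seconds():.1f}s")
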
May 14 23:55:28.936209 systemd[1]: Started sshd@22-10.200.20.40:22-10.200.16.10:39464.service - OpenSSH per-connection server daemon (10.200.16.10:39464).
May 14 23:55:29.396968 sshd[5110]: Accepted publickey for core from 10.200.16.10 port 39464 ssh2: RSA SHA256:8W/Tf+7cj1biafetzZP1V2wDwegtsataOBfSdAJjdnY
May 14 23:55:29.398375 sshd-session[5110]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 23:55:29.403176 systemd-logind[1736]: New session 25 of user core.
May 14 23:55:29.410543 systemd[1]: Started session-25.scope - Session 25 of User core.
May 14 23:55:32.044850 containerd[1766]: time="2025-05-14T23:55:32.044591834Z" level=info msg="StopContainer for \"15ea7793051fc1860e07e097921228d31b7ca5ffd655be6a18d78efc5a47439d\" with timeout 30 (s)"
May 14 23:55:32.047837 containerd[1766]: time="2025-05-14T23:55:32.047574796Z" level=info msg="Stop container \"15ea7793051fc1860e07e097921228d31b7ca5ffd655be6a18d78efc5a47439d\" with signal terminated"
May 14 23:55:32.058539 systemd[1]: cri-containerd-15ea7793051fc1860e07e097921228d31b7ca5ffd655be6a18d78efc5a47439d.scope: Deactivated successfully.
May 14 23:55:32.067687 containerd[1766]: time="2025-05-14T23:55:32.067505452Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
May 14 23:55:32.077747 containerd[1766]: time="2025-05-14T23:55:32.077628700Z" level=info msg="StopContainer for \"e1a31b236578ab464f8e914578262a5560715a1b61669190cb4619cd7e27d9ab\" with timeout 2 (s)"
May 14 23:55:32.078085 containerd[1766]: time="2025-05-14T23:55:32.078014020Z" level=info msg="Stop container \"e1a31b236578ab464f8e914578262a5560715a1b61669190cb4619cd7e27d9ab\" with signal terminated"
May 14 23:55:32.088184 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-15ea7793051fc1860e07e097921228d31b7ca5ffd655be6a18d78efc5a47439d-rootfs.mount: Deactivated successfully.
May 14 23:55:32.090545 systemd-networkd[1520]: lxc_health: Link DOWN
May 14 23:55:32.090548 systemd-networkd[1520]: lxc_health: Lost carrier
May 14 23:55:32.102400 systemd[1]: cri-containerd-e1a31b236578ab464f8e914578262a5560715a1b61669190cb4619cd7e27d9ab.scope: Deactivated successfully.
May 14 23:55:32.102782 systemd[1]: cri-containerd-e1a31b236578ab464f8e914578262a5560715a1b61669190cb4619cd7e27d9ab.scope: Consumed 6.582s CPU time, 141.8M memory peak, 128K read from disk, 12.9M written to disk.
May 14 23:55:32.121618 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e1a31b236578ab464f8e914578262a5560715a1b61669190cb4619cd7e27d9ab-rootfs.mount: Deactivated successfully.
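
StopContainer ... with timeout 30 (s) followed by Stop container ... with signal terminated is the standard graceful-stop handshake: send SIGTERM, wait out the grace period, escalate to SIGKILL only if the deadline passes (here both containers exit well inside their deadlines, so the scopes simply deactivate). The same pattern for an ordinary child process, as a loose analogy rather than containerd's actual implementation (Python):

import signal, subprocess

def stop(proc: subprocess.Popen, timeout: float = 30.0) -> None:
    proc.send_signal(signal.SIGTERM)   # ask the process to exit cleanly
    try:
        proc.wait(timeout=timeout)     # grace period, like the "timeout 30 (s)"
    except subprocess.TimeoutExpired:
        proc.kill()                    # escalate to SIGKILL past the deadline
        proc.wait()
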
May 14 23:55:32.179980 containerd[1766]: time="2025-05-14T23:55:32.179914701Z" level=info msg="shim disconnected" id=e1a31b236578ab464f8e914578262a5560715a1b61669190cb4619cd7e27d9ab namespace=k8s.io
May 14 23:55:32.179980 containerd[1766]: time="2025-05-14T23:55:32.179972421Z" level=warning msg="cleaning up after shim disconnected" id=e1a31b236578ab464f8e914578262a5560715a1b61669190cb4619cd7e27d9ab namespace=k8s.io
May 14 23:55:32.179980 containerd[1766]: time="2025-05-14T23:55:32.179982981Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 14 23:55:32.180546 containerd[1766]: time="2025-05-14T23:55:32.180314302Z" level=info msg="shim disconnected" id=15ea7793051fc1860e07e097921228d31b7ca5ffd655be6a18d78efc5a47439d namespace=k8s.io
May 14 23:55:32.180546 containerd[1766]: time="2025-05-14T23:55:32.180363302Z" level=warning msg="cleaning up after shim disconnected" id=15ea7793051fc1860e07e097921228d31b7ca5ffd655be6a18d78efc5a47439d namespace=k8s.io
May 14 23:55:32.180546 containerd[1766]: time="2025-05-14T23:55:32.180370622Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 14 23:55:32.198328 containerd[1766]: time="2025-05-14T23:55:32.198288716Z" level=info msg="StopContainer for \"15ea7793051fc1860e07e097921228d31b7ca5ffd655be6a18d78efc5a47439d\" returns successfully"
May 14 23:55:32.199556 containerd[1766]: time="2025-05-14T23:55:32.199213036Z" level=info msg="StopPodSandbox for \"365300bde33653ce9f3d5ceec2ac07f13b9c9f848206e0046f357e3e6e6f0835\""
May 14 23:55:32.199626 containerd[1766]: time="2025-05-14T23:55:32.199575317Z" level=info msg="Container to stop \"15ea7793051fc1860e07e097921228d31b7ca5ffd655be6a18d78efc5a47439d\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 14 23:55:32.199757 containerd[1766]: time="2025-05-14T23:55:32.199497397Z" level=info msg="StopContainer for \"e1a31b236578ab464f8e914578262a5560715a1b61669190cb4619cd7e27d9ab\" returns successfully"
May 14 23:55:32.201153 containerd[1766]: time="2025-05-14T23:55:32.201117918Z" level=info msg="StopPodSandbox for \"b26aeea3ac38574f86e77342d058b6679eb2263d5e7115a928e5a2be3e704ae5\""
May 14 23:55:32.201222 containerd[1766]: time="2025-05-14T23:55:32.201156718Z" level=info msg="Container to stop \"e1a31b236578ab464f8e914578262a5560715a1b61669190cb4619cd7e27d9ab\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 14 23:55:32.201222 containerd[1766]: time="2025-05-14T23:55:32.201167478Z" level=info msg="Container to stop \"5fed1aa469e342100b3de26b55ba3aa224af32d476942b7ac7a034ab46fd8d38\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 14 23:55:32.201222 containerd[1766]: time="2025-05-14T23:55:32.201175598Z" level=info msg="Container to stop \"c38a5813a0515fbe36ab3903b8ae4d130f5fabaf37ee3c9ca161d5fa7a9b58ee\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 14 23:55:32.201222 containerd[1766]: time="2025-05-14T23:55:32.201183438Z" level=info msg="Container to stop \"600e528c9e65240849b732a91fc04b203b09a9ed56c8bfd5ba1f63a58089c40a\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 14 23:55:32.201222 containerd[1766]: time="2025-05-14T23:55:32.201191278Z" level=info msg="Container to stop \"09feaaad4064e5884366bfd6d094900bf95844b00577c6aaf22a1c94447d8e26\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 14 23:55:32.202997 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-365300bde33653ce9f3d5ceec2ac07f13b9c9f848206e0046f357e3e6e6f0835-shm.mount: Deactivated successfully.
May 14 23:55:32.206610 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-b26aeea3ac38574f86e77342d058b6679eb2263d5e7115a928e5a2be3e704ae5-shm.mount: Deactivated successfully.
May 14 23:55:32.210679 systemd[1]: cri-containerd-365300bde33653ce9f3d5ceec2ac07f13b9c9f848206e0046f357e3e6e6f0835.scope: Deactivated successfully.
May 14 23:55:32.212407 systemd[1]: cri-containerd-b26aeea3ac38574f86e77342d058b6679eb2263d5e7115a928e5a2be3e704ae5.scope: Deactivated successfully.
May 14 23:55:32.249145 containerd[1766]: time="2025-05-14T23:55:32.248855916Z" level=info msg="shim disconnected" id=365300bde33653ce9f3d5ceec2ac07f13b9c9f848206e0046f357e3e6e6f0835 namespace=k8s.io
May 14 23:55:32.249145 containerd[1766]: time="2025-05-14T23:55:32.249033196Z" level=warning msg="cleaning up after shim disconnected" id=365300bde33653ce9f3d5ceec2ac07f13b9c9f848206e0046f357e3e6e6f0835 namespace=k8s.io
May 14 23:55:32.249145 containerd[1766]: time="2025-05-14T23:55:32.249041916Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 14 23:55:32.249377 containerd[1766]: time="2025-05-14T23:55:32.249114956Z" level=info msg="shim disconnected" id=b26aeea3ac38574f86e77342d058b6679eb2263d5e7115a928e5a2be3e704ae5 namespace=k8s.io
May 14 23:55:32.249377 containerd[1766]: time="2025-05-14T23:55:32.249210076Z" level=warning msg="cleaning up after shim disconnected" id=b26aeea3ac38574f86e77342d058b6679eb2263d5e7115a928e5a2be3e704ae5 namespace=k8s.io
May 14 23:55:32.249377 containerd[1766]: time="2025-05-14T23:55:32.249218316Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 14 23:55:32.261405 containerd[1766]: time="2025-05-14T23:55:32.260917525Z" level=warning msg="cleanup warnings time=\"2025-05-14T23:55:32Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
May 14 23:55:32.261994 containerd[1766]: time="2025-05-14T23:55:32.261966006Z" level=info msg="TearDown network for sandbox \"b26aeea3ac38574f86e77342d058b6679eb2263d5e7115a928e5a2be3e704ae5\" successfully"
May 14 23:55:32.262090 containerd[1766]: time="2025-05-14T23:55:32.262075526Z" level=info msg="StopPodSandbox for \"b26aeea3ac38574f86e77342d058b6679eb2263d5e7115a928e5a2be3e704ae5\" returns successfully"
May 14 23:55:32.267394 containerd[1766]: time="2025-05-14T23:55:32.267314930Z" level=info msg="TearDown network for sandbox \"365300bde33653ce9f3d5ceec2ac07f13b9c9f848206e0046f357e3e6e6f0835\" successfully"
May 14 23:55:32.267559 containerd[1766]: time="2025-05-14T23:55:32.267481811Z" level=info msg="StopPodSandbox for \"365300bde33653ce9f3d5ceec2ac07f13b9c9f848206e0046f357e3e6e6f0835\" returns successfully"
May 14 23:55:32.402297 kubelet[3367]: I0514 23:55:32.401792 3367 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/72a538d3-817d-4308-b25c-cfb0d11e33b8-cilium-config-path\") pod \"72a538d3-817d-4308-b25c-cfb0d11e33b8\" (UID: \"72a538d3-817d-4308-b25c-cfb0d11e33b8\") "
May 14 23:55:32.402297 kubelet[3367]: I0514 23:55:32.401840 3367 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/9802231f-c231-4831-96a2-3393633b9c5c-cilium-config-path\") pod \"9802231f-c231-4831-96a2-3393633b9c5c\" (UID: \"9802231f-c231-4831-96a2-3393633b9c5c\") "
May 14 23:55:32.402297 kubelet[3367]: I0514 23:55:32.401859 3367 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/9802231f-c231-4831-96a2-3393633b9c5c-cilium-run\") pod \"9802231f-c231-4831-96a2-3393633b9c5c\" (UID: \"9802231f-c231-4831-96a2-3393633b9c5c\") "
May 14 23:55:32.402297 kubelet[3367]: I0514 23:55:32.401875 3367 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9802231f-c231-4831-96a2-3393633b9c5c-xtables-lock\") pod \"9802231f-c231-4831-96a2-3393633b9c5c\" (UID: \"9802231f-c231-4831-96a2-3393633b9c5c\") "
May 14 23:55:32.402297 kubelet[3367]: I0514 23:55:32.401893 3367 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/9802231f-c231-4831-96a2-3393633b9c5c-bpf-maps\") pod \"9802231f-c231-4831-96a2-3393633b9c5c\" (UID: \"9802231f-c231-4831-96a2-3393633b9c5c\") "
May 14 23:55:32.402297 kubelet[3367]: I0514 23:55:32.401911 3367 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/9802231f-c231-4831-96a2-3393633b9c5c-clustermesh-secrets\") pod \"9802231f-c231-4831-96a2-3393633b9c5c\" (UID: \"9802231f-c231-4831-96a2-3393633b9c5c\") "
May 14 23:55:32.402833 kubelet[3367]: I0514 23:55:32.401926 3367 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/9802231f-c231-4831-96a2-3393633b9c5c-hostproc\") pod \"9802231f-c231-4831-96a2-3393633b9c5c\" (UID: \"9802231f-c231-4831-96a2-3393633b9c5c\") "
May 14 23:55:32.402833 kubelet[3367]: I0514 23:55:32.401942 3367 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/9802231f-c231-4831-96a2-3393633b9c5c-etc-cni-netd\") pod \"9802231f-c231-4831-96a2-3393633b9c5c\" (UID: \"9802231f-c231-4831-96a2-3393633b9c5c\") "
May 14 23:55:32.402833 kubelet[3367]: I0514 23:55:32.401956 3367 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/9802231f-c231-4831-96a2-3393633b9c5c-cilium-cgroup\") pod \"9802231f-c231-4831-96a2-3393633b9c5c\" (UID: \"9802231f-c231-4831-96a2-3393633b9c5c\") "
May 14 23:55:32.402833 kubelet[3367]: I0514 23:55:32.401975 3367 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t77x2\" (UniqueName: \"kubernetes.io/projected/72a538d3-817d-4308-b25c-cfb0d11e33b8-kube-api-access-t77x2\") pod \"72a538d3-817d-4308-b25c-cfb0d11e33b8\" (UID: \"72a538d3-817d-4308-b25c-cfb0d11e33b8\") "
May 14 23:55:32.402833 kubelet[3367]: I0514 23:55:32.401994 3367 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/9802231f-c231-4831-96a2-3393633b9c5c-host-proc-sys-kernel\") pod \"9802231f-c231-4831-96a2-3393633b9c5c\" (UID: \"9802231f-c231-4831-96a2-3393633b9c5c\") "
May 14 23:55:32.402833 kubelet[3367]: I0514 23:55:32.402010 3367 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/9802231f-c231-4831-96a2-3393633b9c5c-hubble-tls\") pod \"9802231f-c231-4831-96a2-3393633b9c5c\" (UID: \"9802231f-c231-4831-96a2-3393633b9c5c\") "
May 14 23:55:32.402963 kubelet[3367]: I0514 23:55:32.402023 3367 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9802231f-c231-4831-96a2-3393633b9c5c-lib-modules\") pod \"9802231f-c231-4831-96a2-3393633b9c5c\" (UID: \"9802231f-c231-4831-96a2-3393633b9c5c\") "
May 14 23:55:32.402963 kubelet[3367]: I0514 23:55:32.402041 3367 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mf4p7\" (UniqueName: \"kubernetes.io/projected/9802231f-c231-4831-96a2-3393633b9c5c-kube-api-access-mf4p7\") pod \"9802231f-c231-4831-96a2-3393633b9c5c\" (UID: \"9802231f-c231-4831-96a2-3393633b9c5c\") "
May 14 23:55:32.402963 kubelet[3367]: I0514 23:55:32.402055 3367 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/9802231f-c231-4831-96a2-3393633b9c5c-host-proc-sys-net\") pod \"9802231f-c231-4831-96a2-3393633b9c5c\" (UID: \"9802231f-c231-4831-96a2-3393633b9c5c\") "
May 14 23:55:32.402963 kubelet[3367]: I0514 23:55:32.402076 3367 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/9802231f-c231-4831-96a2-3393633b9c5c-cni-path\") pod \"9802231f-c231-4831-96a2-3393633b9c5c\" (UID: \"9802231f-c231-4831-96a2-3393633b9c5c\") "
May 14 23:55:32.402963 kubelet[3367]: I0514 23:55:32.402141 3367 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9802231f-c231-4831-96a2-3393633b9c5c-cni-path" (OuterVolumeSpecName: "cni-path") pod "9802231f-c231-4831-96a2-3393633b9c5c" (UID: "9802231f-c231-4831-96a2-3393633b9c5c"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
May 14 23:55:32.405224 kubelet[3367]: I0514 23:55:32.404198 3367 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/72a538d3-817d-4308-b25c-cfb0d11e33b8-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "72a538d3-817d-4308-b25c-cfb0d11e33b8" (UID: "72a538d3-817d-4308-b25c-cfb0d11e33b8"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
May 14 23:55:32.405224 kubelet[3367]: I0514 23:55:32.404253 3367 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9802231f-c231-4831-96a2-3393633b9c5c-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "9802231f-c231-4831-96a2-3393633b9c5c" (UID: "9802231f-c231-4831-96a2-3393633b9c5c"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
May 14 23:55:32.405224 kubelet[3367]: I0514 23:55:32.404272 3367 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9802231f-c231-4831-96a2-3393633b9c5c-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "9802231f-c231-4831-96a2-3393633b9c5c" (UID: "9802231f-c231-4831-96a2-3393633b9c5c"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
May 14 23:55:32.405224 kubelet[3367]: I0514 23:55:32.404285 3367 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9802231f-c231-4831-96a2-3393633b9c5c-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "9802231f-c231-4831-96a2-3393633b9c5c" (UID: "9802231f-c231-4831-96a2-3393633b9c5c"). InnerVolumeSpecName "xtables-lock".
PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 14 23:55:32.405224 kubelet[3367]: I0514 23:55:32.404299 3367 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9802231f-c231-4831-96a2-3393633b9c5c-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "9802231f-c231-4831-96a2-3393633b9c5c" (UID: "9802231f-c231-4831-96a2-3393633b9c5c"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 14 23:55:32.405415 kubelet[3367]: I0514 23:55:32.405005 3367 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9802231f-c231-4831-96a2-3393633b9c5c-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "9802231f-c231-4831-96a2-3393633b9c5c" (UID: "9802231f-c231-4831-96a2-3393633b9c5c"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" May 14 23:55:32.406500 kubelet[3367]: I0514 23:55:32.406452 3367 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9802231f-c231-4831-96a2-3393633b9c5c-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "9802231f-c231-4831-96a2-3393633b9c5c" (UID: "9802231f-c231-4831-96a2-3393633b9c5c"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" May 14 23:55:32.406571 kubelet[3367]: I0514 23:55:32.406509 3367 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9802231f-c231-4831-96a2-3393633b9c5c-hostproc" (OuterVolumeSpecName: "hostproc") pod "9802231f-c231-4831-96a2-3393633b9c5c" (UID: "9802231f-c231-4831-96a2-3393633b9c5c"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 14 23:55:32.406571 kubelet[3367]: I0514 23:55:32.406526 3367 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9802231f-c231-4831-96a2-3393633b9c5c-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "9802231f-c231-4831-96a2-3393633b9c5c" (UID: "9802231f-c231-4831-96a2-3393633b9c5c"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 14 23:55:32.406571 kubelet[3367]: I0514 23:55:32.406540 3367 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9802231f-c231-4831-96a2-3393633b9c5c-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "9802231f-c231-4831-96a2-3393633b9c5c" (UID: "9802231f-c231-4831-96a2-3393633b9c5c"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 14 23:55:32.406571 kubelet[3367]: I0514 23:55:32.406553 3367 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9802231f-c231-4831-96a2-3393633b9c5c-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "9802231f-c231-4831-96a2-3393633b9c5c" (UID: "9802231f-c231-4831-96a2-3393633b9c5c"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 14 23:55:32.407585 kubelet[3367]: I0514 23:55:32.407557 3367 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/72a538d3-817d-4308-b25c-cfb0d11e33b8-kube-api-access-t77x2" (OuterVolumeSpecName: "kube-api-access-t77x2") pod "72a538d3-817d-4308-b25c-cfb0d11e33b8" (UID: "72a538d3-817d-4308-b25c-cfb0d11e33b8"). InnerVolumeSpecName "kube-api-access-t77x2". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" May 14 23:55:32.408578 kubelet[3367]: I0514 23:55:32.408540 3367 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9802231f-c231-4831-96a2-3393633b9c5c-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "9802231f-c231-4831-96a2-3393633b9c5c" (UID: "9802231f-c231-4831-96a2-3393633b9c5c"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" May 14 23:55:32.408655 kubelet[3367]: I0514 23:55:32.408583 3367 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9802231f-c231-4831-96a2-3393633b9c5c-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "9802231f-c231-4831-96a2-3393633b9c5c" (UID: "9802231f-c231-4831-96a2-3393633b9c5c"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 14 23:55:32.409599 kubelet[3367]: I0514 23:55:32.409574 3367 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9802231f-c231-4831-96a2-3393633b9c5c-kube-api-access-mf4p7" (OuterVolumeSpecName: "kube-api-access-mf4p7") pod "9802231f-c231-4831-96a2-3393633b9c5c" (UID: "9802231f-c231-4831-96a2-3393633b9c5c"). InnerVolumeSpecName "kube-api-access-mf4p7". PluginName "kubernetes.io/projected", VolumeGIDValue "" May 14 23:55:32.502964 kubelet[3367]: I0514 23:55:32.502734 3367 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/9802231f-c231-4831-96a2-3393633b9c5c-host-proc-sys-kernel\") on node \"ci-4230.1.1-n-bf7705109b\" DevicePath \"\"" May 14 23:55:32.502964 kubelet[3367]: I0514 23:55:32.502766 3367 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/9802231f-c231-4831-96a2-3393633b9c5c-etc-cni-netd\") on node \"ci-4230.1.1-n-bf7705109b\" DevicePath \"\"" May 14 23:55:32.502964 kubelet[3367]: I0514 23:55:32.502779 3367 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/9802231f-c231-4831-96a2-3393633b9c5c-cilium-cgroup\") on node \"ci-4230.1.1-n-bf7705109b\" DevicePath \"\"" May 14 23:55:32.502964 kubelet[3367]: I0514 23:55:32.502809 3367 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-t77x2\" (UniqueName: \"kubernetes.io/projected/72a538d3-817d-4308-b25c-cfb0d11e33b8-kube-api-access-t77x2\") on node \"ci-4230.1.1-n-bf7705109b\" DevicePath \"\"" May 14 23:55:32.502964 kubelet[3367]: I0514 23:55:32.502819 3367 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9802231f-c231-4831-96a2-3393633b9c5c-lib-modules\") on node \"ci-4230.1.1-n-bf7705109b\" DevicePath \"\"" May 14 23:55:32.502964 kubelet[3367]: I0514 23:55:32.502828 3367 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/9802231f-c231-4831-96a2-3393633b9c5c-hubble-tls\") on node \"ci-4230.1.1-n-bf7705109b\" DevicePath \"\"" May 14 23:55:32.502964 kubelet[3367]: I0514 23:55:32.502835 3367 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-mf4p7\" (UniqueName: \"kubernetes.io/projected/9802231f-c231-4831-96a2-3393633b9c5c-kube-api-access-mf4p7\") on node \"ci-4230.1.1-n-bf7705109b\" DevicePath \"\"" May 14 23:55:32.502964 kubelet[3367]: I0514 23:55:32.502844 3367 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" 
(UniqueName: \"kubernetes.io/host-path/9802231f-c231-4831-96a2-3393633b9c5c-host-proc-sys-net\") on node \"ci-4230.1.1-n-bf7705109b\" DevicePath \"\"" May 14 23:55:32.503375 kubelet[3367]: I0514 23:55:32.502854 3367 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/9802231f-c231-4831-96a2-3393633b9c5c-cni-path\") on node \"ci-4230.1.1-n-bf7705109b\" DevicePath \"\"" May 14 23:55:32.503375 kubelet[3367]: I0514 23:55:32.502872 3367 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/9802231f-c231-4831-96a2-3393633b9c5c-cilium-run\") on node \"ci-4230.1.1-n-bf7705109b\" DevicePath \"\"" May 14 23:55:32.503375 kubelet[3367]: I0514 23:55:32.502881 3367 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9802231f-c231-4831-96a2-3393633b9c5c-xtables-lock\") on node \"ci-4230.1.1-n-bf7705109b\" DevicePath \"\"" May 14 23:55:32.503375 kubelet[3367]: I0514 23:55:32.502889 3367 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/72a538d3-817d-4308-b25c-cfb0d11e33b8-cilium-config-path\") on node \"ci-4230.1.1-n-bf7705109b\" DevicePath \"\"" May 14 23:55:32.503375 kubelet[3367]: I0514 23:55:32.502897 3367 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/9802231f-c231-4831-96a2-3393633b9c5c-cilium-config-path\") on node \"ci-4230.1.1-n-bf7705109b\" DevicePath \"\"" May 14 23:55:32.503375 kubelet[3367]: I0514 23:55:32.502907 3367 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/9802231f-c231-4831-96a2-3393633b9c5c-clustermesh-secrets\") on node \"ci-4230.1.1-n-bf7705109b\" DevicePath \"\"" May 14 23:55:32.503375 kubelet[3367]: I0514 23:55:32.502915 3367 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/9802231f-c231-4831-96a2-3393633b9c5c-hostproc\") on node \"ci-4230.1.1-n-bf7705109b\" DevicePath \"\"" May 14 23:55:32.503375 kubelet[3367]: I0514 23:55:32.502923 3367 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/9802231f-c231-4831-96a2-3393633b9c5c-bpf-maps\") on node \"ci-4230.1.1-n-bf7705109b\" DevicePath \"\"" May 14 23:55:32.631928 systemd[1]: Removed slice kubepods-burstable-pod9802231f_c231_4831_96a2_3393633b9c5c.slice - libcontainer container kubepods-burstable-pod9802231f_c231_4831_96a2_3393633b9c5c.slice. May 14 23:55:32.632363 systemd[1]: kubepods-burstable-pod9802231f_c231_4831_96a2_3393633b9c5c.slice: Consumed 6.654s CPU time, 142.3M memory peak, 128K read from disk, 12.9M written to disk. May 14 23:55:32.633955 systemd[1]: Removed slice kubepods-besteffort-pod72a538d3_817d_4308_b25c_cfb0d11e33b8.slice - libcontainer container kubepods-besteffort-pod72a538d3_817d_4308_b25c_cfb0d11e33b8.slice. May 14 23:55:33.047217 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-365300bde33653ce9f3d5ceec2ac07f13b9c9f848206e0046f357e3e6e6f0835-rootfs.mount: Deactivated successfully. May 14 23:55:33.047316 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b26aeea3ac38574f86e77342d058b6679eb2263d5e7115a928e5a2be3e704ae5-rootfs.mount: Deactivated successfully. 
May 14 23:55:33.047394 systemd[1]: var-lib-kubelet-pods-9802231f\x2dc231\x2d4831\x2d96a2\x2d3393633b9c5c-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. May 14 23:55:33.047454 systemd[1]: var-lib-kubelet-pods-9802231f\x2dc231\x2d4831\x2d96a2\x2d3393633b9c5c-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. May 14 23:55:33.047502 systemd[1]: var-lib-kubelet-pods-72a538d3\x2d817d\x2d4308\x2db25c\x2dcfb0d11e33b8-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dt77x2.mount: Deactivated successfully. May 14 23:55:33.047550 systemd[1]: var-lib-kubelet-pods-9802231f\x2dc231\x2d4831\x2d96a2\x2d3393633b9c5c-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dmf4p7.mount: Deactivated successfully. May 14 23:55:33.138120 kubelet[3367]: I0514 23:55:33.138013 3367 scope.go:117] "RemoveContainer" containerID="15ea7793051fc1860e07e097921228d31b7ca5ffd655be6a18d78efc5a47439d" May 14 23:55:33.139758 containerd[1766]: time="2025-05-14T23:55:33.139661462Z" level=info msg="RemoveContainer for \"15ea7793051fc1860e07e097921228d31b7ca5ffd655be6a18d78efc5a47439d\"" May 14 23:55:33.153116 containerd[1766]: time="2025-05-14T23:55:33.152936272Z" level=info msg="RemoveContainer for \"15ea7793051fc1860e07e097921228d31b7ca5ffd655be6a18d78efc5a47439d\" returns successfully" May 14 23:55:33.153359 kubelet[3367]: I0514 23:55:33.153320 3367 scope.go:117] "RemoveContainer" containerID="15ea7793051fc1860e07e097921228d31b7ca5ffd655be6a18d78efc5a47439d" May 14 23:55:33.153879 containerd[1766]: time="2025-05-14T23:55:33.153796713Z" level=error msg="ContainerStatus for \"15ea7793051fc1860e07e097921228d31b7ca5ffd655be6a18d78efc5a47439d\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"15ea7793051fc1860e07e097921228d31b7ca5ffd655be6a18d78efc5a47439d\": not found" May 14 23:55:33.153984 kubelet[3367]: E0514 23:55:33.153953 3367 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"15ea7793051fc1860e07e097921228d31b7ca5ffd655be6a18d78efc5a47439d\": not found" containerID="15ea7793051fc1860e07e097921228d31b7ca5ffd655be6a18d78efc5a47439d" May 14 23:55:33.154796 kubelet[3367]: I0514 23:55:33.153996 3367 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"15ea7793051fc1860e07e097921228d31b7ca5ffd655be6a18d78efc5a47439d"} err="failed to get container status \"15ea7793051fc1860e07e097921228d31b7ca5ffd655be6a18d78efc5a47439d\": rpc error: code = NotFound desc = an error occurred when try to find container \"15ea7793051fc1860e07e097921228d31b7ca5ffd655be6a18d78efc5a47439d\": not found" May 14 23:55:33.154796 kubelet[3367]: I0514 23:55:33.154701 3367 scope.go:117] "RemoveContainer" containerID="e1a31b236578ab464f8e914578262a5560715a1b61669190cb4619cd7e27d9ab" May 14 23:55:33.156811 containerd[1766]: time="2025-05-14T23:55:33.156441715Z" level=info msg="RemoveContainer for \"e1a31b236578ab464f8e914578262a5560715a1b61669190cb4619cd7e27d9ab\"" May 14 23:55:33.167432 containerd[1766]: time="2025-05-14T23:55:33.167322164Z" level=info msg="RemoveContainer for \"e1a31b236578ab464f8e914578262a5560715a1b61669190cb4619cd7e27d9ab\" returns successfully" May 14 23:55:33.167614 kubelet[3367]: I0514 23:55:33.167535 3367 scope.go:117] "RemoveContainer" containerID="c38a5813a0515fbe36ab3903b8ae4d130f5fabaf37ee3c9ca161d5fa7a9b58ee" May 14 23:55:33.168670 containerd[1766]: 
time="2025-05-14T23:55:33.168591445Z" level=info msg="RemoveContainer for \"c38a5813a0515fbe36ab3903b8ae4d130f5fabaf37ee3c9ca161d5fa7a9b58ee\"" May 14 23:55:33.178303 containerd[1766]: time="2025-05-14T23:55:33.178235852Z" level=info msg="RemoveContainer for \"c38a5813a0515fbe36ab3903b8ae4d130f5fabaf37ee3c9ca161d5fa7a9b58ee\" returns successfully" May 14 23:55:33.178640 kubelet[3367]: I0514 23:55:33.178471 3367 scope.go:117] "RemoveContainer" containerID="5fed1aa469e342100b3de26b55ba3aa224af32d476942b7ac7a034ab46fd8d38" May 14 23:55:33.179684 containerd[1766]: time="2025-05-14T23:55:33.179610733Z" level=info msg="RemoveContainer for \"5fed1aa469e342100b3de26b55ba3aa224af32d476942b7ac7a034ab46fd8d38\"" May 14 23:55:33.188139 containerd[1766]: time="2025-05-14T23:55:33.188098820Z" level=info msg="RemoveContainer for \"5fed1aa469e342100b3de26b55ba3aa224af32d476942b7ac7a034ab46fd8d38\" returns successfully" May 14 23:55:33.188479 kubelet[3367]: I0514 23:55:33.188371 3367 scope.go:117] "RemoveContainer" containerID="600e528c9e65240849b732a91fc04b203b09a9ed56c8bfd5ba1f63a58089c40a" May 14 23:55:33.189542 containerd[1766]: time="2025-05-14T23:55:33.189509381Z" level=info msg="RemoveContainer for \"600e528c9e65240849b732a91fc04b203b09a9ed56c8bfd5ba1f63a58089c40a\"" May 14 23:55:33.198645 containerd[1766]: time="2025-05-14T23:55:33.198497068Z" level=info msg="RemoveContainer for \"600e528c9e65240849b732a91fc04b203b09a9ed56c8bfd5ba1f63a58089c40a\" returns successfully" May 14 23:55:33.199254 kubelet[3367]: I0514 23:55:33.198958 3367 scope.go:117] "RemoveContainer" containerID="09feaaad4064e5884366bfd6d094900bf95844b00577c6aaf22a1c94447d8e26" May 14 23:55:33.200325 containerd[1766]: time="2025-05-14T23:55:33.200284430Z" level=info msg="RemoveContainer for \"09feaaad4064e5884366bfd6d094900bf95844b00577c6aaf22a1c94447d8e26\"" May 14 23:55:33.209133 containerd[1766]: time="2025-05-14T23:55:33.209093317Z" level=info msg="RemoveContainer for \"09feaaad4064e5884366bfd6d094900bf95844b00577c6aaf22a1c94447d8e26\" returns successfully" May 14 23:55:33.209573 kubelet[3367]: I0514 23:55:33.209475 3367 scope.go:117] "RemoveContainer" containerID="e1a31b236578ab464f8e914578262a5560715a1b61669190cb4619cd7e27d9ab" May 14 23:55:33.209912 containerd[1766]: time="2025-05-14T23:55:33.209825037Z" level=error msg="ContainerStatus for \"e1a31b236578ab464f8e914578262a5560715a1b61669190cb4619cd7e27d9ab\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"e1a31b236578ab464f8e914578262a5560715a1b61669190cb4619cd7e27d9ab\": not found" May 14 23:55:33.210011 kubelet[3367]: E0514 23:55:33.209979 3367 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"e1a31b236578ab464f8e914578262a5560715a1b61669190cb4619cd7e27d9ab\": not found" containerID="e1a31b236578ab464f8e914578262a5560715a1b61669190cb4619cd7e27d9ab" May 14 23:55:33.210065 kubelet[3367]: I0514 23:55:33.210026 3367 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"e1a31b236578ab464f8e914578262a5560715a1b61669190cb4619cd7e27d9ab"} err="failed to get container status \"e1a31b236578ab464f8e914578262a5560715a1b61669190cb4619cd7e27d9ab\": rpc error: code = NotFound desc = an error occurred when try to find container \"e1a31b236578ab464f8e914578262a5560715a1b61669190cb4619cd7e27d9ab\": not found" May 14 23:55:33.210065 kubelet[3367]: I0514 23:55:33.210049 3367 scope.go:117] "RemoveContainer" 
containerID="c38a5813a0515fbe36ab3903b8ae4d130f5fabaf37ee3c9ca161d5fa7a9b58ee" May 14 23:55:33.210428 kubelet[3367]: E0514 23:55:33.210393 3367 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"c38a5813a0515fbe36ab3903b8ae4d130f5fabaf37ee3c9ca161d5fa7a9b58ee\": not found" containerID="c38a5813a0515fbe36ab3903b8ae4d130f5fabaf37ee3c9ca161d5fa7a9b58ee" May 14 23:55:33.210428 kubelet[3367]: I0514 23:55:33.210414 3367 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"c38a5813a0515fbe36ab3903b8ae4d130f5fabaf37ee3c9ca161d5fa7a9b58ee"} err="failed to get container status \"c38a5813a0515fbe36ab3903b8ae4d130f5fabaf37ee3c9ca161d5fa7a9b58ee\": rpc error: code = NotFound desc = an error occurred when try to find container \"c38a5813a0515fbe36ab3903b8ae4d130f5fabaf37ee3c9ca161d5fa7a9b58ee\": not found" May 14 23:55:33.210428 kubelet[3367]: I0514 23:55:33.210428 3367 scope.go:117] "RemoveContainer" containerID="5fed1aa469e342100b3de26b55ba3aa224af32d476942b7ac7a034ab46fd8d38" May 14 23:55:33.210508 containerd[1766]: time="2025-05-14T23:55:33.210238638Z" level=error msg="ContainerStatus for \"c38a5813a0515fbe36ab3903b8ae4d130f5fabaf37ee3c9ca161d5fa7a9b58ee\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"c38a5813a0515fbe36ab3903b8ae4d130f5fabaf37ee3c9ca161d5fa7a9b58ee\": not found" May 14 23:55:33.210667 containerd[1766]: time="2025-05-14T23:55:33.210613718Z" level=error msg="ContainerStatus for \"5fed1aa469e342100b3de26b55ba3aa224af32d476942b7ac7a034ab46fd8d38\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"5fed1aa469e342100b3de26b55ba3aa224af32d476942b7ac7a034ab46fd8d38\": not found" May 14 23:55:33.210821 kubelet[3367]: E0514 23:55:33.210749 3367 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"5fed1aa469e342100b3de26b55ba3aa224af32d476942b7ac7a034ab46fd8d38\": not found" containerID="5fed1aa469e342100b3de26b55ba3aa224af32d476942b7ac7a034ab46fd8d38" May 14 23:55:33.210821 kubelet[3367]: I0514 23:55:33.210768 3367 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"5fed1aa469e342100b3de26b55ba3aa224af32d476942b7ac7a034ab46fd8d38"} err="failed to get container status \"5fed1aa469e342100b3de26b55ba3aa224af32d476942b7ac7a034ab46fd8d38\": rpc error: code = NotFound desc = an error occurred when try to find container \"5fed1aa469e342100b3de26b55ba3aa224af32d476942b7ac7a034ab46fd8d38\": not found" May 14 23:55:33.210821 kubelet[3367]: I0514 23:55:33.210782 3367 scope.go:117] "RemoveContainer" containerID="600e528c9e65240849b732a91fc04b203b09a9ed56c8bfd5ba1f63a58089c40a" May 14 23:55:33.211125 kubelet[3367]: E0514 23:55:33.211065 3367 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"600e528c9e65240849b732a91fc04b203b09a9ed56c8bfd5ba1f63a58089c40a\": not found" containerID="600e528c9e65240849b732a91fc04b203b09a9ed56c8bfd5ba1f63a58089c40a" May 14 23:55:33.211125 kubelet[3367]: I0514 23:55:33.211081 3367 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"600e528c9e65240849b732a91fc04b203b09a9ed56c8bfd5ba1f63a58089c40a"} err="failed to get container status 
\"600e528c9e65240849b732a91fc04b203b09a9ed56c8bfd5ba1f63a58089c40a\": rpc error: code = NotFound desc = an error occurred when try to find container \"600e528c9e65240849b732a91fc04b203b09a9ed56c8bfd5ba1f63a58089c40a\": not found" May 14 23:55:33.211125 kubelet[3367]: I0514 23:55:33.211095 3367 scope.go:117] "RemoveContainer" containerID="09feaaad4064e5884366bfd6d094900bf95844b00577c6aaf22a1c94447d8e26" May 14 23:55:33.211204 containerd[1766]: time="2025-05-14T23:55:33.210973318Z" level=error msg="ContainerStatus for \"600e528c9e65240849b732a91fc04b203b09a9ed56c8bfd5ba1f63a58089c40a\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"600e528c9e65240849b732a91fc04b203b09a9ed56c8bfd5ba1f63a58089c40a\": not found" May 14 23:55:33.211560 containerd[1766]: time="2025-05-14T23:55:33.211454478Z" level=error msg="ContainerStatus for \"09feaaad4064e5884366bfd6d094900bf95844b00577c6aaf22a1c94447d8e26\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"09feaaad4064e5884366bfd6d094900bf95844b00577c6aaf22a1c94447d8e26\": not found" May 14 23:55:33.211664 kubelet[3367]: E0514 23:55:33.211586 3367 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"09feaaad4064e5884366bfd6d094900bf95844b00577c6aaf22a1c94447d8e26\": not found" containerID="09feaaad4064e5884366bfd6d094900bf95844b00577c6aaf22a1c94447d8e26" May 14 23:55:33.211664 kubelet[3367]: I0514 23:55:33.211605 3367 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"09feaaad4064e5884366bfd6d094900bf95844b00577c6aaf22a1c94447d8e26"} err="failed to get container status \"09feaaad4064e5884366bfd6d094900bf95844b00577c6aaf22a1c94447d8e26\": rpc error: code = NotFound desc = an error occurred when try to find container \"09feaaad4064e5884366bfd6d094900bf95844b00577c6aaf22a1c94447d8e26\": not found" May 14 23:55:34.050508 sshd[5112]: Connection closed by 10.200.16.10 port 39464 May 14 23:55:34.050221 sshd-session[5110]: pam_unix(sshd:session): session closed for user core May 14 23:55:34.053392 systemd[1]: sshd@22-10.200.20.40:22-10.200.16.10:39464.service: Deactivated successfully. May 14 23:55:34.056033 systemd[1]: session-25.scope: Deactivated successfully. May 14 23:55:34.056237 systemd[1]: session-25.scope: Consumed 1.747s CPU time, 23.6M memory peak. May 14 23:55:34.057659 systemd-logind[1736]: Session 25 logged out. Waiting for processes to exit. May 14 23:55:34.059240 systemd-logind[1736]: Removed session 25. May 14 23:55:34.134585 systemd[1]: Started sshd@23-10.200.20.40:22-10.200.16.10:39478.service - OpenSSH per-connection server daemon (10.200.16.10:39478). May 14 23:55:34.557400 sshd[5274]: Accepted publickey for core from 10.200.16.10 port 39478 ssh2: RSA SHA256:8W/Tf+7cj1biafetzZP1V2wDwegtsataOBfSdAJjdnY May 14 23:55:34.558647 sshd-session[5274]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 23:55:34.564204 systemd-logind[1736]: New session 26 of user core. May 14 23:55:34.567489 systemd[1]: Started session-26.scope - Session 26 of User core. 
May 14 23:55:34.629316 kubelet[3367]: I0514 23:55:34.629229 3367 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="72a538d3-817d-4308-b25c-cfb0d11e33b8" path="/var/lib/kubelet/pods/72a538d3-817d-4308-b25c-cfb0d11e33b8/volumes" May 14 23:55:34.629663 kubelet[3367]: I0514 23:55:34.629643 3367 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9802231f-c231-4831-96a2-3393633b9c5c" path="/var/lib/kubelet/pods/9802231f-c231-4831-96a2-3393633b9c5c/volumes" May 14 23:55:35.587458 kubelet[3367]: I0514 23:55:35.586671 3367 memory_manager.go:355] "RemoveStaleState removing state" podUID="9802231f-c231-4831-96a2-3393633b9c5c" containerName="cilium-agent" May 14 23:55:35.587458 kubelet[3367]: I0514 23:55:35.586701 3367 memory_manager.go:355] "RemoveStaleState removing state" podUID="72a538d3-817d-4308-b25c-cfb0d11e33b8" containerName="cilium-operator" May 14 23:55:35.597704 systemd[1]: Created slice kubepods-burstable-poda82ab6f9_a312_454d_b3e1_70b3ab803316.slice - libcontainer container kubepods-burstable-poda82ab6f9_a312_454d_b3e1_70b3ab803316.slice. May 14 23:55:35.635448 sshd[5276]: Connection closed by 10.200.16.10 port 39478 May 14 23:55:35.636007 sshd-session[5274]: pam_unix(sshd:session): session closed for user core May 14 23:55:35.642757 systemd[1]: sshd@23-10.200.20.40:22-10.200.16.10:39478.service: Deactivated successfully. May 14 23:55:35.646330 systemd[1]: session-26.scope: Deactivated successfully. May 14 23:55:35.648363 systemd-logind[1736]: Session 26 logged out. Waiting for processes to exit. May 14 23:55:35.650112 systemd-logind[1736]: Removed session 26. May 14 23:55:35.716201 systemd[1]: Started sshd@24-10.200.20.40:22-10.200.16.10:39492.service - OpenSSH per-connection server daemon (10.200.16.10:39492). 
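
"Cleaned up orphaned pod volumes dir" is kubelet's orphan sweep: once a pod's API object is gone and its volumes are unmounted, the per-pod directory under /var/lib/kubelet/pods/<pod-UID>/volumes can be removed. A rough, read-only sketch of listing such leftovers from the node (path layout as shown in the journal):

    package main

    import (
    	"fmt"
    	"log"
    	"os"
    	"path/filepath"
    )

    func main() {
    	// One directory per pod UID, e.g.
    	// /var/lib/kubelet/pods/72a538d3-817d-4308-b25c-cfb0d11e33b8/volumes
    	pods, err := os.ReadDir("/var/lib/kubelet/pods")
    	if err != nil {
    		log.Fatal(err)
    	}
    	for _, p := range pods {
    		if !p.IsDir() {
    			continue
    		}
    		volDir := filepath.Join("/var/lib/kubelet/pods", p.Name(), "volumes")
    		// Each plugin gets a subdirectory such as kubernetes.io~projected,
    		// matching the escaped .mount unit names earlier in the log.
    		plugins, err := os.ReadDir(volDir)
    		if err != nil {
    			continue // no volumes dir left: already cleaned up
    		}
    		for _, pl := range plugins {
    			fmt.Printf("pod %s still has %s\n", p.Name(), pl.Name())
    		}
    	}
    }
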
May 14 23:55:35.722774 kubelet[3367]: I0514 23:55:35.722735 3367 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a82ab6f9-a312-454d-b3e1-70b3ab803316-lib-modules\") pod \"cilium-rb2m4\" (UID: \"a82ab6f9-a312-454d-b3e1-70b3ab803316\") " pod="kube-system/cilium-rb2m4" May 14 23:55:35.723040 kubelet[3367]: I0514 23:55:35.722806 3367 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/a82ab6f9-a312-454d-b3e1-70b3ab803316-cilium-ipsec-secrets\") pod \"cilium-rb2m4\" (UID: \"a82ab6f9-a312-454d-b3e1-70b3ab803316\") " pod="kube-system/cilium-rb2m4" May 14 23:55:35.723040 kubelet[3367]: I0514 23:55:35.722829 3367 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a82ab6f9-a312-454d-b3e1-70b3ab803316-etc-cni-netd\") pod \"cilium-rb2m4\" (UID: \"a82ab6f9-a312-454d-b3e1-70b3ab803316\") " pod="kube-system/cilium-rb2m4" May 14 23:55:35.723040 kubelet[3367]: I0514 23:55:35.722844 3367 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a82ab6f9-a312-454d-b3e1-70b3ab803316-xtables-lock\") pod \"cilium-rb2m4\" (UID: \"a82ab6f9-a312-454d-b3e1-70b3ab803316\") " pod="kube-system/cilium-rb2m4" May 14 23:55:35.723040 kubelet[3367]: I0514 23:55:35.722892 3367 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/a82ab6f9-a312-454d-b3e1-70b3ab803316-hubble-tls\") pod \"cilium-rb2m4\" (UID: \"a82ab6f9-a312-454d-b3e1-70b3ab803316\") " pod="kube-system/cilium-rb2m4" May 14 23:55:35.723040 kubelet[3367]: I0514 23:55:35.722914 3367 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/a82ab6f9-a312-454d-b3e1-70b3ab803316-cilium-cgroup\") pod \"cilium-rb2m4\" (UID: \"a82ab6f9-a312-454d-b3e1-70b3ab803316\") " pod="kube-system/cilium-rb2m4" May 14 23:55:35.723040 kubelet[3367]: I0514 23:55:35.722930 3367 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/a82ab6f9-a312-454d-b3e1-70b3ab803316-cilium-run\") pod \"cilium-rb2m4\" (UID: \"a82ab6f9-a312-454d-b3e1-70b3ab803316\") " pod="kube-system/cilium-rb2m4" May 14 23:55:35.723173 kubelet[3367]: I0514 23:55:35.722971 3367 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/a82ab6f9-a312-454d-b3e1-70b3ab803316-clustermesh-secrets\") pod \"cilium-rb2m4\" (UID: \"a82ab6f9-a312-454d-b3e1-70b3ab803316\") " pod="kube-system/cilium-rb2m4" May 14 23:55:35.723173 kubelet[3367]: I0514 23:55:35.722990 3367 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/a82ab6f9-a312-454d-b3e1-70b3ab803316-host-proc-sys-net\") pod \"cilium-rb2m4\" (UID: \"a82ab6f9-a312-454d-b3e1-70b3ab803316\") " pod="kube-system/cilium-rb2m4" May 14 23:55:35.723173 kubelet[3367]: I0514 23:55:35.723007 3367 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" 
(UniqueName: \"kubernetes.io/host-path/a82ab6f9-a312-454d-b3e1-70b3ab803316-host-proc-sys-kernel\") pod \"cilium-rb2m4\" (UID: \"a82ab6f9-a312-454d-b3e1-70b3ab803316\") " pod="kube-system/cilium-rb2m4" May 14 23:55:35.723173 kubelet[3367]: I0514 23:55:35.723034 3367 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a82ab6f9-a312-454d-b3e1-70b3ab803316-cilium-config-path\") pod \"cilium-rb2m4\" (UID: \"a82ab6f9-a312-454d-b3e1-70b3ab803316\") " pod="kube-system/cilium-rb2m4" May 14 23:55:35.723173 kubelet[3367]: I0514 23:55:35.723051 3367 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/a82ab6f9-a312-454d-b3e1-70b3ab803316-bpf-maps\") pod \"cilium-rb2m4\" (UID: \"a82ab6f9-a312-454d-b3e1-70b3ab803316\") " pod="kube-system/cilium-rb2m4" May 14 23:55:35.723173 kubelet[3367]: I0514 23:55:35.723068 3367 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/a82ab6f9-a312-454d-b3e1-70b3ab803316-hostproc\") pod \"cilium-rb2m4\" (UID: \"a82ab6f9-a312-454d-b3e1-70b3ab803316\") " pod="kube-system/cilium-rb2m4" May 14 23:55:35.723295 kubelet[3367]: I0514 23:55:35.723083 3367 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/a82ab6f9-a312-454d-b3e1-70b3ab803316-cni-path\") pod \"cilium-rb2m4\" (UID: \"a82ab6f9-a312-454d-b3e1-70b3ab803316\") " pod="kube-system/cilium-rb2m4" May 14 23:55:35.723295 kubelet[3367]: I0514 23:55:35.723106 3367 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-65fc6\" (UniqueName: \"kubernetes.io/projected/a82ab6f9-a312-454d-b3e1-70b3ab803316-kube-api-access-65fc6\") pod \"cilium-rb2m4\" (UID: \"a82ab6f9-a312-454d-b3e1-70b3ab803316\") " pod="kube-system/cilium-rb2m4" May 14 23:55:35.902133 containerd[1766]: time="2025-05-14T23:55:35.902096730Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-rb2m4,Uid:a82ab6f9-a312-454d-b3e1-70b3ab803316,Namespace:kube-system,Attempt:0,}" May 14 23:55:35.946384 containerd[1766]: time="2025-05-14T23:55:35.946278685Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 14 23:55:35.946557 containerd[1766]: time="2025-05-14T23:55:35.946532446Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 14 23:55:35.946679 containerd[1766]: time="2025-05-14T23:55:35.946658286Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 14 23:55:35.946912 containerd[1766]: time="2025-05-14T23:55:35.946849206Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 14 23:55:35.964506 systemd[1]: Started cri-containerd-06754baab8972b7cbf6ffa55195c3e9d93505d3e05d0fc02b8cabbef9118ccdf.scope - libcontainer container 06754baab8972b7cbf6ffa55195c3e9d93505d3e05d0fc02b8cabbef9118ccdf. 
May 14 23:55:35.984393 containerd[1766]: time="2025-05-14T23:55:35.984328075Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-rb2m4,Uid:a82ab6f9-a312-454d-b3e1-70b3ab803316,Namespace:kube-system,Attempt:0,} returns sandbox id \"06754baab8972b7cbf6ffa55195c3e9d93505d3e05d0fc02b8cabbef9118ccdf\"" May 14 23:55:35.988286 containerd[1766]: time="2025-05-14T23:55:35.988254119Z" level=info msg="CreateContainer within sandbox \"06754baab8972b7cbf6ffa55195c3e9d93505d3e05d0fc02b8cabbef9118ccdf\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" May 14 23:55:36.023388 containerd[1766]: time="2025-05-14T23:55:36.023325226Z" level=info msg="CreateContainer within sandbox \"06754baab8972b7cbf6ffa55195c3e9d93505d3e05d0fc02b8cabbef9118ccdf\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"f9017808056a9a44b0df8567801571dc93231a5460e9c7ee5252788be064b430\"" May 14 23:55:36.026266 containerd[1766]: time="2025-05-14T23:55:36.023913267Z" level=info msg="StartContainer for \"f9017808056a9a44b0df8567801571dc93231a5460e9c7ee5252788be064b430\"" May 14 23:55:36.051534 systemd[1]: Started cri-containerd-f9017808056a9a44b0df8567801571dc93231a5460e9c7ee5252788be064b430.scope - libcontainer container f9017808056a9a44b0df8567801571dc93231a5460e9c7ee5252788be064b430. May 14 23:55:36.078181 containerd[1766]: time="2025-05-14T23:55:36.078132350Z" level=info msg="StartContainer for \"f9017808056a9a44b0df8567801571dc93231a5460e9c7ee5252788be064b430\" returns successfully" May 14 23:55:36.083025 systemd[1]: cri-containerd-f9017808056a9a44b0df8567801571dc93231a5460e9c7ee5252788be064b430.scope: Deactivated successfully. May 14 23:55:36.172684 containerd[1766]: time="2025-05-14T23:55:36.171790344Z" level=info msg="shim disconnected" id=f9017808056a9a44b0df8567801571dc93231a5460e9c7ee5252788be064b430 namespace=k8s.io May 14 23:55:36.172684 containerd[1766]: time="2025-05-14T23:55:36.171848984Z" level=warning msg="cleaning up after shim disconnected" id=f9017808056a9a44b0df8567801571dc93231a5460e9c7ee5252788be064b430 namespace=k8s.io May 14 23:55:36.172684 containerd[1766]: time="2025-05-14T23:55:36.171862384Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 14 23:55:36.177504 sshd[5287]: Accepted publickey for core from 10.200.16.10 port 39492 ssh2: RSA SHA256:8W/Tf+7cj1biafetzZP1V2wDwegtsataOBfSdAJjdnY May 14 23:55:36.179397 sshd-session[5287]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 23:55:36.185136 systemd-logind[1736]: New session 27 of user core. May 14 23:55:36.189494 systemd[1]: Started session-27.scope - Session 27 of User core. May 14 23:55:36.502132 sshd[5395]: Connection closed by 10.200.16.10 port 39492 May 14 23:55:36.503145 sshd-session[5287]: pam_unix(sshd:session): session closed for user core May 14 23:55:36.508119 systemd-logind[1736]: Session 27 logged out. Waiting for processes to exit. May 14 23:55:36.508620 systemd[1]: sshd@24-10.200.20.40:22-10.200.16.10:39492.service: Deactivated successfully. May 14 23:55:36.510987 systemd[1]: session-27.scope: Deactivated successfully. May 14 23:55:36.512581 systemd-logind[1736]: Removed session 27. May 14 23:55:36.595649 systemd[1]: Started sshd@25-10.200.20.40:22-10.200.16.10:39498.service - OpenSSH per-connection server daemon (10.200.16.10:39498). 
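
mount-cgroup is the first of cilium's init containers, and the pattern just above — "StartContainer … returns successfully" followed almost immediately by "cri-containerd-<id>.scope: Deactivated successfully" and "shim disconnected" — is the normal signature of an init container running to completion, after which kubelet starts the next one (apply-sysctl-overwrites, below). A simplified sketch of waiting for that exit over CRI by polling; kubelet actually learns about it through its pod lifecycle event machinery rather than a poll loop:

    package main

    import (
    	"context"
    	"fmt"
    	"log"
    	"time"

    	"google.golang.org/grpc"
    	"google.golang.org/grpc/credentials/insecure"
    	runtime "k8s.io/cri-api/pkg/apis/runtime/v1"
    )

    func main() {
    	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
    		grpc.WithTransportCredentials(insecure.NewCredentials()))
    	if err != nil {
    		log.Fatal(err)
    	}
    	defer conn.Close()
    	rt := runtime.NewRuntimeServiceClient(conn)

    	// The mount-cgroup init container from the journal above.
    	id := "f9017808056a9a44b0df8567801571dc93231a5460e9c7ee5252788be064b430"
    	for {
    		resp, err := rt.ContainerStatus(context.Background(),
    			&runtime.ContainerStatusRequest{ContainerId: id})
    		if err != nil {
    			log.Fatal(err)
    		}
    		s := resp.GetStatus()
    		// CONTAINER_EXITED with exit code 0 is what lets kubelet move
    		// on to the next init container in the sequence.
    		if s.GetState() == runtime.ContainerState_CONTAINER_EXITED {
    			fmt.Printf("init container exited with code %d\n", s.GetExitCode())
    			return
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    }
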
May 14 23:55:36.671208 containerd[1766]: time="2025-05-14T23:55:36.671163780Z" level=info msg="StopPodSandbox for \"b26aeea3ac38574f86e77342d058b6679eb2263d5e7115a928e5a2be3e704ae5\"" May 14 23:55:36.671469 containerd[1766]: time="2025-05-14T23:55:36.671448620Z" level=info msg="TearDown network for sandbox \"b26aeea3ac38574f86e77342d058b6679eb2263d5e7115a928e5a2be3e704ae5\" successfully" May 14 23:55:36.671567 containerd[1766]: time="2025-05-14T23:55:36.671552660Z" level=info msg="StopPodSandbox for \"b26aeea3ac38574f86e77342d058b6679eb2263d5e7115a928e5a2be3e704ae5\" returns successfully" May 14 23:55:36.672460 containerd[1766]: time="2025-05-14T23:55:36.672429541Z" level=info msg="RemovePodSandbox for \"b26aeea3ac38574f86e77342d058b6679eb2263d5e7115a928e5a2be3e704ae5\"" May 14 23:55:36.672550 containerd[1766]: time="2025-05-14T23:55:36.672462101Z" level=info msg="Forcibly stopping sandbox \"b26aeea3ac38574f86e77342d058b6679eb2263d5e7115a928e5a2be3e704ae5\"" May 14 23:55:36.672550 containerd[1766]: time="2025-05-14T23:55:36.672528141Z" level=info msg="TearDown network for sandbox \"b26aeea3ac38574f86e77342d058b6679eb2263d5e7115a928e5a2be3e704ae5\" successfully" May 14 23:55:36.685062 containerd[1766]: time="2025-05-14T23:55:36.685017351Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"b26aeea3ac38574f86e77342d058b6679eb2263d5e7115a928e5a2be3e704ae5\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." May 14 23:55:36.685238 containerd[1766]: time="2025-05-14T23:55:36.685217951Z" level=info msg="RemovePodSandbox \"b26aeea3ac38574f86e77342d058b6679eb2263d5e7115a928e5a2be3e704ae5\" returns successfully" May 14 23:55:36.685772 containerd[1766]: time="2025-05-14T23:55:36.685742591Z" level=info msg="StopPodSandbox for \"365300bde33653ce9f3d5ceec2ac07f13b9c9f848206e0046f357e3e6e6f0835\"" May 14 23:55:36.685844 containerd[1766]: time="2025-05-14T23:55:36.685818071Z" level=info msg="TearDown network for sandbox \"365300bde33653ce9f3d5ceec2ac07f13b9c9f848206e0046f357e3e6e6f0835\" successfully" May 14 23:55:36.685844 containerd[1766]: time="2025-05-14T23:55:36.685828071Z" level=info msg="StopPodSandbox for \"365300bde33653ce9f3d5ceec2ac07f13b9c9f848206e0046f357e3e6e6f0835\" returns successfully" May 14 23:55:36.686189 containerd[1766]: time="2025-05-14T23:55:36.686120631Z" level=info msg="RemovePodSandbox for \"365300bde33653ce9f3d5ceec2ac07f13b9c9f848206e0046f357e3e6e6f0835\"" May 14 23:55:36.686189 containerd[1766]: time="2025-05-14T23:55:36.686148712Z" level=info msg="Forcibly stopping sandbox \"365300bde33653ce9f3d5ceec2ac07f13b9c9f848206e0046f357e3e6e6f0835\"" May 14 23:55:36.686189 containerd[1766]: time="2025-05-14T23:55:36.686190432Z" level=info msg="TearDown network for sandbox \"365300bde33653ce9f3d5ceec2ac07f13b9c9f848206e0046f357e3e6e6f0835\" successfully" May 14 23:55:36.693366 containerd[1766]: time="2025-05-14T23:55:36.693318437Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"365300bde33653ce9f3d5ceec2ac07f13b9c9f848206e0046f357e3e6e6f0835\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
May 14 23:55:36.693472 containerd[1766]: time="2025-05-14T23:55:36.693375517Z" level=info msg="RemovePodSandbox \"365300bde33653ce9f3d5ceec2ac07f13b9c9f848206e0046f357e3e6e6f0835\" returns successfully" May 14 23:55:36.763450 kubelet[3367]: E0514 23:55:36.763252 3367 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" May 14 23:55:37.053161 sshd[5402]: Accepted publickey for core from 10.200.16.10 port 39498 ssh2: RSA SHA256:8W/Tf+7cj1biafetzZP1V2wDwegtsataOBfSdAJjdnY May 14 23:55:37.054711 sshd-session[5402]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 23:55:37.060079 systemd-logind[1736]: New session 28 of user core. May 14 23:55:37.065503 systemd[1]: Started session-28.scope - Session 28 of User core. May 14 23:55:37.162276 containerd[1766]: time="2025-05-14T23:55:37.162086729Z" level=info msg="CreateContainer within sandbox \"06754baab8972b7cbf6ffa55195c3e9d93505d3e05d0fc02b8cabbef9118ccdf\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" May 14 23:55:37.195379 containerd[1766]: time="2025-05-14T23:55:37.195203115Z" level=info msg="CreateContainer within sandbox \"06754baab8972b7cbf6ffa55195c3e9d93505d3e05d0fc02b8cabbef9118ccdf\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"52cc61b6d9869124d280a34f6ee0fad066a6e97d27d2d7b4327e41a26d9b0356\"" May 14 23:55:37.196025 containerd[1766]: time="2025-05-14T23:55:37.195726875Z" level=info msg="StartContainer for \"52cc61b6d9869124d280a34f6ee0fad066a6e97d27d2d7b4327e41a26d9b0356\"" May 14 23:55:37.223557 systemd[1]: Started cri-containerd-52cc61b6d9869124d280a34f6ee0fad066a6e97d27d2d7b4327e41a26d9b0356.scope - libcontainer container 52cc61b6d9869124d280a34f6ee0fad066a6e97d27d2d7b4327e41a26d9b0356. May 14 23:55:37.248444 containerd[1766]: time="2025-05-14T23:55:37.248281437Z" level=info msg="StartContainer for \"52cc61b6d9869124d280a34f6ee0fad066a6e97d27d2d7b4327e41a26d9b0356\" returns successfully" May 14 23:55:37.251622 systemd[1]: cri-containerd-52cc61b6d9869124d280a34f6ee0fad066a6e97d27d2d7b4327e41a26d9b0356.scope: Deactivated successfully. May 14 23:55:37.313953 containerd[1766]: time="2025-05-14T23:55:37.313755009Z" level=info msg="shim disconnected" id=52cc61b6d9869124d280a34f6ee0fad066a6e97d27d2d7b4327e41a26d9b0356 namespace=k8s.io May 14 23:55:37.313953 containerd[1766]: time="2025-05-14T23:55:37.313833409Z" level=warning msg="cleaning up after shim disconnected" id=52cc61b6d9869124d280a34f6ee0fad066a6e97d27d2d7b4327e41a26d9b0356 namespace=k8s.io May 14 23:55:37.313953 containerd[1766]: time="2025-05-14T23:55:37.313842409Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 14 23:55:37.830623 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-52cc61b6d9869124d280a34f6ee0fad066a6e97d27d2d7b4327e41a26d9b0356-rootfs.mount: Deactivated successfully. 
May 14 23:55:38.165655 containerd[1766]: time="2025-05-14T23:55:38.165592684Z" level=info msg="CreateContainer within sandbox \"06754baab8972b7cbf6ffa55195c3e9d93505d3e05d0fc02b8cabbef9118ccdf\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" May 14 23:55:38.209801 containerd[1766]: time="2025-05-14T23:55:38.209711439Z" level=info msg="CreateContainer within sandbox \"06754baab8972b7cbf6ffa55195c3e9d93505d3e05d0fc02b8cabbef9118ccdf\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"20fa45df19540d457085a6b2b04f42288c2e84e6714658c8fd7d23fa27ffccdb\"" May 14 23:55:38.211653 containerd[1766]: time="2025-05-14T23:55:38.210282319Z" level=info msg="StartContainer for \"20fa45df19540d457085a6b2b04f42288c2e84e6714658c8fd7d23fa27ffccdb\"" May 14 23:55:38.238648 systemd[1]: Started cri-containerd-20fa45df19540d457085a6b2b04f42288c2e84e6714658c8fd7d23fa27ffccdb.scope - libcontainer container 20fa45df19540d457085a6b2b04f42288c2e84e6714658c8fd7d23fa27ffccdb. May 14 23:55:38.271662 systemd[1]: cri-containerd-20fa45df19540d457085a6b2b04f42288c2e84e6714658c8fd7d23fa27ffccdb.scope: Deactivated successfully. May 14 23:55:38.275611 containerd[1766]: time="2025-05-14T23:55:38.275572211Z" level=info msg="StartContainer for \"20fa45df19540d457085a6b2b04f42288c2e84e6714658c8fd7d23fa27ffccdb\" returns successfully" May 14 23:55:38.304165 containerd[1766]: time="2025-05-14T23:55:38.304102593Z" level=info msg="shim disconnected" id=20fa45df19540d457085a6b2b04f42288c2e84e6714658c8fd7d23fa27ffccdb namespace=k8s.io May 14 23:55:38.304165 containerd[1766]: time="2025-05-14T23:55:38.304161193Z" level=warning msg="cleaning up after shim disconnected" id=20fa45df19540d457085a6b2b04f42288c2e84e6714658c8fd7d23fa27ffccdb namespace=k8s.io May 14 23:55:38.304165 containerd[1766]: time="2025-05-14T23:55:38.304170073Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 14 23:55:38.831610 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-20fa45df19540d457085a6b2b04f42288c2e84e6714658c8fd7d23fa27ffccdb-rootfs.mount: Deactivated successfully. May 14 23:55:39.168774 containerd[1766]: time="2025-05-14T23:55:39.168647958Z" level=info msg="CreateContainer within sandbox \"06754baab8972b7cbf6ffa55195c3e9d93505d3e05d0fc02b8cabbef9118ccdf\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" May 14 23:55:39.208526 containerd[1766]: time="2025-05-14T23:55:39.208474310Z" level=info msg="CreateContainer within sandbox \"06754baab8972b7cbf6ffa55195c3e9d93505d3e05d0fc02b8cabbef9118ccdf\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"af8fc43d287959de39e4e5e519ddd7b4e8371671168cf162f5ddddfb932b03ce\"" May 14 23:55:39.209004 containerd[1766]: time="2025-05-14T23:55:39.208974350Z" level=info msg="StartContainer for \"af8fc43d287959de39e4e5e519ddd7b4e8371671168cf162f5ddddfb932b03ce\"" May 14 23:55:39.239494 systemd[1]: Started cri-containerd-af8fc43d287959de39e4e5e519ddd7b4e8371671168cf162f5ddddfb932b03ce.scope - libcontainer container af8fc43d287959de39e4e5e519ddd7b4e8371671168cf162f5ddddfb932b03ce. May 14 23:55:39.259906 systemd[1]: cri-containerd-af8fc43d287959de39e4e5e519ddd7b4e8371671168cf162f5ddddfb932b03ce.scope: Deactivated successfully. 
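
mount-bpf-fs and clean-cilium-state continue the init sequence; the former makes sure the BPF filesystem is mounted at /sys/fs/bpf so that BPF map state survives agent restarts. Stripped down, what it does reduces to a single mount(2) call, roughly as below — a sketch only; the real init step lives in the cilium image and skips the mount when bpffs is already present:

    package main

    import (
    	"log"

    	"golang.org/x/sys/unix"
    )

    func main() {
    	// Equivalent of: mount -t bpf bpffs /sys/fs/bpf
    	// EBUSY here typically just means bpffs is already mounted.
    	if err := unix.Mount("bpffs", "/sys/fs/bpf", "bpf", 0, ""); err != nil {
    		log.Fatalf("mounting bpffs: %v", err)
    	}
    	log.Println("bpffs mounted at /sys/fs/bpf")
    }
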
May 14 23:55:39.268225 containerd[1766]: time="2025-05-14T23:55:39.268133557Z" level=info msg="StartContainer for \"af8fc43d287959de39e4e5e519ddd7b4e8371671168cf162f5ddddfb932b03ce\" returns successfully" May 14 23:55:39.299938 containerd[1766]: time="2025-05-14T23:55:39.299883102Z" level=info msg="shim disconnected" id=af8fc43d287959de39e4e5e519ddd7b4e8371671168cf162f5ddddfb932b03ce namespace=k8s.io May 14 23:55:39.300183 containerd[1766]: time="2025-05-14T23:55:39.300164063Z" level=warning msg="cleaning up after shim disconnected" id=af8fc43d287959de39e4e5e519ddd7b4e8371671168cf162f5ddddfb932b03ce namespace=k8s.io May 14 23:55:39.300245 containerd[1766]: time="2025-05-14T23:55:39.300232743Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 14 23:55:39.830852 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-af8fc43d287959de39e4e5e519ddd7b4e8371671168cf162f5ddddfb932b03ce-rootfs.mount: Deactivated successfully. May 14 23:55:40.173267 containerd[1766]: time="2025-05-14T23:55:40.173213594Z" level=info msg="CreateContainer within sandbox \"06754baab8972b7cbf6ffa55195c3e9d93505d3e05d0fc02b8cabbef9118ccdf\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" May 14 23:55:40.214055 containerd[1766]: time="2025-05-14T23:55:40.214007947Z" level=info msg="CreateContainer within sandbox \"06754baab8972b7cbf6ffa55195c3e9d93505d3e05d0fc02b8cabbef9118ccdf\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"5676bd125b68d9ab0e2c9ed33fe3e57e5d5727327997fa8a7a45c5d6592e666d\"" May 14 23:55:40.214584 containerd[1766]: time="2025-05-14T23:55:40.214535467Z" level=info msg="StartContainer for \"5676bd125b68d9ab0e2c9ed33fe3e57e5d5727327997fa8a7a45c5d6592e666d\"" May 14 23:55:40.244492 systemd[1]: Started cri-containerd-5676bd125b68d9ab0e2c9ed33fe3e57e5d5727327997fa8a7a45c5d6592e666d.scope - libcontainer container 5676bd125b68d9ab0e2c9ed33fe3e57e5d5727327997fa8a7a45c5d6592e666d. May 14 23:55:40.272929 containerd[1766]: time="2025-05-14T23:55:40.272879953Z" level=info msg="StartContainer for \"5676bd125b68d9ab0e2c9ed33fe3e57e5d5727327997fa8a7a45c5d6592e666d\" returns successfully" May 14 23:55:40.804366 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce)) May 14 23:55:41.160387 kubelet[3367]: I0514 23:55:41.159923 3367 setters.go:602] "Node became not ready" node="ci-4230.1.1-n-bf7705109b" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-05-14T23:55:41Z","lastTransitionTime":"2025-05-14T23:55:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} May 14 23:55:43.438894 systemd-networkd[1520]: lxc_health: Link UP May 14 23:55:43.446094 systemd-networkd[1520]: lxc_health: Gained carrier May 14 23:55:43.593076 systemd[1]: run-containerd-runc-k8s.io-5676bd125b68d9ab0e2c9ed33fe3e57e5d5727327997fa8a7a45c5d6592e666d-runc.9Rkagj.mount: Deactivated successfully. 
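
"Node became not ready" with reason NetworkPluginNotReady is kubelet gating the node's Ready condition on the CNI: it flips back to True once the freshly started cilium-agent initializes the CNI plugin, and the lxc_health interface that systemd-networkd reports UP below is cilium's own veth pair for health-checking the datapath. A sketch of reading that condition from the API server with client-go — kubeconfig path is an assumption; the node name is the one from the journal:

    package main

    import (
    	"context"
    	"fmt"
    	"log"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/kubeconfig")
    	if err != nil {
    		log.Fatal(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		log.Fatal(err)
    	}
    	node, err := cs.CoreV1().Nodes().Get(context.Background(),
    		"ci-4230.1.1-n-bf7705109b", metav1.GetOptions{})
    	if err != nil {
    		log.Fatal(err)
    	}
    	for _, c := range node.Status.Conditions {
    		if c.Type == corev1.NodeReady {
    			// While the CNI initializes this prints Status=False with
    			// Reason=KubeletNotReady, matching the journal condition.
    			fmt.Printf("Ready=%s reason=%s msg=%s\n",
    				c.Status, c.Reason, c.Message)
    		}
    	}
    }
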
May 14 23:55:43.923286 kubelet[3367]: I0514 23:55:43.923223 3367 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-rb2m4" podStartSLOduration=8.923204528 podStartE2EDuration="8.923204528s" podCreationTimestamp="2025-05-14 23:55:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-14 23:55:41.20148825 +0000 UTC m=+244.673470271" watchObservedRunningTime="2025-05-14 23:55:43.923204528 +0000 UTC m=+247.395186549" May 14 23:55:44.789538 systemd-networkd[1520]: lxc_health: Gained IPv6LL May 14 23:55:50.098826 sshd[5406]: Connection closed by 10.200.16.10 port 39498 May 14 23:55:50.099704 sshd-session[5402]: pam_unix(sshd:session): session closed for user core May 14 23:55:50.103229 systemd-logind[1736]: Session 28 logged out. Waiting for processes to exit. May 14 23:55:50.104510 systemd[1]: sshd@25-10.200.20.40:22-10.200.16.10:39498.service: Deactivated successfully. May 14 23:55:50.107861 systemd[1]: session-28.scope: Deactivated successfully. May 14 23:55:50.109883 systemd-logind[1736]: Removed session 28.
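
The startup-latency entry above closes the loop on the cilium-rb2m4 pod: podStartSLOduration=8.923204528 is exactly the gap between podCreationTimestamp (2025-05-14 23:55:35 UTC) and watchObservedRunningTime (2025-05-14 23:55:43.923204528 UTC), and the zero-valued firstStartedPulling/lastFinishedPulling fields indicate no image pull happened, so pull time contributed nothing. The arithmetic, spelled out with the timestamps copied from that entry:

    package main

    import (
    	"fmt"
    	"log"
    	"time"
    )

    func main() {
    	const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
    	created, err := time.Parse(layout, "2025-05-14 23:55:35 +0000 UTC")
    	if err != nil {
    		log.Fatal(err)
    	}
    	observed, err := time.Parse(layout, "2025-05-14 23:55:43.923204528 +0000 UTC")
    	if err != nil {
    		log.Fatal(err)
    	}
    	// Prints 8.923204528s, the podStartSLOduration from the journal.
    	fmt.Println(observed.Sub(created))
    }
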