May 14 17:58:13.049201 kernel: Booting Linux on physical CPU 0x0000000000 [0x410fd490]
May 14 17:58:13.049219 kernel: Linux version 6.12.20-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT Wed May 14 16:42:23 -00 2025
May 14 17:58:13.049225 kernel: KASLR enabled
May 14 17:58:13.049229 kernel: earlycon: pl11 at MMIO 0x00000000effec000 (options '')
May 14 17:58:13.049234 kernel: printk: legacy bootconsole [pl11] enabled
May 14 17:58:13.049238 kernel: efi: EFI v2.7 by EDK II
May 14 17:58:13.049243 kernel: efi: ACPI 2.0=0x3fd5f018 SMBIOS=0x3e580000 SMBIOS 3.0=0x3e560000 MEMATTR=0x3f20d018 RNG=0x3fd5f998 MEMRESERVE=0x3e477598
May 14 17:58:13.049247 kernel: random: crng init done
May 14 17:58:13.049251 kernel: secureboot: Secure boot disabled
May 14 17:58:13.049254 kernel: ACPI: Early table checksum verification disabled
May 14 17:58:13.049258 kernel: ACPI: RSDP 0x000000003FD5F018 000024 (v02 VRTUAL)
May 14 17:58:13.049262 kernel: ACPI: XSDT 0x000000003FD5FF18 00006C (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
May 14 17:58:13.049266 kernel: ACPI: FACP 0x000000003FD5FC18 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001)
May 14 17:58:13.049271 kernel: ACPI: DSDT 0x000000003FD41018 01DFCD (v02 MSFTVM DSDT01 00000001 INTL 20230628)
May 14 17:58:13.049276 kernel: ACPI: DBG2 0x000000003FD5FB18 000072 (v00 VRTUAL MICROSFT 00000001 MSFT 00000001)
May 14 17:58:13.049280 kernel: ACPI: GTDT 0x000000003FD5FD98 000060 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
May 14 17:58:13.049284 kernel: ACPI: OEM0 0x000000003FD5F098 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
May 14 17:58:13.049289 kernel: ACPI: SPCR 0x000000003FD5FA98 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
May 14 17:58:13.049293 kernel: ACPI: APIC 0x000000003FD5F818 0000FC (v04 VRTUAL MICROSFT 00000001 MSFT 00000001)
May 14 17:58:13.049297 kernel: ACPI: SRAT 0x000000003FD5F198 000234 (v03 VRTUAL MICROSFT 00000001 MSFT 00000001)
May 14 17:58:13.049301 kernel: ACPI: PPTT 0x000000003FD5F418 000120 (v01 VRTUAL MICROSFT 00000000 MSFT 00000000)
May 14 17:58:13.049305 kernel: ACPI: BGRT 0x000000003FD5FE98 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
May 14 17:58:13.049309 kernel: ACPI: SPCR: console: pl011,mmio32,0xeffec000,115200
May 14 17:58:13.049313 kernel: ACPI: Use ACPI SPCR as default console: Yes
May 14 17:58:13.049318 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff] hotplug
May 14 17:58:13.049322 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x1bfffffff] hotplug
May 14 17:58:13.049326 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1c0000000-0xfbfffffff] hotplug
May 14 17:58:13.049330 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff] hotplug
May 14 17:58:13.049334 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff] hotplug
May 14 17:58:13.049339 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff] hotplug
May 14 17:58:13.049343 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff] hotplug
May 14 17:58:13.049347 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff] hotplug
May 14 17:58:13.049351 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff] hotplug
May 14 17:58:13.049355 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff] hotplug
May 14 17:58:13.049359 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff] hotplug
May 14 17:58:13.049363 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x800000000000-0xffffffffffff] hotplug
May 14 17:58:13.049367 kernel: NUMA: Node 0 [mem 0x00000000-0x3fffffff] + [mem 0x100000000-0x1bfffffff] -> [mem 0x00000000-0x1bfffffff]
May 14 17:58:13.049371 kernel: NODE_DATA(0) allocated [mem 0x1bf7fddc0-0x1bf804fff]
May 14 17:58:13.049376 kernel: Zone ranges:
May 14 17:58:13.049380 kernel: DMA [mem 0x0000000000000000-0x00000000ffffffff]
May 14 17:58:13.049386 kernel: DMA32 empty
May 14 17:58:13.049391 kernel: Normal [mem 0x0000000100000000-0x00000001bfffffff]
May 14 17:58:13.049395 kernel: Device empty
May 14 17:58:13.049399 kernel: Movable zone start for each node
May 14 17:58:13.049404 kernel: Early memory node ranges
May 14 17:58:13.049409 kernel: node 0: [mem 0x0000000000000000-0x00000000007fffff]
May 14 17:58:13.049413 kernel: node 0: [mem 0x0000000000824000-0x000000003e45ffff]
May 14 17:58:13.049417 kernel: node 0: [mem 0x000000003e460000-0x000000003e46ffff]
May 14 17:58:13.049422 kernel: node 0: [mem 0x000000003e470000-0x000000003e54ffff]
May 14 17:58:13.049426 kernel: node 0: [mem 0x000000003e550000-0x000000003e87ffff]
May 14 17:58:13.049430 kernel: node 0: [mem 0x000000003e880000-0x000000003fc7ffff]
May 14 17:58:13.049435 kernel: node 0: [mem 0x000000003fc80000-0x000000003fcfffff]
May 14 17:58:13.049439 kernel: node 0: [mem 0x000000003fd00000-0x000000003fffffff]
May 14 17:58:13.049443 kernel: node 0: [mem 0x0000000100000000-0x00000001bfffffff]
May 14 17:58:13.049447 kernel: Initmem setup node 0 [mem 0x0000000000000000-0x00000001bfffffff]
May 14 17:58:13.049452 kernel: On node 0, zone DMA: 36 pages in unavailable ranges
May 14 17:58:13.049456 kernel: psci: probing for conduit method from ACPI.
May 14 17:58:13.049461 kernel: psci: PSCIv1.1 detected in firmware.
May 14 17:58:13.049465 kernel: psci: Using standard PSCI v0.2 function IDs
May 14 17:58:13.049470 kernel: psci: MIGRATE_INFO_TYPE not supported.
May 14 17:58:13.049474 kernel: psci: SMC Calling Convention v1.4
May 14 17:58:13.049478 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x0 -> Node 0
May 14 17:58:13.049482 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1 -> Node 0
May 14 17:58:13.049487 kernel: percpu: Embedded 33 pages/cpu s98136 r8192 d28840 u135168
May 14 17:58:13.049491 kernel: pcpu-alloc: s98136 r8192 d28840 u135168 alloc=33*4096
May 14 17:58:13.049495 kernel: pcpu-alloc: [0] 0 [0] 1
May 14 17:58:13.049500 kernel: Detected PIPT I-cache on CPU0
May 14 17:58:13.049504 kernel: CPU features: detected: Address authentication (architected QARMA5 algorithm)
May 14 17:58:13.049510 kernel: CPU features: detected: GIC system register CPU interface
May 14 17:58:13.049514 kernel: CPU features: detected: Spectre-v4
May 14 17:58:13.049518 kernel: CPU features: detected: Spectre-BHB
May 14 17:58:13.049522 kernel: CPU features: kernel page table isolation forced ON by KASLR
May 14 17:58:13.049527 kernel: CPU features: detected: Kernel page table isolation (KPTI)
May 14 17:58:13.049531 kernel: CPU features: detected: ARM erratum 2067961 or 2054223
May 14 17:58:13.049535 kernel: CPU features: detected: SSBS not fully self-synchronizing
May 14 17:58:13.049540 kernel: alternatives: applying boot alternatives
May 14 17:58:13.049545 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=fb5d39925446c9958629410eadbe2d2aa0566996d55f4385bdd8a5ce4ad5f562
May 14 17:58:13.049550 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
May 14 17:58:13.049554 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
May 14 17:58:13.049559 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
May 14 17:58:13.049563 kernel: Fallback order for Node 0: 0
May 14 17:58:13.049568 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1048540
May 14 17:58:13.049572 kernel: Policy zone: Normal
May 14 17:58:13.049576 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
May 14 17:58:13.049581 kernel: software IO TLB: area num 2.
May 14 17:58:13.049585 kernel: software IO TLB: mapped [mem 0x000000003a460000-0x000000003e460000] (64MB)
May 14 17:58:13.049589 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
May 14 17:58:13.049594 kernel: rcu: Preemptible hierarchical RCU implementation.
May 14 17:58:13.049599 kernel: rcu: RCU event tracing is enabled.
May 14 17:58:13.049603 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
May 14 17:58:13.049608 kernel: Trampoline variant of Tasks RCU enabled.
May 14 17:58:13.049613 kernel: Tracing variant of Tasks RCU enabled.
May 14 17:58:13.049617 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
May 14 17:58:13.049621 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
May 14 17:58:13.049626 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
May 14 17:58:13.049630 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
May 14 17:58:13.049634 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
May 14 17:58:13.049639 kernel: GICv3: 960 SPIs implemented
May 14 17:58:13.049643 kernel: GICv3: 0 Extended SPIs implemented
May 14 17:58:13.049647 kernel: Root IRQ handler: gic_handle_irq
May 14 17:58:13.049652 kernel: GICv3: GICv3 features: 16 PPIs, RSS
May 14 17:58:13.049656 kernel: GICv3: GICD_CTRL.DS=0, SCR_EL3.FIQ=0
May 14 17:58:13.049661 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000effee000
May 14 17:58:13.049665 kernel: ITS: No ITS available, not enabling LPIs
May 14 17:58:13.049670 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
May 14 17:58:13.049674 kernel: arch_timer: cp15 timer(s) running at 1000.00MHz (virt).
May 14 17:58:13.049679 kernel: clocksource: arch_sys_counter: mask: 0x1fffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
May 14 17:58:13.049683 kernel: sched_clock: 61 bits at 1000MHz, resolution 1ns, wraps every 4398046511103ns
May 14 17:58:13.049687 kernel: Console: colour dummy device 80x25
May 14 17:58:13.049692 kernel: printk: legacy console [tty1] enabled
May 14 17:58:13.049696 kernel: ACPI: Core revision 20240827
May 14 17:58:13.049701 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 2000.00 BogoMIPS (lpj=1000000)
May 14 17:58:13.049707 kernel: pid_max: default: 32768 minimum: 301
May 14 17:58:13.049711 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
May 14 17:58:13.049716 kernel: landlock: Up and running.
May 14 17:58:13.049720 kernel: SELinux: Initializing.
May 14 17:58:13.049724 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
May 14 17:58:13.049729 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
May 14 17:58:13.049737 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3b8030, hints 0x1a0000e, misc 0x31e1
May 14 17:58:13.049742 kernel: Hyper-V: Host Build 10.0.26100.1254-1-0
May 14 17:58:13.049747 kernel: Hyper-V: enabling crash_kexec_post_notifiers
May 14 17:58:13.049751 kernel: rcu: Hierarchical SRCU implementation.
May 14 17:58:13.049756 kernel: rcu: Max phase no-delay instances is 400.
May 14 17:58:13.049761 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
May 14 17:58:13.049766 kernel: Remapping and enabling EFI services.
May 14 17:58:13.049771 kernel: smp: Bringing up secondary CPUs ...
May 14 17:58:13.049776 kernel: Detected PIPT I-cache on CPU1
May 14 17:58:13.049780 kernel: GICv3: CPU1: found redistributor 1 region 1:0x00000000f000e000
May 14 17:58:13.049785 kernel: CPU1: Booted secondary processor 0x0000000001 [0x410fd490]
May 14 17:58:13.049790 kernel: smp: Brought up 1 node, 2 CPUs
May 14 17:58:13.049795 kernel: SMP: Total of 2 processors activated.
May 14 17:58:13.049800 kernel: CPU: All CPU(s) started at EL1
May 14 17:58:13.049804 kernel: CPU features: detected: 32-bit EL0 Support
May 14 17:58:13.049809 kernel: CPU features: detected: Instruction cache invalidation not required for I/D coherence
May 14 17:58:13.049814 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
May 14 17:58:13.049819 kernel: CPU features: detected: Common not Private translations
May 14 17:58:13.049823 kernel: CPU features: detected: CRC32 instructions
May 14 17:58:13.049828 kernel: CPU features: detected: Generic authentication (architected QARMA5 algorithm)
May 14 17:58:13.049833 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
May 14 17:58:13.049838 kernel: CPU features: detected: LSE atomic instructions
May 14 17:58:13.049843 kernel: CPU features: detected: Privileged Access Never
May 14 17:58:13.049847 kernel: CPU features: detected: Speculation barrier (SB)
May 14 17:58:13.049852 kernel: CPU features: detected: TLB range maintenance instructions
May 14 17:58:13.049857 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
May 14 17:58:13.049861 kernel: CPU features: detected: Scalable Vector Extension
May 14 17:58:13.049866 kernel: alternatives: applying system-wide alternatives
May 14 17:58:13.049871 kernel: CPU features: detected: Hardware dirty bit management on CPU0-1
May 14 17:58:13.049885 kernel: SVE: maximum available vector length 16 bytes per vector
May 14 17:58:13.049890 kernel: SVE: default vector length 16 bytes per vector
May 14 17:58:13.049895 kernel: Memory: 3976108K/4194160K available (11072K kernel code, 2276K rwdata, 8928K rodata, 39424K init, 1034K bss, 213432K reserved, 0K cma-reserved)
May 14 17:58:13.049900 kernel: devtmpfs: initialized
May 14 17:58:13.049905 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
May 14 17:58:13.049909 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
May 14 17:58:13.049914 kernel: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
May 14 17:58:13.049919 kernel: 0 pages in range for non-PLT usage
May 14 17:58:13.049923 kernel: 508544 pages in range for PLT usage
May 14 17:58:13.049929 kernel: pinctrl core: initialized pinctrl subsystem
May 14 17:58:13.049933 kernel: SMBIOS 3.1.0 present.
May 14 17:58:13.049938 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 09/28/2024
May 14 17:58:13.049943 kernel: DMI: Memory slots populated: 2/2
May 14 17:58:13.049948 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
May 14 17:58:13.049952 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
May 14 17:58:13.049957 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
May 14 17:58:13.049962 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
May 14 17:58:13.049967 kernel: audit: initializing netlink subsys (disabled)
May 14 17:58:13.049972 kernel: audit: type=2000 audit(0.064:1): state=initialized audit_enabled=0 res=1
May 14 17:58:13.049977 kernel: thermal_sys: Registered thermal governor 'step_wise'
May 14 17:58:13.049982 kernel: cpuidle: using governor menu
May 14 17:58:13.049986 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
May 14 17:58:13.049991 kernel: ASID allocator initialised with 32768 entries
May 14 17:58:13.049996 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
May 14 17:58:13.050000 kernel: Serial: AMBA PL011 UART driver
May 14 17:58:13.050005 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
May 14 17:58:13.050010 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
May 14 17:58:13.050015 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
May 14 17:58:13.050020 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
May 14 17:58:13.050025 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
May 14 17:58:13.050029 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
May 14 17:58:13.050034 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
May 14 17:58:13.050039 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
May 14 17:58:13.050044 kernel: ACPI: Added _OSI(Module Device)
May 14 17:58:13.050048 kernel: ACPI: Added _OSI(Processor Device)
May 14 17:58:13.050053 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
May 14 17:58:13.050058 kernel: ACPI: Added _OSI(Processor Aggregator Device)
May 14 17:58:13.050063 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
May 14 17:58:13.050068 kernel: ACPI: Interpreter enabled
May 14 17:58:13.050072 kernel: ACPI: Using GIC for interrupt routing
May 14 17:58:13.050077 kernel: ARMH0011:00: ttyAMA0 at MMIO 0xeffec000 (irq = 12, base_baud = 0) is a SBSA
May 14 17:58:13.050082 kernel: printk: legacy console [ttyAMA0] enabled
May 14 17:58:13.050086 kernel: printk: legacy bootconsole [pl11] disabled
May 14 17:58:13.050091 kernel: ARMH0011:01: ttyAMA1 at MMIO 0xeffeb000 (irq = 13, base_baud = 0) is a SBSA
May 14 17:58:13.050096 kernel: ACPI: CPU0 has been hot-added
May 14 17:58:13.050101 kernel: ACPI: CPU1 has been hot-added
May 14 17:58:13.050106 kernel: iommu: Default domain type: Translated
May 14 17:58:13.050111 kernel: iommu: DMA domain TLB invalidation policy: strict mode
May 14 17:58:13.050115 kernel: efivars: Registered efivars operations
May 14 17:58:13.050120 kernel: vgaarb: loaded
May 14 17:58:13.050124 kernel: clocksource: Switched to clocksource arch_sys_counter
May 14 17:58:13.050129 kernel: VFS: Disk quotas dquot_6.6.0
May 14 17:58:13.050134 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
May 14 17:58:13.050139 kernel: pnp: PnP ACPI init
May 14 17:58:13.050144 kernel: pnp: PnP ACPI: found 0 devices
May 14 17:58:13.050149 kernel: NET: Registered PF_INET protocol family
May 14 17:58:13.050153 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
May 14 17:58:13.050158 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
May 14 17:58:13.050163 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
May 14 17:58:13.050168 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
May 14 17:58:13.050173 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
May 14 17:58:13.050177 kernel: TCP: Hash tables configured (established 32768 bind 32768)
May 14 17:58:13.050182 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
May 14 17:58:13.050188 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
May 14 17:58:13.050192 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
May 14 17:58:13.050197 kernel: PCI: CLS 0 bytes, default 64
May 14 17:58:13.050201 kernel: kvm [1]: HYP mode not available
May 14 17:58:13.050206 kernel: Initialise system trusted keyrings
May 14 17:58:13.050211 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
May 14 17:58:13.050215 kernel: Key type asymmetric registered
May 14 17:58:13.050220 kernel: Asymmetric key parser 'x509' registered
May 14 17:58:13.050224 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
May 14 17:58:13.050230 kernel: io scheduler mq-deadline registered
May 14 17:58:13.050235 kernel: io scheduler kyber registered
May 14 17:58:13.050239 kernel: io scheduler bfq registered
May 14 17:58:13.050244 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
May 14 17:58:13.050249 kernel: thunder_xcv, ver 1.0
May 14 17:58:13.050253 kernel: thunder_bgx, ver 1.0
May 14 17:58:13.050258 kernel: nicpf, ver 1.0
May 14 17:58:13.050263 kernel: nicvf, ver 1.0
May 14 17:58:13.050361 kernel: rtc-efi rtc-efi.0: registered as rtc0
May 14 17:58:13.050411 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-05-14T17:58:12 UTC (1747245492)
May 14 17:58:13.050418 kernel: efifb: probing for efifb
May 14 17:58:13.050423 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k
May 14 17:58:13.050428 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1
May 14 17:58:13.050432 kernel: efifb: scrolling: redraw
May 14 17:58:13.050437 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
May 14 17:58:13.050442 kernel: Console: switching to colour frame buffer device 128x48
May 14 17:58:13.050447 kernel: fb0: EFI VGA frame buffer device
May 14 17:58:13.050452 kernel: SMCCC: SOC_ID: ARCH_SOC_ID not implemented, skipping ....
May 14 17:58:13.050457 kernel: hid: raw HID events driver (C) Jiri Kosina
May 14 17:58:13.050462 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 (0,8000003f) counters available
May 14 17:58:13.050466 kernel: watchdog: NMI not fully supported
May 14 17:58:13.050471 kernel: watchdog: Hard watchdog permanently disabled
May 14 17:58:13.050476 kernel: NET: Registered PF_INET6 protocol family
May 14 17:58:13.050480 kernel: Segment Routing with IPv6
May 14 17:58:13.050485 kernel: In-situ OAM (IOAM) with IPv6
May 14 17:58:13.050490 kernel: NET: Registered PF_PACKET protocol family
May 14 17:58:13.050495 kernel: Key type dns_resolver registered
May 14 17:58:13.050500 kernel: registered taskstats version 1
May 14 17:58:13.050504 kernel: Loading compiled-in X.509 certificates
May 14 17:58:13.050509 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.20-flatcar: c0c250ba312a1bb9bceb2432c486db6e5999df1a'
May 14 17:58:13.050514 kernel: Demotion targets for Node 0: null
May 14 17:58:13.050519 kernel: Key type .fscrypt registered
May 14 17:58:13.050523 kernel: Key type fscrypt-provisioning registered
May 14 17:58:13.050528 kernel: ima: No TPM chip found, activating TPM-bypass!
May 14 17:58:13.050533 kernel: ima: Allocated hash algorithm: sha1
May 14 17:58:13.050538 kernel: ima: No architecture policies found
May 14 17:58:13.050543 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
May 14 17:58:13.050548 kernel: clk: Disabling unused clocks
May 14 17:58:13.050552 kernel: PM: genpd: Disabling unused power domains
May 14 17:58:13.050557 kernel: Warning: unable to open an initial console.
May 14 17:58:13.050562 kernel: Freeing unused kernel memory: 39424K
May 14 17:58:13.050566 kernel: Run /init as init process
May 14 17:58:13.050571 kernel: with arguments:
May 14 17:58:13.050576 kernel: /init
May 14 17:58:13.050581 kernel: with environment:
May 14 17:58:13.050585 kernel: HOME=/
May 14 17:58:13.050590 kernel: TERM=linux
May 14 17:58:13.050595 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
May 14 17:58:13.050600 systemd[1]: Successfully made /usr/ read-only.
May 14 17:58:13.050607 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
May 14 17:58:13.050613 systemd[1]: Detected virtualization microsoft.
May 14 17:58:13.050618 systemd[1]: Detected architecture arm64.
May 14 17:58:13.050623 systemd[1]: Running in initrd.
May 14 17:58:13.050628 systemd[1]: No hostname configured, using default hostname.
May 14 17:58:13.050634 systemd[1]: Hostname set to .
May 14 17:58:13.050639 systemd[1]: Initializing machine ID from random generator.
May 14 17:58:13.050644 systemd[1]: Queued start job for default target initrd.target.
May 14 17:58:13.050649 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 14 17:58:13.050654 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 14 17:58:13.050660 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
May 14 17:58:13.050666 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
May 14 17:58:13.050671 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
May 14 17:58:13.050677 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
May 14 17:58:13.050682 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
May 14 17:58:13.050688 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
May 14 17:58:13.050693 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 14 17:58:13.050699 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
May 14 17:58:13.050704 systemd[1]: Reached target paths.target - Path Units.
May 14 17:58:13.050709 systemd[1]: Reached target slices.target - Slice Units.
May 14 17:58:13.050714 systemd[1]: Reached target swap.target - Swaps.
May 14 17:58:13.050719 systemd[1]: Reached target timers.target - Timer Units.
May 14 17:58:13.050725 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
May 14 17:58:13.050730 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
May 14 17:58:13.050735 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
May 14 17:58:13.050740 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
May 14 17:58:13.050746 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
May 14 17:58:13.050751 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
May 14 17:58:13.050756 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
May 14 17:58:13.050762 systemd[1]: Reached target sockets.target - Socket Units.
May 14 17:58:13.050767 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
May 14 17:58:13.050772 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
May 14 17:58:13.050777 systemd[1]: Finished network-cleanup.service - Network Cleanup.
May 14 17:58:13.050782 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
May 14 17:58:13.050788 systemd[1]: Starting systemd-fsck-usr.service...
May 14 17:58:13.050793 systemd[1]: Starting systemd-journald.service - Journal Service...
May 14 17:58:13.050799 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
May 14 17:58:13.050813 systemd-journald[224]: Collecting audit messages is disabled.
May 14 17:58:13.050827 systemd-journald[224]: Journal started
May 14 17:58:13.050841 systemd-journald[224]: Runtime Journal (/run/log/journal/30d9e736f9ad4b10bd0738c7b550f078) is 8M, max 78.5M, 70.5M free.
May 14 17:58:13.059905 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 14 17:58:13.064359 systemd-modules-load[226]: Inserted module 'overlay'
May 14 17:58:13.082888 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
May 14 17:58:13.082915 systemd[1]: Started systemd-journald.service - Journal Service.
May 14 17:58:13.089807 kernel: Bridge firewalling registered
May 14 17:58:13.090102 systemd-modules-load[226]: Inserted module 'br_netfilter'
May 14 17:58:13.097726 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
May 14 17:58:13.101921 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
May 14 17:58:13.118439 systemd[1]: Finished systemd-fsck-usr.service.
May 14 17:58:13.124735 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
May 14 17:58:13.133843 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 14 17:58:13.140216 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
May 14 17:58:13.156267 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
May 14 17:58:13.160766 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
May 14 17:58:13.181969 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
May 14 17:58:13.193293 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
May 14 17:58:13.201606 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 14 17:58:13.207506 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
May 14 17:58:13.216840 systemd-tmpfiles[253]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
May 14 17:58:13.226280 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
May 14 17:58:13.238067 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
May 14 17:58:13.261007 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
May 14 17:58:13.271777 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
May 14 17:58:13.287377 dracut-cmdline[264]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=fb5d39925446c9958629410eadbe2d2aa0566996d55f4385bdd8a5ce4ad5f562
May 14 17:58:13.318905 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 14 17:58:13.319983 systemd-resolved[265]: Positive Trust Anchors:
May 14 17:58:13.319991 systemd-resolved[265]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
May 14 17:58:13.320010 systemd-resolved[265]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
May 14 17:58:13.321629 systemd-resolved[265]: Defaulting to hostname 'linux'.
May 14 17:58:13.333082 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
May 14 17:58:13.343189 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
May 14 17:58:13.431890 kernel: SCSI subsystem initialized
May 14 17:58:13.437888 kernel: Loading iSCSI transport class v2.0-870.
May 14 17:58:13.444895 kernel: iscsi: registered transport (tcp)
May 14 17:58:13.456820 kernel: iscsi: registered transport (qla4xxx)
May 14 17:58:13.456829 kernel: QLogic iSCSI HBA Driver
May 14 17:58:13.469276 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
May 14 17:58:13.487993 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
May 14 17:58:13.494192 systemd[1]: Reached target network-pre.target - Preparation for Network.
May 14 17:58:13.541008 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
May 14 17:58:13.546904 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
May 14 17:58:13.608896 kernel: raid6: neonx8 gen() 18562 MB/s
May 14 17:58:13.625884 kernel: raid6: neonx4 gen() 18558 MB/s
May 14 17:58:13.644884 kernel: raid6: neonx2 gen() 17078 MB/s
May 14 17:58:13.664884 kernel: raid6: neonx1 gen() 15021 MB/s
May 14 17:58:13.683884 kernel: raid6: int64x8 gen() 10536 MB/s
May 14 17:58:13.702899 kernel: raid6: int64x4 gen() 10614 MB/s
May 14 17:58:13.722887 kernel: raid6: int64x2 gen() 8970 MB/s
May 14 17:58:13.744137 kernel: raid6: int64x1 gen() 7012 MB/s
May 14 17:58:13.744145 kernel: raid6: using algorithm neonx8 gen() 18562 MB/s
May 14 17:58:13.765877 kernel: raid6: .... xor() 14906 MB/s, rmw enabled
May 14 17:58:13.765884 kernel: raid6: using neon recovery algorithm
May 14 17:58:13.773459 kernel: xor: measuring software checksum speed
May 14 17:58:13.773466 kernel: 8regs : 28621 MB/sec
May 14 17:58:13.776301 kernel: 32regs : 28791 MB/sec
May 14 17:58:13.779568 kernel: arm64_neon : 37707 MB/sec
May 14 17:58:13.782522 kernel: xor: using function: arm64_neon (37707 MB/sec)
May 14 17:58:13.819914 kernel: Btrfs loaded, zoned=no, fsverity=no
May 14 17:58:13.824153 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
May 14 17:58:13.834008 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 14 17:58:13.856583 systemd-udevd[476]: Using default interface naming scheme 'v255'.
May 14 17:58:13.860478 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 14 17:58:13.872173 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
May 14 17:58:13.896492 dracut-pre-trigger[487]: rd.md=0: removing MD RAID activation
May 14 17:58:13.914131 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
May 14 17:58:13.920004 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
May 14 17:58:13.966890 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
May 14 17:58:13.981961 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
May 14 17:58:14.029903 kernel: hv_vmbus: Vmbus version:5.3
May 14 17:58:14.042566 kernel: hv_vmbus: registering driver hyperv_keyboard
May 14 17:58:14.042606 kernel: pps_core: LinuxPPS API ver. 1 registered
May 14 17:58:14.042614 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/MSFT1000:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input0
May 14 17:58:14.050911 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
May 14 17:58:14.059039 kernel: hv_vmbus: registering driver hid_hyperv
May 14 17:58:14.064123 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 14 17:58:14.064222 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
May 14 17:58:13.950266 kernel: PTP clock support registered
May 14 17:58:13.958613 kernel: hv_utils: Registering HyperV Utility Driver
May 14 17:58:13.958624 kernel: hv_vmbus: registering driver hv_utils
May 14 17:58:13.958629 kernel: hv_utils: Heartbeat IC version 3.0
May 14 17:58:13.958636 kernel: hv_vmbus: registering driver hv_netvsc
May 14 17:58:13.958641 kernel: hv_utils: Shutdown IC version 3.2
May 14 17:58:13.958647 kernel: hv_utils: TimeSync IC version 4.0
May 14 17:58:13.958652 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input1
May 14 17:58:13.958657 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on
May 14 17:58:13.959857 systemd-journald[224]: Time jumped backwards, rotating.
May 14 17:58:13.959896 kernel: hv_vmbus: registering driver hv_storvsc
May 14 17:58:13.940316 systemd-resolved[265]: Clock change detected. Flushing caches.
May 14 17:58:13.969243 kernel: scsi host1: storvsc_host_t
May 14 17:58:13.969277 kernel: scsi host0: storvsc_host_t
May 14 17:58:13.943995 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
May 14 17:58:13.982975 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5
May 14 17:58:13.983004 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 5
May 14 17:58:13.968167 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 14 17:58:13.993041 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
May 14 17:58:13.999760 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 14 17:58:13.999836 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
May 14 17:58:14.011803 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
May 14 17:58:14.029958 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB)
May 14 17:58:14.071107 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks
May 14 17:58:14.071201 kernel: sd 0:0:0:0: [sda] Write Protect is off
May 14 17:58:14.071269 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00
May 14 17:58:14.071337 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA
May 14 17:58:14.074232 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#130 cmd 0x5a status: scsi 0x2 srb 0x86 hv 0xc0000001
May 14 17:58:14.074349 kernel: hv_netvsc 002248bb-e90a-0022-48bb-e90a002248bb eth0: VF slot 1 added
May 14 17:58:14.074418 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#137 cmd 0x5a status: scsi 0x2 srb 0x86 hv 0xc0000001
May 14 17:58:14.074471 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
May 14 17:58:14.074477 kernel: sd 0:0:0:0: [sda] Attached SCSI disk
May 14 17:58:14.074541 kernel: hv_vmbus: registering driver hv_pci
May 14 17:58:14.016356 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 14 17:58:14.119361 kernel: sr 0:0:0:2: [sr0] scsi-1 drive
May 14 17:58:14.119485 kernel: hv_pci a0127564-19ac-4ad8-a27b-cb836a2f0d3c: PCI VMBus probing: Using version 0x10004
May 14 17:58:14.158361 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
May 14 17:58:14.158373 kernel: hv_pci a0127564-19ac-4ad8-a27b-cb836a2f0d3c: PCI host bridge to bus 19ac:00
May 14 17:58:14.158456 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0
May 14 17:58:14.158534 kernel: pci_bus 19ac:00: root bus resource [mem 0xfc0000000-0xfc00fffff window]
May 14 17:58:14.158610 kernel: pci_bus 19ac:00: No busn resource found for root bus, will use [bus 00-ff]
May 14 17:58:14.158666 kernel: pci 19ac:00:02.0: [15b3:101a] type 00 class 0x020000 PCIe Endpoint
May 14 17:58:14.158734 kernel: pci 19ac:00:02.0: BAR 0 [mem 0xfc0000000-0xfc00fffff 64bit pref]
May 14 17:58:14.158792 kernel: pci 19ac:00:02.0: enabling Extended Tags
May 14 17:58:14.158849 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#101 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001
May 14 17:58:14.158903 kernel: pci 19ac:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at 19ac:00:02.0 (capable of 252.048 Gb/s with 16.0 GT/s PCIe x16 link)
May 14 17:58:14.159585 kernel: pci_bus 19ac:00: busn_res: [bus 00-ff] end is updated to 00
May 14 17:58:14.159680 kernel: pci 19ac:00:02.0: BAR 0 [mem 0xfc0000000-0xfc00fffff 64bit pref]: assigned
May 14 17:58:14.159763 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#76 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001
May 14 17:58:14.099794 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 14 17:58:14.216184 kernel: mlx5_core 19ac:00:02.0: enabling device (0000 -> 0002)
May 14 17:58:14.398934 kernel: mlx5_core 19ac:00:02.0: PTM is not supported by PCIe
May 14 17:58:14.399049 kernel: mlx5_core 19ac:00:02.0: firmware version: 16.30.5006
May 14 17:58:14.399119 kernel: hv_netvsc 002248bb-e90a-0022-48bb-e90a002248bb eth0: VF registering: eth1
May 14 17:58:14.399183 kernel: mlx5_core 19ac:00:02.0 eth1: joined to eth0
May 14 17:58:14.399257 kernel: mlx5_core 19ac:00:02.0: MLX5E: StrdRq(1) RqSz(8) StrdSz(2048) RxCqeCmprss(0 basic)
May 14 17:58:14.405615 kernel: mlx5_core 19ac:00:02.0 enP6572s1: renamed from eth1
May 14 17:58:14.710968 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_Disk EFI-SYSTEM.
May 14 17:58:14.767901 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM.
May 14 17:58:14.800351 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_Disk USR-A.
May 14 17:58:14.805432 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_Disk USR-A.
May 14 17:58:14.824084 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_Disk ROOT.
May 14 17:58:14.829120 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
May 14 17:58:14.838264 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
May 14 17:58:14.847030 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
May 14 17:58:14.856412 systemd[1]: Reached target remote-fs.target - Remote File Systems.
May 14 17:58:14.869105 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
May 14 17:58:14.883363 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
May 14 17:58:14.902979 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#158 cmd 0x5a status: scsi 0x2 srb 0x86 hv 0xc0000001
May 14 17:58:14.911357 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
May 14 17:58:14.920465 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
May 14 17:58:14.927994 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#105 cmd 0x5a status: scsi 0x2 srb 0x86 hv 0xc0000001
May 14 17:58:14.933713 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
May 14 17:58:15.940038 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#104 cmd 0x5a status: scsi 0x2 srb 0x86 hv 0xc0000001
May 14 17:58:15.950460 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
May 14 17:58:15.950726 disk-uuid[660]: The operation has completed successfully.
May 14 17:58:16.014849 systemd[1]: disk-uuid.service: Deactivated successfully.
May 14 17:58:16.018409 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
May 14 17:58:16.045278 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
May 14 17:58:16.068748 sh[825]: Success
May 14 17:58:16.101296 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
May 14 17:58:16.101325 kernel: device-mapper: uevent: version 1.0.3
May 14 17:58:16.105983 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
May 14 17:58:16.115985 kernel: device-mapper: verity: sha256 using shash "sha256-ce"
May 14 17:58:16.306642 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
May 14 17:58:16.311745 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
May 14 17:58:16.325397 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
May 14 17:58:16.350722 kernel: BTRFS info: 'norecovery' is for compatibility only, recommended to use 'rescue=nologreplay'
May 14 17:58:16.350752 kernel: BTRFS: device fsid e21bbf34-4c71-4257-bd6f-908a2b81e5ab devid 1 transid 41 /dev/mapper/usr (254:0) scanned by mount (843)
May 14 17:58:16.355452 kernel: BTRFS info (device dm-0): first mount of filesystem e21bbf34-4c71-4257-bd6f-908a2b81e5ab
May 14 17:58:16.359442 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
May 14 17:58:16.362370 kernel: BTRFS info (device dm-0): using free-space-tree
May 14 17:58:16.823708 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
May 14 17:58:16.827820 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
May 14 17:58:16.834844 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
May 14 17:58:16.835429 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
May 14 17:58:16.857482 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
May 14 17:58:16.880974 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 (8:6) scanned by mount (879)
May 14 17:58:16.890783 kernel: BTRFS info (device sda6): first mount of filesystem 6d47052f-e956-47a0-903a-525ae08a05f2
May 14 17:58:16.890801 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
May 14 17:58:16.894149 kernel: BTRFS info (device sda6): using free-space-tree
May 14 17:58:16.925006 kernel: BTRFS info (device sda6): last unmount of filesystem 6d47052f-e956-47a0-903a-525ae08a05f2
May 14 17:58:16.925496 systemd[1]: Finished ignition-setup.service - Ignition (setup).
May 14 17:58:16.934207 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
May 14 17:58:16.957502 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
May 14 17:58:16.968245 systemd[1]: Starting systemd-networkd.service - Network Configuration...
May 14 17:58:16.999268 systemd-networkd[1012]: lo: Link UP
May 14 17:58:16.999276 systemd-networkd[1012]: lo: Gained carrier
May 14 17:58:17.000624 systemd-networkd[1012]: Enumeration completed
May 14 17:58:17.002165 systemd-networkd[1012]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 14 17:58:17.002168 systemd-networkd[1012]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
May 14 17:58:17.004794 systemd[1]: Started systemd-networkd.service - Network Configuration.
May 14 17:58:17.009340 systemd[1]: Reached target network.target - Network.
May 14 17:58:17.070974 kernel: mlx5_core 19ac:00:02.0 enP6572s1: Link up
May 14 17:58:17.099977 kernel: hv_netvsc 002248bb-e90a-0022-48bb-e90a002248bb eth0: Data path switched to VF: enP6572s1
May 14 17:58:17.100210 systemd-networkd[1012]: enP6572s1: Link UP
May 14 17:58:17.100293 systemd-networkd[1012]: eth0: Link UP
May 14 17:58:17.100391 systemd-networkd[1012]: eth0: Gained carrier
May 14 17:58:17.100398 systemd-networkd[1012]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 14 17:58:17.107083 systemd-networkd[1012]: enP6572s1: Gained carrier
May 14 17:58:17.127986 systemd-networkd[1012]: eth0: DHCPv4 address 10.200.20.4/24, gateway 10.200.20.1 acquired from 168.63.129.16
May 14 17:58:17.946783 ignition[989]: Ignition 2.21.0
May 14 17:58:17.949343 ignition[989]: Stage: fetch-offline
May 14 17:58:17.949429 ignition[989]: no configs at "/usr/lib/ignition/base.d"
May 14 17:58:17.952896 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
May 14 17:58:17.949435 ignition[989]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
May 14 17:58:17.960321 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
May 14 17:58:17.949530 ignition[989]: parsed url from cmdline: ""
May 14 17:58:17.949532 ignition[989]: no config URL provided
May 14 17:58:17.949536 ignition[989]: reading system config file "/usr/lib/ignition/user.ign"
May 14 17:58:17.949541 ignition[989]: no config at "/usr/lib/ignition/user.ign"
May 14 17:58:17.949544 ignition[989]: failed to fetch config: resource requires networking
May 14 17:58:17.949663 ignition[989]: Ignition finished successfully
May 14 17:58:17.989606 ignition[1023]: Ignition 2.21.0
May 14 17:58:17.989610 ignition[1023]: Stage: fetch
May 14 17:58:17.989759 ignition[1023]: no configs at "/usr/lib/ignition/base.d"
May 14 17:58:17.989765 ignition[1023]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
May 14 17:58:17.989825 ignition[1023]: parsed url from cmdline: ""
May 14 17:58:17.989827 ignition[1023]: no config URL provided
May 14 17:58:17.989830 ignition[1023]: reading system config file "/usr/lib/ignition/user.ign"
May 14 17:58:17.989835 ignition[1023]: no config at "/usr/lib/ignition/user.ign"
May 14 17:58:17.989864 ignition[1023]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1
May 14 17:58:18.054255 ignition[1023]: GET result: OK
May 14 17:58:18.054322 ignition[1023]: config has been read from IMDS userdata
May 14 17:58:18.054339 ignition[1023]: parsing config with SHA512: 33b7e92c63843f9c130d19c39cc3b118fff20123dbe190d628ee71ef9f418b8680b1cdf98719a1cf5622b8ab0544bcd6f61acc99901d0a2de012b6b603c9f6df
May 14 17:58:18.057010 unknown[1023]: fetched base config from "system"
May 14 17:58:18.057234 ignition[1023]: fetch: fetch complete
May 14 17:58:18.057015 unknown[1023]: fetched base config from "system"
May 14 17:58:18.057241 ignition[1023]: fetch: fetch passed
May 14 17:58:18.057023 unknown[1023]: fetched user config from "azure"
May 14 17:58:18.057281 ignition[1023]: Ignition finished successfully
May 14 17:58:18.058971 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
May 14 17:58:18.065491 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
May 14 17:58:18.105050 ignition[1029]: Ignition 2.21.0
May 14 17:58:18.105058 ignition[1029]: Stage: kargs
May 14 17:58:18.106358 ignition[1029]: no configs at "/usr/lib/ignition/base.d"
May 14 17:58:18.106368 ignition[1029]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
May 14 17:58:18.107453 ignition[1029]: kargs: kargs passed
May 14 17:58:18.118003 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
May 14 17:58:18.107506 ignition[1029]: Ignition finished successfully
May 14 17:58:18.125600 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
May 14 17:58:18.150625 ignition[1036]: Ignition 2.21.0
May 14 17:58:18.150638 ignition[1036]: Stage: disks
May 14 17:58:18.153948 systemd[1]: Finished ignition-disks.service - Ignition (disks).
May 14 17:58:18.150761 ignition[1036]: no configs at "/usr/lib/ignition/base.d"
May 14 17:58:18.158632 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
May 14 17:58:18.150768 ignition[1036]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
May 14 17:58:18.165890 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
May 14 17:58:18.151611 ignition[1036]: disks: disks passed
May 14 17:58:18.174943 systemd[1]: Reached target local-fs.target - Local File Systems.
May 14 17:58:18.151647 ignition[1036]: Ignition finished successfully
May 14 17:58:18.182544 systemd[1]: Reached target sysinit.target - System Initialization.
May 14 17:58:18.190555 systemd[1]: Reached target basic.target - Basic System.
May 14 17:58:18.199042 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
May 14 17:58:18.215850 systemd-networkd[1012]: enP6572s1: Gained IPv6LL
May 14 17:58:18.289213 systemd-fsck[1044]: ROOT: clean, 15/7326000 files, 477845/7359488 blocks
May 14 17:58:18.296351 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
May 14 17:58:18.301873 systemd[1]: Mounting sysroot.mount - /sysroot...
May 14 17:58:18.492975 kernel: EXT4-fs (sda9): mounted filesystem a9c1ea72-ce96-48c1-8c16-d7102e51beed r/w with ordered data mode. Quota mode: none.
May 14 17:58:18.494165 systemd[1]: Mounted sysroot.mount - /sysroot.
May 14 17:58:18.500738 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
May 14 17:58:18.526429 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
May 14 17:58:18.530367 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
May 14 17:58:18.543627 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
May 14 17:58:18.553330 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
May 14 17:58:18.553355 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
May 14 17:58:18.558958 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
May 14 17:58:18.571653 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
May 14 17:58:18.599349 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 (8:6) scanned by mount (1058)
May 14 17:58:18.599379 kernel: BTRFS info (device sda6): first mount of filesystem 6d47052f-e956-47a0-903a-525ae08a05f2
May 14 17:58:18.604012 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
May 14 17:58:18.607196 kernel: BTRFS info (device sda6): using free-space-tree
May 14 17:58:18.609699 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
May 14 17:58:19.106230 systemd-networkd[1012]: eth0: Gained IPv6LL
May 14 17:58:19.181130 coreos-metadata[1060]: May 14 17:58:19.181 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1
May 14 17:58:19.188708 coreos-metadata[1060]: May 14 17:58:19.188 INFO Fetch successful
May 14 17:58:19.192589 coreos-metadata[1060]: May 14 17:58:19.192 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1
May 14 17:58:19.207897 coreos-metadata[1060]: May 14 17:58:19.207 INFO Fetch successful
May 14 17:58:19.282858 coreos-metadata[1060]: May 14 17:58:19.282 INFO wrote hostname ci-4334.0.0-a-9340e225f6 to /sysroot/etc/hostname
May 14 17:58:19.289918 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
May 14 17:58:19.370633 initrd-setup-root[1088]: cut: /sysroot/etc/passwd: No such file or directory
May 14 17:58:19.403799 initrd-setup-root[1095]: cut: /sysroot/etc/group: No such file or directory
May 14 17:58:19.409137 initrd-setup-root[1102]: cut: /sysroot/etc/shadow: No such file or directory
May 14 17:58:19.414182 initrd-setup-root[1109]: cut: /sysroot/etc/gshadow: No such file or directory
May 14 17:58:20.354064 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
May 14 17:58:20.359102 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
May 14 17:58:20.379305 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
May 14 17:58:20.386070 systemd[1]: sysroot-oem.mount: Deactivated successfully.
May 14 17:58:20.398754 kernel: BTRFS info (device sda6): last unmount of filesystem 6d47052f-e956-47a0-903a-525ae08a05f2
May 14 17:58:20.409064 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
May 14 17:58:20.420686 ignition[1177]: INFO : Ignition 2.21.0
May 14 17:58:20.420686 ignition[1177]: INFO : Stage: mount
May 14 17:58:20.427181 ignition[1177]: INFO : no configs at "/usr/lib/ignition/base.d"
May 14 17:58:20.427181 ignition[1177]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
May 14 17:58:20.427181 ignition[1177]: INFO : mount: mount passed
May 14 17:58:20.427181 ignition[1177]: INFO : Ignition finished successfully
May 14 17:58:20.427066 systemd[1]: Finished ignition-mount.service - Ignition (mount).
May 14 17:58:20.432055 systemd[1]: Starting ignition-files.service - Ignition (files)...
May 14 17:58:20.459420 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
May 14 17:58:20.485635 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 (8:6) scanned by mount (1188)
May 14 17:58:20.485665 kernel: BTRFS info (device sda6): first mount of filesystem 6d47052f-e956-47a0-903a-525ae08a05f2
May 14 17:58:20.489977 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
May 14 17:58:20.492899 kernel: BTRFS info (device sda6): using free-space-tree
May 14 17:58:20.495123 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
May 14 17:58:20.518824 ignition[1206]: INFO : Ignition 2.21.0
May 14 17:58:20.518824 ignition[1206]: INFO : Stage: files
May 14 17:58:20.525952 ignition[1206]: INFO : no configs at "/usr/lib/ignition/base.d"
May 14 17:58:20.525952 ignition[1206]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
May 14 17:58:20.525952 ignition[1206]: DEBUG : files: compiled without relabeling support, skipping
May 14 17:58:20.525952 ignition[1206]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
May 14 17:58:20.525952 ignition[1206]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
May 14 17:58:20.525952 ignition[1206]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
May 14 17:58:20.525952 ignition[1206]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
May 14 17:58:20.525952 ignition[1206]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
May 14 17:58:20.523244 unknown[1206]: wrote ssh authorized keys file for user: core
May 14 17:58:20.568499 ignition[1206]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
May 14 17:58:20.568499 ignition[1206]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1
May 14 17:58:20.653162 ignition[1206]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
May 14 17:58:20.995532 ignition[1206]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
May 14 17:58:20.995532 ignition[1206]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
May 14 17:58:21.010135 ignition[1206]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1
May 14 17:58:21.297377 ignition[1206]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
May 14 17:58:21.357683 ignition[1206]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
May 14 17:58:21.364996 ignition[1206]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
May 14 17:58:21.364996 ignition[1206]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
May 14 17:58:21.364996 ignition[1206]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
May 14 17:58:21.364996 ignition[1206]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
May 14 17:58:21.364996 ignition[1206]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
May 14 17:58:21.364996 ignition[1206]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
May 14 17:58:21.364996 ignition[1206]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
May 14 17:58:21.364996 ignition[1206]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
May 14 17:58:21.364996 ignition[1206]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
May 14 17:58:21.364996 ignition[1206]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
May 14 17:58:21.364996 ignition[1206]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
May 14 17:58:21.443887 ignition[1206]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
May 14 17:58:21.443887 ignition[1206]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
May 14 17:58:21.443887 ignition[1206]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-arm64.raw: attempt #1
May 14 17:58:21.766282 ignition[1206]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
May 14 17:58:21.941958 ignition[1206]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
May 14 17:58:21.950145 ignition[1206]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
May 14 17:58:21.956514 ignition[1206]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
May 14 17:58:21.965742 ignition[1206]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
May 14 17:58:21.965742 ignition[1206]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
May 14 17:58:21.965742 ignition[1206]: INFO : files: op(e): [started] setting preset to enabled for "prepare-helm.service"
May 14 17:58:21.965742 ignition[1206]: INFO : files: op(e): [finished] setting preset to enabled for "prepare-helm.service"
May 14 17:58:21.965742 ignition[1206]: INFO : files: createResultFile: createFiles: op(f): [started] writing file "/sysroot/etc/.ignition-result.json"
May 14 17:58:21.965742 ignition[1206]: INFO : files: createResultFile: createFiles: op(f): [finished] writing file "/sysroot/etc/.ignition-result.json"
May 14 17:58:21.965742 ignition[1206]: INFO : files: files passed
May 14 17:58:21.965742 ignition[1206]: INFO : Ignition finished successfully
May 14 17:58:21.973791 systemd[1]: Finished ignition-files.service - Ignition (files).
May 14 17:58:21.984223 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
May 14 17:58:22.016364 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
May 14 17:58:22.022561 systemd[1]: ignition-quench.service: Deactivated successfully.
May 14 17:58:22.048529 initrd-setup-root-after-ignition[1235]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
May 14 17:58:22.048529 initrd-setup-root-after-ignition[1235]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
May 14 17:58:22.022621 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
May 14 17:58:22.080610 initrd-setup-root-after-ignition[1239]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
May 14 17:58:22.045560 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
May 14 17:58:22.054356 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
May 14 17:58:22.065266 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
May 14 17:58:22.120242 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
May 14 17:58:22.120320 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
May 14 17:58:22.129359 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
May 14 17:58:22.138000 systemd[1]: Reached target initrd.target - Initrd Default Target.
May 14 17:58:22.146079 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
May 14 17:58:22.146590 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
May 14 17:58:22.175872 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
May 14 17:58:22.182232 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
May 14 17:58:22.209701 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
May 14 17:58:22.214445 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
May 14 17:58:22.223630 systemd[1]: Stopped target timers.target - Timer Units.
May 14 17:58:22.231783 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
May 14 17:58:22.231872 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
May 14 17:58:22.243438 systemd[1]: Stopped target initrd.target - Initrd Default Target.
May 14 17:58:22.247595 systemd[1]: Stopped target basic.target - Basic System.
May 14 17:58:22.255851 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
May 14 17:58:22.264254 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
May 14 17:58:22.272561 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
May 14 17:58:22.281877 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
May 14 17:58:22.291072 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
May 14 17:58:22.299277 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
May 14 17:58:22.308366 systemd[1]: Stopped target sysinit.target - System Initialization.
May 14 17:58:22.316276 systemd[1]: Stopped target local-fs.target - Local File Systems.
May 14 17:58:22.325086 systemd[1]: Stopped target swap.target - Swaps.
May 14 17:58:22.332136 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
May 14 17:58:22.332229 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
May 14 17:58:22.343102 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
May 14 17:58:22.347659 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 14 17:58:22.356054 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
May 14 17:58:22.359632 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 14 17:58:22.364853 systemd[1]: dracut-initqueue.service: Deactivated successfully.
May 14 17:58:22.364927 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
May 14 17:58:22.377374 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
May 14 17:58:22.377457 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
May 14 17:58:22.382652 systemd[1]: ignition-files.service: Deactivated successfully.
May 14 17:58:22.382720 systemd[1]: Stopped ignition-files.service - Ignition (files).
May 14 17:58:22.453240 ignition[1259]: INFO : Ignition 2.21.0
May 14 17:58:22.453240 ignition[1259]: INFO : Stage: umount
May 14 17:58:22.453240 ignition[1259]: INFO : no configs at "/usr/lib/ignition/base.d"
May 14 17:58:22.453240 ignition[1259]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
May 14 17:58:22.453240 ignition[1259]: INFO : umount: umount passed
May 14 17:58:22.453240 ignition[1259]: INFO : Ignition finished successfully
May 14 17:58:22.390298 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
May 14 17:58:22.390360 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
May 14 17:58:22.401179 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
May 14 17:58:22.413935 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
May 14 17:58:22.414059 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
May 14 17:58:22.424056 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
May 14 17:58:22.435140 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
May 14 17:58:22.435272 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
May 14 17:58:22.448673 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
May 14 17:58:22.448946 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
May 14 17:58:22.466152 systemd[1]: sysroot-boot.mount: Deactivated successfully.
May 14 17:58:22.467011 systemd[1]: ignition-mount.service: Deactivated successfully.
May 14 17:58:22.467094 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
May 14 17:58:22.472488 systemd[1]: initrd-cleanup.service: Deactivated successfully.
May 14 17:58:22.472608 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
May 14 17:58:22.480853 systemd[1]: ignition-disks.service: Deactivated successfully.
May 14 17:58:22.480910 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
May 14 17:58:22.487910 systemd[1]: ignition-kargs.service: Deactivated successfully.
May 14 17:58:22.487975 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
May 14 17:58:22.495451 systemd[1]: ignition-fetch.service: Deactivated successfully.
May 14 17:58:22.495491 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
May 14 17:58:22.499522 systemd[1]: Stopped target network.target - Network.
May 14 17:58:22.506901 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
May 14 17:58:22.506941 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
May 14 17:58:22.516257 systemd[1]: Stopped target paths.target - Path Units.
May 14 17:58:22.523552 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
May 14 17:58:22.526978 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 14 17:58:22.532006 systemd[1]: Stopped target slices.target - Slice Units.
May 14 17:58:22.540175 systemd[1]: Stopped target sockets.target - Socket Units.
May 14 17:58:22.547345 systemd[1]: iscsid.socket: Deactivated successfully.
May 14 17:58:22.547381 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
May 14 17:58:22.555112 systemd[1]: iscsiuio.socket: Deactivated successfully.
May 14 17:58:22.555135 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
May 14 17:58:22.563394 systemd[1]: ignition-setup.service: Deactivated successfully.
May 14 17:58:22.563433 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
May 14 17:58:22.571133 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
May 14 17:58:22.571159 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
May 14 17:58:22.579616 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
May 14 17:58:22.586912 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
May 14 17:58:22.604884 systemd[1]: systemd-resolved.service: Deactivated successfully.
May 14 17:58:22.785210 kernel: hv_netvsc 002248bb-e90a-0022-48bb-e90a002248bb eth0: Data path switched from VF: enP6572s1
May 14 17:58:22.605018 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
May 14 17:58:22.617677 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
May 14 17:58:22.617847 systemd[1]: systemd-networkd.service: Deactivated successfully.
May 14 17:58:22.617942 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
May 14 17:58:22.628809 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
May 14 17:58:22.629281 systemd[1]: Stopped target network-pre.target - Preparation for Network.
May 14 17:58:22.636523 systemd[1]: systemd-networkd.socket: Deactivated successfully.
May 14 17:58:22.636556 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
May 14 17:58:22.645017 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
May 14 17:58:22.656923 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
May 14 17:58:22.656977 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
May 14 17:58:22.665784 systemd[1]: systemd-sysctl.service: Deactivated successfully.
May 14 17:58:22.665818 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
May 14 17:58:22.677772 systemd[1]: systemd-modules-load.service: Deactivated successfully.
May 14 17:58:22.677810 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
May 14 17:58:22.682587 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
May 14 17:58:22.682627 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
May 14 17:58:22.696814 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 14 17:58:22.707779 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
May 14 17:58:22.707825 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
May 14 17:58:22.731509 systemd[1]: systemd-udevd.service: Deactivated successfully.
May 14 17:58:22.731631 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 14 17:58:22.739242 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
May 14 17:58:22.739297 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
May 14 17:58:22.746834 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
May 14 17:58:22.746857 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
May 14 17:58:22.754718 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
May 14 17:58:22.754752 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
May 14 17:58:22.766722 systemd[1]: dracut-cmdline.service: Deactivated successfully.
May 14 17:58:22.766756 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
May 14 17:58:22.785163 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
May 14 17:58:22.785198 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 14 17:58:22.794562 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
May 14 17:58:22.808887 systemd[1]: systemd-network-generator.service: Deactivated successfully.
May 14 17:58:22.808951 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
May 14 17:58:22.822436 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
May 14 17:58:22.822475 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 14 17:58:22.831428 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 14 17:58:22.831465 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
May 14 17:58:22.840250 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully.
May 14 17:58:22.840290 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
May 14 17:58:22.840316 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
May 14 17:58:22.840524 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
May 14 17:58:22.840591 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
May 14 17:58:22.878238 systemd[1]: network-cleanup.service: Deactivated successfully.
May 14 17:58:22.878502 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
May 14 17:58:25.479582 systemd[1]: sysroot-boot.service: Deactivated successfully.
May 14 17:58:25.479679 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
May 14 17:58:25.483784 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
May 14 17:58:25.491111 systemd[1]: initrd-setup-root.service: Deactivated successfully.
May 14 17:58:25.491158 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
May 14 17:58:25.499389 systemd[1]: Starting initrd-switch-root.service - Switch Root...
May 14 17:58:25.535759 systemd[1]: Switching root.
May 14 17:58:25.596520 systemd-journald[224]: Journal stopped
May 14 17:58:32.223197 systemd-journald[224]: Received SIGTERM from PID 1 (systemd).
May 14 17:58:32.223216 kernel: SELinux: policy capability network_peer_controls=1
May 14 17:58:32.223223 kernel: SELinux: policy capability open_perms=1
May 14 17:58:32.223230 kernel: SELinux: policy capability extended_socket_class=1
May 14 17:58:32.223235 kernel: SELinux: policy capability always_check_network=0
May 14 17:58:32.223240 kernel: SELinux: policy capability cgroup_seclabel=1
May 14 17:58:32.223246 kernel: SELinux: policy capability nnp_nosuid_transition=1
May 14 17:58:32.223251 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
May 14 17:58:32.223257 kernel: SELinux: policy capability ioctl_skip_cloexec=0
May 14 17:58:32.223262 kernel: SELinux: policy capability userspace_initial_context=0
May 14 17:58:32.223268 kernel: audit: type=1403 audit(1747245508.761:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
May 14 17:58:32.223274 systemd[1]: Successfully loaded SELinux policy in 304.242ms.
May 14 17:58:32.223280 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 7.712ms.
May 14 17:58:32.223286 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
May 14 17:58:32.223293 systemd[1]: Detected virtualization microsoft.
May 14 17:58:32.223302 systemd[1]: Detected architecture arm64.
May 14 17:58:32.223307 systemd[1]: Detected first boot.
May 14 17:58:32.223313 systemd[1]: Hostname set to .
May 14 17:58:32.223319 systemd[1]: Initializing machine ID from random generator.
May 14 17:58:32.223325 zram_generator::config[1303]: No configuration found.
May 14 17:58:32.223331 kernel: NET: Registered PF_VSOCK protocol family
May 14 17:58:32.223337 systemd[1]: Populated /etc with preset unit settings.
May 14 17:58:32.223344 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
May 14 17:58:32.223350 systemd[1]: initrd-switch-root.service: Deactivated successfully.
May 14 17:58:32.223355 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
May 14 17:58:32.223361 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
May 14 17:58:32.223367 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
May 14 17:58:32.223373 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
May 14 17:58:32.223379 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
May 14 17:58:32.223386 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
May 14 17:58:32.223392 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
May 14 17:58:32.223398 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
May 14 17:58:32.223404 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
May 14 17:58:32.223410 systemd[1]: Created slice user.slice - User and Session Slice.
May 14 17:58:32.223416 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 14 17:58:32.223422 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 14 17:58:32.223428 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
May 14 17:58:32.223435 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
May 14 17:58:32.223441 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
May 14 17:58:32.223448 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
May 14 17:58:32.223455 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0...
May 14 17:58:32.223461 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 14 17:58:32.223467 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
May 14 17:58:32.223473 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
May 14 17:58:32.223479 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
May 14 17:58:32.223486 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
May 14 17:58:32.223492 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
May 14 17:58:32.223498 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
May 14 17:58:32.223504 systemd[1]: Reached target remote-fs.target - Remote File Systems.
May 14 17:58:32.223510 systemd[1]: Reached target slices.target - Slice Units.
May 14 17:58:32.223516 systemd[1]: Reached target swap.target - Swaps.
May 14 17:58:32.223522 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
May 14 17:58:32.223528 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
May 14 17:58:32.223535 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
May 14 17:58:32.223541 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
May 14 17:58:32.223547 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
May 14 17:58:32.223553 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
May 14 17:58:32.223560 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
May 14 17:58:32.223567 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
May 14 17:58:32.223573 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
May 14 17:58:32.223579 systemd[1]: Mounting media.mount - External Media Directory...
May 14 17:58:32.223585 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
May 14 17:58:32.223591 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
May 14 17:58:32.223597 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
May 14 17:58:32.223603 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
May 14 17:58:32.223610 systemd[1]: Reached target machines.target - Containers.
May 14 17:58:32.223617 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
May 14 17:58:32.223623 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 14 17:58:32.223629 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
May 14 17:58:32.223635 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
May 14 17:58:32.223641 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 14 17:58:32.223647 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
May 14 17:58:32.223653 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 14 17:58:32.223659 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
May 14 17:58:32.223666 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 14 17:58:32.223673 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
May 14 17:58:32.223679 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
May 14 17:58:32.223685 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
May 14 17:58:32.223691 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
May 14 17:58:32.223697 systemd[1]: Stopped systemd-fsck-usr.service.
May 14 17:58:32.223704 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
May 14 17:58:32.223710 systemd[1]: Starting systemd-journald.service - Journal Service...
May 14 17:58:32.223717 kernel: fuse: init (API version 7.41)
May 14 17:58:32.223722 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
May 14 17:58:32.223729 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
May 14 17:58:32.223734 kernel: loop: module loaded
May 14 17:58:32.223740 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
May 14 17:58:32.223746 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
May 14 17:58:32.223752 kernel: ACPI: bus type drm_connector registered
May 14 17:58:32.223768 systemd-journald[1407]: Collecting audit messages is disabled.
May 14 17:58:32.223782 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
May 14 17:58:32.223789 systemd-journald[1407]: Journal started
May 14 17:58:32.223804 systemd-journald[1407]: Runtime Journal (/run/log/journal/5e8d852119c2489fb0df07224c9f2e21) is 8M, max 78.5M, 70.5M free.
May 14 17:58:31.545658 systemd[1]: Queued start job for default target multi-user.target.
May 14 17:58:31.557359 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6.
May 14 17:58:31.557630 systemd[1]: systemd-journald.service: Deactivated successfully.
May 14 17:58:31.557863 systemd[1]: systemd-journald.service: Consumed 2.339s CPU time.
May 14 17:58:32.238714 systemd[1]: verity-setup.service: Deactivated successfully.
May 14 17:58:32.238755 systemd[1]: Stopped verity-setup.service.
May 14 17:58:32.251007 systemd[1]: Started systemd-journald.service - Journal Service.
May 14 17:58:32.251569 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
May 14 17:58:32.255759 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
May 14 17:58:32.260162 systemd[1]: Mounted media.mount - External Media Directory.
May 14 17:58:32.263926 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
May 14 17:58:32.268482 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
May 14 17:58:32.272810 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
May 14 17:58:32.276613 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
May 14 17:58:32.281209 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
May 14 17:58:32.286524 systemd[1]: modprobe@configfs.service: Deactivated successfully.
May 14 17:58:32.286648 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
May 14 17:58:32.291397 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 14 17:58:32.291525 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 14 17:58:32.295875 systemd[1]: modprobe@drm.service: Deactivated successfully.
May 14 17:58:32.296143 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
May 14 17:58:32.300415 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 14 17:58:32.300530 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 14 17:58:32.305430 systemd[1]: modprobe@fuse.service: Deactivated successfully.
May 14 17:58:32.305542 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
May 14 17:58:32.310111 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 14 17:58:32.310235 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 14 17:58:32.314622 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
May 14 17:58:32.319624 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
May 14 17:58:32.325022 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
May 14 17:58:32.330251 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
May 14 17:58:32.335499 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
May 14 17:58:32.348554 systemd[1]: Reached target network-pre.target - Preparation for Network.
May 14 17:58:32.353860 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
May 14 17:58:32.363545 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
May 14 17:58:32.368291 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
May 14 17:58:32.368320 systemd[1]: Reached target local-fs.target - Local File Systems.
May 14 17:58:32.373089 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
May 14 17:58:32.378899 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
May 14 17:58:32.382849 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 14 17:58:32.383730 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
May 14 17:58:32.397425 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
May 14 17:58:32.402017 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
May 14 17:58:32.404715 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
May 14 17:58:32.408888 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
May 14 17:58:32.409562 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
May 14 17:58:32.414314 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
May 14 17:58:32.423469 systemd[1]: Starting systemd-sysusers.service - Create System Users...
May 14 17:58:32.428652 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
May 14 17:58:32.435114 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
May 14 17:58:32.442363 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
May 14 17:58:32.451900 systemd-journald[1407]: Time spent on flushing to /var/log/journal/5e8d852119c2489fb0df07224c9f2e21 is 39.514ms for 945 entries.
May 14 17:58:32.451900 systemd-journald[1407]: System Journal (/var/log/journal/5e8d852119c2489fb0df07224c9f2e21) is 11.8M, max 2.6G, 2.6G free.
May 14 17:58:32.878125 kernel: loop0: detected capacity change from 0 to 194096
May 14 17:58:32.878159 systemd-journald[1407]: Received client request to flush runtime journal.
May 14 17:58:32.878184 systemd-journald[1407]: /var/log/journal/5e8d852119c2489fb0df07224c9f2e21/system.journal: Realtime clock jumped backwards relative to last journal entry, rotating.
May 14 17:58:32.878200 systemd-journald[1407]: Rotating system journal.
May 14 17:58:32.879656 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
May 14 17:58:32.879682 kernel: loop1: detected capacity change from 0 to 28640
May 14 17:58:32.453409 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
May 14 17:58:32.462154 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
May 14 17:58:32.486200 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
May 14 17:58:32.877277 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
May 14 17:58:32.878307 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
May 14 17:58:32.885325 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
May 14 17:58:33.493865 systemd[1]: Finished systemd-sysusers.service - Create System Users.
May 14 17:58:33.499600 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
May 14 17:58:33.567014 kernel: loop2: detected capacity change from 0 to 107312
May 14 17:58:33.635471 systemd-tmpfiles[1459]: ACLs are not supported, ignoring.
May 14 17:58:33.635484 systemd-tmpfiles[1459]: ACLs are not supported, ignoring.
May 14 17:58:33.651808 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 14 17:58:34.849967 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
May 14 17:58:34.856258 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 14 17:58:34.879064 systemd-udevd[1464]: Using default interface naming scheme 'v255'.
May 14 17:58:34.933984 kernel: loop3: detected capacity change from 0 to 138376
May 14 17:58:35.060265 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 14 17:58:35.068728 systemd[1]: Starting systemd-networkd.service - Network Configuration...
May 14 17:58:35.117465 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped.
May 14 17:58:35.148868 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
May 14 17:58:35.192016 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#159 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001
May 14 17:58:35.204979 kernel: mousedev: PS/2 mouse device common for all mice
May 14 17:58:35.217341 systemd[1]: Started systemd-userdbd.service - User Database Manager.
May 14 17:58:35.281571 kernel: hv_vmbus: registering driver hyperv_fb
May 14 17:58:35.281628 kernel: hyperv_fb: Synthvid Version major 3, minor 5
May 14 17:58:35.289215 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608
May 14 17:58:35.295135 kernel: Console: switching to colour dummy device 80x25
May 14 17:58:35.302955 kernel: Console: switching to colour frame buffer device 128x48
May 14 17:58:35.312982 kernel: loop4: detected capacity change from 0 to 194096
May 14 17:58:35.322722 kernel: loop5: detected capacity change from 0 to 28640
May 14 17:58:35.330985 kernel: loop6: detected capacity change from 0 to 107312
May 14 17:58:35.338985 kernel: loop7: detected capacity change from 0 to 138376
May 14 17:58:35.343458 (sd-merge)[1537]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-azure'.
May 14 17:58:35.344099 (sd-merge)[1537]: Merged extensions into '/usr'.
May 14 17:58:35.346448 systemd[1]: Reload requested from client PID 1442 ('systemd-sysext') (unit systemd-sysext.service)...
May 14 17:58:35.346460 systemd[1]: Reloading...
May 14 17:58:35.363973 kernel: hv_vmbus: registering driver hv_balloon
May 14 17:58:35.370926 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0
May 14 17:58:35.370991 kernel: hv_balloon: Memory hot add disabled on ARM64
May 14 17:58:35.403017 zram_generator::config[1566]: No configuration found.
May 14 17:58:35.598457 systemd-networkd[1480]: lo: Link UP
May 14 17:58:35.598465 systemd-networkd[1480]: lo: Gained carrier
May 14 17:58:35.601021 systemd-networkd[1480]: Enumeration completed
May 14 17:58:35.601849 systemd-networkd[1480]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 14 17:58:35.601979 systemd-networkd[1480]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
May 14 17:58:35.607534 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 14 17:58:35.618287 kernel: MACsec IEEE 802.1AE
May 14 17:58:35.645975 kernel: mlx5_core 19ac:00:02.0 enP6572s1: Link up
May 14 17:58:35.666592 systemd-networkd[1480]: enP6572s1: Link UP
May 14 17:58:35.667056 kernel: hv_netvsc 002248bb-e90a-0022-48bb-e90a002248bb eth0: Data path switched to VF: enP6572s1
May 14 17:58:35.666649 systemd-networkd[1480]: eth0: Link UP
May 14 17:58:35.666651 systemd-networkd[1480]: eth0: Gained carrier
May 14 17:58:35.666662 systemd-networkd[1480]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 14 17:58:35.670162 systemd-networkd[1480]: enP6572s1: Gained carrier
May 14 17:58:35.677988 systemd-networkd[1480]: eth0: DHCPv4 address 10.200.20.4/24, gateway 10.200.20.1 acquired from 168.63.129.16
May 14 17:58:35.767169 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM.
May 14 17:58:35.772008 systemd[1]: Reloading finished in 425 ms.
May 14 17:58:35.789674 systemd[1]: Started systemd-networkd.service - Network Configuration.
May 14 17:58:35.794714 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
May 14 17:58:35.827816 systemd[1]: Starting ensure-sysext.service...
May 14 17:58:35.833139 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
May 14 17:58:35.840171 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
May 14 17:58:35.847008 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
May 14 17:58:35.858122 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
May 14 17:58:35.863370 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 14 17:58:35.882183 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
May 14 17:58:35.887508 systemd[1]: Reload requested from client PID 1679 ('systemctl') (unit ensure-sysext.service)...
May 14 17:58:35.887518 systemd[1]: Reloading...
May 14 17:58:35.896130 systemd-tmpfiles[1684]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
May 14 17:58:35.896152 systemd-tmpfiles[1684]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
May 14 17:58:35.896337 systemd-tmpfiles[1684]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
May 14 17:58:35.896466 systemd-tmpfiles[1684]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
May 14 17:58:35.896868 systemd-tmpfiles[1684]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
May 14 17:58:35.897061 systemd-tmpfiles[1684]: ACLs are not supported, ignoring.
May 14 17:58:35.897093 systemd-tmpfiles[1684]: ACLs are not supported, ignoring.
May 14 17:58:35.920491 systemd-tmpfiles[1684]: Detected autofs mount point /boot during canonicalization of boot.
May 14 17:58:35.920500 systemd-tmpfiles[1684]: Skipping /boot
May 14 17:58:35.932255 systemd-tmpfiles[1684]: Detected autofs mount point /boot during canonicalization of boot.
May 14 17:58:35.932266 systemd-tmpfiles[1684]: Skipping /boot
May 14 17:58:35.948983 zram_generator::config[1724]: No configuration found.
May 14 17:58:36.008694 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 14 17:58:36.081830 systemd[1]: Reloading finished in 194 ms.
May 14 17:58:36.124724 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
May 14 17:58:36.184432 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 14 17:58:36.185317 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 14 17:58:36.194564 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 14 17:58:36.201132 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 14 17:58:36.205177 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 14 17:58:36.205480 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
May 14 17:58:36.206913 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 14 17:58:36.207127 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 14 17:58:36.212070 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 14 17:58:36.212179 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 14 17:58:36.217339 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 14 17:58:36.217451 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 14 17:58:36.224684 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
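The docker.socket warning repeated above is systemd rewriting a legacy /var/run/ path at runtime. A sketch of the unit-file fix the message asks for, assuming a stock unit layout (only the ListenStream= line is taken from the log; the section header and commented line are illustrative):

```ini
# /usr/lib/systemd/system/docker.socket (line 6 per the log message)
[Socket]
# Legacy path that triggers the warning at every daemon reload:
# ListenStream=/var/run/docker.sock
ListenStream=/run/docker.sock
```

In practice one would override via a drop-in under /etc/systemd/system/docker.socket.d/ rather than editing the shipped unit on a Flatcar image, since /usr is read-only.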
May 14 17:58:36.225721 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 14 17:58:36.234220 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 14 17:58:36.241343 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 14 17:58:36.245485 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 14 17:58:36.245630 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
May 14 17:58:36.246758 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
May 14 17:58:36.252299 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 14 17:58:36.252419 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 14 17:58:36.257434 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 14 17:58:36.257555 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 14 17:58:36.262804 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 14 17:58:36.262920 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 14 17:58:36.277196 systemd[1]: Finished ensure-sysext.service.
May 14 17:58:36.282515 systemd[1]: Starting audit-rules.service - Load Audit Rules...
May 14 17:58:36.292496 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
May 14 17:58:36.298265 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 14 17:58:36.300081 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 14 17:58:36.306901 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
May 14 17:58:36.313596 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 14 17:58:36.319647 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 14 17:58:36.327510 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 14 17:58:36.327543 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
May 14 17:58:36.334638 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
May 14 17:58:36.340302 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
May 14 17:58:36.344890 systemd[1]: Reached target time-set.target - System Time Set.
May 14 17:58:36.349467 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
May 14 17:58:36.354497 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 14 17:58:36.360103 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 14 17:58:36.366794 systemd[1]: modprobe@drm.service: Deactivated successfully.
May 14 17:58:36.366908 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
May 14 17:58:36.371801 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 14 17:58:36.371927 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 14 17:58:36.377740 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 14 17:58:36.377856 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 14 17:58:36.385694 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
May 14 17:58:36.385798 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
May 14 17:58:36.389106 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
May 14 17:58:36.454129 systemd-resolved[1805]: Positive Trust Anchors:
May 14 17:58:36.454141 systemd-resolved[1805]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
May 14 17:58:36.454160 systemd-resolved[1805]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
May 14 17:58:36.458560 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
May 14 17:58:36.469911 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 14 17:58:36.480282 augenrules[1830]: No rules
May 14 17:58:36.481235 systemd[1]: audit-rules.service: Deactivated successfully.
May 14 17:58:36.481402 systemd[1]: Finished audit-rules.service - Load Audit Rules.
May 14 17:58:36.487083 systemd-resolved[1805]: Using system hostname 'ci-4334.0.0-a-9340e225f6'.
May 14 17:58:36.488504 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
May 14 17:58:36.493180 systemd[1]: Reached target network.target - Network.
May 14 17:58:36.496954 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
May 14 17:58:37.009935 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
May 14 17:58:37.015252 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
May 14 17:58:37.154065 systemd-networkd[1480]: enP6572s1: Gained IPv6LL
May 14 17:58:37.218054 systemd-networkd[1480]: eth0: Gained IPv6LL
May 14 17:58:37.219787 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
May 14 17:58:37.225159 systemd[1]: Reached target network-online.target - Network is Online.
May 14 17:58:39.857425 ldconfig[1437]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
May 14 17:58:39.878599 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
May 14 17:58:39.884903 systemd[1]: Starting systemd-update-done.service - Update is Completed...
May 14 17:58:39.896196 systemd[1]: Finished systemd-update-done.service - Update is Completed.
May 14 17:58:39.900898 systemd[1]: Reached target sysinit.target - System Initialization.
May 14 17:58:39.905427 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
May 14 17:58:39.910490 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
May 14 17:58:39.915732 systemd[1]: Started logrotate.timer - Daily rotation of log files.
May 14 17:58:39.920292 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
May 14 17:58:39.925263 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
May 14 17:58:39.930007 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
May 14 17:58:39.930035 systemd[1]: Reached target paths.target - Path Units.
May 14 17:58:39.933536 systemd[1]: Reached target timers.target - Timer Units.
May 14 17:58:39.952249 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
May 14 17:58:39.957519 systemd[1]: Starting docker.socket - Docker Socket for the API...
May 14 17:58:39.977474 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
May 14 17:58:39.982747 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
May 14 17:58:39.987714 systemd[1]: Reached target ssh-access.target - SSH Access Available.
May 14 17:58:39.999472 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
May 14 17:58:40.003728 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
May 14 17:58:40.008860 systemd[1]: Listening on docker.socket - Docker Socket for the API.
May 14 17:58:40.013100 systemd[1]: Reached target sockets.target - Socket Units.
May 14 17:58:40.016931 systemd[1]: Reached target basic.target - Basic System.
May 14 17:58:40.020785 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
May 14 17:58:40.020803 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
May 14 17:58:40.022322 systemd[1]: Starting chronyd.service - NTP client/server...
May 14 17:58:40.034049 systemd[1]: Starting containerd.service - containerd container runtime...
May 14 17:58:40.040417 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
May 14 17:58:40.057695 (chronyd)[1843]: chronyd.service: Referenced but unset environment variable evaluates to an empty string: OPTIONS
May 14 17:58:40.060874 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
May 14 17:58:40.066180 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
May 14 17:58:40.080631 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
May 14 17:58:40.085453 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
May 14 17:58:40.089619 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
May 14 17:58:40.091245 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 14 17:58:40.102022 jq[1851]: false
May 14 17:58:40.102446 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
May 14 17:58:40.108345 chronyd[1857]: chronyd version 4.6.1 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER -SIGND +ASYNCDNS +NTS +SECHASH +IPV6 -DEBUG)
May 14 17:58:40.108726 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
May 14 17:58:40.122047 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
May 14 17:58:40.126746 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
May 14 17:58:40.129161 chronyd[1857]: Timezone right/UTC failed leap second check, ignoring
May 14 17:58:40.133330 extend-filesystems[1852]: Found loop4
May 14 17:58:40.137938 extend-filesystems[1852]: Found loop5
May 14 17:58:40.137938 extend-filesystems[1852]: Found loop6
May 14 17:58:40.137938 extend-filesystems[1852]: Found loop7
May 14 17:58:40.137938 extend-filesystems[1852]: Found sda
May 14 17:58:40.137938 extend-filesystems[1852]: Found sda1
May 14 17:58:40.137938 extend-filesystems[1852]: Found sda2
May 14 17:58:40.137938 extend-filesystems[1852]: Found sda3
May 14 17:58:40.137938 extend-filesystems[1852]: Found usr
May 14 17:58:40.137938 extend-filesystems[1852]: Found sda4
May 14 17:58:40.137938 extend-filesystems[1852]: Found sda6
May 14 17:58:40.137938 extend-filesystems[1852]: Found sda7
May 14 17:58:40.137938 extend-filesystems[1852]: Found sda9
May 14 17:58:40.137938 extend-filesystems[1852]: Checking size of /dev/sda9
May 14 17:58:40.137425 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
May 14 17:58:40.135864 chronyd[1857]: Loaded seccomp filter (level 2)
May 14 17:58:40.227346 extend-filesystems[1852]: Old size kept for /dev/sda9
May 14 17:58:40.227346 extend-filesystems[1852]: Found sr0
May 14 17:58:40.160778 systemd[1]: Starting systemd-logind.service - User Login Management...
May 14 17:58:40.168328 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
May 14 17:58:40.174440 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
May 14 17:58:40.175173 systemd[1]: Starting update-engine.service - Update Engine...
May 14 17:58:40.180004 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
May 14 17:58:40.253475 update_engine[1879]: I20250514 17:58:40.238199 1879 main.cc:92] Flatcar Update Engine starting
May 14 17:58:40.191060 systemd[1]: Started chronyd.service - NTP client/server.
May 14 17:58:40.253656 jq[1880]: true
May 14 17:58:40.207987 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
May 14 17:58:40.219581 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
May 14 17:58:40.219715 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
May 14 17:58:40.219892 systemd[1]: extend-filesystems.service: Deactivated successfully.
May 14 17:58:40.220018 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
May 14 17:58:40.247308 systemd[1]: motdgen.service: Deactivated successfully.
May 14 17:58:40.247450 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
May 14 17:58:40.255517 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
May 14 17:58:40.263840 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
May 14 17:58:40.263989 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
May 14 17:58:40.277611 systemd-logind[1872]: New seat seat0.
May 14 17:58:40.284583 (ntainerd)[1906]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
May 14 17:58:40.291105 systemd-logind[1872]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
May 14 17:58:40.291786 systemd[1]: Started systemd-logind.service - User Login Management.
May 14 17:58:40.310195 jq[1903]: true
May 14 17:58:40.317197 tar[1901]: linux-arm64/helm
May 14 17:58:40.425598 bash[1953]: Updated "/home/core/.ssh/authorized_keys"
May 14 17:58:40.420363 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
May 14 17:58:40.427643 dbus-daemon[1846]: [system] SELinux support is enabled
May 14 17:58:40.430651 systemd[1]: Started dbus.service - D-Bus System Message Bus.
May 14 17:58:40.437773 update_engine[1879]: I20250514 17:58:40.437443 1879 update_check_scheduler.cc:74] Next update check in 11m26s
May 14 17:58:40.441130 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
May 14 17:58:40.441201 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
May 14 17:58:40.441217 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
May 14 17:58:40.443413 dbus-daemon[1846]: [system] Successfully activated service 'org.freedesktop.systemd1'
May 14 17:58:40.447229 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
May 14 17:58:40.447247 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
May 14 17:58:40.454619 systemd[1]: Started update-engine.service - Update Engine.
May 14 17:58:40.461892 systemd[1]: Started locksmithd.service - Cluster reboot manager.
May 14 17:58:40.510540 coreos-metadata[1845]: May 14 17:58:40.510 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1
May 14 17:58:40.516019 coreos-metadata[1845]: May 14 17:58:40.515 INFO Fetch successful
May 14 17:58:40.516145 coreos-metadata[1845]: May 14 17:58:40.516 INFO Fetching http://168.63.129.16/machine/?comp=goalstate: Attempt #1
May 14 17:58:40.520335 coreos-metadata[1845]: May 14 17:58:40.520 INFO Fetch successful
May 14 17:58:40.520639 coreos-metadata[1845]: May 14 17:58:40.520 INFO Fetching http://168.63.129.16/machine/6afa52b7-b52e-4e5f-9bf4-06745c46da2f/2fc19d67%2D766a%2D4a11%2D82a2%2D4adb1acfb7fa.%5Fci%2D4334.0.0%2Da%2D9340e225f6?comp=config&type=sharedConfig&incarnation=1: Attempt #1
May 14 17:58:40.523019 coreos-metadata[1845]: May 14 17:58:40.523 INFO Fetch successful
May 14 17:58:40.523157 coreos-metadata[1845]: May 14 17:58:40.523 INFO Fetching http://169.254.169.254/metadata/instance/compute/vmSize?api-version=2017-08-01&format=text: Attempt #1
May 14 17:58:40.532046 coreos-metadata[1845]: May 14 17:58:40.532 INFO Fetch successful
May 14 17:58:40.536755 sshd_keygen[1878]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
May 14 17:58:40.564506 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
May 14 17:58:40.570833 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
May 14 17:58:40.576983 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
May 14 17:58:40.588181 systemd[1]: Starting issuegen.service - Generate /run/issue...
May 14 17:58:40.601939 systemd[1]: Starting waagent.service - Microsoft Azure Linux Agent...
May 14 17:58:40.609249 systemd[1]: issuegen.service: Deactivated successfully.
May 14 17:58:40.610990 systemd[1]: Finished issuegen.service - Generate /run/issue.
May 14 17:58:40.623091 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
May 14 17:58:40.647766 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
May 14 17:58:40.657647 systemd[1]: Started getty@tty1.service - Getty on tty1.
May 14 17:58:40.665435 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0.
May 14 17:58:40.672094 systemd[1]: Reached target getty.target - Login Prompts.
May 14 17:58:40.677958 systemd[1]: Started waagent.service - Microsoft Azure Linux Agent.
May 14 17:58:40.693783 locksmithd[1990]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
May 14 17:58:40.807866 tar[1901]: linux-arm64/LICENSE
May 14 17:58:40.807866 tar[1901]: linux-arm64/README.md
May 14 17:58:40.819013 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
May 14 17:58:40.882591 containerd[1906]: time="2025-05-14T17:58:40Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8
May 14 17:58:40.885527 containerd[1906]: time="2025-05-14T17:58:40.885498296Z" level=info msg="starting containerd" revision=06b99ca80cdbfbc6cc8bd567021738c9af2b36ce version=v2.0.4
May 14 17:58:40.893065 containerd[1906]: time="2025-05-14T17:58:40.893040240Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="13.008µs"
May 14 17:58:40.893151 containerd[1906]: time="2025-05-14T17:58:40.893135496Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1
May 14 17:58:40.893199 containerd[1906]: time="2025-05-14T17:58:40.893188520Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1
May 14 17:58:40.893376 containerd[1906]: time="2025-05-14T17:58:40.893359400Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1
May 14 17:58:40.893440 containerd[1906]: time="2025-05-14T17:58:40.893427752Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1
May 14 17:58:40.893495 containerd[1906]: time="2025-05-14T17:58:40.893484928Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
May 14 17:58:40.893588 containerd[1906]: time="2025-05-14T17:58:40.893574672Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
May 14 17:58:40.893635 containerd[1906]: time="2025-05-14T17:58:40.893624512Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
May 14 17:58:40.893855 containerd[1906]: time="2025-05-14T17:58:40.893836784Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
May 14 17:58:40.893924 containerd[1906]: time="2025-05-14T17:58:40.893909256Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
May 14 17:58:40.893986 containerd[1906]: time="2025-05-14T17:58:40.893956944Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
May 14 17:58:40.894042 containerd[1906]: time="2025-05-14T17:58:40.894028712Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1
May 14 17:58:40.894163 containerd[1906]: time="2025-05-14T17:58:40.894148584Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1
May 14 17:58:40.894374 containerd[1906]: time="2025-05-14T17:58:40.894357280Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
May 14 17:58:40.894451 containerd[1906]: time="2025-05-14T17:58:40.894438192Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
May 14 17:58:40.894523 containerd[1906]: time="2025-05-14T17:58:40.894494736Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1
May 14 17:58:40.894583 containerd[1906]: time="2025-05-14T17:58:40.894572120Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1
May 14 17:58:40.894882 containerd[1906]: time="2025-05-14T17:58:40.894792416Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1
May 14 17:58:40.894882 containerd[1906]: time="2025-05-14T17:58:40.894863904Z" level=info msg="metadata content store policy set" policy=shared
May 14 17:58:40.908767 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 14 17:58:40.913811 (kubelet)[2039]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
May 14 17:58:40.916187 containerd[1906]: time="2025-05-14T17:58:40.916162960Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1
May 14 17:58:40.917038 containerd[1906]: time="2025-05-14T17:58:40.916698312Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1
May 14 17:58:40.917038 containerd[1906]: time="2025-05-14T17:58:40.916728992Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1
May 14 17:58:40.917038 containerd[1906]: time="2025-05-14T17:58:40.916738720Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1
May 14 17:58:40.917038 containerd[1906]: time="2025-05-14T17:58:40.916747928Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1
May 14 17:58:40.917038 containerd[1906]: time="2025-05-14T17:58:40.916756528Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1
May 14 17:58:40.917038 containerd[1906]: time="2025-05-14T17:58:40.916779400Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1
May 14 17:58:40.917038 containerd[1906]: time="2025-05-14T17:58:40.916787584Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1
May 14 17:58:40.917038 containerd[1906]: time="2025-05-14T17:58:40.916794992Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1
May 14 17:58:40.917038 containerd[1906]: time="2025-05-14T17:58:40.916800864Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1
May 14 17:58:40.917038 containerd[1906]: time="2025-05-14T17:58:40.916806208Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1
May 14 17:58:40.917038 containerd[1906]: time="2025-05-14T17:58:40.916814160Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2
May 14 17:58:40.917038 containerd[1906]: time="2025-05-14T17:58:40.916914888Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1
May 14 17:58:40.917038 containerd[1906]: time="2025-05-14T17:58:40.916928928Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1
May 14 17:58:40.917038 containerd[1906]: time="2025-05-14T17:58:40.916940752Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1
May 14 17:58:40.917234 containerd[1906]: time="2025-05-14T17:58:40.916950688Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1
May 14 17:58:40.917614 containerd[1906]: time="2025-05-14T17:58:40.916957744Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1
May 14 17:58:40.917634 containerd[1906]: time="2025-05-14T17:58:40.917619312Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1
May 14 17:58:40.917653 containerd[1906]: time="2025-05-14T17:58:40.917647144Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1
May 14 17:58:40.917670 containerd[1906]: time="2025-05-14T17:58:40.917656672Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1
May 14 17:58:40.917670 containerd[1906]: time="2025-05-14T17:58:40.917666408Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1
May 14 17:58:40.917696 containerd[1906]: time="2025-05-14T17:58:40.917673712Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1
May 14 17:58:40.917696 containerd[1906]: time="2025-05-14T17:58:40.917682200Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1
May 14 17:58:40.917948 containerd[1906]: time="2025-05-14T17:58:40.917734096Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\""
May 14 17:58:40.917948 containerd[1906]: time="2025-05-14T17:58:40.917879472Z" level=info msg="Start snapshots syncer"
May 14 17:58:40.917948 containerd[1906]: time="2025-05-14T17:58:40.917900528Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1
May 14 17:58:40.918166 containerd[1906]: time="2025-05-14T17:58:40.918140000Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}"
May 14 17:58:40.918235 containerd[1906]: time="2025-05-14T17:58:40.918181408Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1
May 14 17:58:40.918492 containerd[1906]: time="2025-05-14T17:58:40.918469576Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1
May 14 17:58:40.918598 containerd[1906]: time="2025-05-14T17:58:40.918585008Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1
May 14 17:58:40.918621 containerd[1906]: time="2025-05-14T17:58:40.918612048Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1
May 14 17:58:40.918637 containerd[1906]: time="2025-05-14T17:58:40.918624984Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1
May 14 17:58:40.918657 containerd[1906]: time="2025-05-14T17:58:40.918638088Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1
May 14 17:58:40.918657 containerd[1906]: time="2025-05-14T17:58:40.918648080Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1
May 14 17:58:40.918683 containerd[1906]: time="2025-05-14T17:58:40.918657960Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1
May 14 17:58:40.918683 containerd[1906]: time="2025-05-14T17:58:40.918667256Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1
May 14 17:58:40.918707 containerd[1906]: time="2025-05-14T17:58:40.918691656Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1
May 14 17:58:40.918707 containerd[1906]: time="2025-05-14T17:58:40.918701920Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
May 14 17:58:40.918733 containerd[1906]: time="2025-05-14T17:58:40.918711352Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
May 14 17:58:40.918753 containerd[1906]: time="2025-05-14T17:58:40.918741344Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
May 14 17:58:40.918765 containerd[1906]: time="2025-05-14T17:58:40.918756816Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
May 14 17:58:40.918778 containerd[1906]: time="2025-05-14T17:58:40.918764760Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
May 14 17:58:40.918778 containerd[1906]: time="2025-05-14T17:58:40.918773544Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
May 14 17:58:40.918800 containerd[1906]: time="2025-05-14T17:58:40.918778800Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
May 14 17:58:40.918800 containerd[1906]: time="2025-05-14T17:58:40.918787344Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
May 14 17:58:40.918800 containerd[1906]: time="2025-05-14T17:58:40.918796168Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1
May 14 17:58:40.918835 containerd[1906]: time="2025-05-14T17:58:40.918807696Z" level=info msg="runtime interface created"
May 14 17:58:40.918835 containerd[1906]: time="2025-05-14T17:58:40.918813568Z" level=info msg="created NRI interface"
May 14 17:58:40.918835 containerd[1906]: time="2025-05-14T17:58:40.918819200Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
May 14 17:58:40.918835 containerd[1906]: time="2025-05-14T17:58:40.918829352Z" level=info msg="Connect containerd service"
May 14 17:58:40.918877 containerd[1906]: time="2025-05-14T17:58:40.918853024Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
May 14 17:58:40.920799 containerd[1906]: time="2025-05-14T17:58:40.920477288Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
May 14 17:58:41.150612 kubelet[2039]: E0514 17:58:41.150573 2039 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 14 17:58:41.152626 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 14 17:58:41.152737 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 14 17:58:41.153005 systemd[1]: kubelet.service: Consumed 508ms CPU time, 236.3M memory peak.
May 14 17:58:41.690633 containerd[1906]: time="2025-05-14T17:58:41.690591120Z" level=info msg=serving...
address=/run/containerd/containerd.sock.ttrpc May 14 17:58:41.690746 containerd[1906]: time="2025-05-14T17:58:41.690647728Z" level=info msg=serving... address=/run/containerd/containerd.sock May 14 17:58:41.690746 containerd[1906]: time="2025-05-14T17:58:41.690667328Z" level=info msg="Start subscribing containerd event" May 14 17:58:41.690746 containerd[1906]: time="2025-05-14T17:58:41.690708216Z" level=info msg="Start recovering state" May 14 17:58:41.690795 containerd[1906]: time="2025-05-14T17:58:41.690769920Z" level=info msg="Start event monitor" May 14 17:58:41.690795 containerd[1906]: time="2025-05-14T17:58:41.690780120Z" level=info msg="Start cni network conf syncer for default" May 14 17:58:41.690795 containerd[1906]: time="2025-05-14T17:58:41.690785624Z" level=info msg="Start streaming server" May 14 17:58:41.690795 containerd[1906]: time="2025-05-14T17:58:41.690791504Z" level=info msg="Registered namespace \"k8s.io\" with NRI" May 14 17:58:41.690843 containerd[1906]: time="2025-05-14T17:58:41.690798856Z" level=info msg="runtime interface starting up..." May 14 17:58:41.690843 containerd[1906]: time="2025-05-14T17:58:41.690802872Z" level=info msg="starting plugins..." May 14 17:58:41.690843 containerd[1906]: time="2025-05-14T17:58:41.690812104Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" May 14 17:58:41.691509 containerd[1906]: time="2025-05-14T17:58:41.690901464Z" level=info msg="containerd successfully booted in 0.808700s" May 14 17:58:41.691074 systemd[1]: Started containerd.service - containerd container runtime. May 14 17:58:41.696147 systemd[1]: Reached target multi-user.target - Multi-User System. May 14 17:58:41.701996 systemd[1]: Startup finished in 1.649s (kernel) + 15.990s (initrd) + 13.243s (userspace) = 30.883s. 
May 14 17:58:41.912478 login[2017]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0) May 14 17:58:41.913198 login[2020]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0) May 14 17:58:41.918571 systemd[1]: Created slice user-500.slice - User Slice of UID 500. May 14 17:58:41.919373 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... May 14 17:58:41.924196 systemd-logind[1872]: New session 1 of user core. May 14 17:58:41.927255 systemd-logind[1872]: New session 2 of user core. May 14 17:58:41.935195 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. May 14 17:58:41.937561 systemd[1]: Starting user@500.service - User Manager for UID 500... May 14 17:58:41.945296 (systemd)[2065]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) May 14 17:58:41.946917 systemd-logind[1872]: New session c1 of user core. May 14 17:58:42.073283 systemd[2065]: Queued start job for default target default.target. May 14 17:58:42.077554 systemd[2065]: Created slice app.slice - User Application Slice. May 14 17:58:42.077576 systemd[2065]: Reached target paths.target - Paths. May 14 17:58:42.077600 systemd[2065]: Reached target timers.target - Timers. May 14 17:58:42.078841 systemd[2065]: Starting dbus.socket - D-Bus User Message Bus Socket... May 14 17:58:42.104540 systemd[2065]: Listening on dbus.socket - D-Bus User Message Bus Socket. May 14 17:58:42.104614 systemd[2065]: Reached target sockets.target - Sockets. May 14 17:58:42.104640 systemd[2065]: Reached target basic.target - Basic System. May 14 17:58:42.104660 systemd[2065]: Reached target default.target - Main User Target. May 14 17:58:42.104676 systemd[2065]: Startup finished in 151ms. May 14 17:58:42.104777 systemd[1]: Started user@500.service - User Manager for UID 500. May 14 17:58:42.109074 systemd[1]: Started session-1.scope - Session 1 of User core. 
May 14 17:58:42.109579 systemd[1]: Started session-2.scope - Session 2 of User core. May 14 17:58:42.314123 waagent[2023]: 2025-05-14T17:58:42.310124Z INFO Daemon Daemon Azure Linux Agent Version: 2.12.0.4 May 14 17:58:42.314376 waagent[2023]: 2025-05-14T17:58:42.314266Z INFO Daemon Daemon OS: flatcar 4334.0.0 May 14 17:58:42.317523 waagent[2023]: 2025-05-14T17:58:42.317492Z INFO Daemon Daemon Python: 3.11.12 May 14 17:58:42.320586 waagent[2023]: 2025-05-14T17:58:42.320551Z INFO Daemon Daemon Run daemon May 14 17:58:42.323449 waagent[2023]: 2025-05-14T17:58:42.323342Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='4334.0.0' May 14 17:58:42.329706 waagent[2023]: 2025-05-14T17:58:42.329677Z INFO Daemon Daemon Using waagent for provisioning May 14 17:58:42.333443 waagent[2023]: 2025-05-14T17:58:42.333410Z INFO Daemon Daemon Activate resource disk May 14 17:58:42.336763 waagent[2023]: 2025-05-14T17:58:42.336736Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb May 14 17:58:42.344699 waagent[2023]: 2025-05-14T17:58:42.344665Z INFO Daemon Daemon Found device: None May 14 17:58:42.347800 waagent[2023]: 2025-05-14T17:58:42.347773Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology May 14 17:58:42.353559 waagent[2023]: 2025-05-14T17:58:42.353536Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0 May 14 17:58:42.361558 waagent[2023]: 2025-05-14T17:58:42.361523Z INFO Daemon Daemon Clean protocol and wireserver endpoint May 14 17:58:42.365588 waagent[2023]: 2025-05-14T17:58:42.365561Z INFO Daemon Daemon Running default provisioning handler May 14 17:58:42.373386 waagent[2023]: 2025-05-14T17:58:42.373343Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 
'cloud-init-local.service']' returned non-zero exit status 4. May 14 17:58:42.383023 waagent[2023]: 2025-05-14T17:58:42.382989Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' May 14 17:58:42.390173 waagent[2023]: 2025-05-14T17:58:42.390144Z INFO Daemon Daemon cloud-init is enabled: False May 14 17:58:42.393885 waagent[2023]: 2025-05-14T17:58:42.393861Z INFO Daemon Daemon Copying ovf-env.xml May 14 17:58:42.440815 waagent[2023]: 2025-05-14T17:58:42.437843Z INFO Daemon Daemon Successfully mounted dvd May 14 17:58:42.463768 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully. May 14 17:58:42.465669 waagent[2023]: 2025-05-14T17:58:42.465626Z INFO Daemon Daemon Detect protocol endpoint May 14 17:58:42.469298 waagent[2023]: 2025-05-14T17:58:42.469264Z INFO Daemon Daemon Clean protocol and wireserver endpoint May 14 17:58:42.473256 waagent[2023]: 2025-05-14T17:58:42.473229Z INFO Daemon Daemon WireServer endpoint is not found. 
Rerun dhcp handler May 14 17:58:42.477964 waagent[2023]: 2025-05-14T17:58:42.477934Z INFO Daemon Daemon Test for route to 168.63.129.16 May 14 17:58:42.481652 waagent[2023]: 2025-05-14T17:58:42.481625Z INFO Daemon Daemon Route to 168.63.129.16 exists May 14 17:58:42.485387 waagent[2023]: 2025-05-14T17:58:42.485362Z INFO Daemon Daemon Wire server endpoint:168.63.129.16 May 14 17:58:42.528808 waagent[2023]: 2025-05-14T17:58:42.528778Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05 May 14 17:58:42.533788 waagent[2023]: 2025-05-14T17:58:42.533770Z INFO Daemon Daemon Wire protocol version:2012-11-30 May 14 17:58:42.537617 waagent[2023]: 2025-05-14T17:58:42.537597Z INFO Daemon Daemon Server preferred version:2015-04-05 May 14 17:58:42.732999 waagent[2023]: 2025-05-14T17:58:42.732898Z INFO Daemon Daemon Initializing goal state during protocol detection May 14 17:58:42.737651 waagent[2023]: 2025-05-14T17:58:42.737621Z INFO Daemon Daemon Forcing an update of the goal state. May 14 17:58:42.747525 waagent[2023]: 2025-05-14T17:58:42.747489Z INFO Daemon Fetched a new incarnation for the WireServer goal state [incarnation 1] May 14 17:58:42.777283 waagent[2023]: 2025-05-14T17:58:42.777252Z INFO Daemon Daemon HostGAPlugin version: 1.0.8.164 May 14 17:58:42.781403 waagent[2023]: 2025-05-14T17:58:42.781371Z INFO Daemon May 14 17:58:42.783459 waagent[2023]: 2025-05-14T17:58:42.783432Z INFO Daemon Fetched new vmSettings [HostGAPlugin correlation ID: f4e5b3da-4311-4a01-89af-74da057d0a14 eTag: 7390091275762272865 source: Fabric] May 14 17:58:42.791594 waagent[2023]: 2025-05-14T17:58:42.791563Z INFO Daemon The vmSettings originated via Fabric; will ignore them. 
May 14 17:58:42.796235 waagent[2023]: 2025-05-14T17:58:42.796205Z INFO Daemon May 14 17:58:42.798138 waagent[2023]: 2025-05-14T17:58:42.798112Z INFO Daemon Fetching full goal state from the WireServer [incarnation 1] May 14 17:58:42.806275 waagent[2023]: 2025-05-14T17:58:42.806247Z INFO Daemon Daemon Downloading artifacts profile blob May 14 17:58:42.870234 waagent[2023]: 2025-05-14T17:58:42.870181Z INFO Daemon Downloaded certificate {'thumbprint': 'C250E7ED1B01907A63563CE54C4D3E89A083C807', 'hasPrivateKey': True} May 14 17:58:42.877346 waagent[2023]: 2025-05-14T17:58:42.877310Z INFO Daemon Downloaded certificate {'thumbprint': '4D429CA5496AD10D8E8BC8FE2135BC5F179684D7', 'hasPrivateKey': False} May 14 17:58:42.884585 waagent[2023]: 2025-05-14T17:58:42.884551Z INFO Daemon Fetch goal state completed May 14 17:58:42.893284 waagent[2023]: 2025-05-14T17:58:42.893254Z INFO Daemon Daemon Starting provisioning May 14 17:58:42.897089 waagent[2023]: 2025-05-14T17:58:42.897057Z INFO Daemon Daemon Handle ovf-env.xml. May 14 17:58:42.900573 waagent[2023]: 2025-05-14T17:58:42.900550Z INFO Daemon Daemon Set hostname [ci-4334.0.0-a-9340e225f6] May 14 17:58:42.921865 waagent[2023]: 2025-05-14T17:58:42.921827Z INFO Daemon Daemon Publish hostname [ci-4334.0.0-a-9340e225f6] May 14 17:58:42.926428 waagent[2023]: 2025-05-14T17:58:42.926395Z INFO Daemon Daemon Examine /proc/net/route for primary interface May 14 17:58:42.930941 waagent[2023]: 2025-05-14T17:58:42.930911Z INFO Daemon Daemon Primary interface is [eth0] May 14 17:58:42.939838 systemd-networkd[1480]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 14 17:58:42.939843 systemd-networkd[1480]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
May 14 17:58:42.939881 systemd-networkd[1480]: eth0: DHCP lease lost May 14 17:58:42.940980 waagent[2023]: 2025-05-14T17:58:42.940615Z INFO Daemon Daemon Create user account if not exists May 14 17:58:42.944527 waagent[2023]: 2025-05-14T17:58:42.944492Z INFO Daemon Daemon User core already exists, skip useradd May 14 17:58:42.948540 waagent[2023]: 2025-05-14T17:58:42.948513Z INFO Daemon Daemon Configure sudoer May 14 17:58:42.963001 waagent[2023]: 2025-05-14T17:58:42.962902Z INFO Daemon Daemon Configure sshd May 14 17:58:42.970124 waagent[2023]: 2025-05-14T17:58:42.970086Z INFO Daemon Daemon Added a configuration snippet disabling SSH password-based authentication methods. It also configures SSH client probing to keep connections alive. May 14 17:58:42.970997 systemd-networkd[1480]: eth0: DHCPv4 address 10.200.20.4/24, gateway 10.200.20.1 acquired from 168.63.129.16 May 14 17:58:42.979687 waagent[2023]: 2025-05-14T17:58:42.979646Z INFO Daemon Daemon Deploy ssh public key. May 14 17:58:44.049829 waagent[2023]: 2025-05-14T17:58:44.049710Z INFO Daemon Daemon Provisioning complete May 14 17:58:44.062947 waagent[2023]: 2025-05-14T17:58:44.062908Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping May 14 17:58:44.067778 waagent[2023]: 2025-05-14T17:58:44.067746Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions. 
May 14 17:58:44.074762 waagent[2023]: 2025-05-14T17:58:44.074737Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.12.0.4 is the most current agent May 14 17:58:44.169357 waagent[2121]: 2025-05-14T17:58:44.168991Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.12.0.4) May 14 17:58:44.169357 waagent[2121]: 2025-05-14T17:58:44.169087Z INFO ExtHandler ExtHandler OS: flatcar 4334.0.0 May 14 17:58:44.169357 waagent[2121]: 2025-05-14T17:58:44.169123Z INFO ExtHandler ExtHandler Python: 3.11.12 May 14 17:58:44.169357 waagent[2121]: 2025-05-14T17:58:44.169156Z INFO ExtHandler ExtHandler CPU Arch: aarch64 May 14 17:58:44.205084 waagent[2121]: 2025-05-14T17:58:44.205051Z INFO ExtHandler ExtHandler Distro: flatcar-4334.0.0; OSUtil: FlatcarUtil; AgentService: waagent; Python: 3.11.12; Arch: aarch64; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.22.0; May 14 17:58:44.205273 waagent[2121]: 2025-05-14T17:58:44.205248Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file May 14 17:58:44.205378 waagent[2121]: 2025-05-14T17:58:44.205357Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 May 14 17:58:44.210349 waagent[2121]: 2025-05-14T17:58:44.210310Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1] May 14 17:58:44.217988 waagent[2121]: 2025-05-14T17:58:44.217903Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.164 May 14 17:58:44.218294 waagent[2121]: 2025-05-14T17:58:44.218260Z INFO ExtHandler May 14 17:58:44.218342 waagent[2121]: 2025-05-14T17:58:44.218323Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: e764d74c-1409-4dde-8640-b6db38905de2 eTag: 7390091275762272865 source: Fabric] May 14 17:58:44.218556 waagent[2121]: 2025-05-14T17:58:44.218532Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them. 
May 14 17:58:44.218923 waagent[2121]: 2025-05-14T17:58:44.218896Z INFO ExtHandler May 14 17:58:44.218958 waagent[2121]: 2025-05-14T17:58:44.218943Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1] May 14 17:58:44.221743 waagent[2121]: 2025-05-14T17:58:44.221719Z INFO ExtHandler ExtHandler Downloading artifacts profile blob May 14 17:58:44.279530 waagent[2121]: 2025-05-14T17:58:44.279474Z INFO ExtHandler Downloaded certificate {'thumbprint': 'C250E7ED1B01907A63563CE54C4D3E89A083C807', 'hasPrivateKey': True} May 14 17:58:44.279790 waagent[2121]: 2025-05-14T17:58:44.279758Z INFO ExtHandler Downloaded certificate {'thumbprint': '4D429CA5496AD10D8E8BC8FE2135BC5F179684D7', 'hasPrivateKey': False} May 14 17:58:44.280098 waagent[2121]: 2025-05-14T17:58:44.280071Z INFO ExtHandler Fetch goal state completed May 14 17:58:44.292232 waagent[2121]: 2025-05-14T17:58:44.292189Z INFO ExtHandler ExtHandler OpenSSL version: OpenSSL 3.3.3 11 Feb 2025 (Library: OpenSSL 3.3.3 11 Feb 2025) May 14 17:58:44.295257 waagent[2121]: 2025-05-14T17:58:44.295213Z INFO ExtHandler ExtHandler WALinuxAgent-2.12.0.4 running as process 2121 May 14 17:58:44.295339 waagent[2121]: 2025-05-14T17:58:44.295316Z INFO ExtHandler ExtHandler ******** AutoUpdate.Enabled is set to False, not processing the operation ******** May 14 17:58:44.295565 waagent[2121]: 2025-05-14T17:58:44.295539Z INFO ExtHandler ExtHandler ******** AutoUpdate.UpdateToLatestVersion is set to False, not processing the operation ******** May 14 17:58:44.296542 waagent[2121]: 2025-05-14T17:58:44.296509Z INFO ExtHandler ExtHandler [CGI] Cgroup monitoring is not supported on ['flatcar', '4334.0.0', '', 'Flatcar Container Linux by Kinvolk'] May 14 17:58:44.296845 waagent[2121]: 2025-05-14T17:58:44.296817Z INFO ExtHandler ExtHandler [CGI] Agent will reset the quotas in case distro: ['flatcar', '4334.0.0', '', 'Flatcar Container Linux by Kinvolk'] went from supported to unsupported May 14 17:58:44.296942 waagent[2121]: 
2025-05-14T17:58:44.296922Z INFO ExtHandler ExtHandler [CGI] Agent cgroups enabled: False May 14 17:58:44.297370 waagent[2121]: 2025-05-14T17:58:44.297341Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules May 14 17:58:44.328108 waagent[2121]: 2025-05-14T17:58:44.328051Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service May 14 17:58:44.328196 waagent[2121]: 2025-05-14T17:58:44.328171Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup May 14 17:58:44.332228 waagent[2121]: 2025-05-14T17:58:44.332200Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now May 14 17:58:44.336742 systemd[1]: Reload requested from client PID 2138 ('systemctl') (unit waagent.service)... May 14 17:58:44.336924 systemd[1]: Reloading... May 14 17:58:44.402991 zram_generator::config[2176]: No configuration found. May 14 17:58:44.465421 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 14 17:58:44.543403 systemd[1]: Reloading finished in 206 ms. May 14 17:58:44.553992 waagent[2121]: 2025-05-14T17:58:44.553479Z INFO ExtHandler ExtHandler Successfully added and enabled the waagent-network-setup.service May 14 17:58:44.553992 waagent[2121]: 2025-05-14T17:58:44.553590Z INFO ExtHandler ExtHandler Persistent firewall rules setup successfully May 14 17:58:44.837995 waagent[2121]: 2025-05-14T17:58:44.837895Z INFO ExtHandler ExtHandler DROP rule is not available which implies no firewall rules are set yet. Environment thread will set it up. May 14 17:58:44.838212 waagent[2121]: 2025-05-14T17:58:44.838182Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: 1. 
configuration enabled [True], 2. cgroups v1 enabled [False] OR cgroups v2 is in use and v2 resource limiting configuration enabled [False], 3. python supported: [True] May 14 17:58:44.838774 waagent[2121]: 2025-05-14T17:58:44.838736Z INFO ExtHandler ExtHandler Starting env monitor service. May 14 17:58:44.839060 waagent[2121]: 2025-05-14T17:58:44.839018Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. May 14 17:58:44.839401 waagent[2121]: 2025-05-14T17:58:44.839362Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread May 14 17:58:44.839563 waagent[2121]: 2025-05-14T17:58:44.839487Z INFO ExtHandler ExtHandler Start Extension Telemetry service. May 14 17:58:44.839662 waagent[2121]: 2025-05-14T17:58:44.839541Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file May 14 17:58:44.840398 waagent[2121]: 2025-05-14T17:58:44.839810Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file May 14 17:58:44.840398 waagent[2121]: 2025-05-14T17:58:44.839867Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 May 14 17:58:44.840398 waagent[2121]: 2025-05-14T17:58:44.839983Z INFO EnvHandler ExtHandler Configure routes May 14 17:58:44.840398 waagent[2121]: 2025-05-14T17:58:44.840032Z INFO EnvHandler ExtHandler Gateway:None May 14 17:58:44.840398 waagent[2121]: 2025-05-14T17:58:44.840057Z INFO EnvHandler ExtHandler Routes:None May 14 17:58:44.840624 waagent[2121]: 2025-05-14T17:58:44.840592Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True May 14 17:58:44.840755 waagent[2121]: 2025-05-14T17:58:44.840734Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread May 14 17:58:44.842094 waagent[2121]: 2025-05-14T17:58:44.842044Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status. 
May 14 17:58:44.842764 waagent[2121]: 2025-05-14T17:58:44.842740Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 May 14 17:58:44.843077 waagent[2121]: 2025-05-14T17:58:44.843042Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. May 14 17:58:44.844156 waagent[2121]: 2025-05-14T17:58:44.844130Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: May 14 17:58:44.844156 waagent[2121]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT May 14 17:58:44.844156 waagent[2121]: eth0 00000000 0114C80A 0003 0 0 1024 00000000 0 0 0 May 14 17:58:44.844156 waagent[2121]: eth0 0014C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 May 14 17:58:44.844156 waagent[2121]: eth0 0114C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 May 14 17:58:44.844156 waagent[2121]: eth0 10813FA8 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 May 14 17:58:44.844156 waagent[2121]: eth0 FEA9FEA9 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 May 14 17:58:44.849680 waagent[2121]: 2025-05-14T17:58:44.849650Z INFO ExtHandler ExtHandler May 14 17:58:44.849807 waagent[2121]: 2025-05-14T17:58:44.849782Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: 1faf523b-cd22-4655-b351-48072a6a8a7c correlation f287c928-5f62-4d20-8941-f818129733a7 created: 2025-05-14T17:57:29.013672Z] May 14 17:58:44.851176 waagent[2121]: 2025-05-14T17:58:44.850266Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything. 
May 14 17:58:44.851176 waagent[2121]: 2025-05-14T17:58:44.850677Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 1 ms] May 14 17:58:44.881398 waagent[2121]: 2025-05-14T17:58:44.881365Z WARNING ExtHandler ExtHandler Failed to get firewall packets: 'iptables -w -t security -L OUTPUT --zero OUTPUT -nxv' failed: 2 (iptables v1.8.11 (nf_tables): Illegal option `--numeric' with this command May 14 17:58:44.881398 waagent[2121]: Try `iptables -h' or 'iptables --help' for more information.) May 14 17:58:44.881812 waagent[2121]: 2025-05-14T17:58:44.881780Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.12.0.4 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: 5A1158CF-5795-46AD-B9EE-281623F4F3AF;DroppedPackets: -1;UpdateGSErrors: 0;AutoUpdate: 0;UpdateMode: SelfUpdate;] May 14 17:58:44.912483 waagent[2121]: 2025-05-14T17:58:44.912450Z INFO MonitorHandler ExtHandler Network interfaces: May 14 17:58:44.912483 waagent[2121]: Executing ['ip', '-a', '-o', 'link']: May 14 17:58:44.912483 waagent[2121]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 May 14 17:58:44.912483 waagent[2121]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 00:22:48:bb:e9:0a brd ff:ff:ff:ff:ff:ff May 14 17:58:44.912483 waagent[2121]: 3: enP6572s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 00:22:48:bb:e9:0a brd ff:ff:ff:ff:ff:ff\ altname enP6572p0s2 May 14 17:58:44.912483 waagent[2121]: Executing ['ip', '-4', '-a', '-o', 'address']: May 14 17:58:44.912483 waagent[2121]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever May 14 17:58:44.912483 waagent[2121]: 2: eth0 inet 10.200.20.4/24 metric 1024 brd 10.200.20.255 scope global eth0\ valid_lft forever preferred_lft forever May 14 17:58:44.912483 waagent[2121]: Executing ['ip', '-6', '-a', 
'-o', 'address']: May 14 17:58:44.912483 waagent[2121]: 1: lo inet6 ::1/128 scope host noprefixroute \ valid_lft forever preferred_lft forever May 14 17:58:44.912483 waagent[2121]: 2: eth0 inet6 fe80::222:48ff:febb:e90a/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever May 14 17:58:44.912483 waagent[2121]: 3: enP6572s1 inet6 fe80::222:48ff:febb:e90a/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever May 14 17:58:44.968301 waagent[2121]: 2025-05-14T17:58:44.968268Z INFO EnvHandler ExtHandler Created firewall rules for the Azure Fabric: May 14 17:58:44.968301 waagent[2121]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) May 14 17:58:44.968301 waagent[2121]: pkts bytes target prot opt in out source destination May 14 17:58:44.968301 waagent[2121]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) May 14 17:58:44.968301 waagent[2121]: pkts bytes target prot opt in out source destination May 14 17:58:44.968301 waagent[2121]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) May 14 17:58:44.968301 waagent[2121]: pkts bytes target prot opt in out source destination May 14 17:58:44.968301 waagent[2121]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 May 14 17:58:44.968301 waagent[2121]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 May 14 17:58:44.968301 waagent[2121]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW May 14 17:58:44.970527 waagent[2121]: 2025-05-14T17:58:44.970496Z INFO EnvHandler ExtHandler Current Firewall rules: May 14 17:58:44.970527 waagent[2121]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) May 14 17:58:44.970527 waagent[2121]: pkts bytes target prot opt in out source destination May 14 17:58:44.970527 waagent[2121]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) May 14 17:58:44.970527 waagent[2121]: pkts bytes target prot opt in out source destination May 14 17:58:44.970527 waagent[2121]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) May 14 17:58:44.970527 
waagent[2121]: pkts bytes target prot opt in out source destination May 14 17:58:44.970527 waagent[2121]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 May 14 17:58:44.970527 waagent[2121]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 May 14 17:58:44.970527 waagent[2121]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW May 14 17:58:44.970883 waagent[2121]: 2025-05-14T17:58:44.970860Z INFO EnvHandler ExtHandler Set block dev timeout: sda with timeout: 300 May 14 17:58:51.274510 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. May 14 17:58:51.275815 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 14 17:58:51.354062 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 14 17:58:51.366276 (kubelet)[2270]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 14 17:58:51.393686 kubelet[2270]: E0514 17:58:51.393654 2270 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 14 17:58:51.396304 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 14 17:58:51.396410 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 14 17:58:51.396782 systemd[1]: kubelet.service: Consumed 98ms CPU time, 94.9M memory peak. May 14 17:59:01.525261 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. May 14 17:59:01.526552 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 14 17:59:01.610465 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
May 14 17:59:01.616270 (kubelet)[2285]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 14 17:59:01.643877 kubelet[2285]: E0514 17:59:01.643841 2285 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 14 17:59:01.645532 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 14 17:59:01.645625 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 14 17:59:01.648015 systemd[1]: kubelet.service: Consumed 93ms CPU time, 93.2M memory peak. May 14 17:59:03.932886 chronyd[1857]: Selected source PHC0 May 14 17:59:11.774710 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. May 14 17:59:11.775992 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 14 17:59:11.863809 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 14 17:59:11.866383 (kubelet)[2300]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 14 17:59:11.892195 kubelet[2300]: E0514 17:59:11.892156 2300 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 14 17:59:11.893427 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 14 17:59:11.893514 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
May 14 17:59:11.893874 systemd[1]: kubelet.service: Consumed 94ms CPU time, 94.7M memory peak.
May 14 17:59:16.006881 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
May 14 17:59:16.008268 systemd[1]: Started sshd@0-10.200.20.4:22-10.200.16.10:50080.service - OpenSSH per-connection server daemon (10.200.16.10:50080).
May 14 17:59:16.547076 sshd[2312]: Accepted publickey for core from 10.200.16.10 port 50080 ssh2: RSA SHA256:GfAM5aEyZtKI1wsLTx07KX72HiGyI6L7Lx/Fls8o8zc
May 14 17:59:16.547987 sshd-session[2312]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 17:59:16.551488 systemd-logind[1872]: New session 3 of user core.
May 14 17:59:16.558057 systemd[1]: Started session-3.scope - Session 3 of User core.
May 14 17:59:16.930144 systemd[1]: Started sshd@1-10.200.20.4:22-10.200.16.10:50084.service - OpenSSH per-connection server daemon (10.200.16.10:50084).
May 14 17:59:17.378471 sshd[2317]: Accepted publickey for core from 10.200.16.10 port 50084 ssh2: RSA SHA256:GfAM5aEyZtKI1wsLTx07KX72HiGyI6L7Lx/Fls8o8zc
May 14 17:59:17.379431 sshd-session[2317]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 17:59:17.382853 systemd-logind[1872]: New session 4 of user core.
May 14 17:59:17.394172 systemd[1]: Started session-4.scope - Session 4 of User core.
May 14 17:59:17.701901 sshd[2319]: Connection closed by 10.200.16.10 port 50084
May 14 17:59:17.702424 sshd-session[2317]: pam_unix(sshd:session): session closed for user core
May 14 17:59:17.705145 systemd[1]: sshd@1-10.200.20.4:22-10.200.16.10:50084.service: Deactivated successfully.
May 14 17:59:17.706379 systemd[1]: session-4.scope: Deactivated successfully.
May 14 17:59:17.706886 systemd-logind[1872]: Session 4 logged out. Waiting for processes to exit.
May 14 17:59:17.707896 systemd-logind[1872]: Removed session 4.
May 14 17:59:17.777019 systemd[1]: Started sshd@2-10.200.20.4:22-10.200.16.10:50092.service - OpenSSH per-connection server daemon (10.200.16.10:50092).
May 14 17:59:18.199422 sshd[2325]: Accepted publickey for core from 10.200.16.10 port 50092 ssh2: RSA SHA256:GfAM5aEyZtKI1wsLTx07KX72HiGyI6L7Lx/Fls8o8zc
May 14 17:59:18.200333 sshd-session[2325]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 17:59:18.204017 systemd-logind[1872]: New session 5 of user core.
May 14 17:59:18.210087 systemd[1]: Started session-5.scope - Session 5 of User core.
May 14 17:59:18.501395 sshd[2327]: Connection closed by 10.200.16.10 port 50092
May 14 17:59:18.500696 sshd-session[2325]: pam_unix(sshd:session): session closed for user core
May 14 17:59:18.502913 systemd-logind[1872]: Session 5 logged out. Waiting for processes to exit.
May 14 17:59:18.503189 systemd[1]: sshd@2-10.200.20.4:22-10.200.16.10:50092.service: Deactivated successfully.
May 14 17:59:18.504511 systemd[1]: session-5.scope: Deactivated successfully.
May 14 17:59:18.506346 systemd-logind[1872]: Removed session 5.
May 14 17:59:18.574954 systemd[1]: Started sshd@3-10.200.20.4:22-10.200.16.10:43442.service - OpenSSH per-connection server daemon (10.200.16.10:43442).
May 14 17:59:18.991621 sshd[2333]: Accepted publickey for core from 10.200.16.10 port 43442 ssh2: RSA SHA256:GfAM5aEyZtKI1wsLTx07KX72HiGyI6L7Lx/Fls8o8zc
May 14 17:59:18.992507 sshd-session[2333]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 17:59:18.995694 systemd-logind[1872]: New session 6 of user core.
May 14 17:59:19.006224 systemd[1]: Started session-6.scope - Session 6 of User core.
May 14 17:59:19.291694 sshd[2335]: Connection closed by 10.200.16.10 port 43442
May 14 17:59:19.292147 sshd-session[2333]: pam_unix(sshd:session): session closed for user core
May 14 17:59:19.294784 systemd[1]: sshd@3-10.200.20.4:22-10.200.16.10:43442.service: Deactivated successfully.
May 14 17:59:19.295990 systemd[1]: session-6.scope: Deactivated successfully.
May 14 17:59:19.296482 systemd-logind[1872]: Session 6 logged out. Waiting for processes to exit.
May 14 17:59:19.297416 systemd-logind[1872]: Removed session 6.
May 14 17:59:19.369934 systemd[1]: Started sshd@4-10.200.20.4:22-10.200.16.10:43452.service - OpenSSH per-connection server daemon (10.200.16.10:43452).
May 14 17:59:19.814231 sshd[2341]: Accepted publickey for core from 10.200.16.10 port 43452 ssh2: RSA SHA256:GfAM5aEyZtKI1wsLTx07KX72HiGyI6L7Lx/Fls8o8zc
May 14 17:59:19.815118 sshd-session[2341]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 17:59:19.818364 systemd-logind[1872]: New session 7 of user core.
May 14 17:59:19.826076 systemd[1]: Started session-7.scope - Session 7 of User core.
May 14 17:59:20.185056 sudo[2344]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
May 14 17:59:20.185265 sudo[2344]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
May 14 17:59:20.221712 sudo[2344]: pam_unix(sudo:session): session closed for user root
May 14 17:59:20.310026 sshd[2343]: Connection closed by 10.200.16.10 port 43452
May 14 17:59:20.310441 sshd-session[2341]: pam_unix(sshd:session): session closed for user core
May 14 17:59:20.313173 systemd-logind[1872]: Session 7 logged out. Waiting for processes to exit.
May 14 17:59:20.313579 systemd[1]: sshd@4-10.200.20.4:22-10.200.16.10:43452.service: Deactivated successfully.
May 14 17:59:20.314771 systemd[1]: session-7.scope: Deactivated successfully.
May 14 17:59:20.316341 systemd-logind[1872]: Removed session 7.
May 14 17:59:20.390038 systemd[1]: Started sshd@5-10.200.20.4:22-10.200.16.10:43466.service - OpenSSH per-connection server daemon (10.200.16.10:43466).
May 14 17:59:20.837859 sshd[2350]: Accepted publickey for core from 10.200.16.10 port 43466 ssh2: RSA SHA256:GfAM5aEyZtKI1wsLTx07KX72HiGyI6L7Lx/Fls8o8zc
May 14 17:59:20.838772 sshd-session[2350]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 17:59:20.841906 systemd-logind[1872]: New session 8 of user core.
May 14 17:59:20.849200 systemd[1]: Started session-8.scope - Session 8 of User core.
May 14 17:59:21.089683 sudo[2354]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
May 14 17:59:21.090266 sudo[2354]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
May 14 17:59:21.109243 sudo[2354]: pam_unix(sudo:session): session closed for user root
May 14 17:59:21.112494 sudo[2353]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
May 14 17:59:21.112685 sudo[2353]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
May 14 17:59:21.119375 systemd[1]: Starting audit-rules.service - Load Audit Rules...
May 14 17:59:21.143306 augenrules[2376]: No rules
May 14 17:59:21.144336 systemd[1]: audit-rules.service: Deactivated successfully.
May 14 17:59:21.144573 systemd[1]: Finished audit-rules.service - Load Audit Rules.
May 14 17:59:21.147037 sudo[2353]: pam_unix(sudo:session): session closed for user root
May 14 17:59:21.218675 sshd[2352]: Connection closed by 10.200.16.10 port 43466
May 14 17:59:21.219001 sshd-session[2350]: pam_unix(sshd:session): session closed for user core
May 14 17:59:21.221360 systemd[1]: sshd@5-10.200.20.4:22-10.200.16.10:43466.service: Deactivated successfully.
May 14 17:59:21.222455 systemd[1]: session-8.scope: Deactivated successfully.
May 14 17:59:21.223933 systemd-logind[1872]: Session 8 logged out. Waiting for processes to exit.
May 14 17:59:21.224805 systemd-logind[1872]: Removed session 8.
May 14 17:59:21.293696 systemd[1]: Started sshd@6-10.200.20.4:22-10.200.16.10:43474.service - OpenSSH per-connection server daemon (10.200.16.10:43474).
May 14 17:59:21.705371 sshd[2385]: Accepted publickey for core from 10.200.16.10 port 43474 ssh2: RSA SHA256:GfAM5aEyZtKI1wsLTx07KX72HiGyI6L7Lx/Fls8o8zc
May 14 17:59:21.706239 sshd-session[2385]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 17:59:21.709330 systemd-logind[1872]: New session 9 of user core.
May 14 17:59:21.717067 systemd[1]: Started session-9.scope - Session 9 of User core.
May 14 17:59:21.941255 sudo[2388]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
May 14 17:59:21.941448 sudo[2388]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
May 14 17:59:21.942090 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4.
May 14 17:59:21.945139 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 14 17:59:23.471504 kernel: hv_balloon: Max. dynamic memory size: 4096 MB
May 14 17:59:25.131764 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 14 17:59:25.134039 (kubelet)[2401]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
May 14 17:59:25.160715 kubelet[2401]: E0514 17:59:25.160676 2401 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 14 17:59:25.162524 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 14 17:59:25.162699 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 14 17:59:25.163146 systemd[1]: kubelet.service: Consumed 94ms CPU time, 92.9M memory peak.
May 14 17:59:25.823431 update_engine[1879]: I20250514 17:59:25.822940 1879 update_attempter.cc:509] Updating boot flags...
May 14 17:59:27.564449 systemd[1]: Starting docker.service - Docker Application Container Engine...
May 14 17:59:27.572177 (dockerd)[2485]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
May 14 17:59:29.290990 dockerd[2485]: time="2025-05-14T17:59:29.290027319Z" level=info msg="Starting up"
May 14 17:59:29.291661 dockerd[2485]: time="2025-05-14T17:59:29.291642639Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider"
May 14 17:59:30.738362 dockerd[2485]: time="2025-05-14T17:59:30.738312044Z" level=info msg="Loading containers: start."
May 14 17:59:30.821988 kernel: Initializing XFRM netlink socket
May 14 17:59:31.479381 systemd-networkd[1480]: docker0: Link UP
May 14 17:59:31.533682 dockerd[2485]: time="2025-05-14T17:59:31.533311187Z" level=info msg="Loading containers: done."
May 14 17:59:31.542035 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2464386414-merged.mount: Deactivated successfully.
May 14 17:59:32.336326 dockerd[2485]: time="2025-05-14T17:59:32.336210410Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
May 14 17:59:32.336326 dockerd[2485]: time="2025-05-14T17:59:32.336329749Z" level=info msg="Docker daemon" commit=bbd0a17ccc67e48d4a69393287b7fcc4f0578683 containerd-snapshotter=false storage-driver=overlay2 version=28.0.1
May 14 17:59:32.336690 dockerd[2485]: time="2025-05-14T17:59:32.336445025Z" level=info msg="Initializing buildkit"
May 14 17:59:32.523981 dockerd[2485]: time="2025-05-14T17:59:32.523901559Z" level=info msg="Completed buildkit initialization"
May 14 17:59:32.528535 dockerd[2485]: time="2025-05-14T17:59:32.528506774Z" level=info msg="Daemon has completed initialization"
May 14 17:59:32.528732 systemd[1]: Started docker.service - Docker Application Container Engine.
May 14 17:59:32.529189 dockerd[2485]: time="2025-05-14T17:59:32.528621706Z" level=info msg="API listen on /run/docker.sock"
May 14 17:59:34.881993 containerd[1906]: time="2025-05-14T17:59:34.881931333Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.12\""
May 14 17:59:35.274446 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5.
May 14 17:59:35.276176 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 14 17:59:35.361550 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 14 17:59:35.363711 (kubelet)[2698]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
May 14 17:59:35.390673 kubelet[2698]: E0514 17:59:35.390636 2698 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 14 17:59:35.392773 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 14 17:59:35.393001 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 14 17:59:35.394060 systemd[1]: kubelet.service: Consumed 94ms CPU time, 93M memory peak.
May 14 17:59:41.345846 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4132199231.mount: Deactivated successfully.
May 14 17:59:45.223217 containerd[1906]: time="2025-05-14T17:59:45.223161870Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.30.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 14 17:59:45.269943 containerd[1906]: time="2025-05-14T17:59:45.269895249Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.30.12: active requests=0, bytes read=29794150"
May 14 17:59:45.275559 containerd[1906]: time="2025-05-14T17:59:45.275519393Z" level=info msg="ImageCreate event name:\"sha256:afbe230ec4abc2c9e87f7fbe7814bde21dbe30f03252c8861c4ca9510cb43ec6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 14 17:59:45.317666 containerd[1906]: time="2025-05-14T17:59:45.317617833Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:4878682f7a044274d42399a6316ef452c5411aafd4ad99cc57de7235ca490e4e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 14 17:59:45.318375 containerd[1906]: time="2025-05-14T17:59:45.318239602Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.30.12\" with image id \"sha256:afbe230ec4abc2c9e87f7fbe7814bde21dbe30f03252c8861c4ca9510cb43ec6\", repo tag \"registry.k8s.io/kube-apiserver:v1.30.12\", repo digest \"registry.k8s.io/kube-apiserver@sha256:4878682f7a044274d42399a6316ef452c5411aafd4ad99cc57de7235ca490e4e\", size \"29790950\" in 10.436209314s"
May 14 17:59:45.318375 containerd[1906]: time="2025-05-14T17:59:45.318266011Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.12\" returns image reference \"sha256:afbe230ec4abc2c9e87f7fbe7814bde21dbe30f03252c8861c4ca9510cb43ec6\""
May 14 17:59:45.329760 containerd[1906]: time="2025-05-14T17:59:45.329726410Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.12\""
May 14 17:59:45.524467 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6.
May 14 17:59:45.526910 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 14 17:59:45.616736 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 14 17:59:45.619238 (kubelet)[2775]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
May 14 17:59:45.681931 kubelet[2775]: E0514 17:59:45.681902 2775 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 14 17:59:45.683724 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 14 17:59:45.683831 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 14 17:59:45.684116 systemd[1]: kubelet.service: Consumed 92ms CPU time, 94.3M memory peak.
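The containerd "Pulled image" messages carry enough to derive pull throughput. A sketch computing it for the kube-apiserver pull recorded above; the size and duration are copied verbatim from the log, the conversion to MiB/s is the only added step:

```shell
# Approximate pull throughput for kube-apiserver from the log's own numbers.
size_bytes=29790950          # from: size \"29790950\"
duration_s=10.436209314      # from: in 10.436209314s
mib_per_s=$(awk -v b="$size_bytes" -v t="$duration_s" 'BEGIN { printf "%.2f", b / t / 1048576 }')
printf '%s MiB/s\n' "$mib_per_s"   # roughly 2.7 MiB/s
```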
May 14 17:59:55.774610 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 7.
May 14 17:59:55.775884 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 14 17:59:55.856578 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 14 17:59:55.864328 (kubelet)[2790]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
May 14 17:59:55.891268 kubelet[2790]: E0514 17:59:55.891235 2790 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 14 17:59:55.893233 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 14 17:59:55.893340 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 14 17:59:55.893714 systemd[1]: kubelet.service: Consumed 94ms CPU time, 94.3M memory peak.
May 14 18:00:00.835238 containerd[1906]: time="2025-05-14T18:00:00.835188333Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.30.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 14 18:00:04.128699 containerd[1906]: time="2025-05-14T18:00:04.128583023Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.30.12: active requests=0, bytes read=26855550"
May 14 18:00:04.133977 containerd[1906]: time="2025-05-14T18:00:04.133923528Z" level=info msg="ImageCreate event name:\"sha256:3df23260c56ff58d759f8a841c67846184e97ce81a269549ca8d14b36da14c14\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 14 18:00:04.176022 containerd[1906]: time="2025-05-14T18:00:04.175976160Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:3a36711d0409d565b370a18d0c19339e93d4f1b1f2b3fd382eb31c714c463b74\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 14 18:00:04.176632 containerd[1906]: time="2025-05-14T18:00:04.176516781Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.30.12\" with image id \"sha256:3df23260c56ff58d759f8a841c67846184e97ce81a269549ca8d14b36da14c14\", repo tag \"registry.k8s.io/kube-controller-manager:v1.30.12\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:3a36711d0409d565b370a18d0c19339e93d4f1b1f2b3fd382eb31c714c463b74\", size \"28297111\" in 18.846762011s"
May 14 18:00:04.176632 containerd[1906]: time="2025-05-14T18:00:04.176544758Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.12\" returns image reference \"sha256:3df23260c56ff58d759f8a841c67846184e97ce81a269549ca8d14b36da14c14\""
May 14 18:00:04.187856 containerd[1906]: time="2025-05-14T18:00:04.187830873Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.12\""
May 14 18:00:06.024682 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 8.
May 14 18:00:06.026005 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 14 18:00:06.106732 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 14 18:00:06.109243 (kubelet)[2816]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
May 14 18:00:06.135509 kubelet[2816]: E0514 18:00:06.135474 2816 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 14 18:00:06.137224 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 14 18:00:06.137322 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 14 18:00:06.139050 systemd[1]: kubelet.service: Consumed 93ms CPU time, 94.5M memory peak.
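The restart counter ticks at a steady cadence throughout this log; subtracting successive "Scheduled restart job" timestamps gives the unit's effective restart delay, about 10 s, consistent with a `RestartSec=10` setting (an inference, since the kubelet unit file itself is not shown). A sketch using two consecutive timestamps copied from the journal above; `date -d` here assumes GNU coreutils:

```shell
# Interval between two consecutive scheduled kubelet restarts, taken from the
# journal lines for restart counters 1 and 2 (whole-second precision).
t1=$(date -u -d "2025-05-14 17:58:51" +%s)   # counter is at 1
t2=$(date -u -d "2025-05-14 17:59:01" +%s)   # counter is at 2
printf 'restart interval: %ds\n' $((t2 - t1))
```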
May 14 18:00:14.181968 containerd[1906]: time="2025-05-14T18:00:14.181870530Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.30.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 14 18:00:14.185687 containerd[1906]: time="2025-05-14T18:00:14.185652643Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.30.12: active requests=0, bytes read=16263945"
May 14 18:00:14.190646 containerd[1906]: time="2025-05-14T18:00:14.190610164Z" level=info msg="ImageCreate event name:\"sha256:fb0f5dac5fa74463b801d11598454c00462609b582d17052195012e5f682c2ba\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 14 18:00:14.237665 containerd[1906]: time="2025-05-14T18:00:14.237619043Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:521c843d01025be7d4e246ddee8cde74556eb9813c606d6db9f0f03236f6d029\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 14 18:00:14.238378 containerd[1906]: time="2025-05-14T18:00:14.238272501Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.30.12\" with image id \"sha256:fb0f5dac5fa74463b801d11598454c00462609b582d17052195012e5f682c2ba\", repo tag \"registry.k8s.io/kube-scheduler:v1.30.12\", repo digest \"registry.k8s.io/kube-scheduler@sha256:521c843d01025be7d4e246ddee8cde74556eb9813c606d6db9f0f03236f6d029\", size \"17705524\" in 10.050413044s"
May 14 18:00:14.238378 containerd[1906]: time="2025-05-14T18:00:14.238299245Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.12\" returns image reference \"sha256:fb0f5dac5fa74463b801d11598454c00462609b582d17052195012e5f682c2ba\""
May 14 18:00:14.251952 containerd[1906]: time="2025-05-14T18:00:14.251917303Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.12\""
May 14 18:00:16.274613 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 9.
May 14 18:00:16.276121 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 14 18:00:16.355764 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 14 18:00:16.358268 (kubelet)[2843]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
May 14 18:00:16.384431 kubelet[2843]: E0514 18:00:16.384395 2843 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 14 18:00:16.386144 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 14 18:00:16.386238 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 14 18:00:16.386458 systemd[1]: kubelet.service: Consumed 92ms CPU time, 94.7M memory peak.
May 14 18:00:21.140250 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1483545497.mount: Deactivated successfully.
May 14 18:00:21.628993 containerd[1906]: time="2025-05-14T18:00:21.628671366Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 14 18:00:21.631610 containerd[1906]: time="2025-05-14T18:00:21.631484558Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.12: active requests=0, bytes read=25775705"
May 14 18:00:21.638642 containerd[1906]: time="2025-05-14T18:00:21.638618662Z" level=info msg="ImageCreate event name:\"sha256:b4250a9efcae16f8d20358e204a159844e2b7e854edad08aee8791774acbdaed\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 14 18:00:21.644010 containerd[1906]: time="2025-05-14T18:00:21.643973488Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:ea8c7d5392acf6b0c11ebba78301e1a6c2dc6abcd7544102ed578e49d1c82f15\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 14 18:00:21.644355 containerd[1906]: time="2025-05-14T18:00:21.644251735Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.12\" with image id \"sha256:b4250a9efcae16f8d20358e204a159844e2b7e854edad08aee8791774acbdaed\", repo tag \"registry.k8s.io/kube-proxy:v1.30.12\", repo digest \"registry.k8s.io/kube-proxy@sha256:ea8c7d5392acf6b0c11ebba78301e1a6c2dc6abcd7544102ed578e49d1c82f15\", size \"25774724\" in 7.392298071s"
May 14 18:00:21.644355 containerd[1906]: time="2025-05-14T18:00:21.644275399Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.12\" returns image reference \"sha256:b4250a9efcae16f8d20358e204a159844e2b7e854edad08aee8791774acbdaed\""
May 14 18:00:21.655991 containerd[1906]: time="2025-05-14T18:00:21.655900514Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\""
May 14 18:00:23.883910 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1475376978.mount: Deactivated successfully.
May 14 18:00:26.524545 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 10.
May 14 18:00:26.525849 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 14 18:00:26.605299 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 14 18:00:26.607302 (kubelet)[2878]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
May 14 18:00:26.634281 kubelet[2878]: E0514 18:00:26.634241 2878 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 14 18:00:26.636113 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 14 18:00:26.636294 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 14 18:00:26.636706 systemd[1]: kubelet.service: Consumed 92ms CPU time, 94.1M memory peak.
May 14 18:00:36.774582 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 11.
May 14 18:00:36.776110 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 14 18:00:37.193765 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 14 18:00:37.199137 (kubelet)[2906]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
May 14 18:00:37.245490 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 14 18:00:37.245572 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 14 18:00:37.245748 systemd[1]: kubelet.service: Consumed 92ms CPU time, 94.6M memory peak.
May 14 18:00:40.780238 kubelet[2906]: E0514 18:00:37.244104 2906 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 14 18:00:42.394868 containerd[1906]: time="2025-05-14T18:00:42.394296649Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 14 18:00:42.403990 containerd[1906]: time="2025-05-14T18:00:42.403958343Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=16485381"
May 14 18:00:42.411640 containerd[1906]: time="2025-05-14T18:00:42.411615758Z" level=info msg="ImageCreate event name:\"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 14 18:00:42.418153 containerd[1906]: time="2025-05-14T18:00:42.418122719Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 14 18:00:42.418777 containerd[1906]: time="2025-05-14T18:00:42.418753136Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"16482581\" in 20.762828141s"
May 14 18:00:42.418777 containerd[1906]: time="2025-05-14T18:00:42.418777105Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\""
May 14 18:00:42.430090 containerd[1906]: time="2025-05-14T18:00:42.430028833Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\""
May 14 18:00:43.104432 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1687333117.mount: Deactivated successfully.
May 14 18:00:43.144651 containerd[1906]: time="2025-05-14T18:00:43.144611795Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 14 18:00:43.148336 containerd[1906]: time="2025-05-14T18:00:43.148313360Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=268821"
May 14 18:00:43.153332 containerd[1906]: time="2025-05-14T18:00:43.153311791Z" level=info msg="ImageCreate event name:\"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 14 18:00:43.159323 containerd[1906]: time="2025-05-14T18:00:43.159300369Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 14 18:00:43.159661 containerd[1906]: time="2025-05-14T18:00:43.159558664Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"268051\" in 729.504622ms"
May 14 18:00:43.159661 containerd[1906]: time="2025-05-14T18:00:43.159578897Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\""
May 14 18:00:43.170310 containerd[1906]: time="2025-05-14T18:00:43.170256530Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\""
May 14 18:00:43.938060 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3032249448.mount: Deactivated successfully.
May 14 18:00:46.844597 containerd[1906]: time="2025-05-14T18:00:46.843999714Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.12-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 14 18:00:46.846881 containerd[1906]: time="2025-05-14T18:00:46.846843087Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.12-0: active requests=0, bytes read=66191472"
May 14 18:00:46.850425 containerd[1906]: time="2025-05-14T18:00:46.850403248Z" level=info msg="ImageCreate event name:\"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 14 18:00:46.857463 containerd[1906]: time="2025-05-14T18:00:46.857431630Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 14 18:00:46.858107 containerd[1906]: time="2025-05-14T18:00:46.858086256Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.12-0\" with image id \"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\", repo tag \"registry.k8s.io/etcd:3.5.12-0\", repo digest \"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\", size \"66189079\" in 3.687662408s"
May 14 18:00:46.858183 containerd[1906]: time="2025-05-14T18:00:46.858171106Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\""
May 14 18:00:47.274520 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 12.
May 14 18:00:47.275845 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 14 18:00:47.380831 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 14 18:00:47.383359 (kubelet)[3023]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
May 14 18:00:47.410145 kubelet[3023]: E0514 18:00:47.410108 3023 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 14 18:00:47.411886 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 14 18:00:47.412003 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 14 18:00:47.412395 systemd[1]: kubelet.service: Consumed 92ms CPU time, 94.2M memory peak.
May 14 18:00:49.793979 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
May 14 18:00:49.794313 systemd[1]: kubelet.service: Consumed 92ms CPU time, 94.2M memory peak.
May 14 18:00:49.796103 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 14 18:00:49.811160 systemd[1]: Reload requested from client PID 3104 ('systemctl') (unit session-9.scope)...
May 14 18:00:49.811253 systemd[1]: Reloading...
May 14 18:00:49.878980 zram_generator::config[3146]: No configuration found.
May 14 18:00:49.944463 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 14 18:00:50.022598 systemd[1]: Reloading finished in 211 ms.
May 14 18:00:50.787456 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
May 14 18:00:50.787541 systemd[1]: kubelet.service: Failed with result 'signal'.
May 14 18:00:50.787765 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
May 14 18:00:50.789304 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 14 18:00:51.653876 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 14 18:00:51.657157 (kubelet)[3213]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
May 14 18:00:51.682951 kubelet[3213]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
May 14 18:00:51.682951 kubelet[3213]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
May 14 18:00:51.682951 kubelet[3213]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
May 14 18:00:51.682951 kubelet[3213]: I0514 18:00:51.682931 3213 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 14 18:00:55.382345 kubelet[3213]: I0514 18:00:55.382315 3213 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" May 14 18:00:55.382345 kubelet[3213]: I0514 18:00:55.382341 3213 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 14 18:00:55.382680 kubelet[3213]: I0514 18:00:55.382500 3213 server.go:927] "Client rotation is on, will bootstrap in background" May 14 18:00:55.390454 kubelet[3213]: I0514 18:00:55.390434 3213 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 14 18:00:55.391129 kubelet[3213]: E0514 18:00:55.390981 3213 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.200.20.4:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.200.20.4:6443: connect: connection refused May 14 18:00:55.396630 kubelet[3213]: I0514 18:00:55.396605 3213 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" May 14 18:00:55.397306 kubelet[3213]: I0514 18:00:55.397273 3213 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 14 18:00:55.397522 kubelet[3213]: I0514 18:00:55.397307 3213 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4334.0.0-a-9340e225f6","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} May 14 18:00:55.397522 kubelet[3213]: I0514 18:00:55.397494 3213 topology_manager.go:138] "Creating topology manager with none policy" May 
14 18:00:55.397522 kubelet[3213]: I0514 18:00:55.397501 3213 container_manager_linux.go:301] "Creating device plugin manager" May 14 18:00:55.397640 kubelet[3213]: I0514 18:00:55.397590 3213 state_mem.go:36] "Initialized new in-memory state store" May 14 18:00:55.398326 kubelet[3213]: I0514 18:00:55.398242 3213 kubelet.go:400] "Attempting to sync node with API server" May 14 18:00:55.398326 kubelet[3213]: I0514 18:00:55.398263 3213 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" May 14 18:00:55.398326 kubelet[3213]: I0514 18:00:55.398289 3213 kubelet.go:312] "Adding apiserver pod source" May 14 18:00:55.398326 kubelet[3213]: I0514 18:00:55.398302 3213 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 14 18:00:55.402205 kubelet[3213]: I0514 18:00:55.401512 3213 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" May 14 18:00:55.402205 kubelet[3213]: I0514 18:00:55.401640 3213 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 14 18:00:55.402205 kubelet[3213]: W0514 18:00:55.401672 3213 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
May 14 18:00:55.402205 kubelet[3213]: I0514 18:00:55.402041 3213 server.go:1264] "Started kubelet" May 14 18:00:55.402205 kubelet[3213]: W0514 18:00:55.402118 3213 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.200.20.4:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.20.4:6443: connect: connection refused May 14 18:00:55.402205 kubelet[3213]: E0514 18:00:55.402149 3213 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.200.20.4:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.20.4:6443: connect: connection refused May 14 18:00:55.402205 kubelet[3213]: W0514 18:00:55.402183 3213 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.200.20.4:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4334.0.0-a-9340e225f6&limit=500&resourceVersion=0": dial tcp 10.200.20.4:6443: connect: connection refused May 14 18:00:55.402205 kubelet[3213]: E0514 18:00:55.402201 3213 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.200.20.4:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4334.0.0-a-9340e225f6&limit=500&resourceVersion=0": dial tcp 10.200.20.4:6443: connect: connection refused May 14 18:00:55.403209 kubelet[3213]: I0514 18:00:55.403186 3213 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 May 14 18:00:55.403774 kubelet[3213]: I0514 18:00:55.403753 3213 server.go:455] "Adding debug handlers to kubelet server" May 14 18:00:55.404346 kubelet[3213]: I0514 18:00:55.404305 3213 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 14 18:00:55.404505 kubelet[3213]: I0514 18:00:55.404489 3213 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 14 
18:00:55.406215 kubelet[3213]: E0514 18:00:55.405873 3213 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.20.4:6443/api/v1/namespaces/default/events\": dial tcp 10.200.20.4:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4334.0.0-a-9340e225f6.183f76aa919516dc default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4334.0.0-a-9340e225f6,UID:ci-4334.0.0-a-9340e225f6,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4334.0.0-a-9340e225f6,},FirstTimestamp:2025-05-14 18:00:55.402026716 +0000 UTC m=+3.742193703,LastTimestamp:2025-05-14 18:00:55.402026716 +0000 UTC m=+3.742193703,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4334.0.0-a-9340e225f6,}" May 14 18:00:55.406215 kubelet[3213]: I0514 18:00:55.406060 3213 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 14 18:00:55.406889 kubelet[3213]: E0514 18:00:55.406867 3213 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4334.0.0-a-9340e225f6\" not found" May 14 18:00:55.406956 kubelet[3213]: I0514 18:00:55.406900 3213 volume_manager.go:291] "Starting Kubelet Volume Manager" May 14 18:00:55.406956 kubelet[3213]: I0514 18:00:55.406955 3213 desired_state_of_world_populator.go:149] "Desired state populator starts to run" May 14 18:00:55.407013 kubelet[3213]: I0514 18:00:55.407007 3213 reconciler.go:26] "Reconciler: start to sync state" May 14 18:00:55.407813 kubelet[3213]: I0514 18:00:55.407790 3213 factory.go:221] Registration of the systemd container factory successfully May 14 18:00:55.407863 kubelet[3213]: I0514 18:00:55.407856 3213 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix 
/var/run/crio/crio.sock: connect: no such file or directory May 14 18:00:55.408116 kubelet[3213]: E0514 18:00:55.408092 3213 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.4:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4334.0.0-a-9340e225f6?timeout=10s\": dial tcp 10.200.20.4:6443: connect: connection refused" interval="200ms" May 14 18:00:55.408188 kubelet[3213]: W0514 18:00:55.408135 3213 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.200.20.4:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.4:6443: connect: connection refused May 14 18:00:55.408188 kubelet[3213]: E0514 18:00:55.408156 3213 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.200.20.4:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.4:6443: connect: connection refused May 14 18:00:55.409872 kubelet[3213]: I0514 18:00:55.409851 3213 factory.go:221] Registration of the containerd container factory successfully May 14 18:00:55.419386 kubelet[3213]: I0514 18:00:55.419365 3213 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 14 18:00:55.420163 kubelet[3213]: I0514 18:00:55.420149 3213 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" May 14 18:00:55.420244 kubelet[3213]: I0514 18:00:55.420236 3213 status_manager.go:217] "Starting to sync pod status with apiserver" May 14 18:00:55.420291 kubelet[3213]: I0514 18:00:55.420285 3213 kubelet.go:2337] "Starting kubelet main sync loop" May 14 18:00:55.420365 kubelet[3213]: E0514 18:00:55.420346 3213 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 14 18:00:55.424727 kubelet[3213]: E0514 18:00:55.424710 3213 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 14 18:00:55.424727 kubelet[3213]: W0514 18:00:55.424827 3213 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.200.20.4:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.4:6443: connect: connection refused May 14 18:00:55.424727 kubelet[3213]: E0514 18:00:55.424862 3213 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.200.20.4:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.4:6443: connect: connection refused May 14 18:00:55.429380 kubelet[3213]: I0514 18:00:55.429364 3213 cpu_manager.go:214] "Starting CPU manager" policy="none" May 14 18:00:55.429380 kubelet[3213]: I0514 18:00:55.429375 3213 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" May 14 18:00:55.429460 kubelet[3213]: I0514 18:00:55.429403 3213 state_mem.go:36] "Initialized new in-memory state store" May 14 18:00:55.508792 kubelet[3213]: I0514 18:00:55.508776 3213 kubelet_node_status.go:73] "Attempting to register node" node="ci-4334.0.0-a-9340e225f6" May 14 18:00:55.509171 kubelet[3213]: E0514 18:00:55.509141 3213 kubelet_node_status.go:96] "Unable to register node 
with API server" err="Post \"https://10.200.20.4:6443/api/v1/nodes\": dial tcp 10.200.20.4:6443: connect: connection refused" node="ci-4334.0.0-a-9340e225f6" May 14 18:00:55.521301 kubelet[3213]: E0514 18:00:55.521287 3213 kubelet.go:2361] "Skipping pod synchronization" err="container runtime status check may not have completed yet" May 14 18:00:55.609003 kubelet[3213]: E0514 18:00:55.608929 3213 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.4:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4334.0.0-a-9340e225f6?timeout=10s\": dial tcp 10.200.20.4:6443: connect: connection refused" interval="400ms" May 14 18:00:55.819224 kubelet[3213]: I0514 18:00:55.710944 3213 kubelet_node_status.go:73] "Attempting to register node" node="ci-4334.0.0-a-9340e225f6" May 14 18:00:55.819224 kubelet[3213]: E0514 18:00:55.711223 3213 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.200.20.4:6443/api/v1/nodes\": dial tcp 10.200.20.4:6443: connect: connection refused" node="ci-4334.0.0-a-9340e225f6" May 14 18:00:55.819224 kubelet[3213]: E0514 18:00:55.722311 3213 kubelet.go:2361] "Skipping pod synchronization" err="container runtime status check may not have completed yet" May 14 18:00:56.010131 kubelet[3213]: E0514 18:00:56.010082 3213 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.4:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4334.0.0-a-9340e225f6?timeout=10s\": dial tcp 10.200.20.4:6443: connect: connection refused" interval="800ms" May 14 18:00:56.113049 kubelet[3213]: I0514 18:00:56.112854 3213 kubelet_node_status.go:73] "Attempting to register node" node="ci-4334.0.0-a-9340e225f6" May 14 18:00:56.113234 kubelet[3213]: E0514 18:00:56.113132 3213 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.200.20.4:6443/api/v1/nodes\": dial tcp 10.200.20.4:6443: connect: 
connection refused" node="ci-4334.0.0-a-9340e225f6" May 14 18:00:56.122492 kubelet[3213]: E0514 18:00:56.122476 3213 kubelet.go:2361] "Skipping pod synchronization" err="container runtime status check may not have completed yet" May 14 18:00:56.311425 kubelet[3213]: W0514 18:00:56.311327 3213 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.200.20.4:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.20.4:6443: connect: connection refused May 14 18:00:56.311425 kubelet[3213]: E0514 18:00:56.311377 3213 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.200.20.4:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.20.4:6443: connect: connection refused May 14 18:00:56.835715 kubelet[3213]: W0514 18:00:56.541388 3213 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.200.20.4:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.4:6443: connect: connection refused May 14 18:00:56.835715 kubelet[3213]: E0514 18:00:56.541420 3213 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.200.20.4:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.4:6443: connect: connection refused May 14 18:00:56.835715 kubelet[3213]: W0514 18:00:56.646308 3213 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.200.20.4:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4334.0.0-a-9340e225f6&limit=500&resourceVersion=0": dial tcp 10.200.20.4:6443: connect: connection refused May 14 18:00:56.835715 kubelet[3213]: E0514 18:00:56.646345 3213 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get 
"https://10.200.20.4:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4334.0.0-a-9340e225f6&limit=500&resourceVersion=0": dial tcp 10.200.20.4:6443: connect: connection refused May 14 18:00:56.835715 kubelet[3213]: W0514 18:00:56.747921 3213 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.200.20.4:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.4:6443: connect: connection refused May 14 18:00:56.835715 kubelet[3213]: E0514 18:00:56.747999 3213 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.200.20.4:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.4:6443: connect: connection refused May 14 18:00:56.835818 kubelet[3213]: E0514 18:00:56.810635 3213 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.4:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4334.0.0-a-9340e225f6?timeout=10s\": dial tcp 10.200.20.4:6443: connect: connection refused" interval="1.6s" May 14 18:00:56.914919 kubelet[3213]: I0514 18:00:56.914859 3213 kubelet_node_status.go:73] "Attempting to register node" node="ci-4334.0.0-a-9340e225f6" May 14 18:00:56.915156 kubelet[3213]: E0514 18:00:56.915134 3213 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.200.20.4:6443/api/v1/nodes\": dial tcp 10.200.20.4:6443: connect: connection refused" node="ci-4334.0.0-a-9340e225f6" May 14 18:00:56.923249 kubelet[3213]: E0514 18:00:56.923232 3213 kubelet.go:2361] "Skipping pod synchronization" err="container runtime status check may not have completed yet" May 14 18:00:56.974940 kubelet[3213]: I0514 18:00:56.974918 3213 policy_none.go:49] "None policy: Start" May 14 18:00:56.975838 kubelet[3213]: I0514 18:00:56.975811 3213 memory_manager.go:170] "Starting memorymanager" policy="None" May 
14 18:00:56.975924 kubelet[3213]: I0514 18:00:56.975912 3213 state_mem.go:35] "Initializing new in-memory state store" May 14 18:00:57.465665 kubelet[3213]: E0514 18:00:57.465633 3213 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.200.20.4:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.200.20.4:6443: connect: connection refused May 14 18:00:57.533342 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. May 14 18:00:57.543109 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. May 14 18:00:57.545357 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. May 14 18:00:58.411828 kubelet[3213]: E0514 18:00:58.411778 3213 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.4:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4334.0.0-a-9340e225f6?timeout=10s\": dial tcp 10.200.20.4:6443: connect: connection refused" interval="3.2s" May 14 18:00:58.470181 kubelet[3213]: W0514 18:00:58.470126 3213 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.200.20.4:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.4:6443: connect: connection refused May 14 18:00:58.470181 kubelet[3213]: E0514 18:00:58.470183 3213 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.200.20.4:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.4:6443: connect: connection refused May 14 18:00:58.516790 kubelet[3213]: I0514 18:00:58.516770 3213 kubelet_node_status.go:73] "Attempting to register node" 
node="ci-4334.0.0-a-9340e225f6" May 14 18:00:58.517084 kubelet[3213]: E0514 18:00:58.517063 3213 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.200.20.4:6443/api/v1/nodes\": dial tcp 10.200.20.4:6443: connect: connection refused" node="ci-4334.0.0-a-9340e225f6" May 14 18:00:58.524219 kubelet[3213]: E0514 18:00:58.524206 3213 kubelet.go:2361] "Skipping pod synchronization" err="container runtime status check may not have completed yet" May 14 18:00:58.617349 kubelet[3213]: W0514 18:00:58.617293 3213 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.200.20.4:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4334.0.0-a-9340e225f6&limit=500&resourceVersion=0": dial tcp 10.200.20.4:6443: connect: connection refused May 14 18:00:58.617349 kubelet[3213]: E0514 18:00:58.617349 3213 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.200.20.4:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4334.0.0-a-9340e225f6&limit=500&resourceVersion=0": dial tcp 10.200.20.4:6443: connect: connection refused May 14 18:00:58.770128 kubelet[3213]: W0514 18:00:58.770035 3213 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.200.20.4:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.4:6443: connect: connection refused May 14 18:00:58.770128 kubelet[3213]: E0514 18:00:58.770066 3213 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.200.20.4:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.4:6443: connect: connection refused May 14 18:00:58.878795 kubelet[3213]: W0514 18:00:58.878726 3213 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get 
"https://10.200.20.4:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.20.4:6443: connect: connection refused May 14 18:00:58.878795 kubelet[3213]: E0514 18:00:58.878779 3213 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.200.20.4:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.20.4:6443: connect: connection refused May 14 18:01:00.383791 kubelet[3213]: I0514 18:01:00.383486 3213 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 14 18:01:00.383791 kubelet[3213]: I0514 18:01:00.383766 3213 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 14 18:01:00.384318 kubelet[3213]: I0514 18:01:00.383866 3213 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 14 18:01:00.384955 kubelet[3213]: E0514 18:01:00.384935 3213 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4334.0.0-a-9340e225f6\" not found" May 14 18:01:01.612392 kubelet[3213]: E0514 18:01:01.612346 3213 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.4:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4334.0.0-a-9340e225f6?timeout=10s\": dial tcp 10.200.20.4:6443: connect: connection refused" interval="6.4s" May 14 18:01:01.719307 kubelet[3213]: I0514 18:01:01.719070 3213 kubelet_node_status.go:73] "Attempting to register node" node="ci-4334.0.0-a-9340e225f6" May 14 18:01:01.719307 kubelet[3213]: E0514 18:01:01.719282 3213 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.200.20.4:6443/api/v1/nodes\": dial tcp 10.200.20.4:6443: connect: connection refused" node="ci-4334.0.0-a-9340e225f6" May 14 18:01:01.724654 kubelet[3213]: I0514 18:01:01.724629 3213 topology_manager.go:215] 
"Topology Admit Handler" podUID="504a01411eeecd82026aece062bdaa0d" podNamespace="kube-system" podName="kube-apiserver-ci-4334.0.0-a-9340e225f6" May 14 18:01:01.725814 kubelet[3213]: I0514 18:01:01.725620 3213 topology_manager.go:215] "Topology Admit Handler" podUID="3d9376e207f37e8dbd8035f2fa6eec87" podNamespace="kube-system" podName="kube-controller-manager-ci-4334.0.0-a-9340e225f6" May 14 18:01:01.726881 kubelet[3213]: I0514 18:01:01.726737 3213 topology_manager.go:215] "Topology Admit Handler" podUID="0e1739ec9b57a6a24aa3cca2d8d5ea21" podNamespace="kube-system" podName="kube-scheduler-ci-4334.0.0-a-9340e225f6" May 14 18:01:01.732075 systemd[1]: Created slice kubepods-burstable-pod504a01411eeecd82026aece062bdaa0d.slice - libcontainer container kubepods-burstable-pod504a01411eeecd82026aece062bdaa0d.slice. May 14 18:01:01.742516 systemd[1]: Created slice kubepods-burstable-pod3d9376e207f37e8dbd8035f2fa6eec87.slice - libcontainer container kubepods-burstable-pod3d9376e207f37e8dbd8035f2fa6eec87.slice. May 14 18:01:01.745940 systemd[1]: Created slice kubepods-burstable-pod0e1739ec9b57a6a24aa3cca2d8d5ea21.slice - libcontainer container kubepods-burstable-pod0e1739ec9b57a6a24aa3cca2d8d5ea21.slice. 
May 14 18:01:01.759037 kubelet[3213]: E0514 18:01:01.759013 3213 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.200.20.4:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.200.20.4:6443: connect: connection refused May 14 18:01:01.838562 kubelet[3213]: I0514 18:01:01.838516 3213 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0e1739ec9b57a6a24aa3cca2d8d5ea21-kubeconfig\") pod \"kube-scheduler-ci-4334.0.0-a-9340e225f6\" (UID: \"0e1739ec9b57a6a24aa3cca2d8d5ea21\") " pod="kube-system/kube-scheduler-ci-4334.0.0-a-9340e225f6" May 14 18:01:01.838562 kubelet[3213]: I0514 18:01:01.838541 3213 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/504a01411eeecd82026aece062bdaa0d-k8s-certs\") pod \"kube-apiserver-ci-4334.0.0-a-9340e225f6\" (UID: \"504a01411eeecd82026aece062bdaa0d\") " pod="kube-system/kube-apiserver-ci-4334.0.0-a-9340e225f6" May 14 18:01:01.838715 kubelet[3213]: I0514 18:01:01.838566 3213 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/3d9376e207f37e8dbd8035f2fa6eec87-flexvolume-dir\") pod \"kube-controller-manager-ci-4334.0.0-a-9340e225f6\" (UID: \"3d9376e207f37e8dbd8035f2fa6eec87\") " pod="kube-system/kube-controller-manager-ci-4334.0.0-a-9340e225f6" May 14 18:01:01.838715 kubelet[3213]: I0514 18:01:01.838597 3213 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3d9376e207f37e8dbd8035f2fa6eec87-k8s-certs\") pod \"kube-controller-manager-ci-4334.0.0-a-9340e225f6\" (UID: 
\"3d9376e207f37e8dbd8035f2fa6eec87\") " pod="kube-system/kube-controller-manager-ci-4334.0.0-a-9340e225f6" May 14 18:01:01.838715 kubelet[3213]: I0514 18:01:01.838607 3213 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/3d9376e207f37e8dbd8035f2fa6eec87-kubeconfig\") pod \"kube-controller-manager-ci-4334.0.0-a-9340e225f6\" (UID: \"3d9376e207f37e8dbd8035f2fa6eec87\") " pod="kube-system/kube-controller-manager-ci-4334.0.0-a-9340e225f6" May 14 18:01:01.838715 kubelet[3213]: I0514 18:01:01.838623 3213 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3d9376e207f37e8dbd8035f2fa6eec87-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4334.0.0-a-9340e225f6\" (UID: \"3d9376e207f37e8dbd8035f2fa6eec87\") " pod="kube-system/kube-controller-manager-ci-4334.0.0-a-9340e225f6" May 14 18:01:01.838715 kubelet[3213]: I0514 18:01:01.838635 3213 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/504a01411eeecd82026aece062bdaa0d-ca-certs\") pod \"kube-apiserver-ci-4334.0.0-a-9340e225f6\" (UID: \"504a01411eeecd82026aece062bdaa0d\") " pod="kube-system/kube-apiserver-ci-4334.0.0-a-9340e225f6" May 14 18:01:01.838797 kubelet[3213]: I0514 18:01:01.838646 3213 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/504a01411eeecd82026aece062bdaa0d-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4334.0.0-a-9340e225f6\" (UID: \"504a01411eeecd82026aece062bdaa0d\") " pod="kube-system/kube-apiserver-ci-4334.0.0-a-9340e225f6" May 14 18:01:01.838797 kubelet[3213]: I0514 18:01:01.838661 3213 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3d9376e207f37e8dbd8035f2fa6eec87-ca-certs\") pod \"kube-controller-manager-ci-4334.0.0-a-9340e225f6\" (UID: \"3d9376e207f37e8dbd8035f2fa6eec87\") " pod="kube-system/kube-controller-manager-ci-4334.0.0-a-9340e225f6" May 14 18:01:02.041327 containerd[1906]: time="2025-05-14T18:01:02.041262910Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4334.0.0-a-9340e225f6,Uid:504a01411eeecd82026aece062bdaa0d,Namespace:kube-system,Attempt:0,}" May 14 18:01:02.045691 containerd[1906]: time="2025-05-14T18:01:02.045605844Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4334.0.0-a-9340e225f6,Uid:3d9376e207f37e8dbd8035f2fa6eec87,Namespace:kube-system,Attempt:0,}" May 14 18:01:02.048369 containerd[1906]: time="2025-05-14T18:01:02.048254876Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4334.0.0-a-9340e225f6,Uid:0e1739ec9b57a6a24aa3cca2d8d5ea21,Namespace:kube-system,Attempt:0,}" May 14 18:01:02.173150 kubelet[3213]: W0514 18:01:02.173117 3213 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.200.20.4:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.20.4:6443: connect: connection refused May 14 18:01:02.173150 kubelet[3213]: E0514 18:01:02.173153 3213 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.200.20.4:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.20.4:6443: connect: connection refused May 14 18:01:02.978244 kubelet[3213]: W0514 18:01:02.829467 3213 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.200.20.4:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4334.0.0-a-9340e225f6&limit=500&resourceVersion=0": dial tcp 10.200.20.4:6443: connect: connection refused May 14 18:01:02.978244 
kubelet[3213]: E0514 18:01:02.829504 3213 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.200.20.4:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4334.0.0-a-9340e225f6&limit=500&resourceVersion=0": dial tcp 10.200.20.4:6443: connect: connection refused May 14 18:01:02.978244 kubelet[3213]: W0514 18:01:02.946567 3213 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.200.20.4:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.4:6443: connect: connection refused May 14 18:01:02.978244 kubelet[3213]: E0514 18:01:02.946592 3213 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.200.20.4:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.4:6443: connect: connection refused May 14 18:01:03.495348 kubelet[3213]: W0514 18:01:03.495308 3213 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.200.20.4:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.4:6443: connect: connection refused May 14 18:01:03.495348 kubelet[3213]: E0514 18:01:03.495348 3213 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.200.20.4:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.4:6443: connect: connection refused May 14 18:01:03.975594 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount284138650.mount: Deactivated successfully. 
May 14 18:01:03.991858 kubelet[3213]: E0514 18:01:03.991777 3213 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.20.4:6443/api/v1/namespaces/default/events\": dial tcp 10.200.20.4:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4334.0.0-a-9340e225f6.183f76aa919516dc default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4334.0.0-a-9340e225f6,UID:ci-4334.0.0-a-9340e225f6,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4334.0.0-a-9340e225f6,},FirstTimestamp:2025-05-14 18:00:55.402026716 +0000 UTC m=+3.742193703,LastTimestamp:2025-05-14 18:00:55.402026716 +0000 UTC m=+3.742193703,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4334.0.0-a-9340e225f6,}" May 14 18:01:04.025501 containerd[1906]: time="2025-05-14T18:01:04.025463034Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 14 18:01:04.044718 containerd[1906]: time="2025-05-14T18:01:04.044688229Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268703" May 14 18:01:04.049958 containerd[1906]: time="2025-05-14T18:01:04.049932364Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 14 18:01:04.061992 containerd[1906]: time="2025-05-14T18:01:04.061753405Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 14 
18:01:04.070530 containerd[1906]: time="2025-05-14T18:01:04.070360608Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" May 14 18:01:04.075847 containerd[1906]: time="2025-05-14T18:01:04.075807636Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 14 18:01:04.079992 containerd[1906]: time="2025-05-14T18:01:04.079455143Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 14 18:01:04.079992 containerd[1906]: time="2025-05-14T18:01:04.079868258Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 1.005704612s" May 14 18:01:04.084594 containerd[1906]: time="2025-05-14T18:01:04.084521001Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" May 14 18:01:04.090460 containerd[1906]: time="2025-05-14T18:01:04.090430738Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 968.028003ms" May 14 18:01:04.129757 containerd[1906]: time="2025-05-14T18:01:04.129726616Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id 
\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 947.53771ms" May 14 18:01:04.207136 containerd[1906]: time="2025-05-14T18:01:04.207080221Z" level=info msg="connecting to shim 0f736b168c5c763722a87e3fc3c82c6b6256d5b765c2a9902c1193645fabc50e" address="unix:///run/containerd/s/7f8829330966a8a2853b78035b9e228df32c870916c36d35423bd9516f5b3a68" namespace=k8s.io protocol=ttrpc version=3 May 14 18:01:04.207833 containerd[1906]: time="2025-05-14T18:01:04.207790769Z" level=info msg="connecting to shim c70d316c047935d86168c51086c907c2f2db76b741709cbb4b0981b0eb093421" address="unix:///run/containerd/s/d61baf5f9f702ae4c7b4354a3909958ae17f5035e2b06bf8fa625c1e164820e2" namespace=k8s.io protocol=ttrpc version=3 May 14 18:01:04.224151 systemd[1]: Started cri-containerd-0f736b168c5c763722a87e3fc3c82c6b6256d5b765c2a9902c1193645fabc50e.scope - libcontainer container 0f736b168c5c763722a87e3fc3c82c6b6256d5b765c2a9902c1193645fabc50e. May 14 18:01:04.229396 systemd[1]: Started cri-containerd-c70d316c047935d86168c51086c907c2f2db76b741709cbb4b0981b0eb093421.scope - libcontainer container c70d316c047935d86168c51086c907c2f2db76b741709cbb4b0981b0eb093421. 
May 14 18:01:04.248510 containerd[1906]: time="2025-05-14T18:01:04.248474586Z" level=info msg="connecting to shim 7ec50a0016e57e3a0bbf2c9c8adae375e9ef9769e8760da44f610cee43c79bfc" address="unix:///run/containerd/s/d0744abf95e434165597ed993462f35fced553c535c50ec241adcf2646b740a6" namespace=k8s.io protocol=ttrpc version=3 May 14 18:01:04.269957 containerd[1906]: time="2025-05-14T18:01:04.269888515Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4334.0.0-a-9340e225f6,Uid:504a01411eeecd82026aece062bdaa0d,Namespace:kube-system,Attempt:0,} returns sandbox id \"0f736b168c5c763722a87e3fc3c82c6b6256d5b765c2a9902c1193645fabc50e\"" May 14 18:01:04.270098 systemd[1]: Started cri-containerd-7ec50a0016e57e3a0bbf2c9c8adae375e9ef9769e8760da44f610cee43c79bfc.scope - libcontainer container 7ec50a0016e57e3a0bbf2c9c8adae375e9ef9769e8760da44f610cee43c79bfc. May 14 18:01:04.275137 containerd[1906]: time="2025-05-14T18:01:04.275114104Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4334.0.0-a-9340e225f6,Uid:3d9376e207f37e8dbd8035f2fa6eec87,Namespace:kube-system,Attempt:0,} returns sandbox id \"c70d316c047935d86168c51086c907c2f2db76b741709cbb4b0981b0eb093421\"" May 14 18:01:04.275630 containerd[1906]: time="2025-05-14T18:01:04.275427888Z" level=info msg="CreateContainer within sandbox \"0f736b168c5c763722a87e3fc3c82c6b6256d5b765c2a9902c1193645fabc50e\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" May 14 18:01:04.278200 containerd[1906]: time="2025-05-14T18:01:04.278173474Z" level=info msg="CreateContainer within sandbox \"c70d316c047935d86168c51086c907c2f2db76b741709cbb4b0981b0eb093421\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" May 14 18:01:04.314662 containerd[1906]: time="2025-05-14T18:01:04.314630977Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-scheduler-ci-4334.0.0-a-9340e225f6,Uid:0e1739ec9b57a6a24aa3cca2d8d5ea21,Namespace:kube-system,Attempt:0,} returns sandbox id \"7ec50a0016e57e3a0bbf2c9c8adae375e9ef9769e8760da44f610cee43c79bfc\"" May 14 18:01:04.318732 containerd[1906]: time="2025-05-14T18:01:04.318696959Z" level=info msg="CreateContainer within sandbox \"7ec50a0016e57e3a0bbf2c9c8adae375e9ef9769e8760da44f610cee43c79bfc\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" May 14 18:01:04.324599 containerd[1906]: time="2025-05-14T18:01:04.324561989Z" level=info msg="Container bc46042a476db3df102b034439894fdecbf21a04284bd5ea0bd6c8cc582e6c55: CDI devices from CRI Config.CDIDevices: []" May 14 18:01:04.334667 containerd[1906]: time="2025-05-14T18:01:04.334581779Z" level=info msg="Container 47c423e7a23f052bd90d284b1417300333029da2501f2cb2cd6c9da3bdc46493: CDI devices from CRI Config.CDIDevices: []" May 14 18:01:04.387721 containerd[1906]: time="2025-05-14T18:01:04.387687987Z" level=info msg="CreateContainer within sandbox \"0f736b168c5c763722a87e3fc3c82c6b6256d5b765c2a9902c1193645fabc50e\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"bc46042a476db3df102b034439894fdecbf21a04284bd5ea0bd6c8cc582e6c55\"" May 14 18:01:04.388177 containerd[1906]: time="2025-05-14T18:01:04.388153864Z" level=info msg="StartContainer for \"bc46042a476db3df102b034439894fdecbf21a04284bd5ea0bd6c8cc582e6c55\"" May 14 18:01:04.389357 containerd[1906]: time="2025-05-14T18:01:04.389335264Z" level=info msg="connecting to shim bc46042a476db3df102b034439894fdecbf21a04284bd5ea0bd6c8cc582e6c55" address="unix:///run/containerd/s/7f8829330966a8a2853b78035b9e228df32c870916c36d35423bd9516f5b3a68" protocol=ttrpc version=3 May 14 18:01:04.392116 containerd[1906]: time="2025-05-14T18:01:04.392075170Z" level=info msg="CreateContainer within sandbox \"c70d316c047935d86168c51086c907c2f2db76b741709cbb4b0981b0eb093421\" for 
&ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"47c423e7a23f052bd90d284b1417300333029da2501f2cb2cd6c9da3bdc46493\"" May 14 18:01:04.392408 containerd[1906]: time="2025-05-14T18:01:04.392389066Z" level=info msg="StartContainer for \"47c423e7a23f052bd90d284b1417300333029da2501f2cb2cd6c9da3bdc46493\"" May 14 18:01:04.393809 containerd[1906]: time="2025-05-14T18:01:04.393775967Z" level=info msg="connecting to shim 47c423e7a23f052bd90d284b1417300333029da2501f2cb2cd6c9da3bdc46493" address="unix:///run/containerd/s/d61baf5f9f702ae4c7b4354a3909958ae17f5035e2b06bf8fa625c1e164820e2" protocol=ttrpc version=3 May 14 18:01:04.408073 systemd[1]: Started cri-containerd-bc46042a476db3df102b034439894fdecbf21a04284bd5ea0bd6c8cc582e6c55.scope - libcontainer container bc46042a476db3df102b034439894fdecbf21a04284bd5ea0bd6c8cc582e6c55. May 14 18:01:04.410783 systemd[1]: Started cri-containerd-47c423e7a23f052bd90d284b1417300333029da2501f2cb2cd6c9da3bdc46493.scope - libcontainer container 47c423e7a23f052bd90d284b1417300333029da2501f2cb2cd6c9da3bdc46493. 
May 14 18:01:04.413121 containerd[1906]: time="2025-05-14T18:01:04.413076232Z" level=info msg="Container e8f22204997e275aec416b812b904ce5901c800e81f4b8ac5b5fbc99c6a32a71: CDI devices from CRI Config.CDIDevices: []" May 14 18:01:04.443922 containerd[1906]: time="2025-05-14T18:01:04.443780388Z" level=info msg="CreateContainer within sandbox \"7ec50a0016e57e3a0bbf2c9c8adae375e9ef9769e8760da44f610cee43c79bfc\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"e8f22204997e275aec416b812b904ce5901c800e81f4b8ac5b5fbc99c6a32a71\"" May 14 18:01:04.445408 containerd[1906]: time="2025-05-14T18:01:04.445333558Z" level=info msg="StartContainer for \"e8f22204997e275aec416b812b904ce5901c800e81f4b8ac5b5fbc99c6a32a71\"" May 14 18:01:04.449770 containerd[1906]: time="2025-05-14T18:01:04.449644586Z" level=info msg="connecting to shim e8f22204997e275aec416b812b904ce5901c800e81f4b8ac5b5fbc99c6a32a71" address="unix:///run/containerd/s/d0744abf95e434165597ed993462f35fced553c535c50ec241adcf2646b740a6" protocol=ttrpc version=3 May 14 18:01:04.456304 containerd[1906]: time="2025-05-14T18:01:04.456276525Z" level=info msg="StartContainer for \"bc46042a476db3df102b034439894fdecbf21a04284bd5ea0bd6c8cc582e6c55\" returns successfully" May 14 18:01:04.468986 containerd[1906]: time="2025-05-14T18:01:04.468453053Z" level=info msg="StartContainer for \"47c423e7a23f052bd90d284b1417300333029da2501f2cb2cd6c9da3bdc46493\" returns successfully" May 14 18:01:04.475071 systemd[1]: Started cri-containerd-e8f22204997e275aec416b812b904ce5901c800e81f4b8ac5b5fbc99c6a32a71.scope - libcontainer container e8f22204997e275aec416b812b904ce5901c800e81f4b8ac5b5fbc99c6a32a71. 
May 14 18:01:04.526850 containerd[1906]: time="2025-05-14T18:01:04.526783466Z" level=info msg="StartContainer for \"e8f22204997e275aec416b812b904ce5901c800e81f4b8ac5b5fbc99c6a32a71\" returns successfully" May 14 18:01:05.920421 kubelet[3213]: E0514 18:01:05.920384 3213 csi_plugin.go:308] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "ci-4334.0.0-a-9340e225f6" not found May 14 18:01:06.275258 kubelet[3213]: E0514 18:01:06.275230 3213 csi_plugin.go:308] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "ci-4334.0.0-a-9340e225f6" not found May 14 18:01:06.708955 kubelet[3213]: E0514 18:01:06.708656 3213 csi_plugin.go:308] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "ci-4334.0.0-a-9340e225f6" not found May 14 18:01:07.618085 kubelet[3213]: E0514 18:01:07.618044 3213 csi_plugin.go:308] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "ci-4334.0.0-a-9340e225f6" not found May 14 18:01:08.014955 kubelet[3213]: E0514 18:01:08.014845 3213 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4334.0.0-a-9340e225f6\" not found" node="ci-4334.0.0-a-9340e225f6" May 14 18:01:08.120886 kubelet[3213]: I0514 18:01:08.120861 3213 kubelet_node_status.go:73] "Attempting to register node" node="ci-4334.0.0-a-9340e225f6" May 14 18:01:08.129522 kubelet[3213]: I0514 18:01:08.129403 3213 kubelet_node_status.go:76] "Successfully registered node" node="ci-4334.0.0-a-9340e225f6" May 14 18:01:08.134613 kubelet[3213]: E0514 18:01:08.134591 3213 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4334.0.0-a-9340e225f6\" not found" May 14 18:01:08.235504 kubelet[3213]: E0514 18:01:08.235478 3213 kubelet_node_status.go:462] "Error 
getting the current node from lister" err="node \"ci-4334.0.0-a-9340e225f6\" not found" May 14 18:01:08.278315 systemd[1]: Reload requested from client PID 3492 ('systemctl') (unit session-9.scope)... May 14 18:01:08.278327 systemd[1]: Reloading... May 14 18:01:08.336026 kubelet[3213]: E0514 18:01:08.336000 3213 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4334.0.0-a-9340e225f6\" not found" May 14 18:01:08.348984 zram_generator::config[3537]: No configuration found. May 14 18:01:08.415003 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 14 18:01:08.437074 kubelet[3213]: E0514 18:01:08.437048 3213 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4334.0.0-a-9340e225f6\" not found" May 14 18:01:08.501688 systemd[1]: Reloading finished in 223 ms. May 14 18:01:08.522817 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... May 14 18:01:08.523492 kubelet[3213]: I0514 18:01:08.523163 3213 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 14 18:01:08.542758 systemd[1]: kubelet.service: Deactivated successfully. May 14 18:01:08.542953 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 14 18:01:08.543014 systemd[1]: kubelet.service: Consumed 473ms CPU time, 109.3M memory peak. May 14 18:01:08.544260 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 14 18:01:08.671860 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
May 14 18:01:08.676370 (kubelet)[3601]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 14 18:01:08.707199 kubelet[3601]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 14 18:01:08.707411 kubelet[3601]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. May 14 18:01:08.707440 kubelet[3601]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 14 18:01:08.707537 kubelet[3601]: I0514 18:01:08.707513 3601 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 14 18:01:08.711375 kubelet[3601]: I0514 18:01:08.711357 3601 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" May 14 18:01:08.711456 kubelet[3601]: I0514 18:01:08.711448 3601 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 14 18:01:08.711613 kubelet[3601]: I0514 18:01:08.711601 3601 server.go:927] "Client rotation is on, will bootstrap in background" May 14 18:01:08.712536 kubelet[3601]: I0514 18:01:08.712517 3601 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". May 14 18:01:08.713479 kubelet[3601]: I0514 18:01:08.713442 3601 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 14 18:01:08.720077 kubelet[3601]: I0514 18:01:08.720058 3601 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" May 14 18:01:08.720205 kubelet[3601]: I0514 18:01:08.720181 3601 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 14 18:01:08.720304 kubelet[3601]: I0514 18:01:08.720201 3601 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4334.0.0-a-9340e225f6","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} May 14 18:01:08.720368 kubelet[3601]: I0514 18:01:08.720306 3601 topology_manager.go:138] "Creating topology manager with none policy" May 
14 18:01:08.720368 kubelet[3601]: I0514 18:01:08.720313 3601 container_manager_linux.go:301] "Creating device plugin manager" May 14 18:01:08.720368 kubelet[3601]: I0514 18:01:08.720339 3601 state_mem.go:36] "Initialized new in-memory state store" May 14 18:01:08.720426 kubelet[3601]: I0514 18:01:08.720403 3601 kubelet.go:400] "Attempting to sync node with API server" May 14 18:01:08.720426 kubelet[3601]: I0514 18:01:08.720411 3601 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" May 14 18:01:08.721433 kubelet[3601]: I0514 18:01:08.720431 3601 kubelet.go:312] "Adding apiserver pod source" May 14 18:01:08.721433 kubelet[3601]: I0514 18:01:08.720442 3601 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 14 18:01:08.723044 kubelet[3601]: I0514 18:01:08.723026 3601 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" May 14 18:01:08.723145 kubelet[3601]: I0514 18:01:08.723134 3601 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 14 18:01:08.723392 kubelet[3601]: I0514 18:01:08.723373 3601 server.go:1264] "Started kubelet" May 14 18:01:08.724432 kubelet[3601]: I0514 18:01:08.724409 3601 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 14 18:01:08.729366 kubelet[3601]: E0514 18:01:08.729350 3601 kubelet.go:1467] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 14 18:01:08.735868 kubelet[3601]: I0514 18:01:08.735824 3601 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 May 14 18:01:08.737744 kubelet[3601]: I0514 18:01:08.737711 3601 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 14 18:01:08.737895 kubelet[3601]: I0514 18:01:08.737871 3601 server.go:455] "Adding debug handlers to kubelet server" May 14 18:01:08.738093 kubelet[3601]: I0514 18:01:08.738076 3601 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 14 18:01:08.738680 kubelet[3601]: I0514 18:01:08.738590 3601 volume_manager.go:291] "Starting Kubelet Volume Manager" May 14 18:01:08.740687 kubelet[3601]: I0514 18:01:08.740670 3601 desired_state_of_world_populator.go:149] "Desired state populator starts to run" May 14 18:01:08.740799 kubelet[3601]: I0514 18:01:08.740787 3601 reconciler.go:26] "Reconciler: start to sync state" May 14 18:01:08.743556 kubelet[3601]: I0514 18:01:08.743457 3601 factory.go:221] Registration of the systemd container factory successfully May 14 18:01:08.744375 kubelet[3601]: I0514 18:01:08.744355 3601 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 14 18:01:08.745277 kubelet[3601]: I0514 18:01:08.745249 3601 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 14 18:01:08.745993 kubelet[3601]: I0514 18:01:08.745972 3601 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" May 14 18:01:08.746054 kubelet[3601]: I0514 18:01:08.745999 3601 status_manager.go:217] "Starting to sync pod status with apiserver" May 14 18:01:08.746054 kubelet[3601]: I0514 18:01:08.746011 3601 kubelet.go:2337] "Starting kubelet main sync loop" May 14 18:01:08.746054 kubelet[3601]: E0514 18:01:08.746041 3601 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 14 18:01:08.748245 kubelet[3601]: I0514 18:01:08.748230 3601 factory.go:221] Registration of the containerd container factory successfully May 14 18:01:08.785048 kubelet[3601]: I0514 18:01:08.785013 3601 cpu_manager.go:214] "Starting CPU manager" policy="none" May 14 18:01:08.785048 kubelet[3601]: I0514 18:01:08.785045 3601 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" May 14 18:01:08.785133 kubelet[3601]: I0514 18:01:08.785067 3601 state_mem.go:36] "Initialized new in-memory state store" May 14 18:01:08.785214 kubelet[3601]: I0514 18:01:08.785194 3601 state_mem.go:88] "Updated default CPUSet" cpuSet="" May 14 18:01:08.785214 kubelet[3601]: I0514 18:01:08.785208 3601 state_mem.go:96] "Updated CPUSet assignments" assignments={} May 14 18:01:08.785307 kubelet[3601]: I0514 18:01:08.785220 3601 policy_none.go:49] "None policy: Start" May 14 18:01:08.785675 kubelet[3601]: I0514 18:01:08.785650 3601 memory_manager.go:170] "Starting memorymanager" policy="None" May 14 18:01:08.785675 kubelet[3601]: I0514 18:01:08.785668 3601 state_mem.go:35] "Initializing new in-memory state store" May 14 18:01:08.785790 kubelet[3601]: I0514 18:01:08.785750 3601 state_mem.go:75] "Updated machine memory state" May 14 18:01:08.789211 kubelet[3601]: I0514 18:01:08.789190 3601 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 14 18:01:08.789335 kubelet[3601]: I0514 18:01:08.789307 3601 container_log_manager.go:186] 
"Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 14 18:01:08.789392 kubelet[3601]: I0514 18:01:08.789381 3601 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 14 18:01:08.841669 kubelet[3601]: I0514 18:01:08.841596 3601 kubelet_node_status.go:73] "Attempting to register node" node="ci-4334.0.0-a-9340e225f6" May 14 18:01:08.847095 kubelet[3601]: I0514 18:01:08.847064 3601 topology_manager.go:215] "Topology Admit Handler" podUID="504a01411eeecd82026aece062bdaa0d" podNamespace="kube-system" podName="kube-apiserver-ci-4334.0.0-a-9340e225f6" May 14 18:01:08.847160 kubelet[3601]: I0514 18:01:08.847146 3601 topology_manager.go:215] "Topology Admit Handler" podUID="3d9376e207f37e8dbd8035f2fa6eec87" podNamespace="kube-system" podName="kube-controller-manager-ci-4334.0.0-a-9340e225f6" May 14 18:01:08.847179 kubelet[3601]: I0514 18:01:08.847173 3601 topology_manager.go:215] "Topology Admit Handler" podUID="0e1739ec9b57a6a24aa3cca2d8d5ea21" podNamespace="kube-system" podName="kube-scheduler-ci-4334.0.0-a-9340e225f6" May 14 18:01:08.856348 kubelet[3601]: W0514 18:01:08.856253 3601 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] May 14 18:01:08.860637 kubelet[3601]: I0514 18:01:08.860535 3601 kubelet_node_status.go:112] "Node was previously registered" node="ci-4334.0.0-a-9340e225f6" May 14 18:01:08.861209 kubelet[3601]: I0514 18:01:08.861081 3601 kubelet_node_status.go:76] "Successfully registered node" node="ci-4334.0.0-a-9340e225f6" May 14 18:01:08.861301 kubelet[3601]: W0514 18:01:08.861290 3601 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] May 14 18:01:08.861945 kubelet[3601]: W0514 18:01:08.861846 3601 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a 
DNS label is recommended: [must not contain dots] May 14 18:01:08.940978 kubelet[3601]: I0514 18:01:08.940939 3601 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3d9376e207f37e8dbd8035f2fa6eec87-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4334.0.0-a-9340e225f6\" (UID: \"3d9376e207f37e8dbd8035f2fa6eec87\") " pod="kube-system/kube-controller-manager-ci-4334.0.0-a-9340e225f6" May 14 18:01:08.941115 kubelet[3601]: I0514 18:01:08.941074 3601 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/504a01411eeecd82026aece062bdaa0d-k8s-certs\") pod \"kube-apiserver-ci-4334.0.0-a-9340e225f6\" (UID: \"504a01411eeecd82026aece062bdaa0d\") " pod="kube-system/kube-apiserver-ci-4334.0.0-a-9340e225f6" May 14 18:01:08.941115 kubelet[3601]: I0514 18:01:08.941097 3601 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/3d9376e207f37e8dbd8035f2fa6eec87-kubeconfig\") pod \"kube-controller-manager-ci-4334.0.0-a-9340e225f6\" (UID: \"3d9376e207f37e8dbd8035f2fa6eec87\") " pod="kube-system/kube-controller-manager-ci-4334.0.0-a-9340e225f6" May 14 18:01:08.941233 kubelet[3601]: I0514 18:01:08.941221 3601 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3d9376e207f37e8dbd8035f2fa6eec87-ca-certs\") pod \"kube-controller-manager-ci-4334.0.0-a-9340e225f6\" (UID: \"3d9376e207f37e8dbd8035f2fa6eec87\") " pod="kube-system/kube-controller-manager-ci-4334.0.0-a-9340e225f6" May 14 18:01:08.941336 kubelet[3601]: I0514 18:01:08.941327 3601 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: 
\"kubernetes.io/host-path/3d9376e207f37e8dbd8035f2fa6eec87-flexvolume-dir\") pod \"kube-controller-manager-ci-4334.0.0-a-9340e225f6\" (UID: \"3d9376e207f37e8dbd8035f2fa6eec87\") " pod="kube-system/kube-controller-manager-ci-4334.0.0-a-9340e225f6" May 14 18:01:08.941462 kubelet[3601]: I0514 18:01:08.941409 3601 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3d9376e207f37e8dbd8035f2fa6eec87-k8s-certs\") pod \"kube-controller-manager-ci-4334.0.0-a-9340e225f6\" (UID: \"3d9376e207f37e8dbd8035f2fa6eec87\") " pod="kube-system/kube-controller-manager-ci-4334.0.0-a-9340e225f6" May 14 18:01:08.941462 kubelet[3601]: I0514 18:01:08.941424 3601 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0e1739ec9b57a6a24aa3cca2d8d5ea21-kubeconfig\") pod \"kube-scheduler-ci-4334.0.0-a-9340e225f6\" (UID: \"0e1739ec9b57a6a24aa3cca2d8d5ea21\") " pod="kube-system/kube-scheduler-ci-4334.0.0-a-9340e225f6" May 14 18:01:08.941462 kubelet[3601]: I0514 18:01:08.941436 3601 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/504a01411eeecd82026aece062bdaa0d-ca-certs\") pod \"kube-apiserver-ci-4334.0.0-a-9340e225f6\" (UID: \"504a01411eeecd82026aece062bdaa0d\") " pod="kube-system/kube-apiserver-ci-4334.0.0-a-9340e225f6" May 14 18:01:08.941462 kubelet[3601]: I0514 18:01:08.941448 3601 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/504a01411eeecd82026aece062bdaa0d-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4334.0.0-a-9340e225f6\" (UID: \"504a01411eeecd82026aece062bdaa0d\") " pod="kube-system/kube-apiserver-ci-4334.0.0-a-9340e225f6" May 14 18:01:09.720975 kubelet[3601]: I0514 
18:01:09.720918 3601 apiserver.go:52] "Watching apiserver" May 14 18:01:12.380718 kubelet[3601]: I0514 18:01:09.741161 3601 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" May 14 18:01:12.380718 kubelet[3601]: I0514 18:01:09.790395 3601 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4334.0.0-a-9340e225f6" podStartSLOduration=1.790386308 podStartE2EDuration="1.790386308s" podCreationTimestamp="2025-05-14 18:01:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-14 18:01:09.790284097 +0000 UTC m=+1.110469814" watchObservedRunningTime="2025-05-14 18:01:09.790386308 +0000 UTC m=+1.110572017" May 14 18:01:12.380718 kubelet[3601]: I0514 18:01:09.815348 3601 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4334.0.0-a-9340e225f6" podStartSLOduration=1.815339756 podStartE2EDuration="1.815339756s" podCreationTimestamp="2025-05-14 18:01:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-14 18:01:09.802011989 +0000 UTC m=+1.122197714" watchObservedRunningTime="2025-05-14 18:01:09.815339756 +0000 UTC m=+1.135525473" May 14 18:01:12.380718 kubelet[3601]: I0514 18:01:09.824557 3601 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4334.0.0-a-9340e225f6" podStartSLOduration=1.824548413 podStartE2EDuration="1.824548413s" podCreationTimestamp="2025-05-14 18:01:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-14 18:01:09.815503097 +0000 UTC m=+1.135688806" watchObservedRunningTime="2025-05-14 18:01:09.824548413 +0000 UTC m=+1.144734122" May 14 18:01:12.384545 sudo[3631]: root : 
PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin May 14 18:01:12.385029 sudo[3631]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) May 14 18:01:12.717105 sudo[3631]: pam_unix(sudo:session): session closed for user root May 14 18:01:13.732072 sudo[2388]: pam_unix(sudo:session): session closed for user root May 14 18:01:13.796993 sshd[2387]: Connection closed by 10.200.16.10 port 43474 May 14 18:01:13.797395 sshd-session[2385]: pam_unix(sshd:session): session closed for user core May 14 18:01:13.799482 systemd[1]: sshd@6-10.200.20.4:22-10.200.16.10:43474.service: Deactivated successfully. May 14 18:01:13.801243 systemd[1]: session-9.scope: Deactivated successfully. May 14 18:01:13.801422 systemd[1]: session-9.scope: Consumed 3.646s CPU time, 293.3M memory peak. May 14 18:01:13.802515 systemd-logind[1872]: Session 9 logged out. Waiting for processes to exit. May 14 18:01:13.803637 systemd-logind[1872]: Removed session 9. May 14 18:01:23.658896 kubelet[3601]: I0514 18:01:23.658860 3601 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" May 14 18:01:23.659726 containerd[1906]: time="2025-05-14T18:01:23.659346937Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
May 14 18:01:23.660022 kubelet[3601]: I0514 18:01:23.659626 3601 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
May 14 18:01:24.428265 kubelet[3601]: I0514 18:01:24.427642 3601 topology_manager.go:215] "Topology Admit Handler" podUID="e5bfa5f1-bb54-455f-8554-7ce76e3c5e98" podNamespace="kube-system" podName="kube-proxy-ptbbn"
May 14 18:01:24.434697 kubelet[3601]: I0514 18:01:24.433718 3601 topology_manager.go:215] "Topology Admit Handler" podUID="e060d488-b501-4271-a9b5-49cf79a1f7a4" podNamespace="kube-system" podName="cilium-7xtq5"
May 14 18:01:24.437010 systemd[1]: Created slice kubepods-besteffort-pode5bfa5f1_bb54_455f_8554_7ce76e3c5e98.slice - libcontainer container kubepods-besteffort-pode5bfa5f1_bb54_455f_8554_7ce76e3c5e98.slice.
May 14 18:01:24.446829 systemd[1]: Created slice kubepods-burstable-pode060d488_b501_4271_a9b5_49cf79a1f7a4.slice - libcontainer container kubepods-burstable-pode060d488_b501_4271_a9b5_49cf79a1f7a4.slice.
May 14 18:01:24.449515 kubelet[3601]: I0514 18:01:24.449494 3601 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e060d488-b501-4271-a9b5-49cf79a1f7a4-xtables-lock\") pod \"cilium-7xtq5\" (UID: \"e060d488-b501-4271-a9b5-49cf79a1f7a4\") " pod="kube-system/cilium-7xtq5"
May 14 18:01:24.449727 kubelet[3601]: I0514 18:01:24.449709 3601 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/e5bfa5f1-bb54-455f-8554-7ce76e3c5e98-kube-proxy\") pod \"kube-proxy-ptbbn\" (UID: \"e5bfa5f1-bb54-455f-8554-7ce76e3c5e98\") " pod="kube-system/kube-proxy-ptbbn"
May 14 18:01:24.449974 kubelet[3601]: I0514 18:01:24.449888 3601 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/e060d488-b501-4271-a9b5-49cf79a1f7a4-hostproc\") pod \"cilium-7xtq5\" (UID: \"e060d488-b501-4271-a9b5-49cf79a1f7a4\") " pod="kube-system/cilium-7xtq5"
May 14 18:01:24.449974 kubelet[3601]: I0514 18:01:24.449911 3601 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e060d488-b501-4271-a9b5-49cf79a1f7a4-cilium-config-path\") pod \"cilium-7xtq5\" (UID: \"e060d488-b501-4271-a9b5-49cf79a1f7a4\") " pod="kube-system/cilium-7xtq5"
May 14 18:01:24.449974 kubelet[3601]: I0514 18:01:24.449924 3601 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/e060d488-b501-4271-a9b5-49cf79a1f7a4-host-proc-sys-net\") pod \"cilium-7xtq5\" (UID: \"e060d488-b501-4271-a9b5-49cf79a1f7a4\") " pod="kube-system/cilium-7xtq5"
May 14 18:01:24.449974 kubelet[3601]: I0514 18:01:24.449936 3601 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xn54t\" (UniqueName: \"kubernetes.io/projected/e5bfa5f1-bb54-455f-8554-7ce76e3c5e98-kube-api-access-xn54t\") pod \"kube-proxy-ptbbn\" (UID: \"e5bfa5f1-bb54-455f-8554-7ce76e3c5e98\") " pod="kube-system/kube-proxy-ptbbn"
May 14 18:01:24.449974 kubelet[3601]: I0514 18:01:24.449955 3601 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/e060d488-b501-4271-a9b5-49cf79a1f7a4-cni-path\") pod \"cilium-7xtq5\" (UID: \"e060d488-b501-4271-a9b5-49cf79a1f7a4\") " pod="kube-system/cilium-7xtq5"
May 14 18:01:24.450323 kubelet[3601]: I0514 18:01:24.450121 3601 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/e060d488-b501-4271-a9b5-49cf79a1f7a4-clustermesh-secrets\") pod \"cilium-7xtq5\" (UID: \"e060d488-b501-4271-a9b5-49cf79a1f7a4\") " pod="kube-system/cilium-7xtq5"
May 14 18:01:24.450323 kubelet[3601]: I0514 18:01:24.450148 3601 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/e060d488-b501-4271-a9b5-49cf79a1f7a4-cilium-run\") pod \"cilium-7xtq5\" (UID: \"e060d488-b501-4271-a9b5-49cf79a1f7a4\") " pod="kube-system/cilium-7xtq5"
May 14 18:01:24.450323 kubelet[3601]: I0514 18:01:24.450159 3601 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/e060d488-b501-4271-a9b5-49cf79a1f7a4-bpf-maps\") pod \"cilium-7xtq5\" (UID: \"e060d488-b501-4271-a9b5-49cf79a1f7a4\") " pod="kube-system/cilium-7xtq5"
May 14 18:01:24.450323 kubelet[3601]: I0514 18:01:24.450169 3601 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e060d488-b501-4271-a9b5-49cf79a1f7a4-lib-modules\") pod \"cilium-7xtq5\" (UID: \"e060d488-b501-4271-a9b5-49cf79a1f7a4\") " pod="kube-system/cilium-7xtq5"
May 14 18:01:24.450323 kubelet[3601]: I0514 18:01:24.450181 3601 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e5bfa5f1-bb54-455f-8554-7ce76e3c5e98-lib-modules\") pod \"kube-proxy-ptbbn\" (UID: \"e5bfa5f1-bb54-455f-8554-7ce76e3c5e98\") " pod="kube-system/kube-proxy-ptbbn"
May 14 18:01:24.450323 kubelet[3601]: I0514 18:01:24.450191 3601 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/e060d488-b501-4271-a9b5-49cf79a1f7a4-host-proc-sys-kernel\") pod \"cilium-7xtq5\" (UID: \"e060d488-b501-4271-a9b5-49cf79a1f7a4\") " pod="kube-system/cilium-7xtq5"
May 14 18:01:24.450441 kubelet[3601]: I0514 18:01:24.450204 3601 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/e060d488-b501-4271-a9b5-49cf79a1f7a4-hubble-tls\") pod \"cilium-7xtq5\" (UID: \"e060d488-b501-4271-a9b5-49cf79a1f7a4\") " pod="kube-system/cilium-7xtq5"
May 14 18:01:24.450441 kubelet[3601]: I0514 18:01:24.450213 3601 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e5bfa5f1-bb54-455f-8554-7ce76e3c5e98-xtables-lock\") pod \"kube-proxy-ptbbn\" (UID: \"e5bfa5f1-bb54-455f-8554-7ce76e3c5e98\") " pod="kube-system/kube-proxy-ptbbn"
May 14 18:01:24.450441 kubelet[3601]: I0514 18:01:24.450244 3601 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/e060d488-b501-4271-a9b5-49cf79a1f7a4-cilium-cgroup\") pod \"cilium-7xtq5\" (UID: \"e060d488-b501-4271-a9b5-49cf79a1f7a4\") " pod="kube-system/cilium-7xtq5"
May 14 18:01:24.450441 kubelet[3601]: I0514 18:01:24.450263 3601 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e060d488-b501-4271-a9b5-49cf79a1f7a4-etc-cni-netd\") pod \"cilium-7xtq5\" (UID: \"e060d488-b501-4271-a9b5-49cf79a1f7a4\") " pod="kube-system/cilium-7xtq5"
May 14 18:01:24.450441 kubelet[3601]: I0514 18:01:24.450272 3601 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zlqj2\" (UniqueName: \"kubernetes.io/projected/e060d488-b501-4271-a9b5-49cf79a1f7a4-kube-api-access-zlqj2\") pod \"cilium-7xtq5\" (UID: \"e060d488-b501-4271-a9b5-49cf79a1f7a4\") " pod="kube-system/cilium-7xtq5"
May 14 18:01:24.584385 kubelet[3601]: I0514 18:01:24.583278 3601 topology_manager.go:215] "Topology Admit Handler" podUID="687e4d4b-09d9-4e3b-b440-e44865fe207e" podNamespace="kube-system" podName="cilium-operator-599987898-gjt2n"
May 14 18:01:24.589156 systemd[1]: Created slice kubepods-besteffort-pod687e4d4b_09d9_4e3b_b440_e44865fe207e.slice - libcontainer container kubepods-besteffort-pod687e4d4b_09d9_4e3b_b440_e44865fe207e.slice.
May 14 18:01:24.651807 kubelet[3601]: I0514 18:01:24.651784 3601 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/687e4d4b-09d9-4e3b-b440-e44865fe207e-cilium-config-path\") pod \"cilium-operator-599987898-gjt2n\" (UID: \"687e4d4b-09d9-4e3b-b440-e44865fe207e\") " pod="kube-system/cilium-operator-599987898-gjt2n"
May 14 18:01:24.652018 kubelet[3601]: I0514 18:01:24.651991 3601 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cv6tk\" (UniqueName: \"kubernetes.io/projected/687e4d4b-09d9-4e3b-b440-e44865fe207e-kube-api-access-cv6tk\") pod \"cilium-operator-599987898-gjt2n\" (UID: \"687e4d4b-09d9-4e3b-b440-e44865fe207e\") " pod="kube-system/cilium-operator-599987898-gjt2n"
May 14 18:01:24.747130 containerd[1906]: time="2025-05-14T18:01:24.746836446Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-ptbbn,Uid:e5bfa5f1-bb54-455f-8554-7ce76e3c5e98,Namespace:kube-system,Attempt:0,}"
May 14 18:01:24.752477 containerd[1906]: time="2025-05-14T18:01:24.752288452Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-7xtq5,Uid:e060d488-b501-4271-a9b5-49cf79a1f7a4,Namespace:kube-system,Attempt:0,}"
May 14 18:01:24.882201 containerd[1906]: time="2025-05-14T18:01:24.882176030Z" level=info msg="connecting to shim 3c7b54f2a826f906b07a43ce62b6567cd7b4a558ee070eb4583de4c18d8bb48d" address="unix:///run/containerd/s/b7c5dd457823355b7069f31516b43611861bb82e7f3334cb44c512f4c161e67c" namespace=k8s.io protocol=ttrpc version=3
May 14 18:01:24.882387 containerd[1906]: time="2025-05-14T18:01:24.882286921Z" level=info msg="connecting to shim d1a34c701ca92b9ea33c4e6940a0f1a4f02f98d4bb9a52e7073c50938f07030f" address="unix:///run/containerd/s/c6eedafa17673411ed9e637afbd145ac9b4cc9faf4263454b0e8236e0183c493" namespace=k8s.io protocol=ttrpc version=3
May 14 18:01:24.893838 containerd[1906]: time="2025-05-14T18:01:24.893798989Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-gjt2n,Uid:687e4d4b-09d9-4e3b-b440-e44865fe207e,Namespace:kube-system,Attempt:0,}"
May 14 18:01:24.904092 systemd[1]: Started cri-containerd-3c7b54f2a826f906b07a43ce62b6567cd7b4a558ee070eb4583de4c18d8bb48d.scope - libcontainer container 3c7b54f2a826f906b07a43ce62b6567cd7b4a558ee070eb4583de4c18d8bb48d.
May 14 18:01:24.906028 systemd[1]: Started cri-containerd-d1a34c701ca92b9ea33c4e6940a0f1a4f02f98d4bb9a52e7073c50938f07030f.scope - libcontainer container d1a34c701ca92b9ea33c4e6940a0f1a4f02f98d4bb9a52e7073c50938f07030f.
May 14 18:01:24.941804 containerd[1906]: time="2025-05-14T18:01:24.941768060Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-7xtq5,Uid:e060d488-b501-4271-a9b5-49cf79a1f7a4,Namespace:kube-system,Attempt:0,} returns sandbox id \"3c7b54f2a826f906b07a43ce62b6567cd7b4a558ee070eb4583de4c18d8bb48d\""
May 14 18:01:24.943301 containerd[1906]: time="2025-05-14T18:01:24.943152266Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\""
May 14 18:01:24.961594 containerd[1906]: time="2025-05-14T18:01:24.961570292Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-ptbbn,Uid:e5bfa5f1-bb54-455f-8554-7ce76e3c5e98,Namespace:kube-system,Attempt:0,} returns sandbox id \"d1a34c701ca92b9ea33c4e6940a0f1a4f02f98d4bb9a52e7073c50938f07030f\""
May 14 18:01:24.963528 containerd[1906]: time="2025-05-14T18:01:24.963472897Z" level=info msg="CreateContainer within sandbox \"d1a34c701ca92b9ea33c4e6940a0f1a4f02f98d4bb9a52e7073c50938f07030f\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
May 14 18:01:25.036479 containerd[1906]: time="2025-05-14T18:01:25.036390005Z" level=info msg="connecting to shim 73002a4bd1b96fc8d78fff2ca0cca3fbb07886a15c8067cb011c69e282732b76" address="unix:///run/containerd/s/8c5868bc22626b8bf2303f2b828083b54ada6376bef524a47f916c159d1ae181" namespace=k8s.io protocol=ttrpc version=3
May 14 18:01:25.049105 containerd[1906]: time="2025-05-14T18:01:25.049083914Z" level=info msg="Container ea7c1a8aa76369a76aceb5fb6fd0084b2cf7468f00673bd52206117d5fc171c3: CDI devices from CRI Config.CDIDevices: []"
May 14 18:01:25.051066 systemd[1]: Started cri-containerd-73002a4bd1b96fc8d78fff2ca0cca3fbb07886a15c8067cb011c69e282732b76.scope - libcontainer container 73002a4bd1b96fc8d78fff2ca0cca3fbb07886a15c8067cb011c69e282732b76.
May 14 18:01:25.083868 containerd[1906]: time="2025-05-14T18:01:25.083833637Z" level=info msg="CreateContainer within sandbox \"d1a34c701ca92b9ea33c4e6940a0f1a4f02f98d4bb9a52e7073c50938f07030f\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"ea7c1a8aa76369a76aceb5fb6fd0084b2cf7468f00673bd52206117d5fc171c3\""
May 14 18:01:25.084405 containerd[1906]: time="2025-05-14T18:01:25.084386996Z" level=info msg="StartContainer for \"ea7c1a8aa76369a76aceb5fb6fd0084b2cf7468f00673bd52206117d5fc171c3\""
May 14 18:01:25.085526 containerd[1906]: time="2025-05-14T18:01:25.085462690Z" level=info msg="connecting to shim ea7c1a8aa76369a76aceb5fb6fd0084b2cf7468f00673bd52206117d5fc171c3" address="unix:///run/containerd/s/c6eedafa17673411ed9e637afbd145ac9b4cc9faf4263454b0e8236e0183c493" protocol=ttrpc version=3
May 14 18:01:25.087435 containerd[1906]: time="2025-05-14T18:01:25.087386991Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-gjt2n,Uid:687e4d4b-09d9-4e3b-b440-e44865fe207e,Namespace:kube-system,Attempt:0,} returns sandbox id \"73002a4bd1b96fc8d78fff2ca0cca3fbb07886a15c8067cb011c69e282732b76\""
May 14 18:01:25.101066 systemd[1]: Started cri-containerd-ea7c1a8aa76369a76aceb5fb6fd0084b2cf7468f00673bd52206117d5fc171c3.scope - libcontainer container ea7c1a8aa76369a76aceb5fb6fd0084b2cf7468f00673bd52206117d5fc171c3.
May 14 18:01:25.129642 containerd[1906]: time="2025-05-14T18:01:25.129617544Z" level=info msg="StartContainer for \"ea7c1a8aa76369a76aceb5fb6fd0084b2cf7468f00673bd52206117d5fc171c3\" returns successfully"
May 14 18:01:28.758894 kubelet[3601]: I0514 18:01:28.758806 3601 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-ptbbn" podStartSLOduration=4.758792052 podStartE2EDuration="4.758792052s" podCreationTimestamp="2025-05-14 18:01:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-14 18:01:25.81981386 +0000 UTC m=+17.139999577" watchObservedRunningTime="2025-05-14 18:01:28.758792052 +0000 UTC m=+20.078977761"
May 14 18:01:37.534350 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1755202753.mount: Deactivated successfully.
May 14 18:01:52.478984 containerd[1906]: time="2025-05-14T18:01:52.478925970Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 14 18:01:52.482569 containerd[1906]: time="2025-05-14T18:01:52.482539531Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157646710"
May 14 18:01:52.528693 containerd[1906]: time="2025-05-14T18:01:52.528653609Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 14 18:01:52.530093 containerd[1906]: time="2025-05-14T18:01:52.530009142Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 27.58683145s"
May 14 18:01:52.530093 containerd[1906]: time="2025-05-14T18:01:52.530035780Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\""
May 14 18:01:52.531005 containerd[1906]: time="2025-05-14T18:01:52.530981685Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\""
May 14 18:01:52.532086 containerd[1906]: time="2025-05-14T18:01:52.532058969Z" level=info msg="CreateContainer within sandbox \"3c7b54f2a826f906b07a43ce62b6567cd7b4a558ee070eb4583de4c18d8bb48d\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
May 14 18:01:52.743029 containerd[1906]: time="2025-05-14T18:01:52.742883592Z" level=info msg="Container e1ae58a8d412294f0ec0cc0db9dd6b13c6c9c2c2dd915697b9fa23e746ffa5d1: CDI devices from CRI Config.CDIDevices: []"
May 14 18:01:52.878299 containerd[1906]: time="2025-05-14T18:01:52.878251284Z" level=info msg="CreateContainer within sandbox \"3c7b54f2a826f906b07a43ce62b6567cd7b4a558ee070eb4583de4c18d8bb48d\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"e1ae58a8d412294f0ec0cc0db9dd6b13c6c9c2c2dd915697b9fa23e746ffa5d1\""
May 14 18:01:52.878986 containerd[1906]: time="2025-05-14T18:01:52.878945385Z" level=info msg="StartContainer for \"e1ae58a8d412294f0ec0cc0db9dd6b13c6c9c2c2dd915697b9fa23e746ffa5d1\""
May 14 18:01:52.880512 containerd[1906]: time="2025-05-14T18:01:52.880479659Z" level=info msg="connecting to shim e1ae58a8d412294f0ec0cc0db9dd6b13c6c9c2c2dd915697b9fa23e746ffa5d1" address="unix:///run/containerd/s/b7c5dd457823355b7069f31516b43611861bb82e7f3334cb44c512f4c161e67c" protocol=ttrpc version=3
May 14 18:01:52.898087 systemd[1]: Started cri-containerd-e1ae58a8d412294f0ec0cc0db9dd6b13c6c9c2c2dd915697b9fa23e746ffa5d1.scope - libcontainer container e1ae58a8d412294f0ec0cc0db9dd6b13c6c9c2c2dd915697b9fa23e746ffa5d1.
May 14 18:01:52.922267 systemd[1]: cri-containerd-e1ae58a8d412294f0ec0cc0db9dd6b13c6c9c2c2dd915697b9fa23e746ffa5d1.scope: Deactivated successfully.
May 14 18:01:52.923879 containerd[1906]: time="2025-05-14T18:01:52.923850009Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e1ae58a8d412294f0ec0cc0db9dd6b13c6c9c2c2dd915697b9fa23e746ffa5d1\" id:\"e1ae58a8d412294f0ec0cc0db9dd6b13c6c9c2c2dd915697b9fa23e746ffa5d1\" pid:4005 exited_at:{seconds:1747245712 nanos:922829096}"
May 14 18:01:52.986795 containerd[1906]: time="2025-05-14T18:01:52.986730350Z" level=info msg="received exit event container_id:\"e1ae58a8d412294f0ec0cc0db9dd6b13c6c9c2c2dd915697b9fa23e746ffa5d1\" id:\"e1ae58a8d412294f0ec0cc0db9dd6b13c6c9c2c2dd915697b9fa23e746ffa5d1\" pid:4005 exited_at:{seconds:1747245712 nanos:922829096}"
May 14 18:01:52.987213 containerd[1906]: time="2025-05-14T18:01:52.987155548Z" level=info msg="StartContainer for \"e1ae58a8d412294f0ec0cc0db9dd6b13c6c9c2c2dd915697b9fa23e746ffa5d1\" returns successfully"
May 14 18:01:52.999761 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e1ae58a8d412294f0ec0cc0db9dd6b13c6c9c2c2dd915697b9fa23e746ffa5d1-rootfs.mount: Deactivated successfully.
May 14 18:01:58.857425 containerd[1906]: time="2025-05-14T18:01:58.857377484Z" level=info msg="CreateContainer within sandbox \"3c7b54f2a826f906b07a43ce62b6567cd7b4a558ee070eb4583de4c18d8bb48d\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
May 14 18:01:58.991064 containerd[1906]: time="2025-05-14T18:01:58.990861559Z" level=info msg="Container 72fc64f3d778c225bd87127cbe3d6f5020d56a43f4e381bee98727666f7880eb: CDI devices from CRI Config.CDIDevices: []"
May 14 18:01:58.992957 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount939092445.mount: Deactivated successfully.
May 14 18:01:59.228610 containerd[1906]: time="2025-05-14T18:01:59.228426226Z" level=info msg="CreateContainer within sandbox \"3c7b54f2a826f906b07a43ce62b6567cd7b4a558ee070eb4583de4c18d8bb48d\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"72fc64f3d778c225bd87127cbe3d6f5020d56a43f4e381bee98727666f7880eb\""
May 14 18:01:59.228953 containerd[1906]: time="2025-05-14T18:01:59.228928425Z" level=info msg="StartContainer for \"72fc64f3d778c225bd87127cbe3d6f5020d56a43f4e381bee98727666f7880eb\""
May 14 18:01:59.229513 containerd[1906]: time="2025-05-14T18:01:59.229487424Z" level=info msg="connecting to shim 72fc64f3d778c225bd87127cbe3d6f5020d56a43f4e381bee98727666f7880eb" address="unix:///run/containerd/s/b7c5dd457823355b7069f31516b43611861bb82e7f3334cb44c512f4c161e67c" protocol=ttrpc version=3
May 14 18:01:59.307065 systemd[1]: Started cri-containerd-72fc64f3d778c225bd87127cbe3d6f5020d56a43f4e381bee98727666f7880eb.scope - libcontainer container 72fc64f3d778c225bd87127cbe3d6f5020d56a43f4e381bee98727666f7880eb.
May 14 18:01:59.341451 systemd[1]: systemd-sysctl.service: Deactivated successfully.
May 14 18:01:59.341610 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
May 14 18:01:59.341809 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables...
May 14 18:01:59.343837 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
May 14 18:01:59.345001 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
May 14 18:01:59.345445 systemd[1]: cri-containerd-72fc64f3d778c225bd87127cbe3d6f5020d56a43f4e381bee98727666f7880eb.scope: Deactivated successfully.
May 14 18:01:59.348820 containerd[1906]: time="2025-05-14T18:01:59.348796225Z" level=info msg="TaskExit event in podsandbox handler container_id:\"72fc64f3d778c225bd87127cbe3d6f5020d56a43f4e381bee98727666f7880eb\" id:\"72fc64f3d778c225bd87127cbe3d6f5020d56a43f4e381bee98727666f7880eb\" pid:4052 exited_at:{seconds:1747245719 nanos:347418674}"
May 14 18:01:59.361138 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
May 14 18:01:59.383898 containerd[1906]: time="2025-05-14T18:01:59.383814786Z" level=info msg="received exit event container_id:\"72fc64f3d778c225bd87127cbe3d6f5020d56a43f4e381bee98727666f7880eb\" id:\"72fc64f3d778c225bd87127cbe3d6f5020d56a43f4e381bee98727666f7880eb\" pid:4052 exited_at:{seconds:1747245719 nanos:347418674}"
May 14 18:01:59.388432 containerd[1906]: time="2025-05-14T18:01:59.388397740Z" level=info msg="StartContainer for \"72fc64f3d778c225bd87127cbe3d6f5020d56a43f4e381bee98727666f7880eb\" returns successfully"
May 14 18:01:59.989971 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-72fc64f3d778c225bd87127cbe3d6f5020d56a43f4e381bee98727666f7880eb-rootfs.mount: Deactivated successfully.
May 14 18:02:00.863037 containerd[1906]: time="2025-05-14T18:02:00.862994209Z" level=info msg="CreateContainer within sandbox \"3c7b54f2a826f906b07a43ce62b6567cd7b4a558ee070eb4583de4c18d8bb48d\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
May 14 18:02:01.178687 containerd[1906]: time="2025-05-14T18:02:01.178589154Z" level=info msg="Container d43a1e52de42c300c6817d93184f7fd8274e718e9857d8a2c8791806b7e45c9f: CDI devices from CRI Config.CDIDevices: []"
May 14 18:02:01.384856 containerd[1906]: time="2025-05-14T18:02:01.384824157Z" level=info msg="CreateContainer within sandbox \"3c7b54f2a826f906b07a43ce62b6567cd7b4a558ee070eb4583de4c18d8bb48d\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"d43a1e52de42c300c6817d93184f7fd8274e718e9857d8a2c8791806b7e45c9f\""
May 14 18:02:01.385675 containerd[1906]: time="2025-05-14T18:02:01.385174939Z" level=info msg="StartContainer for \"d43a1e52de42c300c6817d93184f7fd8274e718e9857d8a2c8791806b7e45c9f\""
May 14 18:02:01.386952 containerd[1906]: time="2025-05-14T18:02:01.386900966Z" level=info msg="connecting to shim d43a1e52de42c300c6817d93184f7fd8274e718e9857d8a2c8791806b7e45c9f" address="unix:///run/containerd/s/b7c5dd457823355b7069f31516b43611861bb82e7f3334cb44c512f4c161e67c" protocol=ttrpc version=3
May 14 18:02:01.404081 systemd[1]: Started cri-containerd-d43a1e52de42c300c6817d93184f7fd8274e718e9857d8a2c8791806b7e45c9f.scope - libcontainer container d43a1e52de42c300c6817d93184f7fd8274e718e9857d8a2c8791806b7e45c9f.
May 14 18:02:01.434636 systemd[1]: cri-containerd-d43a1e52de42c300c6817d93184f7fd8274e718e9857d8a2c8791806b7e45c9f.scope: Deactivated successfully.
May 14 18:02:01.436636 containerd[1906]: time="2025-05-14T18:02:01.436606839Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d43a1e52de42c300c6817d93184f7fd8274e718e9857d8a2c8791806b7e45c9f\" id:\"d43a1e52de42c300c6817d93184f7fd8274e718e9857d8a2c8791806b7e45c9f\" pid:4099 exited_at:{seconds:1747245721 nanos:436419199}"
May 14 18:02:02.037501 containerd[1906]: time="2025-05-14T18:02:02.037384211Z" level=info msg="received exit event container_id:\"d43a1e52de42c300c6817d93184f7fd8274e718e9857d8a2c8791806b7e45c9f\" id:\"d43a1e52de42c300c6817d93184f7fd8274e718e9857d8a2c8791806b7e45c9f\" pid:4099 exited_at:{seconds:1747245721 nanos:436419199}"
May 14 18:02:02.045486 containerd[1906]: time="2025-05-14T18:02:02.045461519Z" level=info msg="StartContainer for \"d43a1e52de42c300c6817d93184f7fd8274e718e9857d8a2c8791806b7e45c9f\" returns successfully"
May 14 18:02:02.053886 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d43a1e52de42c300c6817d93184f7fd8274e718e9857d8a2c8791806b7e45c9f-rootfs.mount: Deactivated successfully.
May 14 18:02:03.331264 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount139718484.mount: Deactivated successfully.
May 14 18:02:04.075757 containerd[1906]: time="2025-05-14T18:02:04.075725932Z" level=info msg="CreateContainer within sandbox \"3c7b54f2a826f906b07a43ce62b6567cd7b4a558ee070eb4583de4c18d8bb48d\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
May 14 18:02:04.594772 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2545571078.mount: Deactivated successfully.
May 14 18:02:04.595650 containerd[1906]: time="2025-05-14T18:02:04.595586085Z" level=info msg="Container 372c6c03a4a22780f92f7fff88d8861fa6a7d10163549424822865418eb0f999: CDI devices from CRI Config.CDIDevices: []"
May 14 18:02:04.728365 containerd[1906]: time="2025-05-14T18:02:04.728309501Z" level=info msg="CreateContainer within sandbox \"3c7b54f2a826f906b07a43ce62b6567cd7b4a558ee070eb4583de4c18d8bb48d\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"372c6c03a4a22780f92f7fff88d8861fa6a7d10163549424822865418eb0f999\""
May 14 18:02:04.729123 containerd[1906]: time="2025-05-14T18:02:04.729094654Z" level=info msg="StartContainer for \"372c6c03a4a22780f92f7fff88d8861fa6a7d10163549424822865418eb0f999\""
May 14 18:02:04.731476 containerd[1906]: time="2025-05-14T18:02:04.731432928Z" level=info msg="connecting to shim 372c6c03a4a22780f92f7fff88d8861fa6a7d10163549424822865418eb0f999" address="unix:///run/containerd/s/b7c5dd457823355b7069f31516b43611861bb82e7f3334cb44c512f4c161e67c" protocol=ttrpc version=3
May 14 18:02:04.750079 systemd[1]: Started cri-containerd-372c6c03a4a22780f92f7fff88d8861fa6a7d10163549424822865418eb0f999.scope - libcontainer container 372c6c03a4a22780f92f7fff88d8861fa6a7d10163549424822865418eb0f999.
May 14 18:02:04.773694 systemd[1]: cri-containerd-372c6c03a4a22780f92f7fff88d8861fa6a7d10163549424822865418eb0f999.scope: Deactivated successfully.
May 14 18:02:04.777598 containerd[1906]: time="2025-05-14T18:02:04.776684902Z" level=info msg="TaskExit event in podsandbox handler container_id:\"372c6c03a4a22780f92f7fff88d8861fa6a7d10163549424822865418eb0f999\" id:\"372c6c03a4a22780f92f7fff88d8861fa6a7d10163549424822865418eb0f999\" pid:4155 exited_at:{seconds:1747245724 nanos:775567874}"
May 14 18:02:04.777598 containerd[1906]: time="2025-05-14T18:02:04.777140004Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode060d488_b501_4271_a9b5_49cf79a1f7a4.slice/cri-containerd-372c6c03a4a22780f92f7fff88d8861fa6a7d10163549424822865418eb0f999.scope/memory.events\": no such file or directory"
May 14 18:02:04.785258 containerd[1906]: time="2025-05-14T18:02:04.785178443Z" level=info msg="received exit event container_id:\"372c6c03a4a22780f92f7fff88d8861fa6a7d10163549424822865418eb0f999\" id:\"372c6c03a4a22780f92f7fff88d8861fa6a7d10163549424822865418eb0f999\" pid:4155 exited_at:{seconds:1747245724 nanos:775567874}"
May 14 18:02:04.786837 containerd[1906]: time="2025-05-14T18:02:04.786820192Z" level=info msg="StartContainer for \"372c6c03a4a22780f92f7fff88d8861fa6a7d10163549424822865418eb0f999\" returns successfully"
May 14 18:02:04.805098 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-372c6c03a4a22780f92f7fff88d8861fa6a7d10163549424822865418eb0f999-rootfs.mount: Deactivated successfully.
May 14 18:02:08.136186 containerd[1906]: time="2025-05-14T18:02:08.136128401Z" level=info msg="CreateContainer within sandbox \"3c7b54f2a826f906b07a43ce62b6567cd7b4a558ee070eb4583de4c18d8bb48d\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
May 14 18:02:08.530128 containerd[1906]: time="2025-05-14T18:02:08.530062901Z" level=info msg="Container 6c93eb89e7392910aa747605d1310d40b94d6e426e15fc208375417bc6e0a5d0: CDI devices from CRI Config.CDIDevices: []"
May 14 18:02:08.532058 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount362790757.mount: Deactivated successfully.
May 14 18:02:08.778053 containerd[1906]: time="2025-05-14T18:02:08.777941439Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 14 18:02:08.831197 containerd[1906]: time="2025-05-14T18:02:08.829912866Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17135306"
May 14 18:02:08.875182 containerd[1906]: time="2025-05-14T18:02:08.875130522Z" level=info msg="CreateContainer within sandbox \"3c7b54f2a826f906b07a43ce62b6567cd7b4a558ee070eb4583de4c18d8bb48d\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"6c93eb89e7392910aa747605d1310d40b94d6e426e15fc208375417bc6e0a5d0\""
May 14 18:02:08.876644 containerd[1906]: time="2025-05-14T18:02:08.876600466Z" level=info msg="StartContainer for \"6c93eb89e7392910aa747605d1310d40b94d6e426e15fc208375417bc6e0a5d0\""
May 14 18:02:08.877757 containerd[1906]: time="2025-05-14T18:02:08.877732254Z" level=info msg="connecting to shim 6c93eb89e7392910aa747605d1310d40b94d6e426e15fc208375417bc6e0a5d0" address="unix:///run/containerd/s/b7c5dd457823355b7069f31516b43611861bb82e7f3334cb44c512f4c161e67c" protocol=ttrpc version=3
May 14 18:02:08.881989 containerd[1906]: time="2025-05-14T18:02:08.881526657Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 14 18:02:08.883112 containerd[1906]: time="2025-05-14T18:02:08.883088475Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 16.351944166s"
May 14 18:02:08.883298 containerd[1906]: time="2025-05-14T18:02:08.883282289Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\""
May 14 18:02:08.885057 containerd[1906]: time="2025-05-14T18:02:08.885038546Z" level=info msg="CreateContainer within sandbox \"73002a4bd1b96fc8d78fff2ca0cca3fbb07886a15c8067cb011c69e282732b76\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
May 14 18:02:08.901067 systemd[1]: Started cri-containerd-6c93eb89e7392910aa747605d1310d40b94d6e426e15fc208375417bc6e0a5d0.scope - libcontainer container 6c93eb89e7392910aa747605d1310d40b94d6e426e15fc208375417bc6e0a5d0.
May 14 18:02:08.986790 containerd[1906]: time="2025-05-14T18:02:08.986740422Z" level=info msg="StartContainer for \"6c93eb89e7392910aa747605d1310d40b94d6e426e15fc208375417bc6e0a5d0\" returns successfully"
May 14 18:02:09.026905 containerd[1906]: time="2025-05-14T18:02:09.026865603Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6c93eb89e7392910aa747605d1310d40b94d6e426e15fc208375417bc6e0a5d0\" id:\"d763566fca33c977bfba9b41453aa45e9603ce291050c84776b00e93372c86bf\" pid:4241 exited_at:{seconds:1747245729 nanos:26539449}"
May 14 18:02:09.061801 kubelet[3601]: I0514 18:02:09.061721 3601 kubelet_node_status.go:497] "Fast updating node status as it just became ready"
May 14 18:02:09.131590 kubelet[3601]: I0514 18:02:09.110389 3601 topology_manager.go:215] "Topology Admit Handler" podUID="14a71eac-27d6-4480-9eaa-b869c79e98b2" podNamespace="kube-system" podName="coredns-7db6d8ff4d-84j4w"
May 14 18:02:09.131590 kubelet[3601]: I0514 18:02:09.115525 3601 topology_manager.go:215] "Topology Admit Handler" podUID="04dfe0a9-dfb1-4657-992a-6b0db9056413" podNamespace="kube-system" podName="coredns-7db6d8ff4d-zspxb"
May 14 18:02:09.116824 systemd[1]: Created slice kubepods-burstable-pod14a71eac_27d6_4480_9eaa_b869c79e98b2.slice - libcontainer container kubepods-burstable-pod14a71eac_27d6_4480_9eaa_b869c79e98b2.slice.
May 14 18:02:09.122326 systemd[1]: Created slice kubepods-burstable-pod04dfe0a9_dfb1_4657_992a_6b0db9056413.slice - libcontainer container kubepods-burstable-pod04dfe0a9_dfb1_4657_992a_6b0db9056413.slice.
May 14 18:02:09.138345 containerd[1906]: time="2025-05-14T18:02:09.138293001Z" level=info msg="Container fbead3f35dd919e7f640e619845a2af370b4a7834e7f7e7dae7b9fe47c7a6e1e: CDI devices from CRI Config.CDIDevices: []"
May 14 18:02:09.153179 kubelet[3601]: I0514 18:02:09.153114 3601 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-7xtq5" podStartSLOduration=17.56520325 podStartE2EDuration="45.153097758s" podCreationTimestamp="2025-05-14 18:01:24 +0000 UTC" firstStartedPulling="2025-05-14 18:01:24.942858858 +0000 UTC m=+16.263044567" lastFinishedPulling="2025-05-14 18:01:52.530753334 +0000 UTC m=+43.850939075" observedRunningTime="2025-05-14 18:02:09.152148304 +0000 UTC m=+60.472334021" watchObservedRunningTime="2025-05-14 18:02:09.153097758 +0000 UTC m=+60.473283467"
May 14 18:02:09.250923 kubelet[3601]: I0514 18:02:09.250717 3601 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vv224\" (UniqueName: \"kubernetes.io/projected/14a71eac-27d6-4480-9eaa-b869c79e98b2-kube-api-access-vv224\") pod \"coredns-7db6d8ff4d-84j4w\" (UID: \"14a71eac-27d6-4480-9eaa-b869c79e98b2\") " pod="kube-system/coredns-7db6d8ff4d-84j4w"
May 14 18:02:09.250923 kubelet[3601]: I0514 18:02:09.250749 3601 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rc9q6\" (UniqueName: \"kubernetes.io/projected/04dfe0a9-dfb1-4657-992a-6b0db9056413-kube-api-access-rc9q6\") pod \"coredns-7db6d8ff4d-zspxb\" (UID: \"04dfe0a9-dfb1-4657-992a-6b0db9056413\") " pod="kube-system/coredns-7db6d8ff4d-zspxb"
May 14 18:02:09.250923 kubelet[3601]: I0514 18:02:09.250768 3601 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/14a71eac-27d6-4480-9eaa-b869c79e98b2-config-volume\") pod \"coredns-7db6d8ff4d-84j4w\" (UID: \"14a71eac-27d6-4480-9eaa-b869c79e98b2\") " pod="kube-system/coredns-7db6d8ff4d-84j4w"
May 14 18:02:09.250923 kubelet[3601]: I0514 18:02:09.250786 3601 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/04dfe0a9-dfb1-4657-992a-6b0db9056413-config-volume\") pod \"coredns-7db6d8ff4d-zspxb\" (UID: \"04dfe0a9-dfb1-4657-992a-6b0db9056413\") " pod="kube-system/coredns-7db6d8ff4d-zspxb"
May 14 18:02:09.335159 containerd[1906]: time="2025-05-14T18:02:09.335126159Z" level=info msg="CreateContainer within sandbox \"73002a4bd1b96fc8d78fff2ca0cca3fbb07886a15c8067cb011c69e282732b76\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"fbead3f35dd919e7f640e619845a2af370b4a7834e7f7e7dae7b9fe47c7a6e1e\""
May 14 18:02:09.336377 containerd[1906]: time="2025-05-14T18:02:09.335619423Z" level=info msg="StartContainer for \"fbead3f35dd919e7f640e619845a2af370b4a7834e7f7e7dae7b9fe47c7a6e1e\""
May 14 18:02:09.336377 containerd[1906]: time="2025-05-14T18:02:09.336183249Z" level=info msg="connecting to shim fbead3f35dd919e7f640e619845a2af370b4a7834e7f7e7dae7b9fe47c7a6e1e" address="unix:///run/containerd/s/8c5868bc22626b8bf2303f2b828083b54ada6376bef524a47f916c159d1ae181" protocol=ttrpc version=3
May 14 18:02:09.353077 systemd[1]: Started cri-containerd-fbead3f35dd919e7f640e619845a2af370b4a7834e7f7e7dae7b9fe47c7a6e1e.scope - libcontainer container fbead3f35dd919e7f640e619845a2af370b4a7834e7f7e7dae7b9fe47c7a6e1e.
May 14 18:02:09.383018 containerd[1906]: time="2025-05-14T18:02:09.382908931Z" level=info msg="StartContainer for \"fbead3f35dd919e7f640e619845a2af370b4a7834e7f7e7dae7b9fe47c7a6e1e\" returns successfully"
May 14 18:02:09.433394 containerd[1906]: time="2025-05-14T18:02:09.433368940Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-zspxb,Uid:04dfe0a9-dfb1-4657-992a-6b0db9056413,Namespace:kube-system,Attempt:0,}"
May 14 18:02:09.433776 containerd[1906]: time="2025-05-14T18:02:09.433724256Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-84j4w,Uid:14a71eac-27d6-4480-9eaa-b869c79e98b2,Namespace:kube-system,Attempt:0,}"
May 14 18:02:10.153060 kubelet[3601]: I0514 18:02:10.153003 3601 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-599987898-gjt2n" podStartSLOduration=2.357649095 podStartE2EDuration="46.152954356s" podCreationTimestamp="2025-05-14 18:01:24 +0000 UTC" firstStartedPulling="2025-05-14 18:01:25.088552895 +0000 UTC m=+16.408738604" lastFinishedPulling="2025-05-14 18:02:08.883858156 +0000 UTC m=+60.204043865" observedRunningTime="2025-05-14 18:02:10.152810855 +0000 UTC m=+61.472996564" watchObservedRunningTime="2025-05-14 18:02:10.152954356 +0000 UTC m=+61.473140073"
May 14 18:02:13.028312 systemd-networkd[1480]: cilium_host: Link UP
May 14 18:02:13.029097 systemd-networkd[1480]: cilium_net: Link UP
May 14 18:02:13.030182 systemd-networkd[1480]: cilium_net: Gained carrier
May 14 18:02:13.031059 systemd-networkd[1480]: cilium_host: Gained carrier
May 14 18:02:13.166459 systemd-networkd[1480]: cilium_vxlan: Link UP
May 14 18:02:13.166463 systemd-networkd[1480]: cilium_vxlan: Gained carrier
May 14 18:02:13.413987 kernel: NET: Registered PF_ALG protocol family
May 14 18:02:13.458146 systemd-networkd[1480]: cilium_net: Gained IPv6LL
May 14 18:02:13.514107 systemd-networkd[1480]: cilium_host: Gained IPv6LL
May 14 18:02:13.849670 systemd-networkd[1480]: lxc_health: Link UP
May 14 18:02:13.862200 systemd-networkd[1480]: lxc_health: Gained carrier
May 14 18:02:14.055939 systemd-networkd[1480]: lxcfce165d9ff93: Link UP
May 14 18:02:14.069087 kernel: eth0: renamed from tmp03c82
May 14 18:02:14.070217 systemd-networkd[1480]: lxcfce165d9ff93: Gained carrier
May 14 18:02:14.104791 systemd-networkd[1480]: lxcc89021ea4cb1: Link UP
May 14 18:02:14.110976 kernel: eth0: renamed from tmp5512b
May 14 18:02:14.111182 systemd-networkd[1480]: lxcc89021ea4cb1: Gained carrier
May 14 18:02:14.883159 systemd-networkd[1480]: cilium_vxlan: Gained IPv6LL
May 14 18:02:15.458488 systemd-networkd[1480]: lxc_health: Gained IPv6LL
May 14 18:02:15.522145 systemd-networkd[1480]: lxcc89021ea4cb1: Gained IPv6LL
May 14 18:02:16.099123 systemd-networkd[1480]: lxcfce165d9ff93: Gained IPv6LL
May 14 18:02:17.101439 containerd[1906]: time="2025-05-14T18:02:17.101382029Z" level=info msg="connecting to shim 03c825d11e2624138018c83b5c3a20943a901d1a6a1da77199d9f15e792bdbf3" address="unix:///run/containerd/s/96cf9df84a81e06aea9d5a3ecb8f61429dc1d167e82f07d51b34738b1b2fcc49" namespace=k8s.io protocol=ttrpc version=3
May 14 18:02:17.121075 systemd[1]: Started cri-containerd-03c825d11e2624138018c83b5c3a20943a901d1a6a1da77199d9f15e792bdbf3.scope - libcontainer container 03c825d11e2624138018c83b5c3a20943a901d1a6a1da77199d9f15e792bdbf3.
May 14 18:02:17.147844 containerd[1906]: time="2025-05-14T18:02:17.147798238Z" level=info msg="connecting to shim 5512b3aa601e1db228d8524722c1afba95a1a44b689346f685e67581c6d3fab1" address="unix:///run/containerd/s/1c7fe51c2f8825e36c1af7a57703ae1daeaaf3a91c9c54819aa32779d873cfd6" namespace=k8s.io protocol=ttrpc version=3
May 14 18:02:17.172089 systemd[1]: Started cri-containerd-5512b3aa601e1db228d8524722c1afba95a1a44b689346f685e67581c6d3fab1.scope - libcontainer container 5512b3aa601e1db228d8524722c1afba95a1a44b689346f685e67581c6d3fab1.
May 14 18:02:17.185145 containerd[1906]: time="2025-05-14T18:02:17.184952868Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-zspxb,Uid:04dfe0a9-dfb1-4657-992a-6b0db9056413,Namespace:kube-system,Attempt:0,} returns sandbox id \"03c825d11e2624138018c83b5c3a20943a901d1a6a1da77199d9f15e792bdbf3\""
May 14 18:02:17.188324 containerd[1906]: time="2025-05-14T18:02:17.188273327Z" level=info msg="CreateContainer within sandbox \"03c825d11e2624138018c83b5c3a20943a901d1a6a1da77199d9f15e792bdbf3\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
May 14 18:02:17.235484 containerd[1906]: time="2025-05-14T18:02:17.235406735Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-84j4w,Uid:14a71eac-27d6-4480-9eaa-b869c79e98b2,Namespace:kube-system,Attempt:0,} returns sandbox id \"5512b3aa601e1db228d8524722c1afba95a1a44b689346f685e67581c6d3fab1\""
May 14 18:02:17.238049 containerd[1906]: time="2025-05-14T18:02:17.238018123Z" level=info msg="CreateContainer within sandbox \"5512b3aa601e1db228d8524722c1afba95a1a44b689346f685e67581c6d3fab1\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
May 14 18:02:17.580676 containerd[1906]: time="2025-05-14T18:02:17.580253400Z" level=info msg="Container 3478817864c6655573d5015cea1d0c78e1e8f8c9701f0b499ec240679c5085df: CDI devices from CRI Config.CDIDevices: []"
May 14 18:02:17.681042 containerd[1906]: time="2025-05-14T18:02:17.681008081Z" level=info msg="Container 7a7bc55aa2d5c88eac8ecd47ec538fb9a3a1bdd14337523690c5c18db03d2b3a: CDI devices from CRI Config.CDIDevices: []"
May 14 18:02:17.830565 containerd[1906]: time="2025-05-14T18:02:17.830525607Z" level=info msg="CreateContainer within sandbox \"03c825d11e2624138018c83b5c3a20943a901d1a6a1da77199d9f15e792bdbf3\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"3478817864c6655573d5015cea1d0c78e1e8f8c9701f0b499ec240679c5085df\""
May 14 18:02:17.833188 containerd[1906]: time="2025-05-14T18:02:17.831578769Z" level=info msg="StartContainer for \"3478817864c6655573d5015cea1d0c78e1e8f8c9701f0b499ec240679c5085df\""
May 14 18:02:17.833681 containerd[1906]: time="2025-05-14T18:02:17.833658644Z" level=info msg="connecting to shim 3478817864c6655573d5015cea1d0c78e1e8f8c9701f0b499ec240679c5085df" address="unix:///run/containerd/s/96cf9df84a81e06aea9d5a3ecb8f61429dc1d167e82f07d51b34738b1b2fcc49" protocol=ttrpc version=3
May 14 18:02:17.835636 containerd[1906]: time="2025-05-14T18:02:17.835608435Z" level=info msg="CreateContainer within sandbox \"5512b3aa601e1db228d8524722c1afba95a1a44b689346f685e67581c6d3fab1\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"7a7bc55aa2d5c88eac8ecd47ec538fb9a3a1bdd14337523690c5c18db03d2b3a\""
May 14 18:02:17.836602 containerd[1906]: time="2025-05-14T18:02:17.836441494Z" level=info msg="StartContainer for \"7a7bc55aa2d5c88eac8ecd47ec538fb9a3a1bdd14337523690c5c18db03d2b3a\""
May 14 18:02:17.837759 containerd[1906]: time="2025-05-14T18:02:17.837733936Z" level=info msg="connecting to shim 7a7bc55aa2d5c88eac8ecd47ec538fb9a3a1bdd14337523690c5c18db03d2b3a" address="unix:///run/containerd/s/1c7fe51c2f8825e36c1af7a57703ae1daeaaf3a91c9c54819aa32779d873cfd6" protocol=ttrpc version=3
May 14 18:02:17.853079 systemd[1]: Started cri-containerd-3478817864c6655573d5015cea1d0c78e1e8f8c9701f0b499ec240679c5085df.scope - libcontainer container 3478817864c6655573d5015cea1d0c78e1e8f8c9701f0b499ec240679c5085df.
May 14 18:02:17.855498 systemd[1]: Started cri-containerd-7a7bc55aa2d5c88eac8ecd47ec538fb9a3a1bdd14337523690c5c18db03d2b3a.scope - libcontainer container 7a7bc55aa2d5c88eac8ecd47ec538fb9a3a1bdd14337523690c5c18db03d2b3a.
May 14 18:02:17.892215 containerd[1906]: time="2025-05-14T18:02:17.892184356Z" level=info msg="StartContainer for \"7a7bc55aa2d5c88eac8ecd47ec538fb9a3a1bdd14337523690c5c18db03d2b3a\" returns successfully"
May 14 18:02:17.892309 containerd[1906]: time="2025-05-14T18:02:17.892235550Z" level=info msg="StartContainer for \"3478817864c6655573d5015cea1d0c78e1e8f8c9701f0b499ec240679c5085df\" returns successfully"
May 14 18:02:18.171272 kubelet[3601]: I0514 18:02:18.171159 3601 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-84j4w" podStartSLOduration=54.171143137 podStartE2EDuration="54.171143137s" podCreationTimestamp="2025-05-14 18:01:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-14 18:02:18.170173065 +0000 UTC m=+69.490358814" watchObservedRunningTime="2025-05-14 18:02:18.171143137 +0000 UTC m=+69.491328846"
May 14 18:02:18.202179 kubelet[3601]: I0514 18:02:18.202047 3601 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-zspxb" podStartSLOduration=54.202034069 podStartE2EDuration="54.202034069s" podCreationTimestamp="2025-05-14 18:01:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-14 18:02:18.201123255 +0000 UTC m=+69.521308972" watchObservedRunningTime="2025-05-14 18:02:18.202034069 +0000 UTC m=+69.522219786"
May 14 18:03:16.813997 systemd[1]: Started sshd@7-10.200.20.4:22-10.200.16.10:37214.service - OpenSSH per-connection server daemon (10.200.16.10:37214).
May 14 18:03:17.231628 sshd[4918]: Accepted publickey for core from 10.200.16.10 port 37214 ssh2: RSA SHA256:GfAM5aEyZtKI1wsLTx07KX72HiGyI6L7Lx/Fls8o8zc
May 14 18:03:17.233359 sshd-session[4918]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 18:03:17.237257 systemd-logind[1872]: New session 10 of user core.
May 14 18:03:17.250081 systemd[1]: Started session-10.scope - Session 10 of User core.
May 14 18:03:17.580037 sshd[4921]: Connection closed by 10.200.16.10 port 37214
May 14 18:03:17.579553 sshd-session[4918]: pam_unix(sshd:session): session closed for user core
May 14 18:03:17.582797 systemd-logind[1872]: Session 10 logged out. Waiting for processes to exit.
May 14 18:03:17.582933 systemd[1]: sshd@7-10.200.20.4:22-10.200.16.10:37214.service: Deactivated successfully.
May 14 18:03:17.584499 systemd[1]: session-10.scope: Deactivated successfully.
May 14 18:03:17.588007 systemd-logind[1872]: Removed session 10.
May 14 18:03:22.655270 systemd[1]: Started sshd@8-10.200.20.4:22-10.200.16.10:34834.service - OpenSSH per-connection server daemon (10.200.16.10:34834).
May 14 18:03:23.063262 sshd[4936]: Accepted publickey for core from 10.200.16.10 port 34834 ssh2: RSA SHA256:GfAM5aEyZtKI1wsLTx07KX72HiGyI6L7Lx/Fls8o8zc
May 14 18:03:23.064311 sshd-session[4936]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 18:03:23.067900 systemd-logind[1872]: New session 11 of user core.
May 14 18:03:23.073069 systemd[1]: Started session-11.scope - Session 11 of User core.
May 14 18:03:23.403087 sshd[4938]: Connection closed by 10.200.16.10 port 34834
May 14 18:03:23.403635 sshd-session[4936]: pam_unix(sshd:session): session closed for user core
May 14 18:03:23.406687 systemd[1]: sshd@8-10.200.20.4:22-10.200.16.10:34834.service: Deactivated successfully.
May 14 18:03:23.408301 systemd[1]: session-11.scope: Deactivated successfully.
May 14 18:03:23.410124 systemd-logind[1872]: Session 11 logged out. Waiting for processes to exit.
May 14 18:03:23.411309 systemd-logind[1872]: Removed session 11.
May 14 18:03:28.478171 systemd[1]: Started sshd@9-10.200.20.4:22-10.200.16.10:39464.service - OpenSSH per-connection server daemon (10.200.16.10:39464).
May 14 18:03:28.886739 sshd[4953]: Accepted publickey for core from 10.200.16.10 port 39464 ssh2: RSA SHA256:GfAM5aEyZtKI1wsLTx07KX72HiGyI6L7Lx/Fls8o8zc
May 14 18:03:28.887809 sshd-session[4953]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 18:03:28.891796 systemd-logind[1872]: New session 12 of user core.
May 14 18:03:28.898095 systemd[1]: Started session-12.scope - Session 12 of User core.
May 14 18:03:29.223060 sshd[4955]: Connection closed by 10.200.16.10 port 39464
May 14 18:03:29.223708 sshd-session[4953]: pam_unix(sshd:session): session closed for user core
May 14 18:03:29.227133 systemd[1]: sshd@9-10.200.20.4:22-10.200.16.10:39464.service: Deactivated successfully.
May 14 18:03:29.228613 systemd[1]: session-12.scope: Deactivated successfully.
May 14 18:03:29.229242 systemd-logind[1872]: Session 12 logged out. Waiting for processes to exit.
May 14 18:03:29.230822 systemd-logind[1872]: Removed session 12.
May 14 18:03:34.304764 systemd[1]: Started sshd@10-10.200.20.4:22-10.200.16.10:39470.service - OpenSSH per-connection server daemon (10.200.16.10:39470).
May 14 18:03:34.751762 sshd[4968]: Accepted publickey for core from 10.200.16.10 port 39470 ssh2: RSA SHA256:GfAM5aEyZtKI1wsLTx07KX72HiGyI6L7Lx/Fls8o8zc
May 14 18:03:34.753077 sshd-session[4968]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 18:03:34.757130 systemd-logind[1872]: New session 13 of user core.
May 14 18:03:34.761085 systemd[1]: Started session-13.scope - Session 13 of User core.
May 14 18:03:35.117787 sshd[4970]: Connection closed by 10.200.16.10 port 39470
May 14 18:03:35.117064 sshd-session[4968]: pam_unix(sshd:session): session closed for user core
May 14 18:03:35.119599 systemd[1]: sshd@10-10.200.20.4:22-10.200.16.10:39470.service: Deactivated successfully.
May 14 18:03:35.121473 systemd[1]: session-13.scope: Deactivated successfully.
May 14 18:03:35.123334 systemd-logind[1872]: Session 13 logged out. Waiting for processes to exit.
May 14 18:03:35.124974 systemd-logind[1872]: Removed session 13.
May 14 18:03:35.200249 systemd[1]: Started sshd@11-10.200.20.4:22-10.200.16.10:39484.service - OpenSSH per-connection server daemon (10.200.16.10:39484).
May 14 18:03:35.642934 sshd[4982]: Accepted publickey for core from 10.200.16.10 port 39484 ssh2: RSA SHA256:GfAM5aEyZtKI1wsLTx07KX72HiGyI6L7Lx/Fls8o8zc
May 14 18:03:35.644066 sshd-session[4982]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 18:03:35.649584 systemd-logind[1872]: New session 14 of user core.
May 14 18:03:35.653073 systemd[1]: Started session-14.scope - Session 14 of User core.
May 14 18:03:36.042544 sshd[4988]: Connection closed by 10.200.16.10 port 39484
May 14 18:03:36.043027 sshd-session[4982]: pam_unix(sshd:session): session closed for user core
May 14 18:03:36.046282 systemd[1]: sshd@11-10.200.20.4:22-10.200.16.10:39484.service: Deactivated successfully.
May 14 18:03:36.047561 systemd[1]: session-14.scope: Deactivated successfully.
May 14 18:03:36.048537 systemd-logind[1872]: Session 14 logged out. Waiting for processes to exit.
May 14 18:03:36.049627 systemd-logind[1872]: Removed session 14.
May 14 18:03:36.117550 systemd[1]: Started sshd@12-10.200.20.4:22-10.200.16.10:39486.service - OpenSSH per-connection server daemon (10.200.16.10:39486).
May 14 18:03:36.530188 sshd[4998]: Accepted publickey for core from 10.200.16.10 port 39486 ssh2: RSA SHA256:GfAM5aEyZtKI1wsLTx07KX72HiGyI6L7Lx/Fls8o8zc
May 14 18:03:36.531287 sshd-session[4998]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 18:03:36.534958 systemd-logind[1872]: New session 15 of user core.
May 14 18:03:36.542087 systemd[1]: Started session-15.scope - Session 15 of User core.
May 14 18:03:36.876718 sshd[5000]: Connection closed by 10.200.16.10 port 39486
May 14 18:03:36.877364 sshd-session[4998]: pam_unix(sshd:session): session closed for user core
May 14 18:03:36.880256 systemd-logind[1872]: Session 15 logged out. Waiting for processes to exit.
May 14 18:03:36.880407 systemd[1]: sshd@12-10.200.20.4:22-10.200.16.10:39486.service: Deactivated successfully.
May 14 18:03:36.882265 systemd[1]: session-15.scope: Deactivated successfully.
May 14 18:03:36.884252 systemd-logind[1872]: Removed session 15.
May 14 18:03:41.961581 systemd[1]: Started sshd@13-10.200.20.4:22-10.200.16.10:58522.service - OpenSSH per-connection server daemon (10.200.16.10:58522).
May 14 18:03:42.413563 sshd[5011]: Accepted publickey for core from 10.200.16.10 port 58522 ssh2: RSA SHA256:GfAM5aEyZtKI1wsLTx07KX72HiGyI6L7Lx/Fls8o8zc
May 14 18:03:42.414591 sshd-session[5011]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 18:03:42.417949 systemd-logind[1872]: New session 16 of user core.
May 14 18:03:42.423169 systemd[1]: Started session-16.scope - Session 16 of User core.
May 14 18:03:42.786023 sshd[5013]: Connection closed by 10.200.16.10 port 58522
May 14 18:03:42.786337 sshd-session[5011]: pam_unix(sshd:session): session closed for user core
May 14 18:03:42.789417 systemd[1]: sshd@13-10.200.20.4:22-10.200.16.10:58522.service: Deactivated successfully.
May 14 18:03:42.791006 systemd[1]: session-16.scope: Deactivated successfully.
May 14 18:03:42.791620 systemd-logind[1872]: Session 16 logged out. Waiting for processes to exit.
May 14 18:03:42.794242 systemd-logind[1872]: Removed session 16.
May 14 18:03:42.865303 systemd[1]: Started sshd@14-10.200.20.4:22-10.200.16.10:58524.service - OpenSSH per-connection server daemon (10.200.16.10:58524).
May 14 18:03:43.279175 sshd[5024]: Accepted publickey for core from 10.200.16.10 port 58524 ssh2: RSA SHA256:GfAM5aEyZtKI1wsLTx07KX72HiGyI6L7Lx/Fls8o8zc
May 14 18:03:43.280328 sshd-session[5024]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 18:03:43.284263 systemd-logind[1872]: New session 17 of user core.
May 14 18:03:43.292080 systemd[1]: Started session-17.scope - Session 17 of User core.
May 14 18:03:43.652174 sshd[5026]: Connection closed by 10.200.16.10 port 58524
May 14 18:03:43.652918 sshd-session[5024]: pam_unix(sshd:session): session closed for user core
May 14 18:03:43.656182 systemd-logind[1872]: Session 17 logged out. Waiting for processes to exit.
May 14 18:03:43.656320 systemd[1]: sshd@14-10.200.20.4:22-10.200.16.10:58524.service: Deactivated successfully.
May 14 18:03:43.658432 systemd[1]: session-17.scope: Deactivated successfully.
May 14 18:03:43.660542 systemd-logind[1872]: Removed session 17.
May 14 18:03:43.733011 systemd[1]: Started sshd@15-10.200.20.4:22-10.200.16.10:58534.service - OpenSSH per-connection server daemon (10.200.16.10:58534).
May 14 18:03:44.185106 sshd[5035]: Accepted publickey for core from 10.200.16.10 port 58534 ssh2: RSA SHA256:GfAM5aEyZtKI1wsLTx07KX72HiGyI6L7Lx/Fls8o8zc
May 14 18:03:44.186229 sshd-session[5035]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 18:03:44.189995 systemd-logind[1872]: New session 18 of user core.
May 14 18:03:44.194244 systemd[1]: Started session-18.scope - Session 18 of User core.
May 14 18:03:45.557561 sshd[5037]: Connection closed by 10.200.16.10 port 58534
May 14 18:03:45.556888 sshd-session[5035]: pam_unix(sshd:session): session closed for user core
May 14 18:03:45.559738 systemd[1]: sshd@15-10.200.20.4:22-10.200.16.10:58534.service: Deactivated successfully.
May 14 18:03:45.561307 systemd[1]: session-18.scope: Deactivated successfully.
May 14 18:03:45.562501 systemd-logind[1872]: Session 18 logged out. Waiting for processes to exit.
May 14 18:03:45.563746 systemd-logind[1872]: Removed session 18.
May 14 18:03:45.642907 systemd[1]: Started sshd@16-10.200.20.4:22-10.200.16.10:58538.service - OpenSSH per-connection server daemon (10.200.16.10:58538).
May 14 18:03:46.095481 sshd[5058]: Accepted publickey for core from 10.200.16.10 port 58538 ssh2: RSA SHA256:GfAM5aEyZtKI1wsLTx07KX72HiGyI6L7Lx/Fls8o8zc
May 14 18:03:46.096628 sshd-session[5058]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 18:03:46.100449 systemd-logind[1872]: New session 19 of user core.
May 14 18:03:46.107245 systemd[1]: Started session-19.scope - Session 19 of User core.
May 14 18:03:46.536152 sshd[5060]: Connection closed by 10.200.16.10 port 58538
May 14 18:03:46.536640 sshd-session[5058]: pam_unix(sshd:session): session closed for user core
May 14 18:03:46.539865 systemd[1]: sshd@16-10.200.20.4:22-10.200.16.10:58538.service: Deactivated successfully.
May 14 18:03:46.541299 systemd[1]: session-19.scope: Deactivated successfully.
May 14 18:03:46.541811 systemd-logind[1872]: Session 19 logged out. Waiting for processes to exit.
May 14 18:03:46.543033 systemd-logind[1872]: Removed session 19.
May 14 18:03:46.620613 systemd[1]: Started sshd@17-10.200.20.4:22-10.200.16.10:58550.service - OpenSSH per-connection server daemon (10.200.16.10:58550).
May 14 18:03:47.070874 sshd[5070]: Accepted publickey for core from 10.200.16.10 port 58550 ssh2: RSA SHA256:GfAM5aEyZtKI1wsLTx07KX72HiGyI6L7Lx/Fls8o8zc
May 14 18:03:47.071990 sshd-session[5070]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 18:03:47.075811 systemd-logind[1872]: New session 20 of user core.
May 14 18:03:47.080095 systemd[1]: Started session-20.scope - Session 20 of User core.
May 14 18:03:47.438076 sshd[5072]: Connection closed by 10.200.16.10 port 58550
May 14 18:03:47.438162 sshd-session[5070]: pam_unix(sshd:session): session closed for user core
May 14 18:03:47.441463 systemd[1]: sshd@17-10.200.20.4:22-10.200.16.10:58550.service: Deactivated successfully.
May 14 18:03:47.442933 systemd[1]: session-20.scope: Deactivated successfully.
May 14 18:03:47.443637 systemd-logind[1872]: Session 20 logged out. Waiting for processes to exit.
May 14 18:03:47.444823 systemd-logind[1872]: Removed session 20.
May 14 18:03:52.516170 systemd[1]: Started sshd@18-10.200.20.4:22-10.200.16.10:47518.service - OpenSSH per-connection server daemon (10.200.16.10:47518).
May 14 18:03:52.935112 sshd[5084]: Accepted publickey for core from 10.200.16.10 port 47518 ssh2: RSA SHA256:GfAM5aEyZtKI1wsLTx07KX72HiGyI6L7Lx/Fls8o8zc
May 14 18:03:52.936138 sshd-session[5084]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 18:03:52.939738 systemd-logind[1872]: New session 21 of user core.
May 14 18:03:52.945238 systemd[1]: Started session-21.scope - Session 21 of User core.
May 14 18:03:53.274105 sshd[5089]: Connection closed by 10.200.16.10 port 47518
May 14 18:03:53.274762 sshd-session[5084]: pam_unix(sshd:session): session closed for user core
May 14 18:03:53.277654 systemd[1]: sshd@18-10.200.20.4:22-10.200.16.10:47518.service: Deactivated successfully.
May 14 18:03:53.279128 systemd[1]: session-21.scope: Deactivated successfully.
May 14 18:03:53.279779 systemd-logind[1872]: Session 21 logged out. Waiting for processes to exit.
May 14 18:03:53.281069 systemd-logind[1872]: Removed session 21.
May 14 18:03:58.361570 systemd[1]: Started sshd@19-10.200.20.4:22-10.200.16.10:47532.service - OpenSSH per-connection server daemon (10.200.16.10:47532).
May 14 18:03:58.811215 sshd[5105]: Accepted publickey for core from 10.200.16.10 port 47532 ssh2: RSA SHA256:GfAM5aEyZtKI1wsLTx07KX72HiGyI6L7Lx/Fls8o8zc
May 14 18:03:58.812707 sshd-session[5105]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 18:03:58.817777 systemd-logind[1872]: New session 22 of user core.
May 14 18:03:58.822077 systemd[1]: Started session-22.scope - Session 22 of User core.
May 14 18:03:59.185212 sshd[5107]: Connection closed by 10.200.16.10 port 47532
May 14 18:03:59.185765 sshd-session[5105]: pam_unix(sshd:session): session closed for user core
May 14 18:03:59.188609 systemd[1]: sshd@19-10.200.20.4:22-10.200.16.10:47532.service: Deactivated successfully.
May 14 18:03:59.190050 systemd[1]: session-22.scope: Deactivated successfully.
May 14 18:03:59.190667 systemd-logind[1872]: Session 22 logged out. Waiting for processes to exit.
May 14 18:03:59.191945 systemd-logind[1872]: Removed session 22.
May 14 18:04:04.261209 systemd[1]: Started sshd@20-10.200.20.4:22-10.200.16.10:42406.service - OpenSSH per-connection server daemon (10.200.16.10:42406).
May 14 18:04:04.675182 sshd[5118]: Accepted publickey for core from 10.200.16.10 port 42406 ssh2: RSA SHA256:GfAM5aEyZtKI1wsLTx07KX72HiGyI6L7Lx/Fls8o8zc
May 14 18:04:04.676149 sshd-session[5118]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 18:04:04.679838 systemd-logind[1872]: New session 23 of user core.
May 14 18:04:04.686076 systemd[1]: Started session-23.scope - Session 23 of User core.
May 14 18:04:05.014747 sshd[5120]: Connection closed by 10.200.16.10 port 42406
May 14 18:04:05.015391 sshd-session[5118]: pam_unix(sshd:session): session closed for user core
May 14 18:04:05.018496 systemd[1]: sshd@20-10.200.20.4:22-10.200.16.10:42406.service: Deactivated successfully.
May 14 18:04:05.018508 systemd-logind[1872]: Session 23 logged out. Waiting for processes to exit.
May 14 18:04:05.021200 systemd[1]: session-23.scope: Deactivated successfully.
May 14 18:04:05.022335 systemd-logind[1872]: Removed session 23.
May 14 18:04:05.095734 systemd[1]: Started sshd@21-10.200.20.4:22-10.200.16.10:42412.service - OpenSSH per-connection server daemon (10.200.16.10:42412).
May 14 18:04:05.545759 sshd[5132]: Accepted publickey for core from 10.200.16.10 port 42412 ssh2: RSA SHA256:GfAM5aEyZtKI1wsLTx07KX72HiGyI6L7Lx/Fls8o8zc
May 14 18:04:05.546761 sshd-session[5132]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 18:04:05.550420 systemd-logind[1872]: New session 24 of user core.
May 14 18:04:05.557068 systemd[1]: Started session-24.scope - Session 24 of User core.
May 14 18:04:07.086956 containerd[1906]: time="2025-05-14T18:04:07.086616526Z" level=info msg="StopContainer for \"fbead3f35dd919e7f640e619845a2af370b4a7834e7f7e7dae7b9fe47c7a6e1e\" with timeout 30 (s)"
May 14 18:04:07.088222 containerd[1906]: time="2025-05-14T18:04:07.087693220Z" level=info msg="Stop container \"fbead3f35dd919e7f640e619845a2af370b4a7834e7f7e7dae7b9fe47c7a6e1e\" with signal terminated"
May 14 18:04:07.088510 containerd[1906]: time="2025-05-14T18:04:07.088487522Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
May 14 18:04:07.093603 containerd[1906]: time="2025-05-14T18:04:07.093568497Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6c93eb89e7392910aa747605d1310d40b94d6e426e15fc208375417bc6e0a5d0\" id:\"cca4ac3467e325a1d482d2001af40659e89fcb86ce432e8298bb243035b4f2b6\" pid:5153 exited_at:{seconds:1747245847 nanos:93163942}"
May 14 18:04:07.096131 containerd[1906]: time="2025-05-14T18:04:07.096093480Z" level=info msg="StopContainer for \"6c93eb89e7392910aa747605d1310d40b94d6e426e15fc208375417bc6e0a5d0\" with timeout 2 (s)"
May 14 18:04:07.096565 systemd[1]: cri-containerd-fbead3f35dd919e7f640e619845a2af370b4a7834e7f7e7dae7b9fe47c7a6e1e.scope: Deactivated successfully.
May 14 18:04:07.100079 containerd[1906]: time="2025-05-14T18:04:07.098174595Z" level=info msg="received exit event container_id:\"fbead3f35dd919e7f640e619845a2af370b4a7834e7f7e7dae7b9fe47c7a6e1e\" id:\"fbead3f35dd919e7f640e619845a2af370b4a7834e7f7e7dae7b9fe47c7a6e1e\" pid:4288 exited_at:{seconds:1747245847 nanos:97074660}"
May 14 18:04:07.100079 containerd[1906]: time="2025-05-14T18:04:07.098232373Z" level=info msg="Stop container \"6c93eb89e7392910aa747605d1310d40b94d6e426e15fc208375417bc6e0a5d0\" with signal terminated"
May 14 18:04:07.100079 containerd[1906]: time="2025-05-14T18:04:07.098744611Z" level=info msg="TaskExit event in podsandbox handler container_id:\"fbead3f35dd919e7f640e619845a2af370b4a7834e7f7e7dae7b9fe47c7a6e1e\" id:\"fbead3f35dd919e7f640e619845a2af370b4a7834e7f7e7dae7b9fe47c7a6e1e\" pid:4288 exited_at:{seconds:1747245847 nanos:97074660}"
May 14 18:04:07.106705 systemd-networkd[1480]: lxc_health: Link DOWN
May 14 18:04:07.107012 systemd-networkd[1480]: lxc_health: Lost carrier
May 14 18:04:07.121616 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-fbead3f35dd919e7f640e619845a2af370b4a7834e7f7e7dae7b9fe47c7a6e1e-rootfs.mount: Deactivated successfully.
May 14 18:04:07.122820 containerd[1906]: time="2025-05-14T18:04:07.122778704Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6c93eb89e7392910aa747605d1310d40b94d6e426e15fc208375417bc6e0a5d0\" id:\"6c93eb89e7392910aa747605d1310d40b94d6e426e15fc208375417bc6e0a5d0\" pid:4198 exited_at:{seconds:1747245847 nanos:122465623}"
May 14 18:04:07.123218 containerd[1906]: time="2025-05-14T18:04:07.123192244Z" level=info msg="received exit event container_id:\"6c93eb89e7392910aa747605d1310d40b94d6e426e15fc208375417bc6e0a5d0\" id:\"6c93eb89e7392910aa747605d1310d40b94d6e426e15fc208375417bc6e0a5d0\" pid:4198 exited_at:{seconds:1747245847 nanos:122465623}"
May 14 18:04:07.123209 systemd[1]: cri-containerd-6c93eb89e7392910aa747605d1310d40b94d6e426e15fc208375417bc6e0a5d0.scope: Deactivated successfully.
May 14 18:04:07.124759 systemd[1]: cri-containerd-6c93eb89e7392910aa747605d1310d40b94d6e426e15fc208375417bc6e0a5d0.scope: Consumed 4.236s CPU time, 124.9M memory peak, 136K read from disk, 12.9M written to disk.
May 14 18:04:07.136557 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6c93eb89e7392910aa747605d1310d40b94d6e426e15fc208375417bc6e0a5d0-rootfs.mount: Deactivated successfully.
May 14 18:04:08.823015 kubelet[3601]: E0514 18:04:08.822950 3601 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
May 14 18:04:09.035465 containerd[1906]: time="2025-05-14T18:04:09.035379129Z" level=info msg="StopContainer for \"6c93eb89e7392910aa747605d1310d40b94d6e426e15fc208375417bc6e0a5d0\" returns successfully"
May 14 18:04:09.036284 containerd[1906]: time="2025-05-14T18:04:09.036190943Z" level=info msg="StopPodSandbox for \"3c7b54f2a826f906b07a43ce62b6567cd7b4a558ee070eb4583de4c18d8bb48d\""
May 14 18:04:09.036284 containerd[1906]: time="2025-05-14T18:04:09.036241377Z" level=info msg="Container to stop \"d43a1e52de42c300c6817d93184f7fd8274e718e9857d8a2c8791806b7e45c9f\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 14 18:04:09.036284 containerd[1906]: time="2025-05-14T18:04:09.036249033Z" level=info msg="Container to stop \"372c6c03a4a22780f92f7fff88d8861fa6a7d10163549424822865418eb0f999\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 14 18:04:09.036284 containerd[1906]: time="2025-05-14T18:04:09.036255161Z" level=info msg="Container to stop \"6c93eb89e7392910aa747605d1310d40b94d6e426e15fc208375417bc6e0a5d0\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 14 18:04:09.036284 containerd[1906]: time="2025-05-14T18:04:09.036260449Z" level=info msg="Container to stop \"e1ae58a8d412294f0ec0cc0db9dd6b13c6c9c2c2dd915697b9fa23e746ffa5d1\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 14 18:04:09.036284 containerd[1906]: time="2025-05-14T18:04:09.036265129Z" level=info msg="Container to stop \"72fc64f3d778c225bd87127cbe3d6f5020d56a43f4e381bee98727666f7880eb\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 14 18:04:09.040634 systemd[1]: cri-containerd-3c7b54f2a826f906b07a43ce62b6567cd7b4a558ee070eb4583de4c18d8bb48d.scope: Deactivated successfully.
May 14 18:04:09.047307 containerd[1906]: time="2025-05-14T18:04:09.047280374Z" level=info msg="TaskExit event in podsandbox handler container_id:\"3c7b54f2a826f906b07a43ce62b6567cd7b4a558ee070eb4583de4c18d8bb48d\" id:\"3c7b54f2a826f906b07a43ce62b6567cd7b4a558ee070eb4583de4c18d8bb48d\" pid:3741 exit_status:137 exited_at:{seconds:1747245849 nanos:47122073}"
May 14 18:04:09.064058 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3c7b54f2a826f906b07a43ce62b6567cd7b4a558ee070eb4583de4c18d8bb48d-rootfs.mount: Deactivated successfully.
May 14 18:04:09.439542 sshd[5134]: Connection closed by 10.200.16.10 port 42412
May 14 18:04:09.099003 sshd-session[5132]: pam_unix(sshd:session): session closed for user core
May 14 18:04:09.102255 systemd[1]: sshd@21-10.200.20.4:22-10.200.16.10:42412.service: Deactivated successfully.
May 14 18:04:09.103600 systemd[1]: session-24.scope: Deactivated successfully.
May 14 18:04:09.104241 systemd-logind[1872]: Session 24 logged out. Waiting for processes to exit.
May 14 18:04:09.105321 systemd-logind[1872]: Removed session 24.
May 14 18:04:09.182681 systemd[1]: Started sshd@22-10.200.20.4:22-10.200.16.10:58282.service - OpenSSH per-connection server daemon (10.200.16.10:58282).
May 14 18:04:09.538800 containerd[1906]: time="2025-05-14T18:04:09.538723492Z" level=info msg="StopContainer for \"fbead3f35dd919e7f640e619845a2af370b4a7834e7f7e7dae7b9fe47c7a6e1e\" returns successfully"
May 14 18:04:09.539053 containerd[1906]: time="2025-05-14T18:04:09.539027100Z" level=info msg="StopPodSandbox for \"73002a4bd1b96fc8d78fff2ca0cca3fbb07886a15c8067cb011c69e282732b76\""
May 14 18:04:09.539157 containerd[1906]: time="2025-05-14T18:04:09.539141759Z" level=info msg="Container to stop \"fbead3f35dd919e7f640e619845a2af370b4a7834e7f7e7dae7b9fe47c7a6e1e\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 14 18:04:09.545607 systemd[1]: cri-containerd-73002a4bd1b96fc8d78fff2ca0cca3fbb07886a15c8067cb011c69e282732b76.scope: Deactivated successfully.
May 14 18:04:09.563094 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-73002a4bd1b96fc8d78fff2ca0cca3fbb07886a15c8067cb011c69e282732b76-rootfs.mount: Deactivated successfully.
May 14 18:04:09.976789 sshd[5239]: Accepted publickey for core from 10.200.16.10 port 58282 ssh2: RSA SHA256:GfAM5aEyZtKI1wsLTx07KX72HiGyI6L7Lx/Fls8o8zc
May 14 18:04:09.976910 sshd-session[5239]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 18:04:09.981054 systemd-logind[1872]: New session 25 of user core.
May 14 18:04:09.987223 systemd[1]: Started session-25.scope - Session 25 of User core.
May 14 18:04:10.436437 containerd[1906]: time="2025-05-14T18:04:10.436122536Z" level=info msg="shim disconnected" id=73002a4bd1b96fc8d78fff2ca0cca3fbb07886a15c8067cb011c69e282732b76 namespace=k8s.io
May 14 18:04:10.436437 containerd[1906]: time="2025-05-14T18:04:10.436150833Z" level=warning msg="cleaning up after shim disconnected" id=73002a4bd1b96fc8d78fff2ca0cca3fbb07886a15c8067cb011c69e282732b76 namespace=k8s.io
May 14 18:04:10.436437 containerd[1906]: time="2025-05-14T18:04:10.436172954Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 14 18:04:10.437935 containerd[1906]: time="2025-05-14T18:04:10.437912833Z" level=info msg="shim disconnected" id=3c7b54f2a826f906b07a43ce62b6567cd7b4a558ee070eb4583de4c18d8bb48d namespace=k8s.io
May 14 18:04:10.438093 containerd[1906]: time="2025-05-14T18:04:10.437931874Z" level=warning msg="cleaning up after shim disconnected" id=3c7b54f2a826f906b07a43ce62b6567cd7b4a558ee070eb4583de4c18d8bb48d namespace=k8s.io
May 14 18:04:10.438093 containerd[1906]: time="2025-05-14T18:04:10.437950594Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 14 18:04:10.455510 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-73002a4bd1b96fc8d78fff2ca0cca3fbb07886a15c8067cb011c69e282732b76-shm.mount: Deactivated successfully.
May 14 18:04:10.456053 containerd[1906]: time="2025-05-14T18:04:10.455865747Z" level=info msg="received exit event sandbox_id:\"73002a4bd1b96fc8d78fff2ca0cca3fbb07886a15c8067cb011c69e282732b76\" exit_status:137 exited_at:{seconds:1747245849 nanos:549214626}"
May 14 18:04:10.456384 containerd[1906]: time="2025-05-14T18:04:10.456357472Z" level=info msg="received exit event sandbox_id:\"3c7b54f2a826f906b07a43ce62b6567cd7b4a558ee070eb4583de4c18d8bb48d\" exit_status:137 exited_at:{seconds:1747245849 nanos:47122073}"
May 14 18:04:10.456543 containerd[1906]: time="2025-05-14T18:04:10.456493220Z" level=info msg="TaskExit event in podsandbox handler container_id:\"73002a4bd1b96fc8d78fff2ca0cca3fbb07886a15c8067cb011c69e282732b76\" id:\"73002a4bd1b96fc8d78fff2ca0cca3fbb07886a15c8067cb011c69e282732b76\" pid:3800 exit_status:137 exited_at:{seconds:1747245849 nanos:549214626}"
May 14 18:04:10.458422 containerd[1906]: time="2025-05-14T18:04:10.457890842Z" level=info msg="TearDown network for sandbox \"3c7b54f2a826f906b07a43ce62b6567cd7b4a558ee070eb4583de4c18d8bb48d\" successfully"
May 14 18:04:10.458422 containerd[1906]: time="2025-05-14T18:04:10.457911850Z" level=info msg="StopPodSandbox for \"3c7b54f2a826f906b07a43ce62b6567cd7b4a558ee070eb4583de4c18d8bb48d\" returns successfully"
May 14 18:04:10.458422 containerd[1906]: time="2025-05-14T18:04:10.458022517Z" level=info msg="TearDown network for sandbox \"73002a4bd1b96fc8d78fff2ca0cca3fbb07886a15c8067cb011c69e282732b76\" successfully"
May 14 18:04:10.458422 containerd[1906]: time="2025-05-14T18:04:10.458032686Z" level=info msg="StopPodSandbox for \"73002a4bd1b96fc8d78fff2ca0cca3fbb07886a15c8067cb011c69e282732b76\" returns successfully"
May 14 18:04:10.459334 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-3c7b54f2a826f906b07a43ce62b6567cd7b4a558ee070eb4583de4c18d8bb48d-shm.mount: Deactivated successfully.
May 14 18:04:10.553785 kubelet[3601]: I0514 18:04:10.553759 3601 scope.go:117] "RemoveContainer" containerID="6c93eb89e7392910aa747605d1310d40b94d6e426e15fc208375417bc6e0a5d0"
May 14 18:04:10.556977 containerd[1906]: time="2025-05-14T18:04:10.555767966Z" level=info msg="RemoveContainer for \"6c93eb89e7392910aa747605d1310d40b94d6e426e15fc208375417bc6e0a5d0\""
May 14 18:04:10.577459 containerd[1906]: time="2025-05-14T18:04:10.577423493Z" level=info msg="RemoveContainer for \"6c93eb89e7392910aa747605d1310d40b94d6e426e15fc208375417bc6e0a5d0\" returns successfully"
May 14 18:04:10.577645 kubelet[3601]: I0514 18:04:10.577624 3601 scope.go:117] "RemoveContainer" containerID="372c6c03a4a22780f92f7fff88d8861fa6a7d10163549424822865418eb0f999"
May 14 18:04:10.578946 containerd[1906]: time="2025-05-14T18:04:10.578876836Z" level=info msg="RemoveContainer for \"372c6c03a4a22780f92f7fff88d8861fa6a7d10163549424822865418eb0f999\""
May 14 18:04:10.648374 containerd[1906]: time="2025-05-14T18:04:10.648287329Z" level=info msg="RemoveContainer for \"372c6c03a4a22780f92f7fff88d8861fa6a7d10163549424822865418eb0f999\" returns successfully"
May 14 18:04:10.648535 kubelet[3601]: I0514 18:04:10.648473 3601 scope.go:117] "RemoveContainer" containerID="d43a1e52de42c300c6817d93184f7fd8274e718e9857d8a2c8791806b7e45c9f"
May 14 18:04:10.650545 containerd[1906]: time="2025-05-14T18:04:10.650523278Z" level=info msg="RemoveContainer for \"d43a1e52de42c300c6817d93184f7fd8274e718e9857d8a2c8791806b7e45c9f\""
May 14 18:04:10.662978 kubelet[3601]: I0514 18:04:10.661979 3601 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/e060d488-b501-4271-a9b5-49cf79a1f7a4-hubble-tls\") pod \"e060d488-b501-4271-a9b5-49cf79a1f7a4\" (UID: \"e060d488-b501-4271-a9b5-49cf79a1f7a4\") "
May 14 18:04:10.662978 kubelet[3601]: I0514 18:04:10.662008 3601 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/e060d488-b501-4271-a9b5-49cf79a1f7a4-bpf-maps\") pod \"e060d488-b501-4271-a9b5-49cf79a1f7a4\" (UID: \"e060d488-b501-4271-a9b5-49cf79a1f7a4\") "
May 14 18:04:10.662978 kubelet[3601]: I0514 18:04:10.662023 3601 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cv6tk\" (UniqueName: \"kubernetes.io/projected/687e4d4b-09d9-4e3b-b440-e44865fe207e-kube-api-access-cv6tk\") pod \"687e4d4b-09d9-4e3b-b440-e44865fe207e\" (UID: \"687e4d4b-09d9-4e3b-b440-e44865fe207e\") "
May 14 18:04:10.662978 kubelet[3601]: I0514 18:04:10.662035 3601 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e060d488-b501-4271-a9b5-49cf79a1f7a4-xtables-lock\") pod \"e060d488-b501-4271-a9b5-49cf79a1f7a4\" (UID: \"e060d488-b501-4271-a9b5-49cf79a1f7a4\") "
May 14 18:04:10.662978 kubelet[3601]: I0514 18:04:10.662045 3601 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/e060d488-b501-4271-a9b5-49cf79a1f7a4-cilium-run\") pod \"e060d488-b501-4271-a9b5-49cf79a1f7a4\" (UID: \"e060d488-b501-4271-a9b5-49cf79a1f7a4\") "
May 14 18:04:10.662978 kubelet[3601]: I0514 18:04:10.662056 3601 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/e060d488-b501-4271-a9b5-49cf79a1f7a4-host-proc-sys-net\") pod \"e060d488-b501-4271-a9b5-49cf79a1f7a4\" (UID: \"e060d488-b501-4271-a9b5-49cf79a1f7a4\") "
May 14 18:04:10.663132 kubelet[3601]: I0514 18:04:10.662066 3601 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/e060d488-b501-4271-a9b5-49cf79a1f7a4-cni-path\") pod \"e060d488-b501-4271-a9b5-49cf79a1f7a4\" (UID: \"e060d488-b501-4271-a9b5-49cf79a1f7a4\") "
May 14 18:04:10.663132 kubelet[3601]: I0514 18:04:10.662076 3601 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e060d488-b501-4271-a9b5-49cf79a1f7a4-lib-modules\") pod \"e060d488-b501-4271-a9b5-49cf79a1f7a4\" (UID: \"e060d488-b501-4271-a9b5-49cf79a1f7a4\") "
May 14 18:04:10.663132 kubelet[3601]: I0514 18:04:10.662084 3601 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/e060d488-b501-4271-a9b5-49cf79a1f7a4-host-proc-sys-kernel\") pod \"e060d488-b501-4271-a9b5-49cf79a1f7a4\" (UID: \"e060d488-b501-4271-a9b5-49cf79a1f7a4\") "
May 14 18:04:10.663132 kubelet[3601]: I0514 18:04:10.662092 3601 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/e060d488-b501-4271-a9b5-49cf79a1f7a4-cilium-cgroup\") pod \"e060d488-b501-4271-a9b5-49cf79a1f7a4\" (UID: \"e060d488-b501-4271-a9b5-49cf79a1f7a4\") "
May 14 18:04:10.663132 kubelet[3601]: I0514 18:04:10.662102 3601 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e060d488-b501-4271-a9b5-49cf79a1f7a4-etc-cni-netd\") pod \"e060d488-b501-4271-a9b5-49cf79a1f7a4\" (UID: \"e060d488-b501-4271-a9b5-49cf79a1f7a4\") "
May 14 18:04:10.663132 kubelet[3601]: I0514 18:04:10.662114 3601 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zlqj2\" (UniqueName: \"kubernetes.io/projected/e060d488-b501-4271-a9b5-49cf79a1f7a4-kube-api-access-zlqj2\") pod \"e060d488-b501-4271-a9b5-49cf79a1f7a4\" (UID: \"e060d488-b501-4271-a9b5-49cf79a1f7a4\") "
May 14 18:04:10.663221 kubelet[3601]: I0514 18:04:10.662126 3601 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/687e4d4b-09d9-4e3b-b440-e44865fe207e-cilium-config-path\") pod \"687e4d4b-09d9-4e3b-b440-e44865fe207e\" (UID: \"687e4d4b-09d9-4e3b-b440-e44865fe207e\") "
May 14 18:04:10.663221 kubelet[3601]: I0514 18:04:10.662138 3601 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/e060d488-b501-4271-a9b5-49cf79a1f7a4-clustermesh-secrets\") pod \"e060d488-b501-4271-a9b5-49cf79a1f7a4\" (UID: \"e060d488-b501-4271-a9b5-49cf79a1f7a4\") "
May 14 18:04:10.663221 kubelet[3601]: I0514 18:04:10.662149 3601 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e060d488-b501-4271-a9b5-49cf79a1f7a4-cilium-config-path\") pod \"e060d488-b501-4271-a9b5-49cf79a1f7a4\" (UID: \"e060d488-b501-4271-a9b5-49cf79a1f7a4\") "
May 14 18:04:10.663221 kubelet[3601]: I0514 18:04:10.662158 3601 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/e060d488-b501-4271-a9b5-49cf79a1f7a4-hostproc\") pod \"e060d488-b501-4271-a9b5-49cf79a1f7a4\" (UID: \"e060d488-b501-4271-a9b5-49cf79a1f7a4\") "
May 14 18:04:10.663221 kubelet[3601]: I0514 18:04:10.662191 3601 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e060d488-b501-4271-a9b5-49cf79a1f7a4-hostproc" (OuterVolumeSpecName: "hostproc") pod "e060d488-b501-4271-a9b5-49cf79a1f7a4" (UID: "e060d488-b501-4271-a9b5-49cf79a1f7a4"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 14 18:04:10.663221 kubelet[3601]: I0514 18:04:10.662216 3601 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e060d488-b501-4271-a9b5-49cf79a1f7a4-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "e060d488-b501-4271-a9b5-49cf79a1f7a4" (UID: "e060d488-b501-4271-a9b5-49cf79a1f7a4"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 14 18:04:10.663478 kubelet[3601]: I0514 18:04:10.663417 3601 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e060d488-b501-4271-a9b5-49cf79a1f7a4-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "e060d488-b501-4271-a9b5-49cf79a1f7a4" (UID: "e060d488-b501-4271-a9b5-49cf79a1f7a4"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 14 18:04:10.663478 kubelet[3601]: I0514 18:04:10.663450 3601 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e060d488-b501-4271-a9b5-49cf79a1f7a4-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "e060d488-b501-4271-a9b5-49cf79a1f7a4" (UID: "e060d488-b501-4271-a9b5-49cf79a1f7a4"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 14 18:04:10.663478 kubelet[3601]: I0514 18:04:10.663463 3601 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e060d488-b501-4271-a9b5-49cf79a1f7a4-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "e060d488-b501-4271-a9b5-49cf79a1f7a4" (UID: "e060d488-b501-4271-a9b5-49cf79a1f7a4"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 14 18:04:10.663478 kubelet[3601]: I0514 18:04:10.663476 3601 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e060d488-b501-4271-a9b5-49cf79a1f7a4-cni-path" (OuterVolumeSpecName: "cni-path") pod "e060d488-b501-4271-a9b5-49cf79a1f7a4" (UID: "e060d488-b501-4271-a9b5-49cf79a1f7a4"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 14 18:04:10.663551 kubelet[3601]: I0514 18:04:10.663487 3601 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e060d488-b501-4271-a9b5-49cf79a1f7a4-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "e060d488-b501-4271-a9b5-49cf79a1f7a4" (UID: "e060d488-b501-4271-a9b5-49cf79a1f7a4"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 14 18:04:10.663551 kubelet[3601]: I0514 18:04:10.663496 3601 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e060d488-b501-4271-a9b5-49cf79a1f7a4-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "e060d488-b501-4271-a9b5-49cf79a1f7a4" (UID: "e060d488-b501-4271-a9b5-49cf79a1f7a4"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 14 18:04:10.663551 kubelet[3601]: I0514 18:04:10.663505 3601 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e060d488-b501-4271-a9b5-49cf79a1f7a4-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "e060d488-b501-4271-a9b5-49cf79a1f7a4" (UID: "e060d488-b501-4271-a9b5-49cf79a1f7a4"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 14 18:04:10.663551 kubelet[3601]: I0514 18:04:10.663512 3601 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e060d488-b501-4271-a9b5-49cf79a1f7a4-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "e060d488-b501-4271-a9b5-49cf79a1f7a4" (UID: "e060d488-b501-4271-a9b5-49cf79a1f7a4"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 14 18:04:10.666910 systemd[1]: var-lib-kubelet-pods-e060d488\x2db501\x2d4271\x2da9b5\x2d49cf79a1f7a4-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
May 14 18:04:10.671111 kubelet[3601]: I0514 18:04:10.670677 3601 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/687e4d4b-09d9-4e3b-b440-e44865fe207e-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "687e4d4b-09d9-4e3b-b440-e44865fe207e" (UID: "687e4d4b-09d9-4e3b-b440-e44865fe207e"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
May 14 18:04:10.672919 kubelet[3601]: I0514 18:04:10.672810 3601 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/687e4d4b-09d9-4e3b-b440-e44865fe207e-kube-api-access-cv6tk" (OuterVolumeSpecName: "kube-api-access-cv6tk") pod "687e4d4b-09d9-4e3b-b440-e44865fe207e" (UID: "687e4d4b-09d9-4e3b-b440-e44865fe207e"). InnerVolumeSpecName "kube-api-access-cv6tk". PluginName "kubernetes.io/projected", VolumeGidValue ""
May 14 18:04:10.672859 systemd[1]: var-lib-kubelet-pods-687e4d4b\x2d09d9\x2d4e3b\x2db440\x2de44865fe207e-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dcv6tk.mount: Deactivated successfully.
May 14 18:04:10.674973 kubelet[3601]: I0514 18:04:10.673195 3601 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e060d488-b501-4271-a9b5-49cf79a1f7a4-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "e060d488-b501-4271-a9b5-49cf79a1f7a4" (UID: "e060d488-b501-4271-a9b5-49cf79a1f7a4"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
May 14 18:04:10.675451 kubelet[3601]: I0514 18:04:10.673796 3601 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e060d488-b501-4271-a9b5-49cf79a1f7a4-kube-api-access-zlqj2" (OuterVolumeSpecName: "kube-api-access-zlqj2") pod "e060d488-b501-4271-a9b5-49cf79a1f7a4" (UID: "e060d488-b501-4271-a9b5-49cf79a1f7a4"). InnerVolumeSpecName "kube-api-access-zlqj2". PluginName "kubernetes.io/projected", VolumeGidValue ""
May 14 18:04:10.675521 kubelet[3601]: I0514 18:04:10.674812 3601 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e060d488-b501-4271-a9b5-49cf79a1f7a4-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "e060d488-b501-4271-a9b5-49cf79a1f7a4" (UID: "e060d488-b501-4271-a9b5-49cf79a1f7a4"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
May 14 18:04:10.676592 kubelet[3601]: I0514 18:04:10.676096 3601 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e060d488-b501-4271-a9b5-49cf79a1f7a4-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "e060d488-b501-4271-a9b5-49cf79a1f7a4" (UID: "e060d488-b501-4271-a9b5-49cf79a1f7a4"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
May 14 18:04:10.676329 systemd[1]: var-lib-kubelet-pods-e060d488\x2db501\x2d4271\x2da9b5\x2d49cf79a1f7a4-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dzlqj2.mount: Deactivated successfully.
May 14 18:04:10.676396 systemd[1]: var-lib-kubelet-pods-e060d488\x2db501\x2d4271\x2da9b5\x2d49cf79a1f7a4-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
May 14 18:04:10.742248 containerd[1906]: time="2025-05-14T18:04:10.742105303Z" level=info msg="RemoveContainer for \"d43a1e52de42c300c6817d93184f7fd8274e718e9857d8a2c8791806b7e45c9f\" returns successfully"
May 14 18:04:10.745230 kubelet[3601]: I0514 18:04:10.745200 3601 scope.go:117] "RemoveContainer" containerID="72fc64f3d778c225bd87127cbe3d6f5020d56a43f4e381bee98727666f7880eb"
May 14 18:04:10.748262 containerd[1906]: time="2025-05-14T18:04:10.748243182Z" level=info msg="RemoveContainer for \"72fc64f3d778c225bd87127cbe3d6f5020d56a43f4e381bee98727666f7880eb\""
May 14 18:04:10.754565 systemd[1]: Removed slice kubepods-burstable-pode060d488_b501_4271_a9b5_49cf79a1f7a4.slice - libcontainer container kubepods-burstable-pode060d488_b501_4271_a9b5_49cf79a1f7a4.slice.
May 14 18:04:10.756056 systemd[1]: kubepods-burstable-pode060d488_b501_4271_a9b5_49cf79a1f7a4.slice: Consumed 4.291s CPU time, 125.4M memory peak, 136K read from disk, 12.9M written to disk.
May 14 18:04:10.757631 systemd[1]: Removed slice kubepods-besteffort-pod687e4d4b_09d9_4e3b_b440_e44865fe207e.slice - libcontainer container kubepods-besteffort-pod687e4d4b_09d9_4e3b_b440_e44865fe207e.slice.
May 14 18:04:10.763081 kubelet[3601]: I0514 18:04:10.763047 3601 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-zlqj2\" (UniqueName: \"kubernetes.io/projected/e060d488-b501-4271-a9b5-49cf79a1f7a4-kube-api-access-zlqj2\") on node \"ci-4334.0.0-a-9340e225f6\" DevicePath \"\""
May 14 18:04:10.763294 kubelet[3601]: I0514 18:04:10.763172 3601 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/687e4d4b-09d9-4e3b-b440-e44865fe207e-cilium-config-path\") on node \"ci-4334.0.0-a-9340e225f6\" DevicePath \"\""
May 14 18:04:10.763294 kubelet[3601]: I0514 18:04:10.763204 3601 reconciler_common.go:289] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/e060d488-b501-4271-a9b5-49cf79a1f7a4-clustermesh-secrets\") on node \"ci-4334.0.0-a-9340e225f6\" DevicePath \"\""
May 14 18:04:10.763294 kubelet[3601]: I0514 18:04:10.763212 3601 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e060d488-b501-4271-a9b5-49cf79a1f7a4-cilium-config-path\") on node \"ci-4334.0.0-a-9340e225f6\" DevicePath \"\""
May 14 18:04:10.763294 kubelet[3601]: I0514 18:04:10.763220 3601 reconciler_common.go:289] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/e060d488-b501-4271-a9b5-49cf79a1f7a4-hostproc\") on node \"ci-4334.0.0-a-9340e225f6\" DevicePath \"\""
May 14 18:04:10.763294 kubelet[3601]: I0514 18:04:10.763227 3601 reconciler_common.go:289] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/e060d488-b501-4271-a9b5-49cf79a1f7a4-hubble-tls\") on node \"ci-4334.0.0-a-9340e225f6\" DevicePath \"\""
May 14 18:04:10.763294 kubelet[3601]: I0514 18:04:10.763233 3601 reconciler_common.go:289] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/e060d488-b501-4271-a9b5-49cf79a1f7a4-bpf-maps\") on node \"ci-4334.0.0-a-9340e225f6\" DevicePath \"\""
May 14 18:04:10.763294 kubelet[3601]: I0514 18:04:10.763238 3601 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-cv6tk\" (UniqueName: \"kubernetes.io/projected/687e4d4b-09d9-4e3b-b440-e44865fe207e-kube-api-access-cv6tk\") on node \"ci-4334.0.0-a-9340e225f6\" DevicePath \"\""
May 14 18:04:10.763294 kubelet[3601]: I0514 18:04:10.763245 3601 reconciler_common.go:289] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e060d488-b501-4271-a9b5-49cf79a1f7a4-xtables-lock\") on node \"ci-4334.0.0-a-9340e225f6\" DevicePath \"\""
May 14 18:04:10.763436 kubelet[3601]: I0514 18:04:10.763251 3601 reconciler_common.go:289] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/e060d488-b501-4271-a9b5-49cf79a1f7a4-cilium-run\") on node \"ci-4334.0.0-a-9340e225f6\" DevicePath \"\""
May 14 18:04:10.763436 kubelet[3601]: I0514 18:04:10.763257 3601 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/e060d488-b501-4271-a9b5-49cf79a1f7a4-host-proc-sys-kernel\") on node \"ci-4334.0.0-a-9340e225f6\" DevicePath \"\""
May 14 18:04:10.763436 kubelet[3601]: I0514 18:04:10.763263 3601 reconciler_common.go:289] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/e060d488-b501-4271-a9b5-49cf79a1f7a4-cilium-cgroup\") on node \"ci-4334.0.0-a-9340e225f6\" DevicePath \"\""
May 14 18:04:10.763436 kubelet[3601]: I0514 18:04:10.763268 3601 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/e060d488-b501-4271-a9b5-49cf79a1f7a4-host-proc-sys-net\") on node \"ci-4334.0.0-a-9340e225f6\" DevicePath \"\""
May 14 18:04:10.763436 kubelet[3601]: I0514 18:04:10.763273 3601 reconciler_common.go:289] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/e060d488-b501-4271-a9b5-49cf79a1f7a4-cni-path\") on node \"ci-4334.0.0-a-9340e225f6\" DevicePath \"\""
May 14 18:04:10.763436 kubelet[3601]: I0514 18:04:10.763278 3601 reconciler_common.go:289] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e060d488-b501-4271-a9b5-49cf79a1f7a4-lib-modules\") on node \"ci-4334.0.0-a-9340e225f6\" DevicePath \"\""
May 14 18:04:10.763436 kubelet[3601]: I0514 18:04:10.763284 3601 reconciler_common.go:289] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e060d488-b501-4271-a9b5-49cf79a1f7a4-etc-cni-netd\") on node \"ci-4334.0.0-a-9340e225f6\" DevicePath \"\""
May 14 18:04:10.835321 containerd[1906]: time="2025-05-14T18:04:10.835287483Z" level=info msg="RemoveContainer for \"72fc64f3d778c225bd87127cbe3d6f5020d56a43f4e381bee98727666f7880eb\" returns successfully"
May 14 18:04:10.835591 kubelet[3601]: I0514 18:04:10.835572 3601 scope.go:117] "RemoveContainer" containerID="e1ae58a8d412294f0ec0cc0db9dd6b13c6c9c2c2dd915697b9fa23e746ffa5d1"
May 14 18:04:10.836942 containerd[1906]: time="2025-05-14T18:04:10.836891927Z" level=info msg="RemoveContainer for \"e1ae58a8d412294f0ec0cc0db9dd6b13c6c9c2c2dd915697b9fa23e746ffa5d1\""
May 14 18:04:10.882000 kubelet[3601]: I0514 18:04:10.880588 3601 topology_manager.go:215] "Topology Admit Handler" podUID="5dc9a3f4-1f36-4f71-b9f9-90f9a417c113" podNamespace="kube-system" podName="cilium-6hmps"
May 14 18:04:10.882000 kubelet[3601]: E0514 18:04:10.880653 3601 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="e060d488-b501-4271-a9b5-49cf79a1f7a4" containerName="mount-cgroup"
May 14 18:04:10.882000 kubelet[3601]: E0514 18:04:10.880661 3601 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="e060d488-b501-4271-a9b5-49cf79a1f7a4" containerName="apply-sysctl-overwrites"
May 14 18:04:10.882000 kubelet[3601]: E0514 18:04:10.880665 3601 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="e060d488-b501-4271-a9b5-49cf79a1f7a4" containerName="cilium-agent"
May 14 18:04:10.882000 kubelet[3601]: E0514 18:04:10.880669 3601 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="687e4d4b-09d9-4e3b-b440-e44865fe207e" containerName="cilium-operator"
May 14 18:04:10.882000 kubelet[3601]: E0514 18:04:10.880673 3601 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="e060d488-b501-4271-a9b5-49cf79a1f7a4" containerName="mount-bpf-fs"
May 14 18:04:10.882000 kubelet[3601]: E0514 18:04:10.880676 3601 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="e060d488-b501-4271-a9b5-49cf79a1f7a4" containerName="clean-cilium-state"
May 14 18:04:10.882000 kubelet[3601]: I0514 18:04:10.880691 3601 memory_manager.go:354] "RemoveStaleState removing state" podUID="e060d488-b501-4271-a9b5-49cf79a1f7a4" containerName="cilium-agent"
May 14 18:04:10.882000 kubelet[3601]: I0514 18:04:10.880696 3601 memory_manager.go:354] "RemoveStaleState removing state" podUID="687e4d4b-09d9-4e3b-b440-e44865fe207e" containerName="cilium-operator"
May 14 18:04:10.888436 systemd[1]: Created slice kubepods-burstable-pod5dc9a3f4_1f36_4f71_b9f9_90f9a417c113.slice - libcontainer container kubepods-burstable-pod5dc9a3f4_1f36_4f71_b9f9_90f9a417c113.slice.
May 14 18:04:10.927563 sshd[5262]: Connection closed by 10.200.16.10 port 58282
May 14 18:04:10.928458 sshd-session[5239]: pam_unix(sshd:session): session closed for user core
May 14 18:04:10.929941 containerd[1906]: time="2025-05-14T18:04:10.929892455Z" level=info msg="RemoveContainer for \"e1ae58a8d412294f0ec0cc0db9dd6b13c6c9c2c2dd915697b9fa23e746ffa5d1\" returns successfully"
May 14 18:04:10.930307 kubelet[3601]: I0514 18:04:10.930231 3601 scope.go:117] "RemoveContainer" containerID="fbead3f35dd919e7f640e619845a2af370b4a7834e7f7e7dae7b9fe47c7a6e1e"
May 14 18:04:10.931622 systemd[1]: sshd@22-10.200.20.4:22-10.200.16.10:58282.service: Deactivated successfully.
May 14 18:04:10.932095 containerd[1906]: time="2025-05-14T18:04:10.932041673Z" level=info msg="RemoveContainer for \"fbead3f35dd919e7f640e619845a2af370b4a7834e7f7e7dae7b9fe47c7a6e1e\""
May 14 18:04:10.933290 systemd-logind[1872]: Session 25 logged out. Waiting for processes to exit.
May 14 18:04:10.934162 systemd[1]: session-25.scope: Deactivated successfully.
May 14 18:04:10.936488 systemd-logind[1872]: Removed session 25.
May 14 18:04:10.965737 kubelet[3601]: I0514 18:04:10.965711 3601 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/5dc9a3f4-1f36-4f71-b9f9-90f9a417c113-hostproc\") pod \"cilium-6hmps\" (UID: \"5dc9a3f4-1f36-4f71-b9f9-90f9a417c113\") " pod="kube-system/cilium-6hmps"
May 14 18:04:10.965737 kubelet[3601]: I0514 18:04:10.965741 3601 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5dc9a3f4-1f36-4f71-b9f9-90f9a417c113-lib-modules\") pod \"cilium-6hmps\" (UID: \"5dc9a3f4-1f36-4f71-b9f9-90f9a417c113\") " pod="kube-system/cilium-6hmps"
May 14 18:04:10.965984 kubelet[3601]: I0514 18:04:10.965755 3601 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5dc9a3f4-1f36-4f71-b9f9-90f9a417c113-xtables-lock\") pod \"cilium-6hmps\" (UID: \"5dc9a3f4-1f36-4f71-b9f9-90f9a417c113\") " pod="kube-system/cilium-6hmps"
May 14 18:04:10.965984 kubelet[3601]: I0514 18:04:10.965769 3601 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/5dc9a3f4-1f36-4f71-b9f9-90f9a417c113-cilium-cgroup\") pod \"cilium-6hmps\" (UID: \"5dc9a3f4-1f36-4f71-b9f9-90f9a417c113\") " pod="kube-system/cilium-6hmps"
May 14 18:04:10.965984 kubelet[3601]: I0514 18:04:10.965782 3601 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/5dc9a3f4-1f36-4f71-b9f9-90f9a417c113-cni-path\") pod \"cilium-6hmps\" (UID: \"5dc9a3f4-1f36-4f71-b9f9-90f9a417c113\") " pod="kube-system/cilium-6hmps"
May 14 18:04:10.965984 kubelet[3601]: I0514 18:04:10.965808 3601 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/5dc9a3f4-1f36-4f71-b9f9-90f9a417c113-cilium-ipsec-secrets\") pod \"cilium-6hmps\" (UID: \"5dc9a3f4-1f36-4f71-b9f9-90f9a417c113\") " pod="kube-system/cilium-6hmps"
May 14 18:04:10.965984 kubelet[3601]: I0514 18:04:10.965822 3601 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vhx8f\" (UniqueName: \"kubernetes.io/projected/5dc9a3f4-1f36-4f71-b9f9-90f9a417c113-kube-api-access-vhx8f\") pod \"cilium-6hmps\" (UID: \"5dc9a3f4-1f36-4f71-b9f9-90f9a417c113\") " pod="kube-system/cilium-6hmps"
May 14 18:04:10.965984 kubelet[3601]: I0514 18:04:10.965841 3601 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/5dc9a3f4-1f36-4f71-b9f9-90f9a417c113-clustermesh-secrets\") pod \"cilium-6hmps\" (UID: \"5dc9a3f4-1f36-4f71-b9f9-90f9a417c113\") " pod="kube-system/cilium-6hmps"
May 14 18:04:10.966117 kubelet[3601]: I0514 18:04:10.965852 3601 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/5dc9a3f4-1f36-4f71-b9f9-90f9a417c113-host-proc-sys-net\") pod \"cilium-6hmps\" (UID: \"5dc9a3f4-1f36-4f71-b9f9-90f9a417c113\") " pod="kube-system/cilium-6hmps"
May 14 18:04:10.966117 kubelet[3601]: I0514 18:04:10.965864 3601 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/5dc9a3f4-1f36-4f71-b9f9-90f9a417c113-hubble-tls\") pod \"cilium-6hmps\" (UID: \"5dc9a3f4-1f36-4f71-b9f9-90f9a417c113\") " pod="kube-system/cilium-6hmps"
May 14 18:04:10.966117 kubelet[3601]: I0514 18:04:10.965876 3601 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/5dc9a3f4-1f36-4f71-b9f9-90f9a417c113-etc-cni-netd\") pod \"cilium-6hmps\" (UID: \"5dc9a3f4-1f36-4f71-b9f9-90f9a417c113\") " pod="kube-system/cilium-6hmps"
May 14 18:04:10.966117 kubelet[3601]: I0514 18:04:10.965893 3601 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/5dc9a3f4-1f36-4f71-b9f9-90f9a417c113-host-proc-sys-kernel\") pod \"cilium-6hmps\" (UID: \"5dc9a3f4-1f36-4f71-b9f9-90f9a417c113\") " pod="kube-system/cilium-6hmps"
May 14 18:04:10.966117 kubelet[3601]: I0514 18:04:10.965904 3601 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/5dc9a3f4-1f36-4f71-b9f9-90f9a417c113-cilium-run\") pod \"cilium-6hmps\" (UID: \"5dc9a3f4-1f36-4f71-b9f9-90f9a417c113\") " pod="kube-system/cilium-6hmps"
May 14 18:04:10.966117 kubelet[3601]: I0514 18:04:10.965913 3601 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/5dc9a3f4-1f36-4f71-b9f9-90f9a417c113-cilium-config-path\") pod \"cilium-6hmps\" (UID: \"5dc9a3f4-1f36-4f71-b9f9-90f9a417c113\") " pod="kube-system/cilium-6hmps"
May 14 18:04:10.966207 kubelet[3601]: I0514 18:04:10.965924 3601 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/5dc9a3f4-1f36-4f71-b9f9-90f9a417c113-bpf-maps\") pod \"cilium-6hmps\" (UID: \"5dc9a3f4-1f36-4f71-b9f9-90f9a417c113\") " pod="kube-system/cilium-6hmps"
May 14 18:04:11.009227 systemd[1]: Started sshd@23-10.200.20.4:22-10.200.16.10:58286.service - OpenSSH per-connection server daemon (10.200.16.10:58286).
May 14 18:04:11.084509 containerd[1906]: time="2025-05-14T18:04:11.084479358Z" level=info msg="RemoveContainer for \"fbead3f35dd919e7f640e619845a2af370b4a7834e7f7e7dae7b9fe47c7a6e1e\" returns successfully"
May 14 18:04:11.193922 containerd[1906]: time="2025-05-14T18:04:11.193727760Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-6hmps,Uid:5dc9a3f4-1f36-4f71-b9f9-90f9a417c113,Namespace:kube-system,Attempt:0,}"
May 14 18:04:11.461520 sshd[5307]: Accepted publickey for core from 10.200.16.10 port 58286 ssh2: RSA SHA256:GfAM5aEyZtKI1wsLTx07KX72HiGyI6L7Lx/Fls8o8zc
May 14 18:04:11.462909 sshd-session[5307]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 18:04:11.466991 systemd-logind[1872]: New session 26 of user core.
May 14 18:04:11.473086 systemd[1]: Started session-26.scope - Session 26 of User core.
May 14 18:04:11.646148 containerd[1906]: time="2025-05-14T18:04:11.646109062Z" level=info msg="connecting to shim 2681b1e4dc841ae528263f384b1b2af5a1f9fd2ec4f987cea1af975881a868d1" address="unix:///run/containerd/s/35558cf5bfb64c9839003b8d9a0743b09999bed9b9c14ca771451f6a593db5d7" namespace=k8s.io protocol=ttrpc version=3
May 14 18:04:11.668080 systemd[1]: Started cri-containerd-2681b1e4dc841ae528263f384b1b2af5a1f9fd2ec4f987cea1af975881a868d1.scope - libcontainer container 2681b1e4dc841ae528263f384b1b2af5a1f9fd2ec4f987cea1af975881a868d1.
May 14 18:04:11.690478 containerd[1906]: time="2025-05-14T18:04:11.690391485Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-6hmps,Uid:5dc9a3f4-1f36-4f71-b9f9-90f9a417c113,Namespace:kube-system,Attempt:0,} returns sandbox id \"2681b1e4dc841ae528263f384b1b2af5a1f9fd2ec4f987cea1af975881a868d1\""
May 14 18:04:11.693509 containerd[1906]: time="2025-05-14T18:04:11.693489626Z" level=info msg="CreateContainer within sandbox \"2681b1e4dc841ae528263f384b1b2af5a1f9fd2ec4f987cea1af975881a868d1\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
May 14 18:04:11.784835 sshd[5313]: Connection closed by 10.200.16.10 port 58286
May 14 18:04:11.785198 sshd-session[5307]: pam_unix(sshd:session): session closed for user core
May 14 18:04:11.789452 systemd[1]: sshd@23-10.200.20.4:22-10.200.16.10:58286.service: Deactivated successfully.
May 14 18:04:11.794682 systemd[1]: session-26.scope: Deactivated successfully.
May 14 18:04:11.797294 systemd-logind[1872]: Session 26 logged out. Waiting for processes to exit.
May 14 18:04:11.798317 systemd-logind[1872]: Removed session 26.
May 14 18:04:11.866630 systemd[1]: Started sshd@24-10.200.20.4:22-10.200.16.10:58288.service - OpenSSH per-connection server daemon (10.200.16.10:58288).
May 14 18:04:11.880599 containerd[1906]: time="2025-05-14T18:04:11.880525973Z" level=info msg="Container a604278eb2211aa62478cfae6ba9e24d2d78f95bfeda627f8418779e1c3939a3: CDI devices from CRI Config.CDIDevices: []"
May 14 18:04:11.883095 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1467958276.mount: Deactivated successfully.
May 14 18:04:12.003069 containerd[1906]: time="2025-05-14T18:04:12.003023929Z" level=info msg="CreateContainer within sandbox \"2681b1e4dc841ae528263f384b1b2af5a1f9fd2ec4f987cea1af975881a868d1\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"a604278eb2211aa62478cfae6ba9e24d2d78f95bfeda627f8418779e1c3939a3\""
May 14 18:04:12.003800 containerd[1906]: time="2025-05-14T18:04:12.003779886Z" level=info msg="StartContainer for \"a604278eb2211aa62478cfae6ba9e24d2d78f95bfeda627f8418779e1c3939a3\""
May 14 18:04:12.004754 containerd[1906]: time="2025-05-14T18:04:12.004728336Z" level=info msg="connecting to shim a604278eb2211aa62478cfae6ba9e24d2d78f95bfeda627f8418779e1c3939a3" address="unix:///run/containerd/s/35558cf5bfb64c9839003b8d9a0743b09999bed9b9c14ca771451f6a593db5d7" protocol=ttrpc version=3
May 14 18:04:12.021099 systemd[1]: Started cri-containerd-a604278eb2211aa62478cfae6ba9e24d2d78f95bfeda627f8418779e1c3939a3.scope - libcontainer container a604278eb2211aa62478cfae6ba9e24d2d78f95bfeda627f8418779e1c3939a3.
May 14 18:04:12.046188 containerd[1906]: time="2025-05-14T18:04:12.046108776Z" level=info msg="StartContainer for \"a604278eb2211aa62478cfae6ba9e24d2d78f95bfeda627f8418779e1c3939a3\" returns successfully"
May 14 18:04:12.047545 systemd[1]: cri-containerd-a604278eb2211aa62478cfae6ba9e24d2d78f95bfeda627f8418779e1c3939a3.scope: Deactivated successfully.
May 14 18:04:12.049610 containerd[1906]: time="2025-05-14T18:04:12.049572526Z" level=info msg="received exit event container_id:\"a604278eb2211aa62478cfae6ba9e24d2d78f95bfeda627f8418779e1c3939a3\" id:\"a604278eb2211aa62478cfae6ba9e24d2d78f95bfeda627f8418779e1c3939a3\" pid:5379 exited_at:{seconds:1747245852 nanos:49239949}"
May 14 18:04:12.049726 containerd[1906]: time="2025-05-14T18:04:12.049694106Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a604278eb2211aa62478cfae6ba9e24d2d78f95bfeda627f8418779e1c3939a3\" id:\"a604278eb2211aa62478cfae6ba9e24d2d78f95bfeda627f8418779e1c3939a3\" pid:5379 exited_at:{seconds:1747245852 nanos:49239949}"
May 14 18:04:12.289262 sshd[5365]: Accepted publickey for core from 10.200.16.10 port 58288 ssh2: RSA SHA256:GfAM5aEyZtKI1wsLTx07KX72HiGyI6L7Lx/Fls8o8zc
May 14 18:04:12.290401 sshd-session[5365]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 18:04:12.294890 systemd-logind[1872]: New session 27 of user core.
May 14 18:04:12.300120 systemd[1]: Started session-27.scope - Session 27 of User core.
May 14 18:04:12.748881 kubelet[3601]: I0514 18:04:12.748784 3601 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="687e4d4b-09d9-4e3b-b440-e44865fe207e" path="/var/lib/kubelet/pods/687e4d4b-09d9-4e3b-b440-e44865fe207e/volumes"
May 14 18:04:12.778077 kubelet[3601]: I0514 18:04:12.749103 3601 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e060d488-b501-4271-a9b5-49cf79a1f7a4" path="/var/lib/kubelet/pods/e060d488-b501-4271-a9b5-49cf79a1f7a4/volumes"
May 14 18:04:13.031285 kubelet[3601]: I0514 18:04:12.925498 3601 setters.go:580] "Node became not ready" node="ci-4334.0.0-a-9340e225f6" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-05-14T18:04:12Z","lastTransitionTime":"2025-05-14T18:04:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
May 14 18:04:13.840796 kubelet[3601]: E0514 18:04:13.824015 3601 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
May 14 18:04:15.576203 containerd[1906]: time="2025-05-14T18:04:15.575374630Z" level=info msg="CreateContainer within sandbox \"2681b1e4dc841ae528263f384b1b2af5a1f9fd2ec4f987cea1af975881a868d1\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
May 14 18:04:15.697932 containerd[1906]: time="2025-05-14T18:04:15.697896053Z" level=info msg="Container 09eaaff1444a7ef4db07f0488c6962642fa7f27e761f20e16ddb9581e0aa5865: CDI devices from CRI Config.CDIDevices: []"
May 14 18:04:15.699595 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount823501438.mount: Deactivated successfully.
May 14 18:04:15.831742 containerd[1906]: time="2025-05-14T18:04:15.831547648Z" level=info msg="CreateContainer within sandbox \"2681b1e4dc841ae528263f384b1b2af5a1f9fd2ec4f987cea1af975881a868d1\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"09eaaff1444a7ef4db07f0488c6962642fa7f27e761f20e16ddb9581e0aa5865\""
May 14 18:04:15.832216 containerd[1906]: time="2025-05-14T18:04:15.832196443Z" level=info msg="StartContainer for \"09eaaff1444a7ef4db07f0488c6962642fa7f27e761f20e16ddb9581e0aa5865\""
May 14 18:04:15.833070 containerd[1906]: time="2025-05-14T18:04:15.833036947Z" level=info msg="connecting to shim 09eaaff1444a7ef4db07f0488c6962642fa7f27e761f20e16ddb9581e0aa5865" address="unix:///run/containerd/s/35558cf5bfb64c9839003b8d9a0743b09999bed9b9c14ca771451f6a593db5d7" protocol=ttrpc version=3
May 14 18:04:15.851087 systemd[1]: Started cri-containerd-09eaaff1444a7ef4db07f0488c6962642fa7f27e761f20e16ddb9581e0aa5865.scope - libcontainer container 09eaaff1444a7ef4db07f0488c6962642fa7f27e761f20e16ddb9581e0aa5865.
May 14 18:04:15.873732 systemd[1]: cri-containerd-09eaaff1444a7ef4db07f0488c6962642fa7f27e761f20e16ddb9581e0aa5865.scope: Deactivated successfully.
May 14 18:04:15.876345 containerd[1906]: time="2025-05-14T18:04:15.876292078Z" level=info msg="StartContainer for \"09eaaff1444a7ef4db07f0488c6962642fa7f27e761f20e16ddb9581e0aa5865\" returns successfully"
May 14 18:04:15.876503 containerd[1906]: time="2025-05-14T18:04:15.876419818Z" level=info msg="received exit event container_id:\"09eaaff1444a7ef4db07f0488c6962642fa7f27e761f20e16ddb9581e0aa5865\" id:\"09eaaff1444a7ef4db07f0488c6962642fa7f27e761f20e16ddb9581e0aa5865\" pid:5434 exited_at:{seconds:1747245855 nanos:874924318}"
May 14 18:04:15.877116 containerd[1906]: time="2025-05-14T18:04:15.877081813Z" level=info msg="TaskExit event in podsandbox handler container_id:\"09eaaff1444a7ef4db07f0488c6962642fa7f27e761f20e16ddb9581e0aa5865\" id:\"09eaaff1444a7ef4db07f0488c6962642fa7f27e761f20e16ddb9581e0aa5865\" pid:5434 exited_at:{seconds:1747245855 nanos:874924318}"
May 14 18:04:16.694341 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-09eaaff1444a7ef4db07f0488c6962642fa7f27e761f20e16ddb9581e0aa5865-rootfs.mount: Deactivated successfully.
May 14 18:04:17.582720 containerd[1906]: time="2025-05-14T18:04:17.582661053Z" level=info msg="CreateContainer within sandbox \"2681b1e4dc841ae528263f384b1b2af5a1f9fd2ec4f987cea1af975881a868d1\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
May 14 18:04:17.744541 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1455243399.mount: Deactivated successfully.
May 14 18:04:17.745728 containerd[1906]: time="2025-05-14T18:04:17.745055558Z" level=info msg="Container 9c69adb1f41ab068ddfc2ee44460f1e8b845a54f932ea9d01b12efffddb3c13b: CDI devices from CRI Config.CDIDevices: []"
May 14 18:04:17.841106 containerd[1906]: time="2025-05-14T18:04:17.840995317Z" level=info msg="CreateContainer within sandbox \"2681b1e4dc841ae528263f384b1b2af5a1f9fd2ec4f987cea1af975881a868d1\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"9c69adb1f41ab068ddfc2ee44460f1e8b845a54f932ea9d01b12efffddb3c13b\""
May 14 18:04:17.842182 containerd[1906]: time="2025-05-14T18:04:17.842149693Z" level=info msg="StartContainer for \"9c69adb1f41ab068ddfc2ee44460f1e8b845a54f932ea9d01b12efffddb3c13b\""
May 14 18:04:17.843145 containerd[1906]: time="2025-05-14T18:04:17.843120544Z" level=info msg="connecting to shim 9c69adb1f41ab068ddfc2ee44460f1e8b845a54f932ea9d01b12efffddb3c13b" address="unix:///run/containerd/s/35558cf5bfb64c9839003b8d9a0743b09999bed9b9c14ca771451f6a593db5d7" protocol=ttrpc version=3
May 14 18:04:17.859096 systemd[1]: Started cri-containerd-9c69adb1f41ab068ddfc2ee44460f1e8b845a54f932ea9d01b12efffddb3c13b.scope - libcontainer container 9c69adb1f41ab068ddfc2ee44460f1e8b845a54f932ea9d01b12efffddb3c13b.
May 14 18:04:17.883374 systemd[1]: cri-containerd-9c69adb1f41ab068ddfc2ee44460f1e8b845a54f932ea9d01b12efffddb3c13b.scope: Deactivated successfully.
May 14 18:04:17.884873 containerd[1906]: time="2025-05-14T18:04:17.884845690Z" level=info msg="TaskExit event in podsandbox handler container_id:\"9c69adb1f41ab068ddfc2ee44460f1e8b845a54f932ea9d01b12efffddb3c13b\" id:\"9c69adb1f41ab068ddfc2ee44460f1e8b845a54f932ea9d01b12efffddb3c13b\" pid:5479 exited_at:{seconds:1747245857 nanos:884603003}"
May 14 18:04:17.892233 containerd[1906]: time="2025-05-14T18:04:17.892180078Z" level=info msg="received exit event container_id:\"9c69adb1f41ab068ddfc2ee44460f1e8b845a54f932ea9d01b12efffddb3c13b\" id:\"9c69adb1f41ab068ddfc2ee44460f1e8b845a54f932ea9d01b12efffddb3c13b\" pid:5479 exited_at:{seconds:1747245857 nanos:884603003}"
May 14 18:04:17.894573 containerd[1906]: time="2025-05-14T18:04:17.894525255Z" level=info msg="StartContainer for \"9c69adb1f41ab068ddfc2ee44460f1e8b845a54f932ea9d01b12efffddb3c13b\" returns successfully"
May 14 18:04:18.739896 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9c69adb1f41ab068ddfc2ee44460f1e8b845a54f932ea9d01b12efffddb3c13b-rootfs.mount: Deactivated successfully.
May 14 18:04:18.824935 kubelet[3601]: E0514 18:04:18.824899 3601 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
May 14 18:04:19.593917 containerd[1906]: time="2025-05-14T18:04:19.593502325Z" level=info msg="CreateContainer within sandbox \"2681b1e4dc841ae528263f384b1b2af5a1f9fd2ec4f987cea1af975881a868d1\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
May 14 18:04:19.700472 containerd[1906]: time="2025-05-14T18:04:19.700246649Z" level=info msg="Container 332ca3924165c3254342bb3d7f9d2aa663a4a5ed68196bce4fc6a34b153303f9: CDI devices from CRI Config.CDIDevices: []"
May 14 18:04:19.703287 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount658769582.mount: Deactivated successfully.
May 14 18:04:19.837677 containerd[1906]: time="2025-05-14T18:04:19.837635809Z" level=info msg="CreateContainer within sandbox \"2681b1e4dc841ae528263f384b1b2af5a1f9fd2ec4f987cea1af975881a868d1\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"332ca3924165c3254342bb3d7f9d2aa663a4a5ed68196bce4fc6a34b153303f9\""
May 14 18:04:19.838343 containerd[1906]: time="2025-05-14T18:04:19.838304708Z" level=info msg="StartContainer for \"332ca3924165c3254342bb3d7f9d2aa663a4a5ed68196bce4fc6a34b153303f9\""
May 14 18:04:19.840537 containerd[1906]: time="2025-05-14T18:04:19.840488928Z" level=info msg="connecting to shim 332ca3924165c3254342bb3d7f9d2aa663a4a5ed68196bce4fc6a34b153303f9" address="unix:///run/containerd/s/35558cf5bfb64c9839003b8d9a0743b09999bed9b9c14ca771451f6a593db5d7" protocol=ttrpc version=3
May 14 18:04:19.859090 systemd[1]: Started cri-containerd-332ca3924165c3254342bb3d7f9d2aa663a4a5ed68196bce4fc6a34b153303f9.scope - libcontainer container 332ca3924165c3254342bb3d7f9d2aa663a4a5ed68196bce4fc6a34b153303f9.
May 14 18:04:19.876707 systemd[1]: cri-containerd-332ca3924165c3254342bb3d7f9d2aa663a4a5ed68196bce4fc6a34b153303f9.scope: Deactivated successfully.
May 14 18:04:19.878684 containerd[1906]: time="2025-05-14T18:04:19.878649631Z" level=info msg="TaskExit event in podsandbox handler container_id:\"332ca3924165c3254342bb3d7f9d2aa663a4a5ed68196bce4fc6a34b153303f9\" id:\"332ca3924165c3254342bb3d7f9d2aa663a4a5ed68196bce4fc6a34b153303f9\" pid:5520 exited_at:{seconds:1747245859 nanos:877781918}"
May 14 18:04:19.882347 containerd[1906]: time="2025-05-14T18:04:19.882308748Z" level=info msg="received exit event container_id:\"332ca3924165c3254342bb3d7f9d2aa663a4a5ed68196bce4fc6a34b153303f9\" id:\"332ca3924165c3254342bb3d7f9d2aa663a4a5ed68196bce4fc6a34b153303f9\" pid:5520 exited_at:{seconds:1747245859 nanos:877781918}"
May 14 18:04:19.890423 containerd[1906]: time="2025-05-14T18:04:19.888402022Z" level=info msg="StartContainer for \"332ca3924165c3254342bb3d7f9d2aa663a4a5ed68196bce4fc6a34b153303f9\" returns successfully"
May 14 18:04:19.901786 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-332ca3924165c3254342bb3d7f9d2aa663a4a5ed68196bce4fc6a34b153303f9-rootfs.mount: Deactivated successfully.
May 14 18:04:21.602000 containerd[1906]: time="2025-05-14T18:04:21.601442998Z" level=info msg="CreateContainer within sandbox \"2681b1e4dc841ae528263f384b1b2af5a1f9fd2ec4f987cea1af975881a868d1\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
May 14 18:04:21.739889 containerd[1906]: time="2025-05-14T18:04:21.739800834Z" level=info msg="Container 235d27d590ebf6fdaee9de5dee61328c25a474facd51f6b4c1d4e7a612e1ae4e: CDI devices from CRI Config.CDIDevices: []"
May 14 18:04:21.890730 containerd[1906]: time="2025-05-14T18:04:21.890569999Z" level=info msg="CreateContainer within sandbox \"2681b1e4dc841ae528263f384b1b2af5a1f9fd2ec4f987cea1af975881a868d1\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"235d27d590ebf6fdaee9de5dee61328c25a474facd51f6b4c1d4e7a612e1ae4e\""
May 14 18:04:21.891451 containerd[1906]: time="2025-05-14T18:04:21.891407374Z" level=info msg="StartContainer for \"235d27d590ebf6fdaee9de5dee61328c25a474facd51f6b4c1d4e7a612e1ae4e\""
May 14 18:04:21.892367 containerd[1906]: time="2025-05-14T18:04:21.892340944Z" level=info msg="connecting to shim 235d27d590ebf6fdaee9de5dee61328c25a474facd51f6b4c1d4e7a612e1ae4e" address="unix:///run/containerd/s/35558cf5bfb64c9839003b8d9a0743b09999bed9b9c14ca771451f6a593db5d7" protocol=ttrpc version=3
May 14 18:04:21.909087 systemd[1]: Started cri-containerd-235d27d590ebf6fdaee9de5dee61328c25a474facd51f6b4c1d4e7a612e1ae4e.scope - libcontainer container 235d27d590ebf6fdaee9de5dee61328c25a474facd51f6b4c1d4e7a612e1ae4e.
May 14 18:04:21.941057 containerd[1906]: time="2025-05-14T18:04:21.940992515Z" level=info msg="StartContainer for \"235d27d590ebf6fdaee9de5dee61328c25a474facd51f6b4c1d4e7a612e1ae4e\" returns successfully"
May 14 18:04:22.000226 containerd[1906]: time="2025-05-14T18:04:22.000178850Z" level=info msg="TaskExit event in podsandbox handler container_id:\"235d27d590ebf6fdaee9de5dee61328c25a474facd51f6b4c1d4e7a612e1ae4e\" id:\"f46480e50342d1ee586108728752283bde20344e68e6bdcc1b9defd3af0d7ffc\" pid:5589 exited_at:{seconds:1747245861 nanos:999620291}"
May 14 18:04:22.392045 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce))
May 14 18:04:22.618501 kubelet[3601]: I0514 18:04:22.618445 3601 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-6hmps" podStartSLOduration=12.618431461 podStartE2EDuration="12.618431461s" podCreationTimestamp="2025-05-14 18:04:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-14 18:04:22.618249032 +0000 UTC m=+193.938434757" watchObservedRunningTime="2025-05-14 18:04:22.618431461 +0000 UTC m=+193.938617170"
May 14 18:04:22.866353 containerd[1906]: time="2025-05-14T18:04:22.866300273Z" level=info msg="TaskExit event in podsandbox handler container_id:\"235d27d590ebf6fdaee9de5dee61328c25a474facd51f6b4c1d4e7a612e1ae4e\" id:\"b6b74e5b2e840df3f170ce3978f2d6b24c78f31085cefee01ba8759894813a2d\" pid:5667 exit_status:1 exited_at:{seconds:1747245862 nanos:865894749}"
May 14 18:04:24.704306 systemd-networkd[1480]: lxc_health: Link UP
May 14 18:04:24.714573 systemd-networkd[1480]: lxc_health: Gained carrier
May 14 18:04:24.978185 containerd[1906]: time="2025-05-14T18:04:24.977945635Z" level=info msg="TaskExit event in podsandbox handler container_id:\"235d27d590ebf6fdaee9de5dee61328c25a474facd51f6b4c1d4e7a612e1ae4e\" id:\"64f0b71840f4f1c21ac4b80396a3c425e4df6701e4b1276d52802622071175d9\" pid:6111 exited_at:{seconds:1747245864 nanos:977640027}"
May 14 18:04:25.762085 systemd-networkd[1480]: lxc_health: Gained IPv6LL
May 14 18:04:27.061210 containerd[1906]: time="2025-05-14T18:04:27.061166319Z" level=info msg="TaskExit event in podsandbox handler container_id:\"235d27d590ebf6fdaee9de5dee61328c25a474facd51f6b4c1d4e7a612e1ae4e\" id:\"c450df302ebadfd0916186669ed8348ae6c781309f18fa03751cfd3912d6e4fe\" pid:6151 exited_at:{seconds:1747245867 nanos:60006712}"
May 14 18:04:29.129306 containerd[1906]: time="2025-05-14T18:04:29.129269736Z" level=info msg="TaskExit event in podsandbox handler container_id:\"235d27d590ebf6fdaee9de5dee61328c25a474facd51f6b4c1d4e7a612e1ae4e\" id:\"fbcf5888f1d13a1e637a4c1b1a2e9f5ecb3ee4d444b4a2ef0030d7cdad61fa85\" pid:6179 exited_at:{seconds:1747245869 nanos:128236988}"
May 14 18:04:31.196244 containerd[1906]: time="2025-05-14T18:04:31.196137170Z" level=info msg="TaskExit event in podsandbox handler container_id:\"235d27d590ebf6fdaee9de5dee61328c25a474facd51f6b4c1d4e7a612e1ae4e\" id:\"798194c2125a2c47b2685948c4d5024e59da70fb0f2f518ebafa95260176244d\" pid:6200 exited_at:{seconds:1747245871 nanos:195785936}"
May 14 18:04:31.198862 kubelet[3601]: E0514 18:04:31.198826 3601 upgradeaware.go:441] Error proxying data from backend to client: writeto tcp 127.0.0.1:34568->127.0.0.1:46079: read tcp 127.0.0.1:34568->127.0.0.1:46079: read: connection reset by peer
May 14 18:04:31.272681 sshd[5416]: Connection closed by 10.200.16.10 port 58288
May 14 18:04:31.273251 sshd-session[5365]: pam_unix(sshd:session): session closed for user core
May 14 18:04:31.276245 systemd[1]: sshd@24-10.200.20.4:22-10.200.16.10:58288.service: Deactivated successfully.
May 14 18:04:31.277747 systemd[1]: session-27.scope: Deactivated successfully.
May 14 18:04:31.279444 systemd-logind[1872]: Session 27 logged out. Waiting for processes to exit.
May 14 18:04:31.281019 systemd-logind[1872]: Removed session 27.