May 9 04:48:17.920511 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
May 9 04:48:17.920533 kernel: Linux version 6.6.89-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT Fri May 9 03:42:00 -00 2025
May 9 04:48:17.920555 kernel: KASLR enabled
May 9 04:48:17.920561 kernel: efi: EFI v2.7 by EDK II
May 9 04:48:17.920567 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdb832018 ACPI 2.0=0xdbfd0018 RNG=0xdbfd0a18 MEMRESERVE=0xdb838218
May 9 04:48:17.920573 kernel: random: crng init done
May 9 04:48:17.920580 kernel: secureboot: Secure boot disabled
May 9 04:48:17.920586 kernel: ACPI: Early table checksum verification disabled
May 9 04:48:17.920592 kernel: ACPI: RSDP 0x00000000DBFD0018 000024 (v02 BOCHS )
May 9 04:48:17.920599 kernel: ACPI: XSDT 0x00000000DBFD0F18 000064 (v01 BOCHS BXPC 00000001 01000013)
May 9 04:48:17.920605 kernel: ACPI: FACP 0x00000000DBFD0B18 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
May 9 04:48:17.920622 kernel: ACPI: DSDT 0x00000000DBF0E018 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
May 9 04:48:17.920628 kernel: ACPI: APIC 0x00000000DBFD0C98 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
May 9 04:48:17.920634 kernel: ACPI: PPTT 0x00000000DBFD0098 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
May 9 04:48:17.920641 kernel: ACPI: GTDT 0x00000000DBFD0818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
May 9 04:48:17.920649 kernel: ACPI: MCFG 0x00000000DBFD0A98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 9 04:48:17.920683 kernel: ACPI: SPCR 0x00000000DBFD0918 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
May 9 04:48:17.920690 kernel: ACPI: DBG2 0x00000000DBFD0998 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
May 9 04:48:17.920697 kernel: ACPI: IORT 0x00000000DBFD0198 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
May 9 04:48:17.920703 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
May 9 04:48:17.920709 kernel: NUMA: Failed to initialise from firmware
May 9 04:48:17.920715 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
May 9 04:48:17.920721 kernel: NUMA: NODE_DATA [mem 0xdc955e00-0xdc95cfff]
May 9 04:48:17.920727 kernel: Zone ranges:
May 9 04:48:17.920733 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff]
May 9 04:48:17.920741 kernel: DMA32 empty
May 9 04:48:17.920747 kernel: Normal empty
May 9 04:48:17.920753 kernel: Device empty
May 9 04:48:17.920758 kernel: Movable zone start for each node
May 9 04:48:17.920764 kernel: Early memory node ranges
May 9 04:48:17.920770 kernel: node 0: [mem 0x0000000040000000-0x00000000db81ffff]
May 9 04:48:17.920778 kernel: node 0: [mem 0x00000000db820000-0x00000000db82ffff]
May 9 04:48:17.920784 kernel: node 0: [mem 0x00000000db830000-0x00000000dc09ffff]
May 9 04:48:17.920790 kernel: node 0: [mem 0x00000000dc0a0000-0x00000000dc2dffff]
May 9 04:48:17.920796 kernel: node 0: [mem 0x00000000dc2e0000-0x00000000dc36ffff]
May 9 04:48:17.920802 kernel: node 0: [mem 0x00000000dc370000-0x00000000dc45ffff]
May 9 04:48:17.920808 kernel: node 0: [mem 0x00000000dc460000-0x00000000dc52ffff]
May 9 04:48:17.920814 kernel: node 0: [mem 0x00000000dc530000-0x00000000dc5cffff]
May 9 04:48:17.920821 kernel: node 0: [mem 0x00000000dc5d0000-0x00000000dce1ffff]
May 9 04:48:17.920828 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff]
May 9 04:48:17.920836 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff]
May 9 04:48:17.920843 kernel: node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff]
May 9 04:48:17.920849 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff]
May 9 04:48:17.920857 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
May 9 04:48:17.920864 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
May 9 04:48:17.920870 kernel: psci: probing for conduit method from ACPI.
May 9 04:48:17.920877 kernel: psci: PSCIv1.1 detected in firmware.
May 9 04:48:17.920883 kernel: psci: Using standard PSCI v0.2 function IDs
May 9 04:48:17.920890 kernel: psci: Trusted OS migration not required
May 9 04:48:17.920896 kernel: psci: SMC Calling Convention v1.1
May 9 04:48:17.920903 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
May 9 04:48:17.920909 kernel: percpu: Embedded 31 pages/cpu s87016 r8192 d31768 u126976
May 9 04:48:17.920916 kernel: pcpu-alloc: s87016 r8192 d31768 u126976 alloc=31*4096
May 9 04:48:17.920922 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
May 9 04:48:17.920930 kernel: Detected PIPT I-cache on CPU0
May 9 04:48:17.920937 kernel: CPU features: detected: GIC system register CPU interface
May 9 04:48:17.920947 kernel: CPU features: detected: Hardware dirty bit management
May 9 04:48:17.920954 kernel: CPU features: detected: Spectre-v4
May 9 04:48:17.920960 kernel: CPU features: detected: Spectre-BHB
May 9 04:48:17.920967 kernel: CPU features: kernel page table isolation forced ON by KASLR
May 9 04:48:17.920973 kernel: CPU features: detected: Kernel page table isolation (KPTI)
May 9 04:48:17.920980 kernel: CPU features: detected: ARM erratum 1418040
May 9 04:48:17.920986 kernel: CPU features: detected: SSBS not fully self-synchronizing
May 9 04:48:17.920993 kernel: alternatives: applying boot alternatives
May 9 04:48:17.921000 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=180634d3e256b1dbb5700949694cb34c82ca79af028365e078744f4de51d78d8
May 9 04:48:17.921008 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
May 9 04:48:17.921015 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
May 9 04:48:17.921022 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
May 9 04:48:17.921028 kernel: Fallback order for Node 0: 0
May 9 04:48:17.921035 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024
May 9 04:48:17.921041 kernel: Policy zone: DMA
May 9 04:48:17.921048 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
May 9 04:48:17.921054 kernel: software IO TLB: area num 4.
May 9 04:48:17.921061 kernel: software IO TLB: mapped [mem 0x00000000d5000000-0x00000000d9000000] (64MB)
May 9 04:48:17.921068 kernel: Memory: 2386500K/2572288K available (10432K kernel code, 2202K rwdata, 8168K rodata, 39040K init, 993K bss, 185788K reserved, 0K cma-reserved)
May 9 04:48:17.921075 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
May 9 04:48:17.921085 kernel: rcu: Preemptible hierarchical RCU implementation.
May 9 04:48:17.921092 kernel: rcu: RCU event tracing is enabled.
May 9 04:48:17.921098 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
May 9 04:48:17.921105 kernel: Trampoline variant of Tasks RCU enabled.
May 9 04:48:17.921112 kernel: Tracing variant of Tasks RCU enabled.
May 9 04:48:17.921118 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
May 9 04:48:17.921125 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
May 9 04:48:17.921131 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
May 9 04:48:17.921138 kernel: GICv3: 256 SPIs implemented
May 9 04:48:17.921144 kernel: GICv3: 0 Extended SPIs implemented
May 9 04:48:17.921151 kernel: Root IRQ handler: gic_handle_irq
May 9 04:48:17.921157 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
May 9 04:48:17.921165 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
May 9 04:48:17.921172 kernel: ITS [mem 0x08080000-0x0809ffff]
May 9 04:48:17.921179 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400d0000 (indirect, esz 8, psz 64K, shr 1)
May 9 04:48:17.921185 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400e0000 (flat, esz 8, psz 64K, shr 1)
May 9 04:48:17.921192 kernel: GICv3: using LPI property table @0x00000000400f0000
May 9 04:48:17.921198 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040100000
May 9 04:48:17.921205 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
May 9 04:48:17.921212 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 9 04:48:17.921218 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
May 9 04:48:17.921225 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
May 9 04:48:17.921231 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
May 9 04:48:17.921239 kernel: arm-pv: using stolen time PV
May 9 04:48:17.921246 kernel: Console: colour dummy device 80x25
May 9 04:48:17.921253 kernel: ACPI: Core revision 20230628
May 9 04:48:17.921264 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
May 9 04:48:17.921270 kernel: pid_max: default: 32768 minimum: 301
May 9 04:48:17.921277 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
May 9 04:48:17.921284 kernel: landlock: Up and running.
May 9 04:48:17.921291 kernel: SELinux: Initializing.
May 9 04:48:17.921297 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
May 9 04:48:17.921305 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
May 9 04:48:17.921312 kernel: ACPI PPTT: PPTT table found, but unable to locate core 3 (3)
May 9 04:48:17.921319 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
May 9 04:48:17.921326 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
May 9 04:48:17.921333 kernel: rcu: Hierarchical SRCU implementation.
May 9 04:48:17.921340 kernel: rcu: Max phase no-delay instances is 400.
May 9 04:48:17.921349 kernel: Platform MSI: ITS@0x8080000 domain created
May 9 04:48:17.921356 kernel: PCI/MSI: ITS@0x8080000 domain created
May 9 04:48:17.921362 kernel: Remapping and enabling EFI services.
May 9 04:48:17.921371 kernel: smp: Bringing up secondary CPUs ...
May 9 04:48:17.921382 kernel: Detected PIPT I-cache on CPU1
May 9 04:48:17.921390 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
May 9 04:48:17.921398 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040110000
May 9 04:48:17.921405 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 9 04:48:17.921412 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
May 9 04:48:17.921419 kernel: Detected PIPT I-cache on CPU2
May 9 04:48:17.921426 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
May 9 04:48:17.921433 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040120000
May 9 04:48:17.921441 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 9 04:48:17.921448 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
May 9 04:48:17.921455 kernel: Detected PIPT I-cache on CPU3
May 9 04:48:17.921463 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
May 9 04:48:17.921470 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040130000
May 9 04:48:17.921477 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 9 04:48:17.921484 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
May 9 04:48:17.921491 kernel: smp: Brought up 1 node, 4 CPUs
May 9 04:48:17.921498 kernel: SMP: Total of 4 processors activated.
May 9 04:48:17.921506 kernel: CPU features: detected: 32-bit EL0 Support
May 9 04:48:17.921513 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
May 9 04:48:17.921520 kernel: CPU features: detected: Common not Private translations
May 9 04:48:17.921527 kernel: CPU features: detected: CRC32 instructions
May 9 04:48:17.921534 kernel: CPU features: detected: Enhanced Virtualization Traps
May 9 04:48:17.921541 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
May 9 04:48:17.921548 kernel: CPU features: detected: LSE atomic instructions
May 9 04:48:17.921555 kernel: CPU features: detected: Privileged Access Never
May 9 04:48:17.921562 kernel: CPU features: detected: RAS Extension Support
May 9 04:48:17.921570 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
May 9 04:48:17.921577 kernel: CPU: All CPU(s) started at EL1
May 9 04:48:17.921584 kernel: alternatives: applying system-wide alternatives
May 9 04:48:17.921591 kernel: devtmpfs: initialized
May 9 04:48:17.921598 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
May 9 04:48:17.921605 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
May 9 04:48:17.921618 kernel: pinctrl core: initialized pinctrl subsystem
May 9 04:48:17.921625 kernel: SMBIOS 3.0.0 present.
May 9 04:48:17.921632 kernel: DMI: QEMU KVM Virtual Machine, BIOS unknown 02/02/2022
May 9 04:48:17.921640 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
May 9 04:48:17.921648 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
May 9 04:48:17.921675 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
May 9 04:48:17.921683 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
May 9 04:48:17.921690 kernel: audit: initializing netlink subsys (disabled)
May 9 04:48:17.921697 kernel: audit: type=2000 audit(0.018:1): state=initialized audit_enabled=0 res=1
May 9 04:48:17.921704 kernel: thermal_sys: Registered thermal governor 'step_wise'
May 9 04:48:17.921711 kernel: cpuidle: using governor menu
May 9 04:48:17.921718 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
May 9 04:48:17.921728 kernel: ASID allocator initialised with 32768 entries
May 9 04:48:17.921735 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
May 9 04:48:17.921741 kernel: Serial: AMBA PL011 UART driver
May 9 04:48:17.921748 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
May 9 04:48:17.921755 kernel: Modules: 0 pages in range for non-PLT usage
May 9 04:48:17.921762 kernel: Modules: 509024 pages in range for PLT usage
May 9 04:48:17.921769 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
May 9 04:48:17.921776 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
May 9 04:48:17.921783 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
May 9 04:48:17.921792 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
May 9 04:48:17.921799 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
May 9 04:48:17.921806 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
May 9 04:48:17.921813 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
May 9 04:48:17.921820 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
May 9 04:48:17.921827 kernel: ACPI: Added _OSI(Module Device)
May 9 04:48:17.921837 kernel: ACPI: Added _OSI(Processor Device)
May 9 04:48:17.921844 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
May 9 04:48:17.921851 kernel: ACPI: Added _OSI(Processor Aggregator Device)
May 9 04:48:17.921859 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
May 9 04:48:17.921866 kernel: ACPI: Interpreter enabled
May 9 04:48:17.921873 kernel: ACPI: Using GIC for interrupt routing
May 9 04:48:17.921880 kernel: ACPI: MCFG table detected, 1 entries
May 9 04:48:17.921887 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
May 9 04:48:17.921894 kernel: printk: console [ttyAMA0] enabled
May 9 04:48:17.921901 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
May 9 04:48:17.922038 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
May 9 04:48:17.922112 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
May 9 04:48:17.922175 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
May 9 04:48:17.922239 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
May 9 04:48:17.922311 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
May 9 04:48:17.922321 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
May 9 04:48:17.922329 kernel: PCI host bridge to bus 0000:00
May 9 04:48:17.922404 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
May 9 04:48:17.922470 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
May 9 04:48:17.922532 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
May 9 04:48:17.922590 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
May 9 04:48:17.922696 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000
May 9 04:48:17.922787 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00
May 9 04:48:17.922856 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f]
May 9 04:48:17.922931 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff]
May 9 04:48:17.923003 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref]
May 9 04:48:17.923069 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref]
May 9 04:48:17.923139 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff]
May 9 04:48:17.923211 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f]
May 9 04:48:17.923273 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
May 9 04:48:17.923341 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
May 9 04:48:17.923402 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
May 9 04:48:17.923413 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
May 9 04:48:17.923421 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
May 9 04:48:17.923428 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
May 9 04:48:17.923435 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
May 9 04:48:17.923443 kernel: iommu: Default domain type: Translated
May 9 04:48:17.923450 kernel: iommu: DMA domain TLB invalidation policy: strict mode
May 9 04:48:17.923457 kernel: efivars: Registered efivars operations
May 9 04:48:17.923464 kernel: vgaarb: loaded
May 9 04:48:17.923471 kernel: clocksource: Switched to clocksource arch_sys_counter
May 9 04:48:17.923480 kernel: VFS: Disk quotas dquot_6.6.0
May 9 04:48:17.923487 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
May 9 04:48:17.923494 kernel: pnp: PnP ACPI init
May 9 04:48:17.923574 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
May 9 04:48:17.923584 kernel: pnp: PnP ACPI: found 1 devices
May 9 04:48:17.923592 kernel: NET: Registered PF_INET protocol family
May 9 04:48:17.923599 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
May 9 04:48:17.923606 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
May 9 04:48:17.923622 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
May 9 04:48:17.923630 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
May 9 04:48:17.923637 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
May 9 04:48:17.923644 kernel: TCP: Hash tables configured (established 32768 bind 32768)
May 9 04:48:17.923652 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
May 9 04:48:17.923683 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
May 9 04:48:17.923691 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
May 9 04:48:17.923699 kernel: PCI: CLS 0 bytes, default 64
May 9 04:48:17.923706 kernel: kvm [1]: HYP mode not available
May 9 04:48:17.923716 kernel: Initialise system trusted keyrings
May 9 04:48:17.923723 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
May 9 04:48:17.923730 kernel: Key type asymmetric registered
May 9 04:48:17.923737 kernel: Asymmetric key parser 'x509' registered
May 9 04:48:17.923745 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
May 9 04:48:17.923752 kernel: io scheduler mq-deadline registered
May 9 04:48:17.923759 kernel: io scheduler kyber registered
May 9 04:48:17.923767 kernel: io scheduler bfq registered
May 9 04:48:17.923774 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
May 9 04:48:17.923783 kernel: ACPI: button: Power Button [PWRB]
May 9 04:48:17.923791 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
May 9 04:48:17.923886 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
May 9 04:48:17.923896 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
May 9 04:48:17.923903 kernel: thunder_xcv, ver 1.0
May 9 04:48:17.923911 kernel: thunder_bgx, ver 1.0
May 9 04:48:17.923918 kernel: nicpf, ver 1.0
May 9 04:48:17.923925 kernel: nicvf, ver 1.0
May 9 04:48:17.924001 kernel: rtc-efi rtc-efi.0: registered as rtc0
May 9 04:48:17.924065 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-05-09T04:48:17 UTC (1746766097)
May 9 04:48:17.924074 kernel: hid: raw HID events driver (C) Jiri Kosina
May 9 04:48:17.924082 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available
May 9 04:48:17.924089 kernel: watchdog: Delayed init of the lockup detector failed: -19
May 9 04:48:17.924096 kernel: watchdog: Hard watchdog permanently disabled
May 9 04:48:17.924103 kernel: NET: Registered PF_INET6 protocol family
May 9 04:48:17.924110 kernel: Segment Routing with IPv6
May 9 04:48:17.924117 kernel: In-situ OAM (IOAM) with IPv6
May 9 04:48:17.924125 kernel: NET: Registered PF_PACKET protocol family
May 9 04:48:17.924132 kernel: Key type dns_resolver registered
May 9 04:48:17.924139 kernel: registered taskstats version 1
May 9 04:48:17.924146 kernel: Loading compiled-in X.509 certificates
May 9 04:48:17.924156 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.89-flatcar: aad33ee745b4b133d332bac6576e33058e4e0478'
May 9 04:48:17.924163 kernel: Key type .fscrypt registered
May 9 04:48:17.924170 kernel: Key type fscrypt-provisioning registered
May 9 04:48:17.924177 kernel: ima: No TPM chip found, activating TPM-bypass!
May 9 04:48:17.924184 kernel: ima: Allocated hash algorithm: sha1
May 9 04:48:17.924192 kernel: ima: No architecture policies found
May 9 04:48:17.924199 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
May 9 04:48:17.924206 kernel: clk: Disabling unused clocks
May 9 04:48:17.924213 kernel: Warning: unable to open an initial console.
May 9 04:48:17.924220 kernel: Freeing unused kernel memory: 39040K
May 9 04:48:17.924227 kernel: Run /init as init process
May 9 04:48:17.924234 kernel: with arguments:
May 9 04:48:17.924241 kernel: /init
May 9 04:48:17.924248 kernel: with environment:
May 9 04:48:17.924256 kernel: HOME=/
May 9 04:48:17.924263 kernel: TERM=linux
May 9 04:48:17.924270 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
May 9 04:48:17.924278 systemd[1]: Successfully made /usr/ read-only.
May 9 04:48:17.924287 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
May 9 04:48:17.924295 systemd[1]: Detected virtualization kvm.
May 9 04:48:17.924302 systemd[1]: Detected architecture arm64.
May 9 04:48:17.924311 systemd[1]: Running in initrd.
May 9 04:48:17.924318 systemd[1]: No hostname configured, using default hostname.
May 9 04:48:17.924326 systemd[1]: Hostname set to .
May 9 04:48:17.924333 systemd[1]: Initializing machine ID from VM UUID.
May 9 04:48:17.924341 systemd[1]: Queued start job for default target initrd.target.
May 9 04:48:17.924348 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 9 04:48:17.924356 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 9 04:48:17.924364 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
May 9 04:48:17.924373 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
May 9 04:48:17.924381 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
May 9 04:48:17.924389 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
May 9 04:48:17.924397 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
May 9 04:48:17.924405 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
May 9 04:48:17.924412 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 9 04:48:17.924420 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
May 9 04:48:17.924429 systemd[1]: Reached target paths.target - Path Units.
May 9 04:48:17.924436 systemd[1]: Reached target slices.target - Slice Units.
May 9 04:48:17.924443 systemd[1]: Reached target swap.target - Swaps.
May 9 04:48:17.924451 systemd[1]: Reached target timers.target - Timer Units.
May 9 04:48:17.924458 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
May 9 04:48:17.924466 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
May 9 04:48:17.924473 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
May 9 04:48:17.924481 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
May 9 04:48:17.924488 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
May 9 04:48:17.924498 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
May 9 04:48:17.924505 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
May 9 04:48:17.924513 systemd[1]: Reached target sockets.target - Socket Units.
May 9 04:48:17.924520 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
May 9 04:48:17.924527 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
May 9 04:48:17.924535 systemd[1]: Finished network-cleanup.service - Network Cleanup.
May 9 04:48:17.924542 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
May 9 04:48:17.924550 systemd[1]: Starting systemd-fsck-usr.service...
May 9 04:48:17.924559 systemd[1]: Starting systemd-journald.service - Journal Service...
May 9 04:48:17.924566 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
May 9 04:48:17.924573 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 9 04:48:17.924587 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
May 9 04:48:17.924594 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
May 9 04:48:17.924603 systemd[1]: Finished systemd-fsck-usr.service.
May 9 04:48:17.924619 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
May 9 04:48:17.924627 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 9 04:48:17.924667 systemd-journald[240]: Collecting audit messages is disabled.
May 9 04:48:17.924690 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
May 9 04:48:17.924699 systemd-journald[240]: Journal started
May 9 04:48:17.924719 systemd-journald[240]: Runtime Journal (/run/log/journal/8e190c34ebae4274b8ef43459fb977c5) is 5.9M, max 47.3M, 41.4M free.
May 9 04:48:17.916906 systemd-modules-load[241]: Inserted module 'overlay'
May 9 04:48:17.927315 systemd[1]: Started systemd-journald.service - Journal Service.
May 9 04:48:17.929672 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
May 9 04:48:17.931393 systemd-modules-load[241]: Inserted module 'br_netfilter'
May 9 04:48:17.932105 kernel: Bridge firewalling registered
May 9 04:48:17.936010 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
May 9 04:48:17.936982 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
May 9 04:48:17.942851 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
May 9 04:48:17.944124 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
May 9 04:48:17.948498 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
May 9 04:48:17.954950 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 9 04:48:17.956537 systemd-tmpfiles[270]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
May 9 04:48:17.957782 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
May 9 04:48:17.960626 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
May 9 04:48:17.963624 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 9 04:48:17.964813 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
May 9 04:48:17.969233 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
May 9 04:48:17.975151 dracut-cmdline[281]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=180634d3e256b1dbb5700949694cb34c82ca79af028365e078744f4de51d78d8
May 9 04:48:18.013313 systemd-resolved[294]: Positive Trust Anchors:
May 9 04:48:18.013329 systemd-resolved[294]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
May 9 04:48:18.013361 systemd-resolved[294]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
May 9 04:48:18.018068 systemd-resolved[294]: Defaulting to hostname 'linux'.
May 9 04:48:18.018973 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
May 9 04:48:18.021150 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
May 9 04:48:18.056674 kernel: SCSI subsystem initialized
May 9 04:48:18.060673 kernel: Loading iSCSI transport class v2.0-870.
May 9 04:48:18.067676 kernel: iscsi: registered transport (tcp)
May 9 04:48:18.080840 kernel: iscsi: registered transport (qla4xxx)
May 9 04:48:18.080882 kernel: QLogic iSCSI HBA Driver
May 9 04:48:18.099890 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
May 9 04:48:18.116756 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
May 9 04:48:18.118517 systemd[1]: Reached target network-pre.target - Preparation for Network.
May 9 04:48:18.159797 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
May 9 04:48:18.161895 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
May 9 04:48:18.222678 kernel: raid6: neonx8 gen() 15792 MB/s
May 9 04:48:18.239666 kernel: raid6: neonx4 gen() 15789 MB/s
May 9 04:48:18.256665 kernel: raid6: neonx2 gen() 13186 MB/s
May 9 04:48:18.273664 kernel: raid6: neonx1 gen() 10507 MB/s
May 9 04:48:18.290666 kernel: raid6: int64x8 gen() 6760 MB/s
May 9 04:48:18.307665 kernel: raid6: int64x4 gen() 7305 MB/s
May 9 04:48:18.324674 kernel: raid6: int64x2 gen() 6098 MB/s
May 9 04:48:18.341664 kernel: raid6: int64x1 gen() 5056 MB/s
May 9 04:48:18.341686 kernel: raid6: using algorithm neonx8 gen() 15792 MB/s
May 9 04:48:18.358668 kernel: raid6: .... xor() 11822 MB/s, rmw enabled
May 9 04:48:18.358680 kernel: raid6: using neon recovery algorithm
May 9 04:48:18.364023 kernel: xor: measuring software checksum speed
May 9 04:48:18.364039 kernel: 8regs : 21647 MB/sec
May 9 04:48:18.364047 kernel: 32regs : 21276 MB/sec
May 9 04:48:18.365024 kernel: arm64_neon : 27413 MB/sec
May 9 04:48:18.365036 kernel: xor: using function: arm64_neon (27413 MB/sec)
May 9 04:48:18.417674 kernel: Btrfs loaded, zoned=no, fsverity=no
May 9 04:48:18.424233 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
May 9 04:48:18.426417 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 9 04:48:18.458549 systemd-udevd[496]: Using default interface naming scheme 'v255'.
May 9 04:48:18.466691 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 9 04:48:18.470803 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
May 9 04:48:18.496694 dracut-pre-trigger[502]: rd.md=0: removing MD RAID activation
May 9 04:48:18.520014 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
May 9 04:48:18.522087 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
May 9 04:48:18.574853 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
May 9 04:48:18.576763 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
May 9 04:48:18.626228 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues
May 9 04:48:18.627129 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
May 9 04:48:18.626897 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 9 04:48:18.627439 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
May 9 04:48:18.629420 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
May 9 04:48:18.636734 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
May 9 04:48:18.636755 kernel: GPT:9289727 != 19775487
May 9 04:48:18.636803 kernel: GPT:Alternate GPT header not at the end of the disk.
May 9 04:48:18.636814 kernel: GPT:9289727 != 19775487
May 9 04:48:18.636822 kernel: GPT: Use GNU Parted to correct GPT errors.
May 9 04:48:18.636831 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 9 04:48:18.631069 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 9 04:48:18.652405 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by (udev-worker) (544)
May 9 04:48:18.652456 kernel: BTRFS: device fsid 40f1eae7-2721-4eea-912a-4692becebc68 devid 1 transid 40 /dev/vda3 scanned by (udev-worker) (553)
May 9 04:48:18.659677 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 9 04:48:18.672831 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
May 9 04:48:18.673936 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
May 9 04:48:18.681751 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
May 9 04:48:18.689729 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
May 9 04:48:18.695806 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
May 9 04:48:18.696710 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
May 9 04:48:18.698927 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
May 9 04:48:18.700485 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
May 9 04:48:18.702204 systemd[1]: Reached target remote-fs.target - Remote File Systems.
May 9 04:48:18.704341 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
May 9 04:48:18.705805 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
May 9 04:48:18.719263 disk-uuid[585]: Primary Header is updated.
May 9 04:48:18.719263 disk-uuid[585]: Secondary Entries is updated.
May 9 04:48:18.719263 disk-uuid[585]: Secondary Header is updated.
May 9 04:48:18.723673 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 9 04:48:18.726140 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
May 9 04:48:19.735704 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 9 04:48:19.737945 disk-uuid[588]: The operation has completed successfully.
May 9 04:48:19.765385 systemd[1]: disk-uuid.service: Deactivated successfully.
May 9 04:48:19.765488 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
May 9 04:48:19.787581 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
May 9 04:48:19.803225 sh[605]: Success
May 9 04:48:19.815314 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
May 9 04:48:19.815347 kernel: device-mapper: uevent: version 1.0.3
May 9 04:48:19.815357 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
May 9 04:48:19.821677 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
May 9 04:48:19.845017 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
May 9 04:48:19.847524 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
May 9 04:48:19.860806 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
May 9 04:48:19.866061 kernel: BTRFS info (device dm-0): first mount of filesystem 40f1eae7-2721-4eea-912a-4692becebc68
May 9 04:48:19.866100 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
May 9 04:48:19.866111 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
May 9 04:48:19.867848 kernel: BTRFS info (device dm-0): disabling log replay at mount time
May 9 04:48:19.867875 kernel: BTRFS info (device dm-0): using free space tree
May 9 04:48:19.871039 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
May 9 04:48:19.872096 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
May 9 04:48:19.873050 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
May 9 04:48:19.873774 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
May 9 04:48:19.876172 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
May 9 04:48:19.896968 kernel: BTRFS info (device vda6): first mount of filesystem 43f5fbf3-70bc-4d67-8861-0fe39cce4ad6
May 9 04:48:19.897010 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
May 9 04:48:19.897020 kernel: BTRFS info (device vda6): using free space tree
May 9 04:48:19.899685 kernel: BTRFS info (device vda6): auto enabling async discard
May 9 04:48:19.902674 kernel: BTRFS info (device vda6): last unmount of filesystem 43f5fbf3-70bc-4d67-8861-0fe39cce4ad6
May 9 04:48:19.906181 systemd[1]: Finished ignition-setup.service - Ignition (setup).
May 9 04:48:19.908761 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
May 9 04:48:19.973702 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
May 9 04:48:19.977146 systemd[1]: Starting systemd-networkd.service - Network Configuration...
May 9 04:48:20.042962 systemd-networkd[790]: lo: Link UP
May 9 04:48:20.042974 systemd-networkd[790]: lo: Gained carrier
May 9 04:48:20.043783 systemd-networkd[790]: Enumeration completed
May 9 04:48:20.044420 systemd-networkd[790]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 9 04:48:20.044423 systemd-networkd[790]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
May 9 04:48:20.044980 systemd-networkd[790]: eth0: Link UP
May 9 04:48:20.044983 systemd-networkd[790]: eth0: Gained carrier
May 9 04:48:20.044990 systemd-networkd[790]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 9 04:48:20.046676 systemd[1]: Started systemd-networkd.service - Network Configuration.
May 9 04:48:20.047538 systemd[1]: Reached target network.target - Network.
May 9 04:48:20.067894 systemd-networkd[790]: eth0: DHCPv4 address 10.0.0.27/16, gateway 10.0.0.1 acquired from 10.0.0.1
May 9 04:48:20.069442 ignition[695]: Ignition 2.21.0
May 9 04:48:20.069460 ignition[695]: Stage: fetch-offline
May 9 04:48:20.069498 ignition[695]: no configs at "/usr/lib/ignition/base.d"
May 9 04:48:20.069506 ignition[695]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 9 04:48:20.069821 ignition[695]: parsed url from cmdline: ""
May 9 04:48:20.069825 ignition[695]: no config URL provided
May 9 04:48:20.069830 ignition[695]: reading system config file "/usr/lib/ignition/user.ign"
May 9 04:48:20.069839 ignition[695]: no config at "/usr/lib/ignition/user.ign"
May 9 04:48:20.069859 ignition[695]: op(1): [started] loading QEMU firmware config module
May 9 04:48:20.069863 ignition[695]: op(1): executing: "modprobe" "qemu_fw_cfg"
May 9 04:48:20.081337 ignition[695]: op(1): [finished] loading QEMU firmware config module
May 9 04:48:20.119568 ignition[695]: parsing config with SHA512: 203d191477e5dbeb4ffe7c9e4970aeb03824f30bbf76dc1a626aed922c4958cddfaf484273f63cd0f5872380b7d409abcfb5abc7cdafb6d42bde1151be2b1e82
May 9 04:48:20.123811 unknown[695]: fetched base config from "system"
May 9 04:48:20.123822 unknown[695]: fetched user config from "qemu"
May 9 04:48:20.124164 ignition[695]: fetch-offline: fetch-offline passed
May 9 04:48:20.124217 ignition[695]: Ignition finished successfully
May 9 04:48:20.126933 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
May 9 04:48:20.128175 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
May 9 04:48:20.129000 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
May 9 04:48:20.153159 ignition[803]: Ignition 2.21.0
May 9 04:48:20.153176 ignition[803]: Stage: kargs
May 9 04:48:20.153321 ignition[803]: no configs at "/usr/lib/ignition/base.d"
May 9 04:48:20.153331 ignition[803]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 9 04:48:20.154969 ignition[803]: kargs: kargs passed
May 9 04:48:20.155045 ignition[803]: Ignition finished successfully
May 9 04:48:20.157626 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
May 9 04:48:20.159787 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
May 9 04:48:20.183827 ignition[811]: Ignition 2.21.0
May 9 04:48:20.183842 ignition[811]: Stage: disks
May 9 04:48:20.183966 ignition[811]: no configs at "/usr/lib/ignition/base.d"
May 9 04:48:20.183975 ignition[811]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 9 04:48:20.186165 ignition[811]: disks: disks passed
May 9 04:48:20.186227 ignition[811]: Ignition finished successfully
May 9 04:48:20.188687 systemd[1]: Finished ignition-disks.service - Ignition (disks).
May 9 04:48:20.189675 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
May 9 04:48:20.190453 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
May 9 04:48:20.191956 systemd[1]: Reached target local-fs.target - Local File Systems.
May 9 04:48:20.193310 systemd[1]: Reached target sysinit.target - System Initialization.
May 9 04:48:20.194720 systemd[1]: Reached target basic.target - Basic System.
May 9 04:48:20.196749 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
May 9 04:48:20.220508 systemd-fsck[821]: ROOT: clean, 15/553520 files, 52789/553472 blocks
May 9 04:48:20.225966 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
May 9 04:48:20.228711 systemd[1]: Mounting sysroot.mount - /sysroot...
May 9 04:48:20.286694 kernel: EXT4-fs (vda9): mounted filesystem 6dc42008-f956-4b63-8173-09d769f43317 r/w with ordered data mode. Quota mode: none.
May 9 04:48:20.287532 systemd[1]: Mounted sysroot.mount - /sysroot.
May 9 04:48:20.288717 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
May 9 04:48:20.290611 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
May 9 04:48:20.292010 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
May 9 04:48:20.292812 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
May 9 04:48:20.292854 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
May 9 04:48:20.292878 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
May 9 04:48:20.305085 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
May 9 04:48:20.307805 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
May 9 04:48:20.312153 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 scanned by mount (829)
May 9 04:48:20.312177 kernel: BTRFS info (device vda6): first mount of filesystem 43f5fbf3-70bc-4d67-8861-0fe39cce4ad6
May 9 04:48:20.312187 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
May 9 04:48:20.312198 kernel: BTRFS info (device vda6): using free space tree
May 9 04:48:20.314683 kernel: BTRFS info (device vda6): auto enabling async discard
May 9 04:48:20.315524 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
May 9 04:48:20.350379 initrd-setup-root[853]: cut: /sysroot/etc/passwd: No such file or directory
May 9 04:48:20.355594 initrd-setup-root[860]: cut: /sysroot/etc/group: No such file or directory
May 9 04:48:20.360593 initrd-setup-root[867]: cut: /sysroot/etc/shadow: No such file or directory
May 9 04:48:20.364402 initrd-setup-root[874]: cut: /sysroot/etc/gshadow: No such file or directory
May 9 04:48:20.428989 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
May 9 04:48:20.431298 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
May 9 04:48:20.432586 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
May 9 04:48:20.455786 kernel: BTRFS info (device vda6): last unmount of filesystem 43f5fbf3-70bc-4d67-8861-0fe39cce4ad6
May 9 04:48:20.467548 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
May 9 04:48:20.476900 ignition[942]: INFO : Ignition 2.21.0
May 9 04:48:20.476900 ignition[942]: INFO : Stage: mount
May 9 04:48:20.478741 ignition[942]: INFO : no configs at "/usr/lib/ignition/base.d"
May 9 04:48:20.478741 ignition[942]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 9 04:48:20.478741 ignition[942]: INFO : mount: mount passed
May 9 04:48:20.478741 ignition[942]: INFO : Ignition finished successfully
May 9 04:48:20.481660 systemd[1]: Finished ignition-mount.service - Ignition (mount).
May 9 04:48:20.483294 systemd[1]: Starting ignition-files.service - Ignition (files)...
May 9 04:48:20.996944 systemd[1]: sysroot-oem.mount: Deactivated successfully.
May 9 04:48:20.998383 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
May 9 04:48:21.015768 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/vda6 scanned by mount (955)
May 9 04:48:21.015802 kernel: BTRFS info (device vda6): first mount of filesystem 43f5fbf3-70bc-4d67-8861-0fe39cce4ad6
May 9 04:48:21.015813 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
May 9 04:48:21.016899 kernel: BTRFS info (device vda6): using free space tree
May 9 04:48:21.018684 kernel: BTRFS info (device vda6): auto enabling async discard
May 9 04:48:21.019943 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
May 9 04:48:21.054014 ignition[972]: INFO : Ignition 2.21.0
May 9 04:48:21.054014 ignition[972]: INFO : Stage: files
May 9 04:48:21.055936 ignition[972]: INFO : no configs at "/usr/lib/ignition/base.d"
May 9 04:48:21.055936 ignition[972]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 9 04:48:21.057500 ignition[972]: DEBUG : files: compiled without relabeling support, skipping
May 9 04:48:21.057500 ignition[972]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
May 9 04:48:21.057500 ignition[972]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
May 9 04:48:21.060415 ignition[972]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
May 9 04:48:21.060415 ignition[972]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
May 9 04:48:21.060415 ignition[972]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
May 9 04:48:21.060415 ignition[972]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
May 9 04:48:21.060415 ignition[972]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1
May 9 04:48:21.058821 unknown[972]: wrote ssh authorized keys file for user: core
May 9 04:48:21.134869 ignition[972]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
May 9 04:48:21.286533 ignition[972]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
May 9 04:48:21.286533 ignition[972]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
May 9 04:48:21.289250 ignition[972]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1
May 9 04:48:21.696880 ignition[972]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
May 9 04:48:21.830872 systemd-networkd[790]: eth0: Gained IPv6LL
May 9 04:48:21.846377 ignition[972]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
May 9 04:48:21.848229 ignition[972]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
May 9 04:48:21.848229 ignition[972]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
May 9 04:48:21.848229 ignition[972]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
May 9 04:48:21.848229 ignition[972]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
May 9 04:48:21.848229 ignition[972]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
May 9 04:48:21.848229 ignition[972]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
May 9 04:48:21.848229 ignition[972]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
May 9 04:48:21.848229 ignition[972]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
May 9 04:48:21.848229 ignition[972]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
May 9 04:48:21.848229 ignition[972]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
May 9 04:48:21.848229 ignition[972]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw"
May 9 04:48:21.848229 ignition[972]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw"
May 9 04:48:21.848229 ignition[972]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw"
May 9 04:48:21.848229 ignition[972]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.31.0-arm64.raw: attempt #1
May 9 04:48:22.132466 ignition[972]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
May 9 04:48:22.738720 ignition[972]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw"
May 9 04:48:22.738720 ignition[972]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
May 9 04:48:22.741401 ignition[972]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
May 9 04:48:22.741401 ignition[972]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
May 9 04:48:22.741401 ignition[972]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
May 9 04:48:22.741401 ignition[972]: INFO : files: op(e): [started] processing unit "coreos-metadata.service"
May 9 04:48:22.741401 ignition[972]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
May 9 04:48:22.741401 ignition[972]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
May 9 04:48:22.741401 ignition[972]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service"
May 9 04:48:22.741401 ignition[972]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service"
May 9 04:48:22.756323 ignition[972]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service"
May 9 04:48:22.759522 ignition[972]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service"
May 9 04:48:22.760639 ignition[972]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service"
May 9 04:48:22.760639 ignition[972]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service"
May 9 04:48:22.760639 ignition[972]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service"
May 9 04:48:22.760639 ignition[972]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json"
May 9 04:48:22.760639 ignition[972]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json"
May 9 04:48:22.760639 ignition[972]: INFO : files: files passed
May 9 04:48:22.760639 ignition[972]: INFO : Ignition finished successfully
May 9 04:48:22.761648 systemd[1]: Finished ignition-files.service - Ignition (files).
May 9 04:48:22.763825 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
May 9 04:48:22.766781 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
May 9 04:48:22.787346 systemd[1]: ignition-quench.service: Deactivated successfully.
May 9 04:48:22.788154 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
May 9 04:48:22.789751 initrd-setup-root-after-ignition[1001]: grep: /sysroot/oem/oem-release: No such file or directory
May 9 04:48:22.791703 initrd-setup-root-after-ignition[1004]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
May 9 04:48:22.791703 initrd-setup-root-after-ignition[1004]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
May 9 04:48:22.794171 initrd-setup-root-after-ignition[1008]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
May 9 04:48:22.793279 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
May 9 04:48:22.795331 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
May 9 04:48:22.797297 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
May 9 04:48:22.830970 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
May 9 04:48:22.831073 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
May 9 04:48:22.832836 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
May 9 04:48:22.834216 systemd[1]: Reached target initrd.target - Initrd Default Target.
May 9 04:48:22.835512 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
May 9 04:48:22.836211 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
May 9 04:48:22.849048 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
May 9 04:48:22.851078 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
May 9 04:48:22.869084 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
May 9 04:48:22.870185 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
May 9 04:48:22.872055 systemd[1]: Stopped target timers.target - Timer Units.
May 9 04:48:22.873367 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
May 9 04:48:22.873480 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
May 9 04:48:22.875313 systemd[1]: Stopped target initrd.target - Initrd Default Target.
May 9 04:48:22.876758 systemd[1]: Stopped target basic.target - Basic System.
May 9 04:48:22.878070 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
May 9 04:48:22.879346 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
May 9 04:48:22.880757 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
May 9 04:48:22.882288 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
May 9 04:48:22.883779 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
May 9 04:48:22.885119 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
May 9 04:48:22.886572 systemd[1]: Stopped target sysinit.target - System Initialization.
May 9 04:48:22.888111 systemd[1]: Stopped target local-fs.target - Local File Systems.
May 9 04:48:22.889388 systemd[1]: Stopped target swap.target - Swaps.
May 9 04:48:22.890485 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
May 9 04:48:22.890603 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
May 9 04:48:22.892404 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
May 9 04:48:22.893847 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 9 04:48:22.895261 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
May 9 04:48:22.896705 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 9 04:48:22.898719 systemd[1]: dracut-initqueue.service: Deactivated successfully.
May 9 04:48:22.898848 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
May 9 04:48:22.901451 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
May 9 04:48:22.901595 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
May 9 04:48:22.903236 systemd[1]: Stopped target paths.target - Path Units.
May 9 04:48:22.904407 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
May 9 04:48:22.907717 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 9 04:48:22.908648 systemd[1]: Stopped target slices.target - Slice Units.
May 9 04:48:22.910299 systemd[1]: Stopped target sockets.target - Socket Units.
May 9 04:48:22.911426 systemd[1]: iscsid.socket: Deactivated successfully.
May 9 04:48:22.911506 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
May 9 04:48:22.912792 systemd[1]: iscsiuio.socket: Deactivated successfully.
May 9 04:48:22.912867 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
May 9 04:48:22.913983 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
May 9 04:48:22.914085 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
May 9 04:48:22.915391 systemd[1]: ignition-files.service: Deactivated successfully.
May 9 04:48:22.915488 systemd[1]: Stopped ignition-files.service - Ignition (files).
May 9 04:48:22.917322 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
May 9 04:48:22.919141 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
May 9 04:48:22.919955 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
May 9 04:48:22.920082 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
May 9 04:48:22.921433 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
May 9 04:48:22.921530 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
May 9 04:48:22.938887 systemd[1]: initrd-cleanup.service: Deactivated successfully.
May 9 04:48:22.939717 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
May 9 04:48:22.946611 systemd[1]: sysroot-boot.mount: Deactivated successfully.
May 9 04:48:22.951312 ignition[1029]: INFO : Ignition 2.21.0
May 9 04:48:22.951312 ignition[1029]: INFO : Stage: umount
May 9 04:48:22.953366 ignition[1029]: INFO : no configs at "/usr/lib/ignition/base.d"
May 9 04:48:22.953366 ignition[1029]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 9 04:48:22.953366 ignition[1029]: INFO : umount: umount passed
May 9 04:48:22.953366 ignition[1029]: INFO : Ignition finished successfully
May 9 04:48:22.954755 systemd[1]: ignition-mount.service: Deactivated successfully.
May 9 04:48:22.954859 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
May 9 04:48:22.955817 systemd[1]: Stopped target network.target - Network.
May 9 04:48:22.956811 systemd[1]: ignition-disks.service: Deactivated successfully.
May 9 04:48:22.956868 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
May 9 04:48:22.958111 systemd[1]: ignition-kargs.service: Deactivated successfully.
May 9 04:48:22.958152 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
May 9 04:48:22.959532 systemd[1]: ignition-setup.service: Deactivated successfully.
May 9 04:48:22.959600 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
May 9 04:48:22.960561 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
May 9 04:48:22.960603 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
May 9 04:48:22.961705 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
May 9 04:48:22.963158 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
May 9 04:48:22.969762 systemd[1]: systemd-resolved.service: Deactivated successfully.
May 9 04:48:22.969867 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
May 9 04:48:22.973277 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
May 9 04:48:22.973539 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
May 9 04:48:22.973594 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
May 9 04:48:22.976616 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
May 9 04:48:22.977360 systemd[1]: systemd-networkd.service: Deactivated successfully.
May 9 04:48:22.977478 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
May 9 04:48:22.980035 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
May 9 04:48:22.980219 systemd[1]: Stopped target network-pre.target - Preparation for Network.
May 9 04:48:22.981415 systemd[1]: systemd-networkd.socket: Deactivated successfully.
May 9 04:48:22.981449 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
May 9 04:48:22.983700 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
May 9 04:48:22.984344 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
May 9 04:48:22.984395 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
May 9 04:48:22.985924 systemd[1]: systemd-sysctl.service: Deactivated successfully.
May 9 04:48:22.985965 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
May 9 04:48:22.988014 systemd[1]: systemd-modules-load.service: Deactivated successfully.
May 9 04:48:22.988054 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
May 9 04:48:22.989510 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 9 04:48:22.993437 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
May 9 04:48:23.004430 systemd[1]: systemd-udevd.service: Deactivated successfully.
May 9 04:48:23.004582 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 9 04:48:23.006544 systemd[1]: network-cleanup.service: Deactivated successfully.
May 9 04:48:23.006680 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
May 9 04:48:23.008407 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
May 9 04:48:23.008471 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
May 9 04:48:23.009462 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
May 9 04:48:23.009503 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
May 9 04:48:23.010921 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
May 9 04:48:23.010974 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
May 9 04:48:23.013126 systemd[1]: dracut-cmdline.service: Deactivated successfully.
May 9 04:48:23.013171 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
May 9 04:48:23.015349 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
May 9 04:48:23.015398 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 9 04:48:23.019954 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
May 9 04:48:23.021333 systemd[1]: systemd-network-generator.service: Deactivated successfully.
May 9 04:48:23.021395 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
May 9 04:48:23.024004 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
May 9 04:48:23.024050 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 9 04:48:23.026715 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
May 9 04:48:23.026760 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
May 9 04:48:23.029278 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
May 9 04:48:23.029320 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
May 9 04:48:23.031004 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 9 04:48:23.031052 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
May 9 04:48:23.034155 systemd[1]: sysroot-boot.service: Deactivated successfully.
May 9 04:48:23.034257 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
May 9 04:48:23.035237 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
May 9 04:48:23.035323 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
May 9 04:48:23.037476 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
May 9 04:48:23.038786 systemd[1]: initrd-setup-root.service: Deactivated successfully.
May 9 04:48:23.038852 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
May 9 04:48:23.040842 systemd[1]: Starting initrd-switch-root.service - Switch Root...
May 9 04:48:23.070128 systemd[1]: Switching root.
May 9 04:48:23.093433 systemd-journald[240]: Journal stopped
May 9 04:48:23.864431 systemd-journald[240]: Received SIGTERM from PID 1 (systemd).
May 9 04:48:23.864480 kernel: SELinux: policy capability network_peer_controls=1
May 9 04:48:23.864493 kernel: SELinux: policy capability open_perms=1
May 9 04:48:23.864509 kernel: SELinux: policy capability extended_socket_class=1
May 9 04:48:23.864519 kernel: SELinux: policy capability always_check_network=0
May 9 04:48:23.864528 kernel: SELinux: policy capability cgroup_seclabel=1
May 9 04:48:23.864551 kernel: SELinux: policy capability nnp_nosuid_transition=1
May 9 04:48:23.864562 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
May 9 04:48:23.864571 kernel: SELinux: policy capability ioctl_skip_cloexec=0
May 9 04:48:23.864583 kernel: audit: type=1403 audit(1746766103.266:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
May 9 04:48:23.864599 systemd[1]: Successfully loaded SELinux policy in 35.223ms.
May 9 04:48:23.864618 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 9.614ms.
May 9 04:48:23.864629 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
May 9 04:48:23.864640 systemd[1]: Detected virtualization kvm.
May 9 04:48:23.864728 systemd[1]: Detected architecture arm64.
May 9 04:48:23.864746 systemd[1]: Detected first boot.
May 9 04:48:23.864757 systemd[1]: Initializing machine ID from VM UUID.
May 9 04:48:23.864768 zram_generator::config[1074]: No configuration found.
May 9 04:48:23.864779 kernel: NET: Registered PF_VSOCK protocol family
May 9 04:48:23.864789 systemd[1]: Populated /etc with preset unit settings.
May 9 04:48:23.864800 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
May 9 04:48:23.864810 systemd[1]: initrd-switch-root.service: Deactivated successfully.
May 9 04:48:23.864820 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
May 9 04:48:23.864835 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
May 9 04:48:23.864846 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
May 9 04:48:23.864856 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
May 9 04:48:23.864866 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
May 9 04:48:23.864876 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
May 9 04:48:23.864887 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
May 9 04:48:23.864897 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
May 9 04:48:23.864908 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
May 9 04:48:23.864919 systemd[1]: Created slice user.slice - User and Session Slice.
May 9 04:48:23.864931 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 9 04:48:23.864942 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 9 04:48:23.864952 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
May 9 04:48:23.864962 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
May 9 04:48:23.864973 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
May 9 04:48:23.864983 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
May 9 04:48:23.864994 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0...
May 9 04:48:23.865004 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 9 04:48:23.865016 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
May 9 04:48:23.865027 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
May 9 04:48:23.865037 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
May 9 04:48:23.865047 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
May 9 04:48:23.865058 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
May 9 04:48:23.865068 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
May 9 04:48:23.865078 systemd[1]: Reached target remote-fs.target - Remote File Systems.
May 9 04:48:23.865089 systemd[1]: Reached target slices.target - Slice Units.
May 9 04:48:23.865101 systemd[1]: Reached target swap.target - Swaps.
May 9 04:48:23.865112 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
May 9 04:48:23.865122 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
May 9 04:48:23.865133 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
May 9 04:48:23.865143 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
May 9 04:48:23.865153 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
May 9 04:48:23.865164 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
May 9 04:48:23.865174 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
May 9 04:48:23.865184 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
May 9 04:48:23.865196 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
May 9 04:48:23.865207 systemd[1]: Mounting media.mount - External Media Directory...
May 9 04:48:23.865217 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
May 9 04:48:23.865227 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
May 9 04:48:23.865237 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
May 9 04:48:23.865251 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
May 9 04:48:23.865261 systemd[1]: Reached target machines.target - Containers.
May 9 04:48:23.865271 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
May 9 04:48:23.865283 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 9 04:48:23.865294 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
May 9 04:48:23.865305 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
May 9 04:48:23.865315 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 9 04:48:23.865325 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
May 9 04:48:23.865335 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 9 04:48:23.865345 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
May 9 04:48:23.865356 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 9 04:48:23.865366 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
May 9 04:48:23.865378 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
May 9 04:48:23.865388 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
May 9 04:48:23.865398 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
May 9 04:48:23.865408 systemd[1]: Stopped systemd-fsck-usr.service.
May 9 04:48:23.865419 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
May 9 04:48:23.865429 kernel: fuse: init (API version 7.39)
May 9 04:48:23.865440 systemd[1]: Starting systemd-journald.service - Journal Service...
May 9 04:48:23.865449 kernel: loop: module loaded
May 9 04:48:23.865461 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
May 9 04:48:23.865473 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
May 9 04:48:23.865483 kernel: ACPI: bus type drm_connector registered
May 9 04:48:23.865493 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
May 9 04:48:23.865503 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
May 9 04:48:23.865514 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
May 9 04:48:23.865525 systemd[1]: verity-setup.service: Deactivated successfully.
May 9 04:48:23.865543 systemd[1]: Stopped verity-setup.service.
May 9 04:48:23.865556 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
May 9 04:48:23.865567 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
May 9 04:48:23.865578 systemd[1]: Mounted media.mount - External Media Directory.
May 9 04:48:23.865588 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
May 9 04:48:23.865599 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
May 9 04:48:23.865635 systemd-journald[1142]: Collecting audit messages is disabled.
May 9 04:48:23.865683 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
May 9 04:48:23.865696 systemd-journald[1142]: Journal started
May 9 04:48:23.865721 systemd-journald[1142]: Runtime Journal (/run/log/journal/8e190c34ebae4274b8ef43459fb977c5) is 5.9M, max 47.3M, 41.4M free.
May 9 04:48:23.673073 systemd[1]: Queued start job for default target multi-user.target.
May 9 04:48:23.688534 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
May 9 04:48:23.688943 systemd[1]: systemd-journald.service: Deactivated successfully.
May 9 04:48:23.867388 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
May 9 04:48:23.868930 systemd[1]: Started systemd-journald.service - Journal Service.
May 9 04:48:23.869709 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
May 9 04:48:23.870895 systemd[1]: modprobe@configfs.service: Deactivated successfully.
May 9 04:48:23.871071 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
May 9 04:48:23.872189 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 9 04:48:23.872350 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 9 04:48:23.873469 systemd[1]: modprobe@drm.service: Deactivated successfully.
May 9 04:48:23.873679 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
May 9 04:48:23.874854 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 9 04:48:23.875034 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 9 04:48:23.876321 systemd[1]: modprobe@fuse.service: Deactivated successfully.
May 9 04:48:23.876483 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
May 9 04:48:23.877605 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 9 04:48:23.877791 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 9 04:48:23.878870 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
May 9 04:48:23.879976 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
May 9 04:48:23.881324 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
May 9 04:48:23.882586 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
May 9 04:48:23.895628 systemd[1]: Reached target network-pre.target - Preparation for Network.
May 9 04:48:23.897784 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
May 9 04:48:23.899650 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
May 9 04:48:23.900500 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
May 9 04:48:23.900528 systemd[1]: Reached target local-fs.target - Local File Systems.
May 9 04:48:23.902271 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
May 9 04:48:23.913549 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
May 9 04:48:23.914474 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 9 04:48:23.915850 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
May 9 04:48:23.917672 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
May 9 04:48:23.918598 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
May 9 04:48:23.919593 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
May 9 04:48:23.920526 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
May 9 04:48:23.921549 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
May 9 04:48:23.927099 systemd-journald[1142]: Time spent on flushing to /var/log/journal/8e190c34ebae4274b8ef43459fb977c5 is 23.006ms for 881 entries.
May 9 04:48:23.927099 systemd-journald[1142]: System Journal (/var/log/journal/8e190c34ebae4274b8ef43459fb977c5) is 8M, max 195.6M, 187.6M free.
May 9 04:48:23.955738 systemd-journald[1142]: Received client request to flush runtime journal.
May 9 04:48:23.955793 kernel: loop0: detected capacity change from 0 to 138376
May 9 04:48:23.928360 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
May 9 04:48:23.930396 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
May 9 04:48:23.934818 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
May 9 04:48:23.936103 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
May 9 04:48:23.937278 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
May 9 04:48:23.938488 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
May 9 04:48:23.943212 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
May 9 04:48:23.946193 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
May 9 04:48:23.964900 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
May 9 04:48:23.960527 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
May 9 04:48:23.962122 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
May 9 04:48:23.974977 systemd-tmpfiles[1191]: ACLs are not supported, ignoring.
May 9 04:48:23.974992 systemd-tmpfiles[1191]: ACLs are not supported, ignoring.
May 9 04:48:23.979516 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
May 9 04:48:23.981811 kernel: loop1: detected capacity change from 0 to 189592
May 9 04:48:23.983651 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
May 9 04:48:23.988909 systemd[1]: Starting systemd-sysusers.service - Create System Users...
May 9 04:48:24.019686 kernel: loop2: detected capacity change from 0 to 107312
May 9 04:48:24.020625 systemd[1]: Finished systemd-sysusers.service - Create System Users.
May 9 04:48:24.023624 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
May 9 04:48:24.048112 systemd-tmpfiles[1212]: ACLs are not supported, ignoring.
May 9 04:48:24.048134 systemd-tmpfiles[1212]: ACLs are not supported, ignoring.
May 9 04:48:24.053975 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 9 04:48:24.057683 kernel: loop3: detected capacity change from 0 to 138376
May 9 04:48:24.067714 kernel: loop4: detected capacity change from 0 to 189592
May 9 04:48:24.078683 kernel: loop5: detected capacity change from 0 to 107312
May 9 04:48:24.088770 (sd-merge)[1216]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
May 9 04:48:24.089182 (sd-merge)[1216]: Merged extensions into '/usr'.
May 9 04:48:24.092723 systemd[1]: Reload requested from client PID 1190 ('systemd-sysext') (unit systemd-sysext.service)...
May 9 04:48:24.092740 systemd[1]: Reloading...
May 9 04:48:24.148705 zram_generator::config[1246]: No configuration found.
May 9 04:48:24.231716 ldconfig[1185]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
May 9 04:48:24.235176 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 9 04:48:24.310844 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
May 9 04:48:24.311302 systemd[1]: Reloading finished in 218 ms.
May 9 04:48:24.330708 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
May 9 04:48:24.331878 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
May 9 04:48:24.351002 systemd[1]: Starting ensure-sysext.service...
May 9 04:48:24.352611 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
May 9 04:48:24.364954 systemd[1]: Reload requested from client PID 1278 ('systemctl') (unit ensure-sysext.service)...
May 9 04:48:24.364970 systemd[1]: Reloading...
May 9 04:48:24.370980 systemd-tmpfiles[1279]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
May 9 04:48:24.371290 systemd-tmpfiles[1279]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
May 9 04:48:24.371596 systemd-tmpfiles[1279]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
May 9 04:48:24.371907 systemd-tmpfiles[1279]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
May 9 04:48:24.372619 systemd-tmpfiles[1279]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
May 9 04:48:24.372925 systemd-tmpfiles[1279]: ACLs are not supported, ignoring.
May 9 04:48:24.373037 systemd-tmpfiles[1279]: ACLs are not supported, ignoring.
May 9 04:48:24.376116 systemd-tmpfiles[1279]: Detected autofs mount point /boot during canonicalization of boot.
May 9 04:48:24.376219 systemd-tmpfiles[1279]: Skipping /boot
May 9 04:48:24.386060 systemd-tmpfiles[1279]: Detected autofs mount point /boot during canonicalization of boot.
May 9 04:48:24.386164 systemd-tmpfiles[1279]: Skipping /boot
May 9 04:48:24.404695 zram_generator::config[1302]: No configuration found.
May 9 04:48:24.485484 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 9 04:48:24.560916 systemd[1]: Reloading finished in 195 ms.
May 9 04:48:24.573235 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
May 9 04:48:24.594697 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
May 9 04:48:24.602958 systemd[1]: Starting audit-rules.service - Load Audit Rules...
May 9 04:48:24.605404 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
May 9 04:48:24.617974 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
May 9 04:48:24.622272 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
May 9 04:48:24.626932 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 9 04:48:24.629043 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
May 9 04:48:24.634687 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 9 04:48:24.647702 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 9 04:48:24.651560 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 9 04:48:24.653968 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 9 04:48:24.654945 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 9 04:48:24.655065 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
May 9 04:48:24.658364 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
May 9 04:48:24.660463 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
May 9 04:48:24.670895 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
May 9 04:48:24.672629 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 9 04:48:24.672887 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 9 04:48:24.675971 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 9 04:48:24.676137 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 9 04:48:24.677959 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 9 04:48:24.678115 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 9 04:48:24.679726 systemd-udevd[1352]: Using default interface naming scheme 'v255'.
May 9 04:48:24.687342 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 9 04:48:24.689554 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 9 04:48:24.691635 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 9 04:48:24.695373 augenrules[1379]: No rules
May 9 04:48:24.695993 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 9 04:48:24.696897 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 9 04:48:24.697089 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
May 9 04:48:24.712579 systemd[1]: Starting systemd-update-done.service - Update is Completed...
May 9 04:48:24.715718 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
May 9 04:48:24.718850 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 9 04:48:24.720625 systemd[1]: audit-rules.service: Deactivated successfully.
May 9 04:48:24.720847 systemd[1]: Finished audit-rules.service - Load Audit Rules.
May 9 04:48:24.722247 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 9 04:48:24.722400 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 9 04:48:24.723841 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 9 04:48:24.724011 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 9 04:48:24.729748 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
May 9 04:48:24.731267 systemd[1]: Started systemd-userdbd.service - User Database Manager.
May 9 04:48:24.734714 systemd[1]: Finished systemd-update-done.service - Update is Completed.
May 9 04:48:24.739312 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 9 04:48:24.739473 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 9 04:48:24.758198 systemd[1]: Finished ensure-sysext.service.
May 9 04:48:24.766741 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (1409)
May 9 04:48:24.767785 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped.
May 9 04:48:24.776878 systemd[1]: Starting audit-rules.service - Load Audit Rules...
May 9 04:48:24.777875 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 9 04:48:24.779631 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 9 04:48:24.782875 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
May 9 04:48:24.792093 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 9 04:48:24.801564 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 9 04:48:24.803387 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 9 04:48:24.803433 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
May 9 04:48:24.807581 systemd[1]: Starting systemd-networkd.service - Network Configuration...
May 9 04:48:24.812116 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
May 9 04:48:24.813027 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
May 9 04:48:24.813646 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 9 04:48:24.813852 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 9 04:48:24.814979 systemd[1]: modprobe@drm.service: Deactivated successfully.
May 9 04:48:24.815143 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. May 9 04:48:24.816227 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 9 04:48:24.816387 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 9 04:48:24.817592 systemd[1]: modprobe@loop.service: Deactivated successfully. May 9 04:48:24.817771 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 9 04:48:24.829786 augenrules[1426]: /sbin/augenrules: No change May 9 04:48:24.836574 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. May 9 04:48:24.841519 augenrules[1459]: No rules May 9 04:48:24.843364 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... May 9 04:48:24.844630 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 9 04:48:24.844702 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. May 9 04:48:24.844992 systemd[1]: audit-rules.service: Deactivated successfully. May 9 04:48:24.845195 systemd[1]: Finished audit-rules.service - Load Audit Rules. May 9 04:48:24.877988 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. May 9 04:48:24.917219 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. May 9 04:48:24.917425 systemd-resolved[1346]: Positive Trust Anchors: May 9 04:48:24.917444 systemd-resolved[1346]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d May 9 04:48:24.917476 systemd-resolved[1346]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test May 9 04:48:24.918450 systemd[1]: Reached target time-set.target - System Time Set. May 9 04:48:24.928972 systemd-networkd[1438]: lo: Link UP May 9 04:48:24.929210 systemd-networkd[1438]: lo: Gained carrier May 9 04:48:24.930138 systemd-networkd[1438]: Enumeration completed May 9 04:48:24.930293 systemd[1]: Started systemd-networkd.service - Network Configuration. May 9 04:48:24.930833 systemd-networkd[1438]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 9 04:48:24.930913 systemd-networkd[1438]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. May 9 04:48:24.931412 systemd-networkd[1438]: eth0: Link UP May 9 04:48:24.931580 systemd-networkd[1438]: eth0: Gained carrier May 9 04:48:24.931642 systemd-networkd[1438]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 9 04:48:24.931716 systemd-resolved[1346]: Defaulting to hostname 'linux'. May 9 04:48:24.934789 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... May 9 04:48:24.936736 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... 
May 9 04:48:24.937669 systemd[1]: Started systemd-resolved.service - Network Name Resolution. May 9 04:48:24.939442 systemd[1]: Reached target network.target - Network. May 9 04:48:24.940256 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. May 9 04:48:24.941131 systemd[1]: Reached target sysinit.target - System Initialization. May 9 04:48:24.942138 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. May 9 04:48:24.943171 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. May 9 04:48:24.944540 systemd[1]: Started logrotate.timer - Daily rotation of log files. May 9 04:48:24.945517 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. May 9 04:48:24.946763 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. May 9 04:48:24.947762 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). May 9 04:48:24.947795 systemd[1]: Reached target paths.target - Path Units. May 9 04:48:24.948569 systemd[1]: Reached target timers.target - Timer Units. May 9 04:48:24.950960 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. May 9 04:48:24.960016 systemd[1]: Starting docker.socket - Docker Socket for the API... May 9 04:48:24.963063 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). May 9 04:48:24.964146 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). May 9 04:48:24.965099 systemd[1]: Reached target ssh-access.target - SSH Access Available. May 9 04:48:24.968306 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. May 9 04:48:24.969751 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. 
May 9 04:48:24.971784 systemd[1]: Listening on docker.socket - Docker Socket for the API. May 9 04:48:24.972919 systemd[1]: Reached target sockets.target - Socket Units. May 9 04:48:24.973717 systemd[1]: Reached target basic.target - Basic System. May 9 04:48:24.974369 systemd-networkd[1438]: eth0: DHCPv4 address 10.0.0.27/16, gateway 10.0.0.1 acquired from 10.0.0.1 May 9 04:48:24.975251 systemd-timesyncd[1442]: Network configuration changed, trying to establish connection. May 9 04:48:24.975403 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. May 9 04:48:24.975430 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. May 9 04:48:24.976406 systemd[1]: Starting containerd.service - containerd container runtime... May 9 04:48:24.976903 systemd-timesyncd[1442]: Contacted time server 10.0.0.1:123 (10.0.0.1). May 9 04:48:24.976949 systemd-timesyncd[1442]: Initial clock synchronization to Fri 2025-05-09 04:48:25.014818 UTC. May 9 04:48:24.980787 systemd[1]: Starting dbus.service - D-Bus System Message Bus... May 9 04:48:24.983871 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... May 9 04:48:24.986316 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... May 9 04:48:24.998813 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... May 9 04:48:24.999725 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). May 9 04:48:25.000707 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... May 9 04:48:25.002979 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... May 9 04:48:25.004589 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... 
May 9 04:48:25.005123 jq[1491]: false May 9 04:48:25.008797 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... May 9 04:48:25.012118 systemd[1]: Starting systemd-logind.service - User Login Management... May 9 04:48:25.013747 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). May 9 04:48:25.014147 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. May 9 04:48:25.014938 systemd[1]: Starting update-engine.service - Update Engine... May 9 04:48:25.016921 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... May 9 04:48:25.026270 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. May 9 04:48:25.029618 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. May 9 04:48:25.030963 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. May 9 04:48:25.031134 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. May 9 04:48:25.031375 systemd[1]: motdgen.service: Deactivated successfully. May 9 04:48:25.031521 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. 
May 9 04:48:25.034790 extend-filesystems[1492]: Found loop3 May 9 04:48:25.034790 extend-filesystems[1492]: Found loop4 May 9 04:48:25.034790 extend-filesystems[1492]: Found loop5 May 9 04:48:25.034790 extend-filesystems[1492]: Found vda May 9 04:48:25.034790 extend-filesystems[1492]: Found vda1 May 9 04:48:25.034790 extend-filesystems[1492]: Found vda2 May 9 04:48:25.034790 extend-filesystems[1492]: Found vda3 May 9 04:48:25.034790 extend-filesystems[1492]: Found usr May 9 04:48:25.034790 extend-filesystems[1492]: Found vda4 May 9 04:48:25.034790 extend-filesystems[1492]: Found vda6 May 9 04:48:25.034790 extend-filesystems[1492]: Found vda7 May 9 04:48:25.034790 extend-filesystems[1492]: Found vda9 May 9 04:48:25.034790 extend-filesystems[1492]: Checking size of /dev/vda9 May 9 04:48:25.057104 jq[1506]: true May 9 04:48:25.036535 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. May 9 04:48:25.057250 extend-filesystems[1492]: Resized partition /dev/vda9 May 9 04:48:25.038689 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. May 9 04:48:25.060966 (ntainerd)[1513]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR May 9 04:48:25.063639 extend-filesystems[1526]: resize2fs 1.47.2 (1-Jan-2025) May 9 04:48:25.067692 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks May 9 04:48:25.072475 jq[1512]: true May 9 04:48:25.072652 tar[1510]: linux-arm64/helm May 9 04:48:25.081969 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 9 04:48:25.095701 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (1407) May 9 04:48:25.097034 dbus-daemon[1488]: [system] SELinux support is enabled May 9 04:48:25.115233 kernel: EXT4-fs (vda9): resized filesystem to 1864699 May 9 04:48:25.102143 systemd[1]: Started dbus.service - D-Bus System Message Bus. 
May 9 04:48:25.115330 update_engine[1504]: I20250509 04:48:25.094499 1504 main.cc:92] Flatcar Update Engine starting May 9 04:48:25.115330 update_engine[1504]: I20250509 04:48:25.103755 1504 update_check_scheduler.cc:74] Next update check in 8m5s May 9 04:48:25.104898 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). May 9 04:48:25.104924 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. May 9 04:48:25.106352 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). May 9 04:48:25.106368 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. May 9 04:48:25.109674 systemd[1]: Started update-engine.service - Update Engine. May 9 04:48:25.127695 extend-filesystems[1526]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required May 9 04:48:25.127695 extend-filesystems[1526]: old_desc_blocks = 1, new_desc_blocks = 1 May 9 04:48:25.127695 extend-filesystems[1526]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. May 9 04:48:25.117706 systemd[1]: Started locksmithd.service - Cluster reboot manager. May 9 04:48:25.144882 extend-filesystems[1492]: Resized filesystem in /dev/vda9 May 9 04:48:25.138911 systemd[1]: extend-filesystems.service: Deactivated successfully. May 9 04:48:25.140763 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. May 9 04:48:25.146906 systemd-logind[1503]: Watching system buttons on /dev/input/event0 (Power Button) May 9 04:48:25.147936 systemd-logind[1503]: New seat seat0. May 9 04:48:25.148757 systemd[1]: Started systemd-logind.service - User Login Management. 
May 9 04:48:25.172823 bash[1548]: Updated "/home/core/.ssh/authorized_keys" May 9 04:48:25.177166 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. May 9 04:48:25.180383 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. May 9 04:48:25.203758 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 9 04:48:25.208078 locksmithd[1535]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" May 9 04:48:25.306891 containerd[1513]: time="2025-05-09T04:48:25Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 May 9 04:48:25.308034 containerd[1513]: time="2025-05-09T04:48:25.307996134Z" level=info msg="starting containerd" revision=06b99ca80cdbfbc6cc8bd567021738c9af2b36ce version=v2.0.4 May 9 04:48:25.318290 containerd[1513]: time="2025-05-09T04:48:25.318257862Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="49.738µs" May 9 04:48:25.318502 containerd[1513]: time="2025-05-09T04:48:25.318378284Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 May 9 04:48:25.318502 containerd[1513]: time="2025-05-09T04:48:25.318404436Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 May 9 04:48:25.318696 containerd[1513]: time="2025-05-09T04:48:25.318674436Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 May 9 04:48:25.319046 containerd[1513]: time="2025-05-09T04:48:25.318774074Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 May 9 04:48:25.319046 containerd[1513]: time="2025-05-09T04:48:25.318807994Z" level=info msg="loading plugin" 
id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 May 9 04:48:25.319046 containerd[1513]: time="2025-05-09T04:48:25.318863580Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 May 9 04:48:25.319046 containerd[1513]: time="2025-05-09T04:48:25.318874193Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 May 9 04:48:25.319336 containerd[1513]: time="2025-05-09T04:48:25.319311151Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 May 9 04:48:25.319406 containerd[1513]: time="2025-05-09T04:48:25.319392087Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 May 9 04:48:25.319458 containerd[1513]: time="2025-05-09T04:48:25.319444669Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 May 9 04:48:25.319559 containerd[1513]: time="2025-05-09T04:48:25.319543506Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 May 9 04:48:25.319780 containerd[1513]: time="2025-05-09T04:48:25.319761124Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 May 9 04:48:25.320130 containerd[1513]: time="2025-05-09T04:48:25.320106694Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 May 9 04:48:25.320293 containerd[1513]: time="2025-05-09T04:48:25.320273652Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such 
file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 May 9 04:48:25.320366 containerd[1513]: time="2025-05-09T04:48:25.320351704Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 May 9 04:48:25.321278 containerd[1513]: time="2025-05-09T04:48:25.321252773Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 May 9 04:48:25.321960 containerd[1513]: time="2025-05-09T04:48:25.321856248Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 May 9 04:48:25.322078 containerd[1513]: time="2025-05-09T04:48:25.322058528Z" level=info msg="metadata content store policy set" policy=shared May 9 04:48:25.325715 containerd[1513]: time="2025-05-09T04:48:25.325686190Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 May 9 04:48:25.326715 containerd[1513]: time="2025-05-09T04:48:25.325842135Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 May 9 04:48:25.326715 containerd[1513]: time="2025-05-09T04:48:25.325861678Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 May 9 04:48:25.326715 containerd[1513]: time="2025-05-09T04:48:25.325873892Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 May 9 04:48:25.326715 containerd[1513]: time="2025-05-09T04:48:25.325886347Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 May 9 04:48:25.326715 containerd[1513]: time="2025-05-09T04:48:25.325896719Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 May 9 04:48:25.326715 containerd[1513]: time="2025-05-09T04:48:25.325915421Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service 
type=io.containerd.service.v1 May 9 04:48:25.326715 containerd[1513]: time="2025-05-09T04:48:25.325937608Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 May 9 04:48:25.326715 containerd[1513]: time="2025-05-09T04:48:25.325952345Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 May 9 04:48:25.326715 containerd[1513]: time="2025-05-09T04:48:25.325963198Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 May 9 04:48:25.326715 containerd[1513]: time="2025-05-09T04:48:25.325972449Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 May 9 04:48:25.326715 containerd[1513]: time="2025-05-09T04:48:25.325984824Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 May 9 04:48:25.326715 containerd[1513]: time="2025-05-09T04:48:25.326102683Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 May 9 04:48:25.326715 containerd[1513]: time="2025-05-09T04:48:25.326122947Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 May 9 04:48:25.326715 containerd[1513]: time="2025-05-09T04:48:25.326137004Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 May 9 04:48:25.326979 containerd[1513]: time="2025-05-09T04:48:25.326146896Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 May 9 04:48:25.326979 containerd[1513]: time="2025-05-09T04:48:25.326156828Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 May 9 04:48:25.326979 containerd[1513]: time="2025-05-09T04:48:25.326170123Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 May 9 04:48:25.326979 
containerd[1513]: time="2025-05-09T04:48:25.326181457Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 May 9 04:48:25.326979 containerd[1513]: time="2025-05-09T04:48:25.326191349Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 May 9 04:48:25.326979 containerd[1513]: time="2025-05-09T04:48:25.326202041Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 May 9 04:48:25.326979 containerd[1513]: time="2025-05-09T04:48:25.326213054Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 May 9 04:48:25.326979 containerd[1513]: time="2025-05-09T04:48:25.326224147Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 May 9 04:48:25.326979 containerd[1513]: time="2025-05-09T04:48:25.326396432Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" May 9 04:48:25.326979 containerd[1513]: time="2025-05-09T04:48:25.326410448Z" level=info msg="Start snapshots syncer" May 9 04:48:25.326979 containerd[1513]: time="2025-05-09T04:48:25.326440604Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 May 9 04:48:25.327290 containerd[1513]: time="2025-05-09T04:48:25.326634114Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" May 9 04:48:25.327491 containerd[1513]: time="2025-05-09T04:48:25.327470786Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 May 9 04:48:25.327767 containerd[1513]: time="2025-05-09T04:48:25.327746633Z" level=info 
msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 May 9 04:48:25.328058 containerd[1513]: time="2025-05-09T04:48:25.327991563Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 May 9 04:48:25.328216 containerd[1513]: time="2025-05-09T04:48:25.328129767Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 May 9 04:48:25.328297 containerd[1513]: time="2025-05-09T04:48:25.328278303Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 May 9 04:48:25.328406 containerd[1513]: time="2025-05-09T04:48:25.328389395Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 May 9 04:48:25.328511 containerd[1513]: time="2025-05-09T04:48:25.328448945Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 May 9 04:48:25.328575 containerd[1513]: time="2025-05-09T04:48:25.328560317Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 May 9 04:48:25.328714 containerd[1513]: time="2025-05-09T04:48:25.328693355Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 May 9 04:48:25.328849 containerd[1513]: time="2025-05-09T04:48:25.328831439Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 May 9 04:48:25.328920 containerd[1513]: time="2025-05-09T04:48:25.328896716Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 May 9 04:48:25.329028 containerd[1513]: time="2025-05-09T04:48:25.329011132Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 May 9 04:48:25.329167 containerd[1513]: time="2025-05-09T04:48:25.329105444Z" level=info msg="loading plugin" 
id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 May 9 04:48:25.329233 containerd[1513]: time="2025-05-09T04:48:25.329217537Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 May 9 04:48:25.329356 containerd[1513]: time="2025-05-09T04:48:25.329339682Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 May 9 04:48:25.329412 containerd[1513]: time="2025-05-09T04:48:25.329398471Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 May 9 04:48:25.329510 containerd[1513]: time="2025-05-09T04:48:25.329494705Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 May 9 04:48:25.329561 containerd[1513]: time="2025-05-09T04:48:25.329548770Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 May 9 04:48:25.329675 containerd[1513]: time="2025-05-09T04:48:25.329647326Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 May 9 04:48:25.329848 containerd[1513]: time="2025-05-09T04:48:25.329833227Z" level=info msg="runtime interface created" May 9 04:48:25.330090 containerd[1513]: time="2025-05-09T04:48:25.329934827Z" level=info msg="created NRI interface" May 9 04:48:25.330090 containerd[1513]: time="2025-05-09T04:48:25.329953409Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 May 9 04:48:25.330090 containerd[1513]: time="2025-05-09T04:48:25.329966825Z" level=info msg="Connect containerd service" May 9 04:48:25.330090 containerd[1513]: time="2025-05-09T04:48:25.330002868Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" May 9 04:48:25.331332 containerd[1513]: 
time="2025-05-09T04:48:25.331266206Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 9 04:48:25.425368 sshd_keygen[1508]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 May 9 04:48:25.450582 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. May 9 04:48:25.453802 containerd[1513]: time="2025-05-09T04:48:25.453230461Z" level=info msg="Start subscribing containerd event" May 9 04:48:25.453802 containerd[1513]: time="2025-05-09T04:48:25.453310676Z" level=info msg="Start recovering state" May 9 04:48:25.453802 containerd[1513]: time="2025-05-09T04:48:25.453405709Z" level=info msg="Start event monitor" May 9 04:48:25.453802 containerd[1513]: time="2025-05-09T04:48:25.453422048Z" level=info msg="Start cni network conf syncer for default" May 9 04:48:25.453802 containerd[1513]: time="2025-05-09T04:48:25.453432100Z" level=info msg="Start streaming server" May 9 04:48:25.453802 containerd[1513]: time="2025-05-09T04:48:25.453440670Z" level=info msg="Registered namespace \"k8s.io\" with NRI" May 9 04:48:25.453802 containerd[1513]: time="2025-05-09T04:48:25.453447518Z" level=info msg="runtime interface starting up..." May 9 04:48:25.453802 containerd[1513]: time="2025-05-09T04:48:25.453452684Z" level=info msg="starting plugins..." May 9 04:48:25.453802 containerd[1513]: time="2025-05-09T04:48:25.453466821Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" May 9 04:48:25.453802 containerd[1513]: time="2025-05-09T04:48:25.453531858Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc May 9 04:48:25.453802 containerd[1513]: time="2025-05-09T04:48:25.453577552Z" level=info msg=serving... 
address=/run/containerd/containerd.sock May 9 04:48:25.453802 containerd[1513]: time="2025-05-09T04:48:25.453620203Z" level=info msg="containerd successfully booted in 0.147555s" May 9 04:48:25.454479 systemd[1]: Starting issuegen.service - Generate /run/issue... May 9 04:48:25.455439 systemd[1]: Started containerd.service - containerd container runtime. May 9 04:48:25.472158 systemd[1]: issuegen.service: Deactivated successfully. May 9 04:48:25.472487 systemd[1]: Finished issuegen.service - Generate /run/issue. May 9 04:48:25.476910 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... May 9 04:48:25.498720 tar[1510]: linux-arm64/LICENSE May 9 04:48:25.498720 tar[1510]: linux-arm64/README.md May 9 04:48:25.500304 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. May 9 04:48:25.503850 systemd[1]: Started getty@tty1.service - Getty on tty1. May 9 04:48:25.512547 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. May 9 04:48:25.513566 systemd[1]: Reached target getty.target - Login Prompts. May 9 04:48:25.515757 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. May 9 04:48:26.374866 systemd-networkd[1438]: eth0: Gained IPv6LL May 9 04:48:26.378753 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. May 9 04:48:26.380234 systemd[1]: Reached target network-online.target - Network is Online. May 9 04:48:26.382549 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... May 9 04:48:26.384759 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 9 04:48:26.396188 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... May 9 04:48:26.415541 systemd[1]: coreos-metadata.service: Deactivated successfully. May 9 04:48:26.415803 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. May 9 04:48:26.417305 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. 
May 9 04:48:26.419089 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
May 9 04:48:26.883028 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 9 04:48:26.884383 systemd[1]: Reached target multi-user.target - Multi-User System.
May 9 04:48:26.886039 systemd[1]: Startup finished in 2.156s (kernel) + 5.540s (initrd) + 3.661s (userspace) = 11.358s.
May 9 04:48:26.886463 (kubelet)[1623]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
May 9 04:48:27.313230 kubelet[1623]: E0509 04:48:27.313100 1623 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 9 04:48:27.315481 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 9 04:48:27.315634 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 9 04:48:27.316796 systemd[1]: kubelet.service: Consumed 783ms CPU time, 234.8M memory peak.
May 9 04:48:31.101490 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
May 9 04:48:31.102829 systemd[1]: Started sshd@0-10.0.0.27:22-10.0.0.1:50038.service - OpenSSH per-connection server daemon (10.0.0.1:50038).
May 9 04:48:31.168120 sshd[1636]: Accepted publickey for core from 10.0.0.1 port 50038 ssh2: RSA SHA256:cGfwtCSR41ihX2TEzmGxMnuwv2fv9xnRwxIIs+hE9lQ
May 9 04:48:31.175810 sshd-session[1636]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 9 04:48:31.182123 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
May 9 04:48:31.183125 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
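The kubelet failure above ("open /var/lib/kubelet/config.yaml: no such file or directory") is the stock not-yet-bootstrapped state: that file is normally generated by kubeadm during init/join, not written by hand. As a hedged sketch only, a minimal KubeletConfiguration of the shape that file takes (field values assumed; the cgroupDriver matches what the kubelet later reports receiving from the CRI runtime):

```yaml
# Hypothetical minimal /var/lib/kubelet/config.yaml; on a kubeadm-managed
# node this file is produced by kubeadm, and the real file carries many
# more fields (auth, certs, eviction thresholds, ...).
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd
containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
```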
May 9 04:48:31.188646 systemd-logind[1503]: New session 1 of user core.
May 9 04:48:31.204343 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
May 9 04:48:31.206876 systemd[1]: Starting user@500.service - User Manager for UID 500...
May 9 04:48:31.221219 (systemd)[1640]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
May 9 04:48:31.223561 systemd-logind[1503]: New session c1 of user core.
May 9 04:48:31.331933 systemd[1640]: Queued start job for default target default.target.
May 9 04:48:31.341574 systemd[1640]: Created slice app.slice - User Application Slice.
May 9 04:48:31.341603 systemd[1640]: Reached target paths.target - Paths.
May 9 04:48:31.341640 systemd[1640]: Reached target timers.target - Timers.
May 9 04:48:31.342862 systemd[1640]: Starting dbus.socket - D-Bus User Message Bus Socket...
May 9 04:48:31.351964 systemd[1640]: Listening on dbus.socket - D-Bus User Message Bus Socket.
May 9 04:48:31.352029 systemd[1640]: Reached target sockets.target - Sockets.
May 9 04:48:31.352065 systemd[1640]: Reached target basic.target - Basic System.
May 9 04:48:31.352094 systemd[1640]: Reached target default.target - Main User Target.
May 9 04:48:31.352119 systemd[1640]: Startup finished in 122ms.
May 9 04:48:31.352297 systemd[1]: Started user@500.service - User Manager for UID 500.
May 9 04:48:31.354074 systemd[1]: Started session-1.scope - Session 1 of User core.
May 9 04:48:31.414125 systemd[1]: Started sshd@1-10.0.0.27:22-10.0.0.1:50048.service - OpenSSH per-connection server daemon (10.0.0.1:50048).
May 9 04:48:31.468903 sshd[1651]: Accepted publickey for core from 10.0.0.1 port 50048 ssh2: RSA SHA256:cGfwtCSR41ihX2TEzmGxMnuwv2fv9xnRwxIIs+hE9lQ
May 9 04:48:31.470112 sshd-session[1651]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 9 04:48:31.474721 systemd-logind[1503]: New session 2 of user core.
May 9 04:48:31.481814 systemd[1]: Started session-2.scope - Session 2 of User core.
May 9 04:48:31.536790 sshd[1653]: Connection closed by 10.0.0.1 port 50048
May 9 04:48:31.537946 sshd-session[1651]: pam_unix(sshd:session): session closed for user core
May 9 04:48:31.556524 systemd[1]: sshd@1-10.0.0.27:22-10.0.0.1:50048.service: Deactivated successfully.
May 9 04:48:31.558103 systemd[1]: session-2.scope: Deactivated successfully.
May 9 04:48:31.558801 systemd-logind[1503]: Session 2 logged out. Waiting for processes to exit.
May 9 04:48:31.560467 systemd[1]: Started sshd@2-10.0.0.27:22-10.0.0.1:50064.service - OpenSSH per-connection server daemon (10.0.0.1:50064).
May 9 04:48:31.561349 systemd-logind[1503]: Removed session 2.
May 9 04:48:31.615877 sshd[1658]: Accepted publickey for core from 10.0.0.1 port 50064 ssh2: RSA SHA256:cGfwtCSR41ihX2TEzmGxMnuwv2fv9xnRwxIIs+hE9lQ
May 9 04:48:31.617059 sshd-session[1658]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 9 04:48:31.621672 systemd-logind[1503]: New session 3 of user core.
May 9 04:48:31.632896 systemd[1]: Started session-3.scope - Session 3 of User core.
May 9 04:48:31.681401 sshd[1661]: Connection closed by 10.0.0.1 port 50064
May 9 04:48:31.681859 sshd-session[1658]: pam_unix(sshd:session): session closed for user core
May 9 04:48:31.691923 systemd[1]: sshd@2-10.0.0.27:22-10.0.0.1:50064.service: Deactivated successfully.
May 9 04:48:31.694178 systemd[1]: session-3.scope: Deactivated successfully.
May 9 04:48:31.695707 systemd-logind[1503]: Session 3 logged out. Waiting for processes to exit.
May 9 04:48:31.697470 systemd[1]: Started sshd@3-10.0.0.27:22-10.0.0.1:50080.service - OpenSSH per-connection server daemon (10.0.0.1:50080).
May 9 04:48:31.698536 systemd-logind[1503]: Removed session 3.
May 9 04:48:31.747421 sshd[1666]: Accepted publickey for core from 10.0.0.1 port 50080 ssh2: RSA SHA256:cGfwtCSR41ihX2TEzmGxMnuwv2fv9xnRwxIIs+hE9lQ
May 9 04:48:31.748579 sshd-session[1666]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 9 04:48:31.752719 systemd-logind[1503]: New session 4 of user core.
May 9 04:48:31.759841 systemd[1]: Started session-4.scope - Session 4 of User core.
May 9 04:48:31.813437 sshd[1669]: Connection closed by 10.0.0.1 port 50080
May 9 04:48:31.813752 sshd-session[1666]: pam_unix(sshd:session): session closed for user core
May 9 04:48:31.827335 systemd[1]: sshd@3-10.0.0.27:22-10.0.0.1:50080.service: Deactivated successfully.
May 9 04:48:31.828932 systemd[1]: session-4.scope: Deactivated successfully.
May 9 04:48:31.829650 systemd-logind[1503]: Session 4 logged out. Waiting for processes to exit.
May 9 04:48:31.831587 systemd[1]: Started sshd@4-10.0.0.27:22-10.0.0.1:50092.service - OpenSSH per-connection server daemon (10.0.0.1:50092).
May 9 04:48:31.832467 systemd-logind[1503]: Removed session 4.
May 9 04:48:31.875943 sshd[1674]: Accepted publickey for core from 10.0.0.1 port 50092 ssh2: RSA SHA256:cGfwtCSR41ihX2TEzmGxMnuwv2fv9xnRwxIIs+hE9lQ
May 9 04:48:31.877030 sshd-session[1674]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 9 04:48:31.881391 systemd-logind[1503]: New session 5 of user core.
May 9 04:48:31.888837 systemd[1]: Started session-5.scope - Session 5 of User core.
May 9 04:48:31.953590 sudo[1678]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
May 9 04:48:31.953906 sudo[1678]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
May 9 04:48:31.972565 sudo[1678]: pam_unix(sudo:session): session closed for user root
May 9 04:48:31.975214 sshd[1677]: Connection closed by 10.0.0.1 port 50092
May 9 04:48:31.974450 sshd-session[1674]: pam_unix(sshd:session): session closed for user core
May 9 04:48:31.985107 systemd[1]: sshd@4-10.0.0.27:22-10.0.0.1:50092.service: Deactivated successfully.
May 9 04:48:31.986583 systemd[1]: session-5.scope: Deactivated successfully.
May 9 04:48:31.987334 systemd-logind[1503]: Session 5 logged out. Waiting for processes to exit.
May 9 04:48:31.989136 systemd[1]: Started sshd@5-10.0.0.27:22-10.0.0.1:50106.service - OpenSSH per-connection server daemon (10.0.0.1:50106).
May 9 04:48:31.991124 systemd-logind[1503]: Removed session 5.
May 9 04:48:32.038431 sshd[1683]: Accepted publickey for core from 10.0.0.1 port 50106 ssh2: RSA SHA256:cGfwtCSR41ihX2TEzmGxMnuwv2fv9xnRwxIIs+hE9lQ
May 9 04:48:32.039514 sshd-session[1683]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 9 04:48:32.043588 systemd-logind[1503]: New session 6 of user core.
May 9 04:48:32.052806 systemd[1]: Started session-6.scope - Session 6 of User core.
May 9 04:48:32.103024 sudo[1688]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
May 9 04:48:32.103312 sudo[1688]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
May 9 04:48:32.106872 sudo[1688]: pam_unix(sudo:session): session closed for user root
May 9 04:48:32.111446 sudo[1687]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
May 9 04:48:32.111736 sudo[1687]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
May 9 04:48:32.119927 systemd[1]: Starting audit-rules.service - Load Audit Rules...
May 9 04:48:32.151854 augenrules[1710]: No rules
May 9 04:48:32.152491 systemd[1]: audit-rules.service: Deactivated successfully.
May 9 04:48:32.152855 systemd[1]: Finished audit-rules.service - Load Audit Rules.
May 9 04:48:32.154052 sudo[1687]: pam_unix(sudo:session): session closed for user root
May 9 04:48:32.155230 sshd[1686]: Connection closed by 10.0.0.1 port 50106
May 9 04:48:32.155529 sshd-session[1683]: pam_unix(sshd:session): session closed for user core
May 9 04:48:32.168071 systemd[1]: sshd@5-10.0.0.27:22-10.0.0.1:50106.service: Deactivated successfully.
May 9 04:48:32.169584 systemd[1]: session-6.scope: Deactivated successfully.
May 9 04:48:32.171875 systemd-logind[1503]: Session 6 logged out. Waiting for processes to exit.
May 9 04:48:32.172945 systemd[1]: Started sshd@6-10.0.0.27:22-10.0.0.1:50118.service - OpenSSH per-connection server daemon (10.0.0.1:50118).
May 9 04:48:32.173834 systemd-logind[1503]: Removed session 6.
May 9 04:48:32.230064 sshd[1718]: Accepted publickey for core from 10.0.0.1 port 50118 ssh2: RSA SHA256:cGfwtCSR41ihX2TEzmGxMnuwv2fv9xnRwxIIs+hE9lQ
May 9 04:48:32.231309 sshd-session[1718]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 9 04:48:32.236046 systemd-logind[1503]: New session 7 of user core.
May 9 04:48:32.247824 systemd[1]: Started session-7.scope - Session 7 of User core.
May 9 04:48:32.300229 sudo[1722]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
May 9 04:48:32.300854 sudo[1722]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
May 9 04:48:32.656401 systemd[1]: Starting docker.service - Docker Application Container Engine...
May 9 04:48:32.673051 (dockerd)[1744]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
May 9 04:48:32.939438 dockerd[1744]: time="2025-05-09T04:48:32.938956199Z" level=info msg="Starting up"
May 9 04:48:32.940592 dockerd[1744]: time="2025-05-09T04:48:32.940557119Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider"
May 9 04:48:33.039045 dockerd[1744]: time="2025-05-09T04:48:33.038998583Z" level=info msg="Loading containers: start."
May 9 04:48:33.049668 kernel: Initializing XFRM netlink socket
May 9 04:48:33.240055 systemd-networkd[1438]: docker0: Link UP
May 9 04:48:33.249157 dockerd[1744]: time="2025-05-09T04:48:33.249109125Z" level=info msg="Loading containers: done."
May 9 04:48:33.262326 dockerd[1744]: time="2025-05-09T04:48:33.262268152Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
May 9 04:48:33.262459 dockerd[1744]: time="2025-05-09T04:48:33.262358995Z" level=info msg="Docker daemon" commit=bbd0a17ccc67e48d4a69393287b7fcc4f0578683 containerd-snapshotter=false storage-driver=overlay2 version=28.0.1
May 9 04:48:33.262485 dockerd[1744]: time="2025-05-09T04:48:33.262464732Z" level=info msg="Initializing buildkit"
May 9 04:48:33.290283 dockerd[1744]: time="2025-05-09T04:48:33.290224672Z" level=info msg="Completed buildkit initialization"
May 9 04:48:33.294919 dockerd[1744]: time="2025-05-09T04:48:33.294874830Z" level=info msg="Daemon has completed initialization"
May 9 04:48:33.294985 dockerd[1744]: time="2025-05-09T04:48:33.294941852Z" level=info msg="API listen on /run/docker.sock"
May 9 04:48:33.295147 systemd[1]: Started docker.service - Docker Application Container Engine.
May 9 04:48:34.109607 containerd[1513]: time="2025-05-09T04:48:34.109555332Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.8\""
May 9 04:48:34.829379 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3308960038.mount: Deactivated successfully.
May 9 04:48:36.759256 containerd[1513]: time="2025-05-09T04:48:36.759193571Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 9 04:48:36.759718 containerd[1513]: time="2025-05-09T04:48:36.759684741Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.8: active requests=0, bytes read=25554610"
May 9 04:48:36.760471 containerd[1513]: time="2025-05-09T04:48:36.760416434Z" level=info msg="ImageCreate event name:\"sha256:ef8fb1ea7c9599dbedea6f9d5589975ebc5bf4ec72f6be6acaaec59a723a09b3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 9 04:48:36.763244 containerd[1513]: time="2025-05-09T04:48:36.763189954Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:30090db6a7d53799163ce82dae9e8ddb645fd47db93f2ec9da0cc787fd825625\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 9 04:48:36.763794 containerd[1513]: time="2025-05-09T04:48:36.763763714Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.8\" with image id \"sha256:ef8fb1ea7c9599dbedea6f9d5589975ebc5bf4ec72f6be6acaaec59a723a09b3\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.8\", repo digest \"registry.k8s.io/kube-apiserver@sha256:30090db6a7d53799163ce82dae9e8ddb645fd47db93f2ec9da0cc787fd825625\", size \"25551408\" in 2.654164263s"
May 9 04:48:36.763851 containerd[1513]: time="2025-05-09T04:48:36.763797101Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.8\" returns image reference \"sha256:ef8fb1ea7c9599dbedea6f9d5589975ebc5bf4ec72f6be6acaaec59a723a09b3\""
May 9 04:48:36.764484 containerd[1513]: time="2025-05-09T04:48:36.764325383Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.8\""
May 9 04:48:37.566047 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
May 9 04:48:37.567460 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 9 04:48:37.676531 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 9 04:48:37.680038 (kubelet)[2015]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
May 9 04:48:37.713204 kubelet[2015]: E0509 04:48:37.713137 2015 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 9 04:48:37.716353 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 9 04:48:37.716509 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 9 04:48:37.716872 systemd[1]: kubelet.service: Consumed 128ms CPU time, 97.2M memory peak.
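The kubelet exits at 04:48:27, 04:48:37, and 04:48:47, with systemd logging "Scheduled restart job" each time, roughly ten seconds apart. That cadence is consistent with a unit restart policy like the hedged sketch below; this is an illustration of the systemd mechanism only, and the actual kubelet.service on this image may use different values:

```ini
# Hypothetical drop-in (e.g. kubelet.service.d/10-restart.conf) matching
# the observed ~10 s restart interval; values are assumed, not read from
# this host.
[Service]
Restart=always
RestartSec=10
```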
May 9 04:48:38.464550 containerd[1513]: time="2025-05-09T04:48:38.464505504Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 9 04:48:38.465422 containerd[1513]: time="2025-05-09T04:48:38.465216382Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.8: active requests=0, bytes read=22458980"
May 9 04:48:38.466168 containerd[1513]: time="2025-05-09T04:48:38.466124775Z" level=info msg="ImageCreate event name:\"sha256:ea6e6085feca75547d0422ab0536fe0d18c9ff5831de7a9d6a707c968027bb6a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 9 04:48:38.468516 containerd[1513]: time="2025-05-09T04:48:38.468473179Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:29eaddc64792a689df48506e78bbc641d063ac8bb92d2e66ae2ad05977420747\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 9 04:48:38.469390 containerd[1513]: time="2025-05-09T04:48:38.469364479Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.8\" with image id \"sha256:ea6e6085feca75547d0422ab0536fe0d18c9ff5831de7a9d6a707c968027bb6a\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.8\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:29eaddc64792a689df48506e78bbc641d063ac8bb92d2e66ae2ad05977420747\", size \"23900539\" in 1.705006548s"
May 9 04:48:38.469390 containerd[1513]: time="2025-05-09T04:48:38.469388938Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.8\" returns image reference \"sha256:ea6e6085feca75547d0422ab0536fe0d18c9ff5831de7a9d6a707c968027bb6a\""
May 9 04:48:38.469972 containerd[1513]: time="2025-05-09T04:48:38.469782967Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.8\""
May 9 04:48:40.005619 containerd[1513]: time="2025-05-09T04:48:40.005575236Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 9 04:48:40.006568 containerd[1513]: time="2025-05-09T04:48:40.006367099Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.8: active requests=0, bytes read=17125815"
May 9 04:48:40.007221 containerd[1513]: time="2025-05-09T04:48:40.007204917Z" level=info msg="ImageCreate event name:\"sha256:1d2db6ef0dd2f3e08bdfcd46afde7b755b05192841f563d8df54b807daaa7d8d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 9 04:48:40.010517 containerd[1513]: time="2025-05-09T04:48:40.010461396Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:22994a2632e81059720480b9f6bdeb133b08d58492d0b36dfd6e9768b159b22a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 9 04:48:40.011431 containerd[1513]: time="2025-05-09T04:48:40.011401329Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.8\" with image id \"sha256:1d2db6ef0dd2f3e08bdfcd46afde7b755b05192841f563d8df54b807daaa7d8d\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.8\", repo digest \"registry.k8s.io/kube-scheduler@sha256:22994a2632e81059720480b9f6bdeb133b08d58492d0b36dfd6e9768b159b22a\", size \"18567392\" in 1.541581173s"
May 9 04:48:40.011492 containerd[1513]: time="2025-05-09T04:48:40.011434593Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.8\" returns image reference \"sha256:1d2db6ef0dd2f3e08bdfcd46afde7b755b05192841f563d8df54b807daaa7d8d\""
May 9 04:48:40.012200 containerd[1513]: time="2025-05-09T04:48:40.011935042Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.8\""
May 9 04:48:41.134628 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4180342661.mount: Deactivated successfully.
May 9 04:48:41.360900 containerd[1513]: time="2025-05-09T04:48:41.360834548Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 9 04:48:41.361804 containerd[1513]: time="2025-05-09T04:48:41.361767334Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.8: active requests=0, bytes read=26871919"
May 9 04:48:41.362533 containerd[1513]: time="2025-05-09T04:48:41.362495133Z" level=info msg="ImageCreate event name:\"sha256:c5361ece77e80334cd5fb082c0b678cb3244f5834ecacea1719ae6b38b465581\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 9 04:48:41.364117 containerd[1513]: time="2025-05-09T04:48:41.364079064Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:dd0c9a37670f209947b1ed880f06a2e93e1d41da78c037f52f94b13858769838\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 9 04:48:41.364763 containerd[1513]: time="2025-05-09T04:48:41.364725686Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.8\" with image id \"sha256:c5361ece77e80334cd5fb082c0b678cb3244f5834ecacea1719ae6b38b465581\", repo tag \"registry.k8s.io/kube-proxy:v1.31.8\", repo digest \"registry.k8s.io/kube-proxy@sha256:dd0c9a37670f209947b1ed880f06a2e93e1d41da78c037f52f94b13858769838\", size \"26870936\" in 1.352761583s"
May 9 04:48:41.364763 containerd[1513]: time="2025-05-09T04:48:41.364755067Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.8\" returns image reference \"sha256:c5361ece77e80334cd5fb082c0b678cb3244f5834ecacea1719ae6b38b465581\""
May 9 04:48:41.365255 containerd[1513]: time="2025-05-09T04:48:41.365222440Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\""
May 9 04:48:41.933851 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2448944671.mount: Deactivated successfully.
May 9 04:48:42.809345 containerd[1513]: time="2025-05-09T04:48:42.809287363Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 9 04:48:42.809829 containerd[1513]: time="2025-05-09T04:48:42.809795034Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=16485383"
May 9 04:48:42.810512 containerd[1513]: time="2025-05-09T04:48:42.810466258Z" level=info msg="ImageCreate event name:\"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 9 04:48:42.813726 containerd[1513]: time="2025-05-09T04:48:42.813399767Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 9 04:48:42.814239 containerd[1513]: time="2025-05-09T04:48:42.813971242Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"16482581\" in 1.448713497s"
May 9 04:48:42.814239 containerd[1513]: time="2025-05-09T04:48:42.813996139Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\""
May 9 04:48:42.814436 containerd[1513]: time="2025-05-09T04:48:42.814355548Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
May 9 04:48:43.238444 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2304812790.mount: Deactivated successfully.
May 9 04:48:43.242195 containerd[1513]: time="2025-05-09T04:48:43.242154077Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
May 9 04:48:43.242888 containerd[1513]: time="2025-05-09T04:48:43.242842098Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268705"
May 9 04:48:43.243443 containerd[1513]: time="2025-05-09T04:48:43.243418844Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
May 9 04:48:43.245391 containerd[1513]: time="2025-05-09T04:48:43.245349578Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
May 9 04:48:43.245914 containerd[1513]: time="2025-05-09T04:48:43.245893142Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 431.513017ms"
May 9 04:48:43.245966 containerd[1513]: time="2025-05-09T04:48:43.245921481Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\""
May 9 04:48:43.246436 containerd[1513]: time="2025-05-09T04:48:43.246412290Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\""
May 9 04:48:43.836824 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2556984779.mount: Deactivated successfully.
May 9 04:48:46.758703 containerd[1513]: time="2025-05-09T04:48:46.758636992Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 9 04:48:46.759308 containerd[1513]: time="2025-05-09T04:48:46.759272539Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=66406467"
May 9 04:48:46.760357 containerd[1513]: time="2025-05-09T04:48:46.760329903Z" level=info msg="ImageCreate event name:\"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 9 04:48:46.764758 containerd[1513]: time="2025-05-09T04:48:46.764714613Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 9 04:48:46.766018 containerd[1513]: time="2025-05-09T04:48:46.765990070Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"66535646\" in 3.519543678s"
May 9 04:48:46.766083 containerd[1513]: time="2025-05-09T04:48:46.766023531Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\""
May 9 04:48:47.733770 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
May 9 04:48:47.735326 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 9 04:48:47.835288 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
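Each "Pulled image" entry above reports a size in bytes and a wall-clock duration, so the effective registry throughput can be read straight off the log. A small sketch using two of the logged pulls (values copied from the entries above):

```python
# Pull throughput implied by the containerd "Pulled image" log entries:
# (size in bytes, duration in seconds), both taken verbatim from the log.
pulls = {
    "kube-apiserver:v1.31.8": (25551408, 2.654164263),
    "etcd:3.5.15-0": (66535646, 3.519543678),
}

for image, (size_bytes, seconds) in pulls.items():
    rate_mb_s = size_bytes / seconds / 1e6  # decimal megabytes per second
    print(f"{image}: {rate_mb_s:.1f} MB/s")
```

The spread (roughly 9.6 MB/s for the apiserver pull vs 18.9 MB/s for etcd) is expected: per-pull time includes registry round-trips and unpacking, not just transfer.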
May 9 04:48:47.838475 (kubelet)[2171]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
May 9 04:48:47.868541 kubelet[2171]: E0509 04:48:47.868493 2171 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 9 04:48:47.870987 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 9 04:48:47.871123 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 9 04:48:47.871416 systemd[1]: kubelet.service: Consumed 117ms CPU time, 96.5M memory peak.
May 9 04:48:51.482412 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
May 9 04:48:51.483157 systemd[1]: kubelet.service: Consumed 117ms CPU time, 96.5M memory peak.
May 9 04:48:51.485629 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 9 04:48:51.508941 systemd[1]: Reload requested from client PID 2186 ('systemctl') (unit session-7.scope)...
May 9 04:48:51.508957 systemd[1]: Reloading...
May 9 04:48:51.591698 zram_generator::config[2232]: No configuration found.
May 9 04:48:51.755263 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 9 04:48:51.859310 systemd[1]: Reloading finished in 350 ms.
May 9 04:48:51.904537 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
May 9 04:48:51.906686 systemd[1]: kubelet.service: Deactivated successfully.
May 9 04:48:51.907743 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
May 9 04:48:51.907797 systemd[1]: kubelet.service: Consumed 86ms CPU time, 82.5M memory peak.
May 9 04:48:51.909254 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 9 04:48:52.024754 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 9 04:48:52.028678 (kubelet)[2277]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
May 9 04:48:52.061447 kubelet[2277]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
May 9 04:48:52.061447 kubelet[2277]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
May 9 04:48:52.061447 kubelet[2277]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
May 9 04:48:52.061825 kubelet[2277]: I0509 04:48:52.061618 2277 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
May 9 04:48:52.947985 kubelet[2277]: I0509 04:48:52.947935 2277 server.go:486] "Kubelet version" kubeletVersion="v1.31.0"
May 9 04:48:52.947985 kubelet[2277]: I0509 04:48:52.947971 2277 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
May 9 04:48:52.948233 kubelet[2277]: I0509 04:48:52.948205 2277 server.go:929] "Client rotation is on, will bootstrap in background"
May 9 04:48:53.018866 kubelet[2277]: E0509 04:48:53.018824 2277 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.27:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.27:6443: connect: connection refused" logger="UnhandledError"
May 9 04:48:53.019887 kubelet[2277]: I0509 04:48:53.019788 2277 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
May 9 04:48:53.033805 kubelet[2277]: I0509 04:48:53.033752 2277 server.go:1426] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
May 9 04:48:53.037382 kubelet[2277]: I0509 04:48:53.037353 2277 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
May 9 04:48:53.038147 kubelet[2277]: I0509 04:48:53.038112 2277 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
May 9 04:48:53.038277 kubelet[2277]: I0509 04:48:53.038250 2277 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
May 9 04:48:53.038427 kubelet[2277]: I0509 04:48:53.038279 2277 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOpti
ons":null,"CgroupVersion":2} May 9 04:48:53.038577 kubelet[2277]: I0509 04:48:53.038560 2277 topology_manager.go:138] "Creating topology manager with none policy" May 9 04:48:53.038577 kubelet[2277]: I0509 04:48:53.038571 2277 container_manager_linux.go:300] "Creating device plugin manager" May 9 04:48:53.038785 kubelet[2277]: I0509 04:48:53.038770 2277 state_mem.go:36] "Initialized new in-memory state store" May 9 04:48:53.040106 kubelet[2277]: I0509 04:48:53.040075 2277 kubelet.go:408] "Attempting to sync node with API server" May 9 04:48:53.040106 kubelet[2277]: I0509 04:48:53.040097 2277 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" May 9 04:48:53.040959 kubelet[2277]: I0509 04:48:53.040935 2277 kubelet.go:314] "Adding apiserver pod source" May 9 04:48:53.041021 kubelet[2277]: I0509 04:48:53.040962 2277 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 9 04:48:53.043335 kubelet[2277]: W0509 04:48:53.043286 2277 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.27:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.27:6443: connect: connection refused May 9 04:48:53.043379 kubelet[2277]: E0509 04:48:53.043345 2277 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.27:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.27:6443: connect: connection refused" logger="UnhandledError" May 9 04:48:53.045791 kubelet[2277]: W0509 04:48:53.045745 2277 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.27:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.27:6443: connect: connection refused May 9 04:48:53.045837 kubelet[2277]: E0509 
04:48:53.045796 2277 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.27:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.27:6443: connect: connection refused" logger="UnhandledError" May 9 04:48:53.046774 kubelet[2277]: I0509 04:48:53.046751 2277 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" May 9 04:48:53.051150 kubelet[2277]: I0509 04:48:53.051125 2277 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 9 04:48:53.051851 kubelet[2277]: W0509 04:48:53.051825 2277 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. May 9 04:48:53.052538 kubelet[2277]: I0509 04:48:53.052519 2277 server.go:1269] "Started kubelet" May 9 04:48:53.055004 kubelet[2277]: I0509 04:48:53.052635 2277 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 May 9 04:48:53.055004 kubelet[2277]: I0509 04:48:53.054133 2277 server.go:460] "Adding debug handlers to kubelet server" May 9 04:48:53.057745 kubelet[2277]: I0509 04:48:53.057328 2277 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 9 04:48:53.057745 kubelet[2277]: I0509 04:48:53.057574 2277 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 9 04:48:53.057745 kubelet[2277]: I0509 04:48:53.057620 2277 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 9 04:48:53.065515 kubelet[2277]: I0509 04:48:53.063245 2277 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 9 04:48:53.065515 kubelet[2277]: I0509 04:48:53.065313 2277 
volume_manager.go:289] "Starting Kubelet Volume Manager" May 9 04:48:53.065828 kubelet[2277]: I0509 04:48:53.065697 2277 desired_state_of_world_populator.go:146] "Desired state populator starts to run" May 9 04:48:53.065828 kubelet[2277]: I0509 04:48:53.065749 2277 reconciler.go:26] "Reconciler: start to sync state" May 9 04:48:53.066069 kubelet[2277]: W0509 04:48:53.066021 2277 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.27:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.27:6443: connect: connection refused May 9 04:48:53.066125 kubelet[2277]: E0509 04:48:53.066080 2277 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.27:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.27:6443: connect: connection refused" logger="UnhandledError" May 9 04:48:53.066361 kubelet[2277]: E0509 04:48:53.064919 2277 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.27:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.27:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.183dc28b1246d76d default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-05-09 04:48:53.052495725 +0000 UTC m=+1.020703903,LastTimestamp:2025-05-09 04:48:53.052495725 +0000 UTC m=+1.020703903,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" May 9 04:48:53.069954 kubelet[2277]: E0509 04:48:53.069911 2277 kubelet_node_status.go:453] "Error getting the current node 
from lister" err="node \"localhost\" not found" May 9 04:48:53.070206 kubelet[2277]: E0509 04:48:53.070019 2277 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.27:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.27:6443: connect: connection refused" interval="200ms" May 9 04:48:53.070939 kubelet[2277]: E0509 04:48:53.070903 2277 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 9 04:48:53.072837 kubelet[2277]: I0509 04:48:53.072804 2277 factory.go:221] Registration of the systemd container factory successfully May 9 04:48:53.072916 kubelet[2277]: I0509 04:48:53.072890 2277 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 9 04:48:53.074128 kubelet[2277]: I0509 04:48:53.074098 2277 factory.go:221] Registration of the containerd container factory successfully May 9 04:48:53.076686 kubelet[2277]: I0509 04:48:53.076205 2277 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 9 04:48:53.077140 kubelet[2277]: I0509 04:48:53.077120 2277 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" May 9 04:48:53.077140 kubelet[2277]: I0509 04:48:53.077139 2277 status_manager.go:217] "Starting to sync pod status with apiserver" May 9 04:48:53.077204 kubelet[2277]: I0509 04:48:53.077158 2277 kubelet.go:2321] "Starting kubelet main sync loop" May 9 04:48:53.077204 kubelet[2277]: E0509 04:48:53.077196 2277 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 9 04:48:53.081234 kubelet[2277]: W0509 04:48:53.081180 2277 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.27:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.27:6443: connect: connection refused May 9 04:48:53.081303 kubelet[2277]: E0509 04:48:53.081234 2277 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.27:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.27:6443: connect: connection refused" logger="UnhandledError" May 9 04:48:53.082425 kubelet[2277]: I0509 04:48:53.082405 2277 cpu_manager.go:214] "Starting CPU manager" policy="none" May 9 04:48:53.082425 kubelet[2277]: I0509 04:48:53.082420 2277 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" May 9 04:48:53.082520 kubelet[2277]: I0509 04:48:53.082436 2277 state_mem.go:36] "Initialized new in-memory state store" May 9 04:48:53.171074 kubelet[2277]: E0509 04:48:53.171032 2277 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 9 04:48:53.178290 kubelet[2277]: E0509 04:48:53.178256 2277 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" May 9 04:48:53.250171 kubelet[2277]: I0509 04:48:53.249241 2277 policy_none.go:49] "None policy: Start" 
May 9 04:48:53.250814 kubelet[2277]: I0509 04:48:53.250353 2277 memory_manager.go:170] "Starting memorymanager" policy="None" May 9 04:48:53.250814 kubelet[2277]: I0509 04:48:53.250380 2277 state_mem.go:35] "Initializing new in-memory state store" May 9 04:48:53.257786 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. May 9 04:48:53.274563 kubelet[2277]: E0509 04:48:53.271243 2277 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 9 04:48:53.274563 kubelet[2277]: E0509 04:48:53.271328 2277 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.27:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.27:6443: connect: connection refused" interval="400ms" May 9 04:48:53.276459 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. May 9 04:48:53.279800 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
May 9 04:48:53.290396 kubelet[2277]: I0509 04:48:53.290355 2277 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 9 04:48:53.290871 kubelet[2277]: I0509 04:48:53.290552 2277 eviction_manager.go:189] "Eviction manager: starting control loop" May 9 04:48:53.290871 kubelet[2277]: I0509 04:48:53.290570 2277 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 9 04:48:53.290871 kubelet[2277]: I0509 04:48:53.290861 2277 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 9 04:48:53.292353 kubelet[2277]: E0509 04:48:53.292330 2277 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" May 9 04:48:53.387941 systemd[1]: Created slice kubepods-burstable-pod564998686442d2d5f48cf900c4c49349.slice - libcontainer container kubepods-burstable-pod564998686442d2d5f48cf900c4c49349.slice. May 9 04:48:53.392360 kubelet[2277]: I0509 04:48:53.392315 2277 kubelet_node_status.go:72] "Attempting to register node" node="localhost" May 9 04:48:53.392750 kubelet[2277]: E0509 04:48:53.392712 2277 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.27:6443/api/v1/nodes\": dial tcp 10.0.0.27:6443: connect: connection refused" node="localhost" May 9 04:48:53.400138 systemd[1]: Created slice kubepods-burstable-podd4a6b755cb4739fbca401212ebb82b6d.slice - libcontainer container kubepods-burstable-podd4a6b755cb4739fbca401212ebb82b6d.slice. May 9 04:48:53.414171 systemd[1]: Created slice kubepods-burstable-pod0613557c150e4f35d1f3f822b5f32ff1.slice - libcontainer container kubepods-burstable-pod0613557c150e4f35d1f3f822b5f32ff1.slice. 
May 9 04:48:53.468272 kubelet[2277]: I0509 04:48:53.468152 2277 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost" May 9 04:48:53.468272 kubelet[2277]: I0509 04:48:53.468190 2277 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost" May 9 04:48:53.468272 kubelet[2277]: I0509 04:48:53.468214 2277 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost" May 9 04:48:53.468272 kubelet[2277]: I0509 04:48:53.468230 2277 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/564998686442d2d5f48cf900c4c49349-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"564998686442d2d5f48cf900c4c49349\") " pod="kube-system/kube-apiserver-localhost" May 9 04:48:53.468272 kubelet[2277]: I0509 04:48:53.468247 2277 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/564998686442d2d5f48cf900c4c49349-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"564998686442d2d5f48cf900c4c49349\") " 
pod="kube-system/kube-apiserver-localhost" May 9 04:48:53.468582 kubelet[2277]: I0509 04:48:53.468541 2277 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost" May 9 04:48:53.468582 kubelet[2277]: I0509 04:48:53.468584 2277 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost" May 9 04:48:53.468673 kubelet[2277]: I0509 04:48:53.468607 2277 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0613557c150e4f35d1f3f822b5f32ff1-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"0613557c150e4f35d1f3f822b5f32ff1\") " pod="kube-system/kube-scheduler-localhost" May 9 04:48:53.468673 kubelet[2277]: I0509 04:48:53.468622 2277 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/564998686442d2d5f48cf900c4c49349-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"564998686442d2d5f48cf900c4c49349\") " pod="kube-system/kube-apiserver-localhost" May 9 04:48:53.594545 kubelet[2277]: I0509 04:48:53.594442 2277 kubelet_node_status.go:72] "Attempting to register node" node="localhost" May 9 04:48:53.594788 kubelet[2277]: E0509 04:48:53.594735 2277 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.27:6443/api/v1/nodes\": dial tcp 10.0.0.27:6443: connect: connection refused" node="localhost" May 
9 04:48:53.672611 kubelet[2277]: E0509 04:48:53.672557 2277 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.27:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.27:6443: connect: connection refused" interval="800ms" May 9 04:48:53.700439 containerd[1513]: time="2025-05-09T04:48:53.700384103Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:564998686442d2d5f48cf900c4c49349,Namespace:kube-system,Attempt:0,}" May 9 04:48:53.712987 containerd[1513]: time="2025-05-09T04:48:53.712940468Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:d4a6b755cb4739fbca401212ebb82b6d,Namespace:kube-system,Attempt:0,}" May 9 04:48:53.716786 containerd[1513]: time="2025-05-09T04:48:53.716754328Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:0613557c150e4f35d1f3f822b5f32ff1,Namespace:kube-system,Attempt:0,}" May 9 04:48:53.775636 containerd[1513]: time="2025-05-09T04:48:53.775593147Z" level=info msg="connecting to shim 04de7868adc51d05cd028c8a0df2e867af4256d4c5bdb874f4869cff5015d9e2" address="unix:///run/containerd/s/a27738198dd159190a450114506c52ae82ea44efdcf4fd5458289b86f2a2fc03" namespace=k8s.io protocol=ttrpc version=3 May 9 04:48:53.778897 containerd[1513]: time="2025-05-09T04:48:53.778844493Z" level=info msg="connecting to shim 6e06dee193e46067ef007d08ea6d026a2028398b3d67d3e56b903283af2fb164" address="unix:///run/containerd/s/5951be314175db0b53521e75255a372ab93a5124e5600c68aa89601cfca1e336" namespace=k8s.io protocol=ttrpc version=3 May 9 04:48:53.783249 containerd[1513]: time="2025-05-09T04:48:53.782790578Z" level=info msg="connecting to shim 9dc06db6f83bee394f559562f3dfaf92119944bc44f1eb6e4990c92b04ccb201" address="unix:///run/containerd/s/bbe78a37b0f8bbef448e40e02bcc101d852f76f574335a99fe297dc72aa571e8" namespace=k8s.io protocol=ttrpc version=3 May 9 
04:48:53.803999 systemd[1]: Started cri-containerd-04de7868adc51d05cd028c8a0df2e867af4256d4c5bdb874f4869cff5015d9e2.scope - libcontainer container 04de7868adc51d05cd028c8a0df2e867af4256d4c5bdb874f4869cff5015d9e2. May 9 04:48:53.808029 systemd[1]: Started cri-containerd-6e06dee193e46067ef007d08ea6d026a2028398b3d67d3e56b903283af2fb164.scope - libcontainer container 6e06dee193e46067ef007d08ea6d026a2028398b3d67d3e56b903283af2fb164. May 9 04:48:53.809013 systemd[1]: Started cri-containerd-9dc06db6f83bee394f559562f3dfaf92119944bc44f1eb6e4990c92b04ccb201.scope - libcontainer container 9dc06db6f83bee394f559562f3dfaf92119944bc44f1eb6e4990c92b04ccb201. May 9 04:48:53.849395 containerd[1513]: time="2025-05-09T04:48:53.848878093Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:564998686442d2d5f48cf900c4c49349,Namespace:kube-system,Attempt:0,} returns sandbox id \"04de7868adc51d05cd028c8a0df2e867af4256d4c5bdb874f4869cff5015d9e2\"" May 9 04:48:53.860789 containerd[1513]: time="2025-05-09T04:48:53.860757808Z" level=info msg="CreateContainer within sandbox \"04de7868adc51d05cd028c8a0df2e867af4256d4c5bdb874f4869cff5015d9e2\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" May 9 04:48:53.861840 containerd[1513]: time="2025-05-09T04:48:53.861809441Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:d4a6b755cb4739fbca401212ebb82b6d,Namespace:kube-system,Attempt:0,} returns sandbox id \"6e06dee193e46067ef007d08ea6d026a2028398b3d67d3e56b903283af2fb164\"" May 9 04:48:53.863062 containerd[1513]: time="2025-05-09T04:48:53.863033438Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:0613557c150e4f35d1f3f822b5f32ff1,Namespace:kube-system,Attempt:0,} returns sandbox id \"9dc06db6f83bee394f559562f3dfaf92119944bc44f1eb6e4990c92b04ccb201\"" May 9 04:48:53.865551 containerd[1513]: time="2025-05-09T04:48:53.865526574Z" level=info msg="CreateContainer 
within sandbox \"6e06dee193e46067ef007d08ea6d026a2028398b3d67d3e56b903283af2fb164\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" May 9 04:48:53.865799 containerd[1513]: time="2025-05-09T04:48:53.865646352Z" level=info msg="CreateContainer within sandbox \"9dc06db6f83bee394f559562f3dfaf92119944bc44f1eb6e4990c92b04ccb201\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" May 9 04:48:53.870774 containerd[1513]: time="2025-05-09T04:48:53.870748281Z" level=info msg="Container 02544289ab7ba5cd4a6056b0ea61cd46ab2140e5eda2ea6470060a74f482ce4e: CDI devices from CRI Config.CDIDevices: []" May 9 04:48:53.875166 containerd[1513]: time="2025-05-09T04:48:53.875105126Z" level=info msg="Container aaa4b19d22f8b899fd3314b7eda35ddaa9c22a6d8b745206f122425877b41c1e: CDI devices from CRI Config.CDIDevices: []" May 9 04:48:53.877297 containerd[1513]: time="2025-05-09T04:48:53.877263459Z" level=info msg="Container 3662cc136b8ff64c4b7ee8075e834444fc136d7a66b40e522f74e5efb34a528a: CDI devices from CRI Config.CDIDevices: []" May 9 04:48:53.879635 containerd[1513]: time="2025-05-09T04:48:53.879587552Z" level=info msg="CreateContainer within sandbox \"04de7868adc51d05cd028c8a0df2e867af4256d4c5bdb874f4869cff5015d9e2\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"02544289ab7ba5cd4a6056b0ea61cd46ab2140e5eda2ea6470060a74f482ce4e\"" May 9 04:48:53.880321 containerd[1513]: time="2025-05-09T04:48:53.880296138Z" level=info msg="StartContainer for \"02544289ab7ba5cd4a6056b0ea61cd46ab2140e5eda2ea6470060a74f482ce4e\"" May 9 04:48:53.881321 containerd[1513]: time="2025-05-09T04:48:53.881290663Z" level=info msg="connecting to shim 02544289ab7ba5cd4a6056b0ea61cd46ab2140e5eda2ea6470060a74f482ce4e" address="unix:///run/containerd/s/a27738198dd159190a450114506c52ae82ea44efdcf4fd5458289b86f2a2fc03" protocol=ttrpc version=3 May 9 04:48:53.882811 containerd[1513]: time="2025-05-09T04:48:53.882779509Z" level=info msg="CreateContainer 
within sandbox \"9dc06db6f83bee394f559562f3dfaf92119944bc44f1eb6e4990c92b04ccb201\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"aaa4b19d22f8b899fd3314b7eda35ddaa9c22a6d8b745206f122425877b41c1e\"" May 9 04:48:53.883190 containerd[1513]: time="2025-05-09T04:48:53.883164697Z" level=info msg="StartContainer for \"aaa4b19d22f8b899fd3314b7eda35ddaa9c22a6d8b745206f122425877b41c1e\"" May 9 04:48:53.884460 containerd[1513]: time="2025-05-09T04:48:53.884422871Z" level=info msg="connecting to shim aaa4b19d22f8b899fd3314b7eda35ddaa9c22a6d8b745206f122425877b41c1e" address="unix:///run/containerd/s/bbe78a37b0f8bbef448e40e02bcc101d852f76f574335a99fe297dc72aa571e8" protocol=ttrpc version=3 May 9 04:48:53.890937 containerd[1513]: time="2025-05-09T04:48:53.890883102Z" level=info msg="CreateContainer within sandbox \"6e06dee193e46067ef007d08ea6d026a2028398b3d67d3e56b903283af2fb164\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"3662cc136b8ff64c4b7ee8075e834444fc136d7a66b40e522f74e5efb34a528a\"" May 9 04:48:53.891367 containerd[1513]: time="2025-05-09T04:48:53.891312391Z" level=info msg="StartContainer for \"3662cc136b8ff64c4b7ee8075e834444fc136d7a66b40e522f74e5efb34a528a\"" May 9 04:48:53.892872 containerd[1513]: time="2025-05-09T04:48:53.892824449Z" level=info msg="connecting to shim 3662cc136b8ff64c4b7ee8075e834444fc136d7a66b40e522f74e5efb34a528a" address="unix:///run/containerd/s/5951be314175db0b53521e75255a372ab93a5124e5600c68aa89601cfca1e336" protocol=ttrpc version=3 May 9 04:48:53.899025 systemd[1]: Started cri-containerd-02544289ab7ba5cd4a6056b0ea61cd46ab2140e5eda2ea6470060a74f482ce4e.scope - libcontainer container 02544289ab7ba5cd4a6056b0ea61cd46ab2140e5eda2ea6470060a74f482ce4e. May 9 04:48:53.902651 systemd[1]: Started cri-containerd-aaa4b19d22f8b899fd3314b7eda35ddaa9c22a6d8b745206f122425877b41c1e.scope - libcontainer container aaa4b19d22f8b899fd3314b7eda35ddaa9c22a6d8b745206f122425877b41c1e. 
May 9 04:48:53.924946 systemd[1]: Started cri-containerd-3662cc136b8ff64c4b7ee8075e834444fc136d7a66b40e522f74e5efb34a528a.scope - libcontainer container 3662cc136b8ff64c4b7ee8075e834444fc136d7a66b40e522f74e5efb34a528a. May 9 04:48:53.960020 containerd[1513]: time="2025-05-09T04:48:53.959963557Z" level=info msg="StartContainer for \"aaa4b19d22f8b899fd3314b7eda35ddaa9c22a6d8b745206f122425877b41c1e\" returns successfully" May 9 04:48:53.961542 containerd[1513]: time="2025-05-09T04:48:53.961504669Z" level=info msg="StartContainer for \"02544289ab7ba5cd4a6056b0ea61cd46ab2140e5eda2ea6470060a74f482ce4e\" returns successfully" May 9 04:48:53.996556 kubelet[2277]: I0509 04:48:53.996447 2277 kubelet_node_status.go:72] "Attempting to register node" node="localhost" May 9 04:48:53.996874 kubelet[2277]: E0509 04:48:53.996826 2277 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.27:6443/api/v1/nodes\": dial tcp 10.0.0.27:6443: connect: connection refused" node="localhost" May 9 04:48:54.008322 containerd[1513]: time="2025-05-09T04:48:54.008293759Z" level=info msg="StartContainer for \"3662cc136b8ff64c4b7ee8075e834444fc136d7a66b40e522f74e5efb34a528a\" returns successfully" May 9 04:48:54.047613 kubelet[2277]: W0509 04:48:54.047550 2277 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.27:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.27:6443: connect: connection refused May 9 04:48:54.047811 kubelet[2277]: E0509 04:48:54.047776 2277 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.27:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.27:6443: connect: connection refused" logger="UnhandledError" May 9 04:48:54.092045 kubelet[2277]: W0509 04:48:54.091908 
2277 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.27:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.27:6443: connect: connection refused May 9 04:48:54.092045 kubelet[2277]: E0509 04:48:54.092011 2277 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.27:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.27:6443: connect: connection refused" logger="UnhandledError" May 9 04:48:54.093569 kubelet[2277]: W0509 04:48:54.093348 2277 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.27:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.27:6443: connect: connection refused May 9 04:48:54.093569 kubelet[2277]: E0509 04:48:54.093398 2277 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.27:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.27:6443: connect: connection refused" logger="UnhandledError" May 9 04:48:54.798723 kubelet[2277]: I0509 04:48:54.798421 2277 kubelet_node_status.go:72] "Attempting to register node" node="localhost" May 9 04:48:55.182493 kubelet[2277]: E0509 04:48:55.182367 2277 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" May 9 04:48:55.256533 kubelet[2277]: I0509 04:48:55.254158 2277 kubelet_node_status.go:75] "Successfully registered node" node="localhost" May 9 04:48:55.256533 kubelet[2277]: E0509 04:48:55.254211 2277 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"localhost\": node 
\"localhost\" not found" May 9 04:48:55.274047 kubelet[2277]: E0509 04:48:55.274000 2277 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 9 04:48:55.374780 kubelet[2277]: E0509 04:48:55.374724 2277 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 9 04:48:55.475212 kubelet[2277]: E0509 04:48:55.475098 2277 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 9 04:48:55.575974 kubelet[2277]: E0509 04:48:55.575924 2277 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 9 04:48:55.676485 kubelet[2277]: E0509 04:48:55.676445 2277 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 9 04:48:55.777190 kubelet[2277]: E0509 04:48:55.777064 2277 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 9 04:48:55.877648 kubelet[2277]: E0509 04:48:55.877603 2277 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 9 04:48:55.978726 kubelet[2277]: E0509 04:48:55.978685 2277 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 9 04:48:56.079082 kubelet[2277]: E0509 04:48:56.078963 2277 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 9 04:48:56.992419 systemd[1]: Reload requested from client PID 2550 ('systemctl') (unit session-7.scope)... May 9 04:48:56.992435 systemd[1]: Reloading... 
May 9 04:48:57.047764 kubelet[2277]: I0509 04:48:57.047724 2277 apiserver.go:52] "Watching apiserver" May 9 04:48:57.066023 kubelet[2277]: I0509 04:48:57.065981 2277 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" May 9 04:48:57.066685 zram_generator::config[2596]: No configuration found. May 9 04:48:57.147576 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 9 04:48:57.269621 systemd[1]: Reloading finished in 276 ms. May 9 04:48:57.290521 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... May 9 04:48:57.311346 systemd[1]: kubelet.service: Deactivated successfully. May 9 04:48:57.311614 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 9 04:48:57.311691 systemd[1]: kubelet.service: Consumed 1.458s CPU time, 116.8M memory peak. May 9 04:48:57.313735 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 9 04:48:57.436775 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 9 04:48:57.440072 (kubelet)[2635]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 9 04:48:57.478324 kubelet[2635]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 9 04:48:57.478324 kubelet[2635]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. May 9 04:48:57.478324 kubelet[2635]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. 
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 9 04:48:57.480027 kubelet[2635]: I0509 04:48:57.478365 2635 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 9 04:48:57.486258 kubelet[2635]: I0509 04:48:57.486220 2635 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" May 9 04:48:57.486258 kubelet[2635]: I0509 04:48:57.486250 2635 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 9 04:48:57.486475 kubelet[2635]: I0509 04:48:57.486444 2635 server.go:929] "Client rotation is on, will bootstrap in background" May 9 04:48:57.487860 kubelet[2635]: I0509 04:48:57.487832 2635 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". May 9 04:48:57.489930 kubelet[2635]: I0509 04:48:57.489842 2635 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 9 04:48:57.494627 kubelet[2635]: I0509 04:48:57.494599 2635 server.go:1426] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" May 9 04:48:57.496786 kubelet[2635]: I0509 04:48:57.496768 2635 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" May 9 04:48:57.496875 kubelet[2635]: I0509 04:48:57.496864 2635 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" May 9 04:48:57.496976 kubelet[2635]: I0509 04:48:57.496950 2635 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 9 04:48:57.497144 kubelet[2635]: I0509 04:48:57.496973 2635 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOpti
ons":null,"CgroupVersion":2} May 9 04:48:57.497144 kubelet[2635]: I0509 04:48:57.497145 2635 topology_manager.go:138] "Creating topology manager with none policy" May 9 04:48:57.497243 kubelet[2635]: I0509 04:48:57.497154 2635 container_manager_linux.go:300] "Creating device plugin manager" May 9 04:48:57.497243 kubelet[2635]: I0509 04:48:57.497180 2635 state_mem.go:36] "Initialized new in-memory state store" May 9 04:48:57.497297 kubelet[2635]: I0509 04:48:57.497271 2635 kubelet.go:408] "Attempting to sync node with API server" May 9 04:48:57.497297 kubelet[2635]: I0509 04:48:57.497283 2635 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" May 9 04:48:57.497337 kubelet[2635]: I0509 04:48:57.497302 2635 kubelet.go:314] "Adding apiserver pod source" May 9 04:48:57.497337 kubelet[2635]: I0509 04:48:57.497310 2635 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 9 04:48:57.498104 kubelet[2635]: I0509 04:48:57.498007 2635 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" May 9 04:48:57.499216 kubelet[2635]: I0509 04:48:57.499146 2635 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 9 04:48:57.499501 kubelet[2635]: I0509 04:48:57.499473 2635 server.go:1269] "Started kubelet" May 9 04:48:57.501508 kubelet[2635]: I0509 04:48:57.499590 2635 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 May 9 04:48:57.501508 kubelet[2635]: I0509 04:48:57.499765 2635 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 9 04:48:57.501508 kubelet[2635]: I0509 04:48:57.499960 2635 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 9 04:48:57.501508 kubelet[2635]: I0509 04:48:57.500415 2635 server.go:460] "Adding debug handlers to kubelet server" May 9 04:48:57.501724 kubelet[2635]: 
I0509 04:48:57.501705 2635 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 9 04:48:57.501857 kubelet[2635]: I0509 04:48:57.501834 2635 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 9 04:48:57.502273 kubelet[2635]: I0509 04:48:57.502252 2635 volume_manager.go:289] "Starting Kubelet Volume Manager" May 9 04:48:57.502428 kubelet[2635]: I0509 04:48:57.502416 2635 desired_state_of_world_populator.go:146] "Desired state populator starts to run" May 9 04:48:57.502624 kubelet[2635]: I0509 04:48:57.502609 2635 reconciler.go:26] "Reconciler: start to sync state" May 9 04:48:57.503035 kubelet[2635]: E0509 04:48:57.502837 2635 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 9 04:48:57.504408 kubelet[2635]: I0509 04:48:57.504370 2635 factory.go:221] Registration of the systemd container factory successfully May 9 04:48:57.504473 kubelet[2635]: I0509 04:48:57.504453 2635 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 9 04:48:57.509673 kubelet[2635]: I0509 04:48:57.508958 2635 factory.go:221] Registration of the containerd container factory successfully May 9 04:48:57.531598 kubelet[2635]: I0509 04:48:57.531311 2635 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 9 04:48:57.533316 kubelet[2635]: I0509 04:48:57.533279 2635 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" May 9 04:48:57.533316 kubelet[2635]: I0509 04:48:57.533301 2635 status_manager.go:217] "Starting to sync pod status with apiserver" May 9 04:48:57.533316 kubelet[2635]: I0509 04:48:57.533319 2635 kubelet.go:2321] "Starting kubelet main sync loop" May 9 04:48:57.533435 kubelet[2635]: E0509 04:48:57.533371 2635 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 9 04:48:57.549850 kubelet[2635]: I0509 04:48:57.549813 2635 cpu_manager.go:214] "Starting CPU manager" policy="none" May 9 04:48:57.549850 kubelet[2635]: I0509 04:48:57.549829 2635 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" May 9 04:48:57.549850 kubelet[2635]: I0509 04:48:57.549846 2635 state_mem.go:36] "Initialized new in-memory state store" May 9 04:48:57.549992 kubelet[2635]: I0509 04:48:57.549969 2635 state_mem.go:88] "Updated default CPUSet" cpuSet="" May 9 04:48:57.550029 kubelet[2635]: I0509 04:48:57.549985 2635 state_mem.go:96] "Updated CPUSet assignments" assignments={} May 9 04:48:57.550029 kubelet[2635]: I0509 04:48:57.550002 2635 policy_none.go:49] "None policy: Start" May 9 04:48:57.550531 kubelet[2635]: I0509 04:48:57.550516 2635 memory_manager.go:170] "Starting memorymanager" policy="None" May 9 04:48:57.550568 kubelet[2635]: I0509 04:48:57.550540 2635 state_mem.go:35] "Initializing new in-memory state store" May 9 04:48:57.550713 kubelet[2635]: I0509 04:48:57.550699 2635 state_mem.go:75] "Updated machine memory state" May 9 04:48:57.554168 kubelet[2635]: I0509 04:48:57.554146 2635 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 9 04:48:57.554526 kubelet[2635]: I0509 04:48:57.554286 2635 eviction_manager.go:189] "Eviction manager: starting control loop" May 9 04:48:57.554526 kubelet[2635]: I0509 04:48:57.554303 2635 container_log_manager.go:189] "Initializing container 
log rotate workers" workers=1 monitorPeriod="10s" May 9 04:48:57.554526 kubelet[2635]: I0509 04:48:57.554481 2635 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 9 04:48:57.656038 kubelet[2635]: I0509 04:48:57.656012 2635 kubelet_node_status.go:72] "Attempting to register node" node="localhost" May 9 04:48:57.663013 kubelet[2635]: I0509 04:48:57.662899 2635 kubelet_node_status.go:111] "Node was previously registered" node="localhost" May 9 04:48:57.663013 kubelet[2635]: I0509 04:48:57.662969 2635 kubelet_node_status.go:75] "Successfully registered node" node="localhost" May 9 04:48:57.804384 kubelet[2635]: I0509 04:48:57.804052 2635 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/564998686442d2d5f48cf900c4c49349-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"564998686442d2d5f48cf900c4c49349\") " pod="kube-system/kube-apiserver-localhost" May 9 04:48:57.804384 kubelet[2635]: I0509 04:48:57.804109 2635 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/564998686442d2d5f48cf900c4c49349-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"564998686442d2d5f48cf900c4c49349\") " pod="kube-system/kube-apiserver-localhost" May 9 04:48:57.804384 kubelet[2635]: I0509 04:48:57.804138 2635 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/564998686442d2d5f48cf900c4c49349-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"564998686442d2d5f48cf900c4c49349\") " pod="kube-system/kube-apiserver-localhost" May 9 04:48:57.804384 kubelet[2635]: I0509 04:48:57.804158 2635 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: 
\"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost" May 9 04:48:57.804384 kubelet[2635]: I0509 04:48:57.804181 2635 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost" May 9 04:48:57.804562 kubelet[2635]: I0509 04:48:57.804201 2635 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0613557c150e4f35d1f3f822b5f32ff1-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"0613557c150e4f35d1f3f822b5f32ff1\") " pod="kube-system/kube-scheduler-localhost" May 9 04:48:57.804562 kubelet[2635]: I0509 04:48:57.804231 2635 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost" May 9 04:48:57.804562 kubelet[2635]: I0509 04:48:57.804246 2635 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost" May 9 04:48:57.804562 kubelet[2635]: I0509 04:48:57.804286 2635 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" 
(UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost" May 9 04:48:57.996950 sudo[2668]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin May 9 04:48:57.997221 sudo[2668]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) May 9 04:48:58.419542 sudo[2668]: pam_unix(sudo:session): session closed for user root May 9 04:48:58.498589 kubelet[2635]: I0509 04:48:58.498366 2635 apiserver.go:52] "Watching apiserver" May 9 04:48:58.502826 kubelet[2635]: I0509 04:48:58.502805 2635 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" May 9 04:48:58.581648 kubelet[2635]: I0509 04:48:58.581538 2635 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.581521994 podStartE2EDuration="1.581521994s" podCreationTimestamp="2025-05-09 04:48:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-09 04:48:58.580127574 +0000 UTC m=+1.137209192" watchObservedRunningTime="2025-05-09 04:48:58.581521994 +0000 UTC m=+1.138603612" May 9 04:48:58.594301 kubelet[2635]: I0509 04:48:58.594255 2635 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.594242288 podStartE2EDuration="1.594242288s" podCreationTimestamp="2025-05-09 04:48:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-09 04:48:58.587936184 +0000 UTC m=+1.145017762" watchObservedRunningTime="2025-05-09 04:48:58.594242288 +0000 UTC m=+1.151323906" May 9 04:49:00.061068 sudo[1722]: 
pam_unix(sudo:session): session closed for user root May 9 04:49:00.063089 sshd[1721]: Connection closed by 10.0.0.1 port 50118 May 9 04:49:00.063647 sshd-session[1718]: pam_unix(sshd:session): session closed for user core May 9 04:49:00.067100 systemd[1]: sshd@6-10.0.0.27:22-10.0.0.1:50118.service: Deactivated successfully. May 9 04:49:00.068950 systemd[1]: session-7.scope: Deactivated successfully. May 9 04:49:00.069122 systemd[1]: session-7.scope: Consumed 6.725s CPU time, 267.1M memory peak. May 9 04:49:00.070062 systemd-logind[1503]: Session 7 logged out. Waiting for processes to exit. May 9 04:49:00.071289 systemd-logind[1503]: Removed session 7. May 9 04:49:01.451369 kubelet[2635]: I0509 04:49:01.451332 2635 kuberuntime_manager.go:1633] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" May 9 04:49:01.451797 containerd[1513]: time="2025-05-09T04:49:01.451697885Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." May 9 04:49:01.452576 kubelet[2635]: I0509 04:49:01.452095 2635 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" May 9 04:49:02.352742 kubelet[2635]: I0509 04:49:02.352682 2635 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=5.35264445 podStartE2EDuration="5.35264445s" podCreationTimestamp="2025-05-09 04:48:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-09 04:48:58.594633691 +0000 UTC m=+1.151715309" watchObservedRunningTime="2025-05-09 04:49:02.35264445 +0000 UTC m=+4.909726068" May 9 04:49:02.371296 systemd[1]: Created slice kubepods-burstable-pod762745fc_5d54_4e21_9564_4da41c1a05c2.slice - libcontainer container kubepods-burstable-pod762745fc_5d54_4e21_9564_4da41c1a05c2.slice. 
May 9 04:49:02.377402 systemd[1]: Created slice kubepods-besteffort-pod39a04c68_443a_4b16_bffe_cd92cef950fc.slice - libcontainer container kubepods-besteffort-pod39a04c68_443a_4b16_bffe_cd92cef950fc.slice. May 9 04:49:02.435320 kubelet[2635]: I0509 04:49:02.435260 2635 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/762745fc-5d54-4e21-9564-4da41c1a05c2-cilium-cgroup\") pod \"cilium-j6bj4\" (UID: \"762745fc-5d54-4e21-9564-4da41c1a05c2\") " pod="kube-system/cilium-j6bj4" May 9 04:49:02.435320 kubelet[2635]: I0509 04:49:02.435304 2635 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/762745fc-5d54-4e21-9564-4da41c1a05c2-cni-path\") pod \"cilium-j6bj4\" (UID: \"762745fc-5d54-4e21-9564-4da41c1a05c2\") " pod="kube-system/cilium-j6bj4" May 9 04:49:02.435320 kubelet[2635]: I0509 04:49:02.435324 2635 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/762745fc-5d54-4e21-9564-4da41c1a05c2-host-proc-sys-net\") pod \"cilium-j6bj4\" (UID: \"762745fc-5d54-4e21-9564-4da41c1a05c2\") " pod="kube-system/cilium-j6bj4" May 9 04:49:02.435507 kubelet[2635]: I0509 04:49:02.435340 2635 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/762745fc-5d54-4e21-9564-4da41c1a05c2-hubble-tls\") pod \"cilium-j6bj4\" (UID: \"762745fc-5d54-4e21-9564-4da41c1a05c2\") " pod="kube-system/cilium-j6bj4" May 9 04:49:02.435507 kubelet[2635]: I0509 04:49:02.435355 2635 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/762745fc-5d54-4e21-9564-4da41c1a05c2-clustermesh-secrets\") pod \"cilium-j6bj4\" (UID: 
\"762745fc-5d54-4e21-9564-4da41c1a05c2\") " pod="kube-system/cilium-j6bj4" May 9 04:49:02.435507 kubelet[2635]: I0509 04:49:02.435369 2635 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/39a04c68-443a-4b16-bffe-cd92cef950fc-lib-modules\") pod \"kube-proxy-tt6ml\" (UID: \"39a04c68-443a-4b16-bffe-cd92cef950fc\") " pod="kube-system/kube-proxy-tt6ml" May 9 04:49:02.435507 kubelet[2635]: I0509 04:49:02.435384 2635 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/762745fc-5d54-4e21-9564-4da41c1a05c2-lib-modules\") pod \"cilium-j6bj4\" (UID: \"762745fc-5d54-4e21-9564-4da41c1a05c2\") " pod="kube-system/cilium-j6bj4" May 9 04:49:02.435507 kubelet[2635]: I0509 04:49:02.435398 2635 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/39a04c68-443a-4b16-bffe-cd92cef950fc-kube-proxy\") pod \"kube-proxy-tt6ml\" (UID: \"39a04c68-443a-4b16-bffe-cd92cef950fc\") " pod="kube-system/kube-proxy-tt6ml" May 9 04:49:02.435507 kubelet[2635]: I0509 04:49:02.435414 2635 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/39a04c68-443a-4b16-bffe-cd92cef950fc-xtables-lock\") pod \"kube-proxy-tt6ml\" (UID: \"39a04c68-443a-4b16-bffe-cd92cef950fc\") " pod="kube-system/kube-proxy-tt6ml" May 9 04:49:02.435632 kubelet[2635]: I0509 04:49:02.435430 2635 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/762745fc-5d54-4e21-9564-4da41c1a05c2-xtables-lock\") pod \"cilium-j6bj4\" (UID: \"762745fc-5d54-4e21-9564-4da41c1a05c2\") " pod="kube-system/cilium-j6bj4" May 9 04:49:02.435632 kubelet[2635]: I0509 04:49:02.435444 
2635 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/762745fc-5d54-4e21-9564-4da41c1a05c2-cilium-config-path\") pod \"cilium-j6bj4\" (UID: \"762745fc-5d54-4e21-9564-4da41c1a05c2\") " pod="kube-system/cilium-j6bj4" May 9 04:49:02.435632 kubelet[2635]: I0509 04:49:02.435459 2635 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/762745fc-5d54-4e21-9564-4da41c1a05c2-host-proc-sys-kernel\") pod \"cilium-j6bj4\" (UID: \"762745fc-5d54-4e21-9564-4da41c1a05c2\") " pod="kube-system/cilium-j6bj4" May 9 04:49:02.435632 kubelet[2635]: I0509 04:49:02.435474 2635 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2fgpn\" (UniqueName: \"kubernetes.io/projected/762745fc-5d54-4e21-9564-4da41c1a05c2-kube-api-access-2fgpn\") pod \"cilium-j6bj4\" (UID: \"762745fc-5d54-4e21-9564-4da41c1a05c2\") " pod="kube-system/cilium-j6bj4" May 9 04:49:02.435632 kubelet[2635]: I0509 04:49:02.435496 2635 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nkjmn\" (UniqueName: \"kubernetes.io/projected/39a04c68-443a-4b16-bffe-cd92cef950fc-kube-api-access-nkjmn\") pod \"kube-proxy-tt6ml\" (UID: \"39a04c68-443a-4b16-bffe-cd92cef950fc\") " pod="kube-system/kube-proxy-tt6ml" May 9 04:49:02.435760 kubelet[2635]: I0509 04:49:02.435619 2635 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/762745fc-5d54-4e21-9564-4da41c1a05c2-cilium-run\") pod \"cilium-j6bj4\" (UID: \"762745fc-5d54-4e21-9564-4da41c1a05c2\") " pod="kube-system/cilium-j6bj4" May 9 04:49:02.435760 kubelet[2635]: I0509 04:49:02.435671 2635 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/762745fc-5d54-4e21-9564-4da41c1a05c2-etc-cni-netd\") pod \"cilium-j6bj4\" (UID: \"762745fc-5d54-4e21-9564-4da41c1a05c2\") " pod="kube-system/cilium-j6bj4" May 9 04:49:02.435760 kubelet[2635]: I0509 04:49:02.435722 2635 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/762745fc-5d54-4e21-9564-4da41c1a05c2-bpf-maps\") pod \"cilium-j6bj4\" (UID: \"762745fc-5d54-4e21-9564-4da41c1a05c2\") " pod="kube-system/cilium-j6bj4" May 9 04:49:02.435760 kubelet[2635]: I0509 04:49:02.435750 2635 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/762745fc-5d54-4e21-9564-4da41c1a05c2-hostproc\") pod \"cilium-j6bj4\" (UID: \"762745fc-5d54-4e21-9564-4da41c1a05c2\") " pod="kube-system/cilium-j6bj4" May 9 04:49:02.618346 systemd[1]: Created slice kubepods-besteffort-pode7d1c837_ad9a_430e_bb45_3fc7237ab9c4.slice - libcontainer container kubepods-besteffort-pode7d1c837_ad9a_430e_bb45_3fc7237ab9c4.slice. 
May 9 04:49:02.637344 kubelet[2635]: I0509 04:49:02.637298 2635 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e7d1c837-ad9a-430e-bb45-3fc7237ab9c4-cilium-config-path\") pod \"cilium-operator-5d85765b45-r7n6t\" (UID: \"e7d1c837-ad9a-430e-bb45-3fc7237ab9c4\") " pod="kube-system/cilium-operator-5d85765b45-r7n6t" May 9 04:49:02.637344 kubelet[2635]: I0509 04:49:02.637345 2635 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zk7r9\" (UniqueName: \"kubernetes.io/projected/e7d1c837-ad9a-430e-bb45-3fc7237ab9c4-kube-api-access-zk7r9\") pod \"cilium-operator-5d85765b45-r7n6t\" (UID: \"e7d1c837-ad9a-430e-bb45-3fc7237ab9c4\") " pod="kube-system/cilium-operator-5d85765b45-r7n6t" May 9 04:49:02.675146 containerd[1513]: time="2025-05-09T04:49:02.675108219Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-j6bj4,Uid:762745fc-5d54-4e21-9564-4da41c1a05c2,Namespace:kube-system,Attempt:0,}" May 9 04:49:02.695143 containerd[1513]: time="2025-05-09T04:49:02.695091265Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-tt6ml,Uid:39a04c68-443a-4b16-bffe-cd92cef950fc,Namespace:kube-system,Attempt:0,}" May 9 04:49:02.737829 containerd[1513]: time="2025-05-09T04:49:02.737439148Z" level=info msg="connecting to shim 2726dfb1d209d6fa76ea656e34390d275a16b2964671b2f5ef07ecf54406baa7" address="unix:///run/containerd/s/6b74738bb8417b40e58a7e8c7972eaf7f2056cd9b250b8b197d8add63a100bb2" namespace=k8s.io protocol=ttrpc version=3 May 9 04:49:02.742035 containerd[1513]: time="2025-05-09T04:49:02.741993098Z" level=info msg="connecting to shim cd266fde76e2a929a55aca9446368d31b37499227282ada169f2c80a8104616e" address="unix:///run/containerd/s/27cb248a5b87165916d35d9f529d494696b5c421053c2c9efd11c6ec3410589a" namespace=k8s.io protocol=ttrpc version=3 May 9 04:49:02.762935 systemd[1]: Started 
cri-containerd-cd266fde76e2a929a55aca9446368d31b37499227282ada169f2c80a8104616e.scope - libcontainer container cd266fde76e2a929a55aca9446368d31b37499227282ada169f2c80a8104616e. May 9 04:49:02.767055 systemd[1]: Started cri-containerd-2726dfb1d209d6fa76ea656e34390d275a16b2964671b2f5ef07ecf54406baa7.scope - libcontainer container 2726dfb1d209d6fa76ea656e34390d275a16b2964671b2f5ef07ecf54406baa7. May 9 04:49:02.794791 containerd[1513]: time="2025-05-09T04:49:02.794754839Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-tt6ml,Uid:39a04c68-443a-4b16-bffe-cd92cef950fc,Namespace:kube-system,Attempt:0,} returns sandbox id \"2726dfb1d209d6fa76ea656e34390d275a16b2964671b2f5ef07ecf54406baa7\"" May 9 04:49:02.797500 containerd[1513]: time="2025-05-09T04:49:02.797467874Z" level=info msg="CreateContainer within sandbox \"2726dfb1d209d6fa76ea656e34390d275a16b2964671b2f5ef07ecf54406baa7\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" May 9 04:49:02.798056 containerd[1513]: time="2025-05-09T04:49:02.798022037Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-j6bj4,Uid:762745fc-5d54-4e21-9564-4da41c1a05c2,Namespace:kube-system,Attempt:0,} returns sandbox id \"cd266fde76e2a929a55aca9446368d31b37499227282ada169f2c80a8104616e\"" May 9 04:49:02.799734 containerd[1513]: time="2025-05-09T04:49:02.799703814Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" May 9 04:49:02.808907 containerd[1513]: time="2025-05-09T04:49:02.808866492Z" level=info msg="Container dcd09883bca8afb2c9bcb2ffd275ba94ac765350fa1859d4cd89decec3d899cb: CDI devices from CRI Config.CDIDevices: []" May 9 04:49:02.815167 containerd[1513]: time="2025-05-09T04:49:02.815123306Z" level=info msg="CreateContainer within sandbox \"2726dfb1d209d6fa76ea656e34390d275a16b2964671b2f5ef07ecf54406baa7\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id 
\"dcd09883bca8afb2c9bcb2ffd275ba94ac765350fa1859d4cd89decec3d899cb\"" May 9 04:49:02.815881 containerd[1513]: time="2025-05-09T04:49:02.815805236Z" level=info msg="StartContainer for \"dcd09883bca8afb2c9bcb2ffd275ba94ac765350fa1859d4cd89decec3d899cb\"" May 9 04:49:02.817572 containerd[1513]: time="2025-05-09T04:49:02.817497336Z" level=info msg="connecting to shim dcd09883bca8afb2c9bcb2ffd275ba94ac765350fa1859d4cd89decec3d899cb" address="unix:///run/containerd/s/6b74738bb8417b40e58a7e8c7972eaf7f2056cd9b250b8b197d8add63a100bb2" protocol=ttrpc version=3 May 9 04:49:02.839816 systemd[1]: Started cri-containerd-dcd09883bca8afb2c9bcb2ffd275ba94ac765350fa1859d4cd89decec3d899cb.scope - libcontainer container dcd09883bca8afb2c9bcb2ffd275ba94ac765350fa1859d4cd89decec3d899cb. May 9 04:49:02.875558 containerd[1513]: time="2025-05-09T04:49:02.875356186Z" level=info msg="StartContainer for \"dcd09883bca8afb2c9bcb2ffd275ba94ac765350fa1859d4cd89decec3d899cb\" returns successfully" May 9 04:49:02.926530 containerd[1513]: time="2025-05-09T04:49:02.926489971Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-r7n6t,Uid:e7d1c837-ad9a-430e-bb45-3fc7237ab9c4,Namespace:kube-system,Attempt:0,}" May 9 04:49:02.946245 containerd[1513]: time="2025-05-09T04:49:02.945819337Z" level=info msg="connecting to shim 2b3abca3ab5be6607c0e82ef2c51e3ddd3d0c0213d2a444fee265df7de143d67" address="unix:///run/containerd/s/41844e8ea9ab6369bbb0a87e7d88943ae25301272c60ba1b18e261d25655c90d" namespace=k8s.io protocol=ttrpc version=3 May 9 04:49:02.982857 systemd[1]: Started cri-containerd-2b3abca3ab5be6607c0e82ef2c51e3ddd3d0c0213d2a444fee265df7de143d67.scope - libcontainer container 2b3abca3ab5be6607c0e82ef2c51e3ddd3d0c0213d2a444fee265df7de143d67. 
May 9 04:49:03.014072 containerd[1513]: time="2025-05-09T04:49:03.013945758Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-r7n6t,Uid:e7d1c837-ad9a-430e-bb45-3fc7237ab9c4,Namespace:kube-system,Attempt:0,} returns sandbox id \"2b3abca3ab5be6607c0e82ef2c51e3ddd3d0c0213d2a444fee265df7de143d67\"" May 9 04:49:04.142169 kubelet[2635]: I0509 04:49:04.142096 2635 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-tt6ml" podStartSLOduration=2.142073312 podStartE2EDuration="2.142073312s" podCreationTimestamp="2025-05-09 04:49:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-09 04:49:03.606363504 +0000 UTC m=+6.163445122" watchObservedRunningTime="2025-05-09 04:49:04.142073312 +0000 UTC m=+6.699154890" May 9 04:49:07.672868 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3104561429.mount: Deactivated successfully. May 9 04:49:10.533503 update_engine[1504]: I20250509 04:49:10.533433 1504 update_attempter.cc:509] Updating boot flags... 
May 9 04:49:10.557695 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (3043) May 9 04:49:10.605738 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (3047) May 9 04:49:12.525210 containerd[1513]: time="2025-05-09T04:49:12.525164582Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 04:49:12.526182 containerd[1513]: time="2025-05-09T04:49:12.525997244Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157646710" May 9 04:49:12.527133 containerd[1513]: time="2025-05-09T04:49:12.526856033Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 04:49:12.528315 containerd[1513]: time="2025-05-09T04:49:12.528278933Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 9.728533946s" May 9 04:49:12.528315 containerd[1513]: time="2025-05-09T04:49:12.528316463Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" May 9 04:49:12.542893 containerd[1513]: time="2025-05-09T04:49:12.542857584Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" May 9 04:49:12.543815 containerd[1513]: time="2025-05-09T04:49:12.543785271Z" level=info msg="CreateContainer within sandbox \"cd266fde76e2a929a55aca9446368d31b37499227282ada169f2c80a8104616e\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" May 9 04:49:12.553036 containerd[1513]: time="2025-05-09T04:49:12.552388487Z" level=info msg="Container 98914f20149a323bba469a193df5df04503966951d78099b4517899739ea98b0: CDI devices from CRI Config.CDIDevices: []" May 9 04:49:12.560600 containerd[1513]: time="2025-05-09T04:49:12.560569151Z" level=info msg="CreateContainer within sandbox \"cd266fde76e2a929a55aca9446368d31b37499227282ada169f2c80a8104616e\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"98914f20149a323bba469a193df5df04503966951d78099b4517899739ea98b0\"" May 9 04:49:12.561110 containerd[1513]: time="2025-05-09T04:49:12.561072165Z" level=info msg="StartContainer for \"98914f20149a323bba469a193df5df04503966951d78099b4517899739ea98b0\"" May 9 04:49:12.562459 containerd[1513]: time="2025-05-09T04:49:12.562431848Z" level=info msg="connecting to shim 98914f20149a323bba469a193df5df04503966951d78099b4517899739ea98b0" address="unix:///run/containerd/s/27cb248a5b87165916d35d9f529d494696b5c421053c2c9efd11c6ec3410589a" protocol=ttrpc version=3 May 9 04:49:12.611842 systemd[1]: Started cri-containerd-98914f20149a323bba469a193df5df04503966951d78099b4517899739ea98b0.scope - libcontainer container 98914f20149a323bba469a193df5df04503966951d78099b4517899739ea98b0. May 9 04:49:12.644622 containerd[1513]: time="2025-05-09T04:49:12.644583613Z" level=info msg="StartContainer for \"98914f20149a323bba469a193df5df04503966951d78099b4517899739ea98b0\" returns successfully" May 9 04:49:12.691441 systemd[1]: cri-containerd-98914f20149a323bba469a193df5df04503966951d78099b4517899739ea98b0.scope: Deactivated successfully.
May 9 04:49:12.723981 containerd[1513]: time="2025-05-09T04:49:12.723945074Z" level=info msg="TaskExit event in podsandbox handler container_id:\"98914f20149a323bba469a193df5df04503966951d78099b4517899739ea98b0\" id:\"98914f20149a323bba469a193df5df04503966951d78099b4517899739ea98b0\" pid:3070 exited_at:{seconds:1746766152 nanos:712426720}" May 9 04:49:12.724718 containerd[1513]: time="2025-05-09T04:49:12.724680870Z" level=info msg="received exit event container_id:\"98914f20149a323bba469a193df5df04503966951d78099b4517899739ea98b0\" id:\"98914f20149a323bba469a193df5df04503966951d78099b4517899739ea98b0\" pid:3070 exited_at:{seconds:1746766152 nanos:712426720}" May 9 04:49:13.552160 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-98914f20149a323bba469a193df5df04503966951d78099b4517899739ea98b0-rootfs.mount: Deactivated successfully. May 9 04:49:13.603680 containerd[1513]: time="2025-05-09T04:49:13.599424185Z" level=info msg="CreateContainer within sandbox \"cd266fde76e2a929a55aca9446368d31b37499227282ada169f2c80a8104616e\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" May 9 04:49:13.620973 containerd[1513]: time="2025-05-09T04:49:13.620930026Z" level=info msg="Container 56d6b4a7f08c361f2d65a7cf595ff6cf922e44758f9f8f458313c2f1c24abfc7: CDI devices from CRI Config.CDIDevices: []" May 9 04:49:13.648810 containerd[1513]: time="2025-05-09T04:49:13.648747178Z" level=info msg="CreateContainer within sandbox \"cd266fde76e2a929a55aca9446368d31b37499227282ada169f2c80a8104616e\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"56d6b4a7f08c361f2d65a7cf595ff6cf922e44758f9f8f458313c2f1c24abfc7\"" May 9 04:49:13.651337 containerd[1513]: time="2025-05-09T04:49:13.651169804Z" level=info msg="StartContainer for \"56d6b4a7f08c361f2d65a7cf595ff6cf922e44758f9f8f458313c2f1c24abfc7\"" May 9 04:49:13.652305 containerd[1513]: time="2025-05-09T04:49:13.652278931Z" level=info msg="connecting to shim 56d6b4a7f08c361f2d65a7cf595ff6cf922e44758f9f8f458313c2f1c24abfc7" address="unix:///run/containerd/s/27cb248a5b87165916d35d9f529d494696b5c421053c2c9efd11c6ec3410589a" protocol=ttrpc version=3 May 9 04:49:13.669835 systemd[1]: Started cri-containerd-56d6b4a7f08c361f2d65a7cf595ff6cf922e44758f9f8f458313c2f1c24abfc7.scope - libcontainer container 56d6b4a7f08c361f2d65a7cf595ff6cf922e44758f9f8f458313c2f1c24abfc7. May 9 04:49:13.695883 containerd[1513]: time="2025-05-09T04:49:13.695826150Z" level=info msg="StartContainer for \"56d6b4a7f08c361f2d65a7cf595ff6cf922e44758f9f8f458313c2f1c24abfc7\" returns successfully" May 9 04:49:13.706127 systemd[1]: systemd-sysctl.service: Deactivated successfully. May 9 04:49:13.706367 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. May 9 04:49:13.706632 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... May 9 04:49:13.708036 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... May 9 04:49:13.709650 systemd[1]: cri-containerd-56d6b4a7f08c361f2d65a7cf595ff6cf922e44758f9f8f458313c2f1c24abfc7.scope: Deactivated successfully. May 9 04:49:13.713074 containerd[1513]: time="2025-05-09T04:49:13.713032879Z" level=info msg="TaskExit event in podsandbox handler container_id:\"56d6b4a7f08c361f2d65a7cf595ff6cf922e44758f9f8f458313c2f1c24abfc7\" id:\"56d6b4a7f08c361f2d65a7cf595ff6cf922e44758f9f8f458313c2f1c24abfc7\" pid:3115 exited_at:{seconds:1746766153 nanos:712014255}" May 9 04:49:13.719941 containerd[1513]: time="2025-05-09T04:49:13.718805611Z" level=info msg="received exit event container_id:\"56d6b4a7f08c361f2d65a7cf595ff6cf922e44758f9f8f458313c2f1c24abfc7\" id:\"56d6b4a7f08c361f2d65a7cf595ff6cf922e44758f9f8f458313c2f1c24abfc7\" pid:3115 exited_at:{seconds:1746766153 nanos:712014255}" May 9 04:49:13.732006 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
May 9 04:49:14.320363 containerd[1513]: time="2025-05-09T04:49:14.320320516Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 04:49:14.321165 containerd[1513]: time="2025-05-09T04:49:14.320790274Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17135306" May 9 04:49:14.321787 containerd[1513]: time="2025-05-09T04:49:14.321759917Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 04:49:14.322947 containerd[1513]: time="2025-05-09T04:49:14.322921568Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 1.780029975s" May 9 04:49:14.323150 containerd[1513]: time="2025-05-09T04:49:14.323044039Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" May 9 04:49:14.325330 containerd[1513]: time="2025-05-09T04:49:14.324924750Z" level=info msg="CreateContainer within sandbox \"2b3abca3ab5be6607c0e82ef2c51e3ddd3d0c0213d2a444fee265df7de143d67\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" May 9 04:49:14.330932 containerd[1513]: time="2025-05-09T04:49:14.330896085Z" level=info msg="Container 8622a0f59abd90a974ec5db64beb0ed5ead8f43833bdd38ab8d55d1c97b66e8b: CDI devices from CRI Config.CDIDevices: []" May 9 04:49:14.336498 containerd[1513]: time="2025-05-09T04:49:14.336443315Z" level=info msg="CreateContainer within sandbox \"2b3abca3ab5be6607c0e82ef2c51e3ddd3d0c0213d2a444fee265df7de143d67\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"8622a0f59abd90a974ec5db64beb0ed5ead8f43833bdd38ab8d55d1c97b66e8b\"" May 9 04:49:14.336966 containerd[1513]: time="2025-05-09T04:49:14.336896188Z" level=info msg="StartContainer for \"8622a0f59abd90a974ec5db64beb0ed5ead8f43833bdd38ab8d55d1c97b66e8b\"" May 9 04:49:14.337646 containerd[1513]: time="2025-05-09T04:49:14.337619769Z" level=info msg="connecting to shim 8622a0f59abd90a974ec5db64beb0ed5ead8f43833bdd38ab8d55d1c97b66e8b" address="unix:///run/containerd/s/41844e8ea9ab6369bbb0a87e7d88943ae25301272c60ba1b18e261d25655c90d" protocol=ttrpc version=3 May 9 04:49:14.355849 systemd[1]: Started cri-containerd-8622a0f59abd90a974ec5db64beb0ed5ead8f43833bdd38ab8d55d1c97b66e8b.scope - libcontainer container 8622a0f59abd90a974ec5db64beb0ed5ead8f43833bdd38ab8d55d1c97b66e8b.
May 9 04:49:14.381783 containerd[1513]: time="2025-05-09T04:49:14.381682726Z" level=info msg="StartContainer for \"8622a0f59abd90a974ec5db64beb0ed5ead8f43833bdd38ab8d55d1c97b66e8b\" returns successfully" May 9 04:49:14.596233 containerd[1513]: time="2025-05-09T04:49:14.596122917Z" level=info msg="CreateContainer within sandbox \"cd266fde76e2a929a55aca9446368d31b37499227282ada169f2c80a8104616e\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" May 9 04:49:14.606789 containerd[1513]: time="2025-05-09T04:49:14.606353560Z" level=info msg="Container 9491de7f26c5bcc3200552b308291c34c1dc942b80ddba419d3fc210b4cefc7a: CDI devices from CRI Config.CDIDevices: []" May 9 04:49:14.615586 containerd[1513]: time="2025-05-09T04:49:14.615379621Z" level=info msg="CreateContainer within sandbox \"cd266fde76e2a929a55aca9446368d31b37499227282ada169f2c80a8104616e\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"9491de7f26c5bcc3200552b308291c34c1dc942b80ddba419d3fc210b4cefc7a\"" May 9 04:49:14.615955 containerd[1513]: time="2025-05-09T04:49:14.615927598Z" level=info msg="StartContainer for \"9491de7f26c5bcc3200552b308291c34c1dc942b80ddba419d3fc210b4cefc7a\"" May 9 04:49:14.617853 kubelet[2635]: I0509 04:49:14.617749 2635 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-5d85765b45-r7n6t" podStartSLOduration=1.309346618 podStartE2EDuration="12.617688959s" podCreationTimestamp="2025-05-09 04:49:02 +0000 UTC" firstStartedPulling="2025-05-09 04:49:03.015380148 +0000 UTC m=+5.572461766" lastFinishedPulling="2025-05-09 04:49:14.323722489 +0000 UTC m=+16.880804107" observedRunningTime="2025-05-09 04:49:14.617407369 +0000 UTC m=+17.174488987" watchObservedRunningTime="2025-05-09 04:49:14.617688959 +0000 UTC m=+17.174770577" May 9 04:49:14.619163 containerd[1513]: time="2025-05-09T04:49:14.619123839Z" level=info msg="connecting to shim 9491de7f26c5bcc3200552b308291c34c1dc942b80ddba419d3fc210b4cefc7a" address="unix:///run/containerd/s/27cb248a5b87165916d35d9f529d494696b5c421053c2c9efd11c6ec3410589a" protocol=ttrpc version=3 May 9 04:49:14.648869 systemd[1]: Started cri-containerd-9491de7f26c5bcc3200552b308291c34c1dc942b80ddba419d3fc210b4cefc7a.scope - libcontainer container 9491de7f26c5bcc3200552b308291c34c1dc942b80ddba419d3fc210b4cefc7a. May 9 04:49:14.700803 systemd[1]: cri-containerd-9491de7f26c5bcc3200552b308291c34c1dc942b80ddba419d3fc210b4cefc7a.scope: Deactivated successfully. May 9 04:49:14.717447 containerd[1513]: time="2025-05-09T04:49:14.717410817Z" level=info msg="received exit event container_id:\"9491de7f26c5bcc3200552b308291c34c1dc942b80ddba419d3fc210b4cefc7a\" id:\"9491de7f26c5bcc3200552b308291c34c1dc942b80ddba419d3fc210b4cefc7a\" pid:3214 exited_at:{seconds:1746766154 nanos:717207486}" May 9 04:49:14.718089 containerd[1513]: time="2025-05-09T04:49:14.718038694Z" level=info msg="TaskExit event in podsandbox handler container_id:\"9491de7f26c5bcc3200552b308291c34c1dc942b80ddba419d3fc210b4cefc7a\" id:\"9491de7f26c5bcc3200552b308291c34c1dc942b80ddba419d3fc210b4cefc7a\" pid:3214 exited_at:{seconds:1746766154 nanos:717207486}" May 9 04:49:14.718387 containerd[1513]: time="2025-05-09T04:49:14.718355253Z" level=info msg="StartContainer for \"9491de7f26c5bcc3200552b308291c34c1dc942b80ddba419d3fc210b4cefc7a\" returns successfully" May 9 04:49:14.738982 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9491de7f26c5bcc3200552b308291c34c1dc942b80ddba419d3fc210b4cefc7a-rootfs.mount: Deactivated successfully.
May 9 04:49:15.599873 containerd[1513]: time="2025-05-09T04:49:15.599817027Z" level=info msg="CreateContainer within sandbox \"cd266fde76e2a929a55aca9446368d31b37499227282ada169f2c80a8104616e\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" May 9 04:49:15.621388 containerd[1513]: time="2025-05-09T04:49:15.621335649Z" level=info msg="Container b9a004e69bb0adbe16cb5e09f1e41cf0ba2c9acca3d167361932664b82ae7bba: CDI devices from CRI Config.CDIDevices: []" May 9 04:49:15.623591 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3061209931.mount: Deactivated successfully. May 9 04:49:15.628217 containerd[1513]: time="2025-05-09T04:49:15.628173388Z" level=info msg="CreateContainer within sandbox \"cd266fde76e2a929a55aca9446368d31b37499227282ada169f2c80a8104616e\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"b9a004e69bb0adbe16cb5e09f1e41cf0ba2c9acca3d167361932664b82ae7bba\"" May 9 04:49:15.628727 containerd[1513]: time="2025-05-09T04:49:15.628582407Z" level=info msg="StartContainer for \"b9a004e69bb0adbe16cb5e09f1e41cf0ba2c9acca3d167361932664b82ae7bba\"" May 9 04:49:15.630889 containerd[1513]: time="2025-05-09T04:49:15.630858559Z" level=info msg="connecting to shim b9a004e69bb0adbe16cb5e09f1e41cf0ba2c9acca3d167361932664b82ae7bba" address="unix:///run/containerd/s/27cb248a5b87165916d35d9f529d494696b5c421053c2c9efd11c6ec3410589a" protocol=ttrpc version=3 May 9 04:49:15.648802 systemd[1]: Started cri-containerd-b9a004e69bb0adbe16cb5e09f1e41cf0ba2c9acca3d167361932664b82ae7bba.scope - libcontainer container b9a004e69bb0adbe16cb5e09f1e41cf0ba2c9acca3d167361932664b82ae7bba. 
May 9 04:49:15.668741 containerd[1513]: time="2025-05-09T04:49:15.668707623Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b9a004e69bb0adbe16cb5e09f1e41cf0ba2c9acca3d167361932664b82ae7bba\" id:\"b9a004e69bb0adbe16cb5e09f1e41cf0ba2c9acca3d167361932664b82ae7bba\" pid:3253 exited_at:{seconds:1746766155 nanos:668495292}" May 9 04:49:15.668727 systemd[1]: cri-containerd-b9a004e69bb0adbe16cb5e09f1e41cf0ba2c9acca3d167361932664b82ae7bba.scope: Deactivated successfully. May 9 04:49:15.671543 containerd[1513]: time="2025-05-09T04:49:15.671493179Z" level=info msg="received exit event container_id:\"b9a004e69bb0adbe16cb5e09f1e41cf0ba2c9acca3d167361932664b82ae7bba\" id:\"b9a004e69bb0adbe16cb5e09f1e41cf0ba2c9acca3d167361932664b82ae7bba\" pid:3253 exited_at:{seconds:1746766155 nanos:668495292}" May 9 04:49:15.677406 containerd[1513]: time="2025-05-09T04:49:15.677376247Z" level=info msg="StartContainer for \"b9a004e69bb0adbe16cb5e09f1e41cf0ba2c9acca3d167361932664b82ae7bba\" returns successfully" May 9 04:49:15.688831 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b9a004e69bb0adbe16cb5e09f1e41cf0ba2c9acca3d167361932664b82ae7bba-rootfs.mount: Deactivated successfully. 
May 9 04:49:16.632683 containerd[1513]: time="2025-05-09T04:49:16.629792941Z" level=info msg="CreateContainer within sandbox \"cd266fde76e2a929a55aca9446368d31b37499227282ada169f2c80a8104616e\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" May 9 04:49:16.652005 containerd[1513]: time="2025-05-09T04:49:16.650059985Z" level=info msg="Container cae12d7fd6346e82d38de66bc53c45c957f31722226ea2cd3cb42ff461aab9f9: CDI devices from CRI Config.CDIDevices: []" May 9 04:49:16.663221 containerd[1513]: time="2025-05-09T04:49:16.663182350Z" level=info msg="CreateContainer within sandbox \"cd266fde76e2a929a55aca9446368d31b37499227282ada169f2c80a8104616e\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"cae12d7fd6346e82d38de66bc53c45c957f31722226ea2cd3cb42ff461aab9f9\"" May 9 04:49:16.663981 containerd[1513]: time="2025-05-09T04:49:16.663955092Z" level=info msg="StartContainer for \"cae12d7fd6346e82d38de66bc53c45c957f31722226ea2cd3cb42ff461aab9f9\"" May 9 04:49:16.664828 containerd[1513]: time="2025-05-09T04:49:16.664804491Z" level=info msg="connecting to shim cae12d7fd6346e82d38de66bc53c45c957f31722226ea2cd3cb42ff461aab9f9" address="unix:///run/containerd/s/27cb248a5b87165916d35d9f529d494696b5c421053c2c9efd11c6ec3410589a" protocol=ttrpc version=3 May 9 04:49:16.686809 systemd[1]: Started cri-containerd-cae12d7fd6346e82d38de66bc53c45c957f31722226ea2cd3cb42ff461aab9f9.scope - libcontainer container cae12d7fd6346e82d38de66bc53c45c957f31722226ea2cd3cb42ff461aab9f9. 
May 9 04:49:16.729272 containerd[1513]: time="2025-05-09T04:49:16.729230796Z" level=info msg="StartContainer for \"cae12d7fd6346e82d38de66bc53c45c957f31722226ea2cd3cb42ff461aab9f9\" returns successfully" May 9 04:49:16.829468 containerd[1513]: time="2025-05-09T04:49:16.829430469Z" level=info msg="TaskExit event in podsandbox handler container_id:\"cae12d7fd6346e82d38de66bc53c45c957f31722226ea2cd3cb42ff461aab9f9\" id:\"fd42fa9eb66eb88c087ff76cb4df9b8bc445d366a3e632816d872f51e7ce147d\" pid:3322 exited_at:{seconds:1746766156 nanos:829124358}" May 9 04:49:16.842461 kubelet[2635]: I0509 04:49:16.842283 2635 kubelet_node_status.go:488] "Fast updating node status as it just became ready" May 9 04:49:16.879408 systemd[1]: Created slice kubepods-burstable-pod08950b56_b35c_4cf2_9a3c_0e7ddb8bcbf6.slice - libcontainer container kubepods-burstable-pod08950b56_b35c_4cf2_9a3c_0e7ddb8bcbf6.slice. May 9 04:49:16.889522 systemd[1]: Created slice kubepods-burstable-pod1e9694e8_44b1_4807_acf1_1c07e7f72a74.slice - libcontainer container kubepods-burstable-pod1e9694e8_44b1_4807_acf1_1c07e7f72a74.slice. 
May 9 04:49:16.945095 kubelet[2635]: I0509 04:49:16.945030 2635 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1e9694e8-44b1-4807-acf1-1c07e7f72a74-config-volume\") pod \"coredns-6f6b679f8f-wccv8\" (UID: \"1e9694e8-44b1-4807-acf1-1c07e7f72a74\") " pod="kube-system/coredns-6f6b679f8f-wccv8" May 9 04:49:16.945256 kubelet[2635]: I0509 04:49:16.945097 2635 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/08950b56-b35c-4cf2-9a3c-0e7ddb8bcbf6-config-volume\") pod \"coredns-6f6b679f8f-5ztdj\" (UID: \"08950b56-b35c-4cf2-9a3c-0e7ddb8bcbf6\") " pod="kube-system/coredns-6f6b679f8f-5ztdj" May 9 04:49:16.945256 kubelet[2635]: I0509 04:49:16.945157 2635 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g6pm8\" (UniqueName: \"kubernetes.io/projected/08950b56-b35c-4cf2-9a3c-0e7ddb8bcbf6-kube-api-access-g6pm8\") pod \"coredns-6f6b679f8f-5ztdj\" (UID: \"08950b56-b35c-4cf2-9a3c-0e7ddb8bcbf6\") " pod="kube-system/coredns-6f6b679f8f-5ztdj" May 9 04:49:16.945256 kubelet[2635]: I0509 04:49:16.945180 2635 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m92fw\" (UniqueName: \"kubernetes.io/projected/1e9694e8-44b1-4807-acf1-1c07e7f72a74-kube-api-access-m92fw\") pod \"coredns-6f6b679f8f-wccv8\" (UID: \"1e9694e8-44b1-4807-acf1-1c07e7f72a74\") " pod="kube-system/coredns-6f6b679f8f-wccv8" May 9 04:49:17.184573 containerd[1513]: time="2025-05-09T04:49:17.184239532Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-5ztdj,Uid:08950b56-b35c-4cf2-9a3c-0e7ddb8bcbf6,Namespace:kube-system,Attempt:0,}" May 9 04:49:17.193609 containerd[1513]: time="2025-05-09T04:49:17.193561935Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-wccv8,Uid:1e9694e8-44b1-4807-acf1-1c07e7f72a74,Namespace:kube-system,Attempt:0,}" May 9 04:49:17.633031 kubelet[2635]: I0509 04:49:17.632978 2635 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-j6bj4" podStartSLOduration=5.889767649 podStartE2EDuration="15.632961557s" podCreationTimestamp="2025-05-09 04:49:02 +0000 UTC" firstStartedPulling="2025-05-09 04:49:02.799227679 +0000 UTC m=+5.356309257" lastFinishedPulling="2025-05-09 04:49:12.542421547 +0000 UTC m=+15.099503165" observedRunningTime="2025-05-09 04:49:17.631398521 +0000 UTC m=+20.188480139" watchObservedRunningTime="2025-05-09 04:49:17.632961557 +0000 UTC m=+20.190043175" May 9 04:49:18.916732 systemd-networkd[1438]: cilium_host: Link UP May 9 04:49:18.916917 systemd-networkd[1438]: cilium_net: Link UP May 9 04:49:18.918894 systemd-networkd[1438]: cilium_net: Gained carrier May 9 04:49:18.919119 systemd-networkd[1438]: cilium_host: Gained carrier May 9 04:49:18.919224 systemd-networkd[1438]: cilium_net: Gained IPv6LL May 9 04:49:18.919329 systemd-networkd[1438]: cilium_host: Gained IPv6LL May 9 04:49:18.996831 systemd-networkd[1438]: cilium_vxlan: Link UP May 9 04:49:18.996837 systemd-networkd[1438]: cilium_vxlan: Gained carrier May 9 04:49:19.288687 kernel: NET: Registered PF_ALG protocol family May 9 04:49:19.844493 systemd-networkd[1438]: lxc_health: Link UP May 9 04:49:19.853426 systemd-networkd[1438]: lxc_health: Gained carrier May 9 04:49:20.312703 systemd-networkd[1438]: lxc1995de66209b: Link UP May 9 04:49:20.312902 systemd-networkd[1438]: lxc0b7ae99321f0: Link UP May 9 04:49:20.313678 kernel: eth0: renamed from tmp1f287 May 9 04:49:20.326678 kernel: eth0: renamed from tmp15a06 May 9 04:49:20.334077 systemd-networkd[1438]: lxc1995de66209b: Gained carrier May 9 04:49:20.334456 systemd-networkd[1438]: lxc0b7ae99321f0: Gained carrier May 9 04:49:20.775105 systemd-networkd[1438]: cilium_vxlan: Gained IPv6LL May 9 04:49:21.031121 systemd-networkd[1438]: lxc_health: Gained IPv6LL May 9 04:49:21.543139 systemd-networkd[1438]: lxc1995de66209b: Gained IPv6LL May 9 04:49:21.543379 systemd-networkd[1438]: lxc0b7ae99321f0: Gained IPv6LL May 9 04:49:23.821398 containerd[1513]: time="2025-05-09T04:49:23.821336657Z" level=info msg="connecting to shim 15a0620114a3bbd5b19a02f13393bc0a5d8ee084b4135309139b23c014a5bd56" address="unix:///run/containerd/s/a524bf86d77221e2b3f97a6343422bb9b325f8214ac240b81d452a9c36cb0e54" namespace=k8s.io protocol=ttrpc version=3 May 9 04:49:23.824566 containerd[1513]: time="2025-05-09T04:49:23.823332033Z" level=info msg="connecting to shim 1f287cc535270b0b675c0fcfa99f54c9ce6bd6f773b33cb50136265fb2a68e5d" address="unix:///run/containerd/s/9f14f00dbd96c54cb2ad933d24fa5f3817764e80fa21b6cae279238e11cc74e5" namespace=k8s.io protocol=ttrpc version=3 May 9 04:49:23.848846 systemd[1]: Started cri-containerd-15a0620114a3bbd5b19a02f13393bc0a5d8ee084b4135309139b23c014a5bd56.scope - libcontainer container 15a0620114a3bbd5b19a02f13393bc0a5d8ee084b4135309139b23c014a5bd56. May 9 04:49:23.858588 systemd[1]: Started cri-containerd-1f287cc535270b0b675c0fcfa99f54c9ce6bd6f773b33cb50136265fb2a68e5d.scope - libcontainer container 1f287cc535270b0b675c0fcfa99f54c9ce6bd6f773b33cb50136265fb2a68e5d.
May 9 04:49:23.864865 systemd-resolved[1346]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 9 04:49:23.871602 systemd-resolved[1346]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 9 04:49:23.893545 containerd[1513]: time="2025-05-09T04:49:23.893504641Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-wccv8,Uid:1e9694e8-44b1-4807-acf1-1c07e7f72a74,Namespace:kube-system,Attempt:0,} returns sandbox id \"15a0620114a3bbd5b19a02f13393bc0a5d8ee084b4135309139b23c014a5bd56\"" May 9 04:49:23.898572 containerd[1513]: time="2025-05-09T04:49:23.898516505Z" level=info msg="CreateContainer within sandbox \"15a0620114a3bbd5b19a02f13393bc0a5d8ee084b4135309139b23c014a5bd56\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 9 04:49:23.903710 containerd[1513]: time="2025-05-09T04:49:23.903501443Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-5ztdj,Uid:08950b56-b35c-4cf2-9a3c-0e7ddb8bcbf6,Namespace:kube-system,Attempt:0,} returns sandbox id \"1f287cc535270b0b675c0fcfa99f54c9ce6bd6f773b33cb50136265fb2a68e5d\"" May 9 04:49:23.907446 containerd[1513]: time="2025-05-09T04:49:23.907407458Z" level=info msg="CreateContainer within sandbox \"1f287cc535270b0b675c0fcfa99f54c9ce6bd6f773b33cb50136265fb2a68e5d\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 9 04:49:23.920911 containerd[1513]: time="2025-05-09T04:49:23.920874113Z" level=info msg="Container 202e53fcfa30829ae6db8cf7ccbf87249bb6eac6f290eac08cd24a9c251e81a1: CDI devices from CRI Config.CDIDevices: []" May 9 04:49:23.922721 containerd[1513]: time="2025-05-09T04:49:23.922683694Z" level=info msg="Container 442dd99fa2b9522012a059957b5ac494762086d6129ed7fc70dbb8435ff4f2e8: CDI devices from CRI Config.CDIDevices: []" May 9 04:49:23.926732 containerd[1513]: time="2025-05-09T04:49:23.926690008Z" level=info msg="CreateContainer within sandbox \"1f287cc535270b0b675c0fcfa99f54c9ce6bd6f773b33cb50136265fb2a68e5d\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"202e53fcfa30829ae6db8cf7ccbf87249bb6eac6f290eac08cd24a9c251e81a1\"" May 9 04:49:23.927244 containerd[1513]: time="2025-05-09T04:49:23.927219148Z" level=info msg="StartContainer for \"202e53fcfa30829ae6db8cf7ccbf87249bb6eac6f290eac08cd24a9c251e81a1\"" May 9 04:49:23.928545 containerd[1513]: time="2025-05-09T04:49:23.928519872Z" level=info msg="connecting to shim 202e53fcfa30829ae6db8cf7ccbf87249bb6eac6f290eac08cd24a9c251e81a1" address="unix:///run/containerd/s/9f14f00dbd96c54cb2ad933d24fa5f3817764e80fa21b6cae279238e11cc74e5" protocol=ttrpc version=3 May 9 04:49:23.931125 containerd[1513]: time="2025-05-09T04:49:23.931005860Z" level=info msg="CreateContainer within sandbox \"15a0620114a3bbd5b19a02f13393bc0a5d8ee084b4135309139b23c014a5bd56\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"442dd99fa2b9522012a059957b5ac494762086d6129ed7fc70dbb8435ff4f2e8\"" May 9 04:49:23.932404 containerd[1513]: time="2025-05-09T04:49:23.932368077Z" level=info msg="StartContainer for \"442dd99fa2b9522012a059957b5ac494762086d6129ed7fc70dbb8435ff4f2e8\"" May 9 04:49:23.933163 containerd[1513]: time="2025-05-09T04:49:23.933139062Z" level=info msg="connecting to shim 442dd99fa2b9522012a059957b5ac494762086d6129ed7fc70dbb8435ff4f2e8" address="unix:///run/containerd/s/a524bf86d77221e2b3f97a6343422bb9b325f8214ac240b81d452a9c36cb0e54" protocol=ttrpc version=3 May 9 04:49:23.955840 systemd[1]: Started cri-containerd-202e53fcfa30829ae6db8cf7ccbf87249bb6eac6f290eac08cd24a9c251e81a1.scope - libcontainer container 202e53fcfa30829ae6db8cf7ccbf87249bb6eac6f290eac08cd24a9c251e81a1. May 9 04:49:23.959354 systemd[1]: Started cri-containerd-442dd99fa2b9522012a059957b5ac494762086d6129ed7fc70dbb8435ff4f2e8.scope - libcontainer container 442dd99fa2b9522012a059957b5ac494762086d6129ed7fc70dbb8435ff4f2e8.
May 9 04:49:23.991445 containerd[1513]: time="2025-05-09T04:49:23.991379505Z" level=info msg="StartContainer for \"202e53fcfa30829ae6db8cf7ccbf87249bb6eac6f290eac08cd24a9c251e81a1\" returns successfully" May 9 04:49:23.999566 containerd[1513]: time="2025-05-09T04:49:23.999531079Z" level=info msg="StartContainer for \"442dd99fa2b9522012a059957b5ac494762086d6129ed7fc70dbb8435ff4f2e8\" returns successfully" May 9 04:49:24.641723 kubelet[2635]: I0509 04:49:24.641599 2635 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-5ztdj" podStartSLOduration=22.641160818 podStartE2EDuration="22.641160818s" podCreationTimestamp="2025-05-09 04:49:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-09 04:49:24.641004189 +0000 UTC m=+27.198085807" watchObservedRunningTime="2025-05-09 04:49:24.641160818 +0000 UTC m=+27.198242436" May 9 04:49:24.672195 kubelet[2635]: I0509 04:49:24.672107 2635 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-wccv8" podStartSLOduration=22.672060212 podStartE2EDuration="22.672060212s" podCreationTimestamp="2025-05-09 04:49:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-09 04:49:24.671935749 +0000 UTC m=+27.229017367" watchObservedRunningTime="2025-05-09 04:49:24.672060212 +0000 UTC m=+27.229141830" May 9 04:49:24.807468 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2269276666.mount: Deactivated successfully. May 9 04:49:26.200320 systemd[1]: Started sshd@7-10.0.0.27:22-10.0.0.1:46460.service - OpenSSH per-connection server daemon (10.0.0.1:46460). 
May 9 04:49:26.254239 sshd[3987]: Accepted publickey for core from 10.0.0.1 port 46460 ssh2: RSA SHA256:cGfwtCSR41ihX2TEzmGxMnuwv2fv9xnRwxIIs+hE9lQ May 9 04:49:26.255489 sshd-session[3987]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 9 04:49:26.260244 systemd-logind[1503]: New session 8 of user core. May 9 04:49:26.271836 systemd[1]: Started session-8.scope - Session 8 of User core. May 9 04:49:26.393808 sshd[3989]: Connection closed by 10.0.0.1 port 46460 May 9 04:49:26.394239 sshd-session[3987]: pam_unix(sshd:session): session closed for user core May 9 04:49:26.398100 systemd[1]: sshd@7-10.0.0.27:22-10.0.0.1:46460.service: Deactivated successfully. May 9 04:49:26.400781 systemd[1]: session-8.scope: Deactivated successfully. May 9 04:49:26.402363 systemd-logind[1503]: Session 8 logged out. Waiting for processes to exit. May 9 04:49:26.403204 systemd-logind[1503]: Removed session 8. May 9 04:49:31.410338 systemd[1]: Started sshd@8-10.0.0.27:22-10.0.0.1:46462.service - OpenSSH per-connection server daemon (10.0.0.1:46462). May 9 04:49:31.462666 sshd[4007]: Accepted publickey for core from 10.0.0.1 port 46462 ssh2: RSA SHA256:cGfwtCSR41ihX2TEzmGxMnuwv2fv9xnRwxIIs+hE9lQ May 9 04:49:31.463917 sshd-session[4007]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 9 04:49:31.468551 systemd-logind[1503]: New session 9 of user core. May 9 04:49:31.478824 systemd[1]: Started session-9.scope - Session 9 of User core. May 9 04:49:31.589190 sshd[4009]: Connection closed by 10.0.0.1 port 46462 May 9 04:49:31.590535 sshd-session[4007]: pam_unix(sshd:session): session closed for user core May 9 04:49:31.594441 systemd[1]: sshd@8-10.0.0.27:22-10.0.0.1:46462.service: Deactivated successfully. May 9 04:49:31.596122 systemd[1]: session-9.scope: Deactivated successfully. May 9 04:49:31.597254 systemd-logind[1503]: Session 9 logged out. Waiting for processes to exit. 
May 9 04:49:31.598410 systemd-logind[1503]: Removed session 9. May 9 04:49:36.602031 systemd[1]: Started sshd@9-10.0.0.27:22-10.0.0.1:39682.service - OpenSSH per-connection server daemon (10.0.0.1:39682). May 9 04:49:36.647264 sshd[4026]: Accepted publickey for core from 10.0.0.1 port 39682 ssh2: RSA SHA256:cGfwtCSR41ihX2TEzmGxMnuwv2fv9xnRwxIIs+hE9lQ May 9 04:49:36.648343 sshd-session[4026]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 9 04:49:36.652217 systemd-logind[1503]: New session 10 of user core. May 9 04:49:36.659821 systemd[1]: Started session-10.scope - Session 10 of User core. May 9 04:49:36.815373 sshd[4028]: Connection closed by 10.0.0.1 port 39682 May 9 04:49:36.815844 sshd-session[4026]: pam_unix(sshd:session): session closed for user core May 9 04:49:36.819266 systemd[1]: sshd@9-10.0.0.27:22-10.0.0.1:39682.service: Deactivated successfully. May 9 04:49:36.821895 systemd[1]: session-10.scope: Deactivated successfully. May 9 04:49:36.823016 systemd-logind[1503]: Session 10 logged out. Waiting for processes to exit. May 9 04:49:36.824438 systemd-logind[1503]: Removed session 10. May 9 04:49:41.833272 systemd[1]: Started sshd@10-10.0.0.27:22-10.0.0.1:39686.service - OpenSSH per-connection server daemon (10.0.0.1:39686). May 9 04:49:41.887170 sshd[4043]: Accepted publickey for core from 10.0.0.1 port 39686 ssh2: RSA SHA256:cGfwtCSR41ihX2TEzmGxMnuwv2fv9xnRwxIIs+hE9lQ May 9 04:49:41.888448 sshd-session[4043]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 9 04:49:41.892733 systemd-logind[1503]: New session 11 of user core. May 9 04:49:41.902867 systemd[1]: Started session-11.scope - Session 11 of User core. May 9 04:49:42.011181 sshd[4045]: Connection closed by 10.0.0.1 port 39686 May 9 04:49:42.011492 sshd-session[4043]: pam_unix(sshd:session): session closed for user core May 9 04:49:42.028749 systemd[1]: sshd@10-10.0.0.27:22-10.0.0.1:39686.service: Deactivated successfully. 
May 9 04:49:42.031990 systemd[1]: session-11.scope: Deactivated successfully.
May 9 04:49:42.032802 systemd-logind[1503]: Session 11 logged out. Waiting for processes to exit.
May 9 04:49:42.034486 systemd[1]: Started sshd@11-10.0.0.27:22-10.0.0.1:39692.service - OpenSSH per-connection server daemon (10.0.0.1:39692).
May 9 04:49:42.035506 systemd-logind[1503]: Removed session 11.
May 9 04:49:42.083724 sshd[4059]: Accepted publickey for core from 10.0.0.1 port 39692 ssh2: RSA SHA256:cGfwtCSR41ihX2TEzmGxMnuwv2fv9xnRwxIIs+hE9lQ
May 9 04:49:42.084644 sshd-session[4059]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 9 04:49:42.088721 systemd-logind[1503]: New session 12 of user core.
May 9 04:49:42.100814 systemd[1]: Started session-12.scope - Session 12 of User core.
May 9 04:49:42.243251 sshd[4062]: Connection closed by 10.0.0.1 port 39692
May 9 04:49:42.243994 sshd-session[4059]: pam_unix(sshd:session): session closed for user core
May 9 04:49:42.261294 systemd[1]: sshd@11-10.0.0.27:22-10.0.0.1:39692.service: Deactivated successfully.
May 9 04:49:42.264297 systemd[1]: session-12.scope: Deactivated successfully.
May 9 04:49:42.266069 systemd-logind[1503]: Session 12 logged out. Waiting for processes to exit.
May 9 04:49:42.268921 systemd[1]: Started sshd@12-10.0.0.27:22-10.0.0.1:39708.service - OpenSSH per-connection server daemon (10.0.0.1:39708).
May 9 04:49:42.269629 systemd-logind[1503]: Removed session 12.
May 9 04:49:42.314323 sshd[4073]: Accepted publickey for core from 10.0.0.1 port 39708 ssh2: RSA SHA256:cGfwtCSR41ihX2TEzmGxMnuwv2fv9xnRwxIIs+hE9lQ
May 9 04:49:42.315403 sshd-session[4073]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 9 04:49:42.319299 systemd-logind[1503]: New session 13 of user core.
May 9 04:49:42.327807 systemd[1]: Started session-13.scope - Session 13 of User core.
May 9 04:49:42.441013 sshd[4076]: Connection closed by 10.0.0.1 port 39708
May 9 04:49:42.440792 sshd-session[4073]: pam_unix(sshd:session): session closed for user core
May 9 04:49:42.444527 systemd[1]: sshd@12-10.0.0.27:22-10.0.0.1:39708.service: Deactivated successfully.
May 9 04:49:42.446587 systemd[1]: session-13.scope: Deactivated successfully.
May 9 04:49:42.447267 systemd-logind[1503]: Session 13 logged out. Waiting for processes to exit.
May 9 04:49:42.448056 systemd-logind[1503]: Removed session 13.
May 9 04:49:47.456312 systemd[1]: Started sshd@13-10.0.0.27:22-10.0.0.1:35806.service - OpenSSH per-connection server daemon (10.0.0.1:35806).
May 9 04:49:47.501918 sshd[4089]: Accepted publickey for core from 10.0.0.1 port 35806 ssh2: RSA SHA256:cGfwtCSR41ihX2TEzmGxMnuwv2fv9xnRwxIIs+hE9lQ
May 9 04:49:47.503296 sshd-session[4089]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 9 04:49:47.508055 systemd-logind[1503]: New session 14 of user core.
May 9 04:49:47.523880 systemd[1]: Started session-14.scope - Session 14 of User core.
May 9 04:49:47.638434 sshd[4091]: Connection closed by 10.0.0.1 port 35806
May 9 04:49:47.638893 sshd-session[4089]: pam_unix(sshd:session): session closed for user core
May 9 04:49:47.642938 systemd[1]: sshd@13-10.0.0.27:22-10.0.0.1:35806.service: Deactivated successfully.
May 9 04:49:47.645220 systemd[1]: session-14.scope: Deactivated successfully.
May 9 04:49:47.646025 systemd-logind[1503]: Session 14 logged out. Waiting for processes to exit.
May 9 04:49:47.647122 systemd-logind[1503]: Removed session 14.
May 9 04:49:52.650372 systemd[1]: Started sshd@14-10.0.0.27:22-10.0.0.1:58946.service - OpenSSH per-connection server daemon (10.0.0.1:58946).
May 9 04:49:52.705323 sshd[4105]: Accepted publickey for core from 10.0.0.1 port 58946 ssh2: RSA SHA256:cGfwtCSR41ihX2TEzmGxMnuwv2fv9xnRwxIIs+hE9lQ
May 9 04:49:52.706511 sshd-session[4105]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 9 04:49:52.710509 systemd-logind[1503]: New session 15 of user core.
May 9 04:49:52.720849 systemd[1]: Started session-15.scope - Session 15 of User core.
May 9 04:49:52.834602 sshd[4107]: Connection closed by 10.0.0.1 port 58946
May 9 04:49:52.834990 sshd-session[4105]: pam_unix(sshd:session): session closed for user core
May 9 04:49:52.848208 systemd[1]: sshd@14-10.0.0.27:22-10.0.0.1:58946.service: Deactivated successfully.
May 9 04:49:52.850168 systemd[1]: session-15.scope: Deactivated successfully.
May 9 04:49:52.851535 systemd-logind[1503]: Session 15 logged out. Waiting for processes to exit.
May 9 04:49:52.853082 systemd[1]: Started sshd@15-10.0.0.27:22-10.0.0.1:58962.service - OpenSSH per-connection server daemon (10.0.0.1:58962).
May 9 04:49:52.855885 systemd-logind[1503]: Removed session 15.
May 9 04:49:52.910086 sshd[4119]: Accepted publickey for core from 10.0.0.1 port 58962 ssh2: RSA SHA256:cGfwtCSR41ihX2TEzmGxMnuwv2fv9xnRwxIIs+hE9lQ
May 9 04:49:52.911009 sshd-session[4119]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 9 04:49:52.916399 systemd-logind[1503]: New session 16 of user core.
May 9 04:49:52.937876 systemd[1]: Started session-16.scope - Session 16 of User core.
May 9 04:49:53.156417 sshd[4122]: Connection closed by 10.0.0.1 port 58962
May 9 04:49:53.156910 sshd-session[4119]: pam_unix(sshd:session): session closed for user core
May 9 04:49:53.172329 systemd[1]: sshd@15-10.0.0.27:22-10.0.0.1:58962.service: Deactivated successfully.
May 9 04:49:53.174484 systemd[1]: session-16.scope: Deactivated successfully.
May 9 04:49:53.175413 systemd-logind[1503]: Session 16 logged out. Waiting for processes to exit.
May 9 04:49:53.177459 systemd[1]: Started sshd@16-10.0.0.27:22-10.0.0.1:58968.service - OpenSSH per-connection server daemon (10.0.0.1:58968).
May 9 04:49:53.178632 systemd-logind[1503]: Removed session 16.
May 9 04:49:53.234987 sshd[4132]: Accepted publickey for core from 10.0.0.1 port 58968 ssh2: RSA SHA256:cGfwtCSR41ihX2TEzmGxMnuwv2fv9xnRwxIIs+hE9lQ
May 9 04:49:53.236236 sshd-session[4132]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 9 04:49:53.240740 systemd-logind[1503]: New session 17 of user core.
May 9 04:49:53.256881 systemd[1]: Started session-17.scope - Session 17 of User core.
May 9 04:49:54.509853 sshd[4135]: Connection closed by 10.0.0.1 port 58968
May 9 04:49:54.510452 sshd-session[4132]: pam_unix(sshd:session): session closed for user core
May 9 04:49:54.522894 systemd[1]: sshd@16-10.0.0.27:22-10.0.0.1:58968.service: Deactivated successfully.
May 9 04:49:54.525171 systemd[1]: session-17.scope: Deactivated successfully.
May 9 04:49:54.527140 systemd-logind[1503]: Session 17 logged out. Waiting for processes to exit.
May 9 04:49:54.529994 systemd[1]: Started sshd@17-10.0.0.27:22-10.0.0.1:58978.service - OpenSSH per-connection server daemon (10.0.0.1:58978).
May 9 04:49:54.532943 systemd-logind[1503]: Removed session 17.
May 9 04:49:54.577053 sshd[4156]: Accepted publickey for core from 10.0.0.1 port 58978 ssh2: RSA SHA256:cGfwtCSR41ihX2TEzmGxMnuwv2fv9xnRwxIIs+hE9lQ
May 9 04:49:54.578329 sshd-session[4156]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 9 04:49:54.582721 systemd-logind[1503]: New session 18 of user core.
May 9 04:49:54.588859 systemd[1]: Started session-18.scope - Session 18 of User core.
May 9 04:49:54.806211 sshd[4159]: Connection closed by 10.0.0.1 port 58978
May 9 04:49:54.806889 sshd-session[4156]: pam_unix(sshd:session): session closed for user core
May 9 04:49:54.821576 systemd[1]: sshd@17-10.0.0.27:22-10.0.0.1:58978.service: Deactivated successfully.
May 9 04:49:54.824250 systemd[1]: session-18.scope: Deactivated successfully.
May 9 04:49:54.825011 systemd-logind[1503]: Session 18 logged out. Waiting for processes to exit.
May 9 04:49:54.826939 systemd[1]: Started sshd@18-10.0.0.27:22-10.0.0.1:58994.service - OpenSSH per-connection server daemon (10.0.0.1:58994).
May 9 04:49:54.828244 systemd-logind[1503]: Removed session 18.
May 9 04:49:54.884524 sshd[4170]: Accepted publickey for core from 10.0.0.1 port 58994 ssh2: RSA SHA256:cGfwtCSR41ihX2TEzmGxMnuwv2fv9xnRwxIIs+hE9lQ
May 9 04:49:54.885821 sshd-session[4170]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 9 04:49:54.890307 systemd-logind[1503]: New session 19 of user core.
May 9 04:49:54.896800 systemd[1]: Started session-19.scope - Session 19 of User core.
May 9 04:49:55.004518 sshd[4173]: Connection closed by 10.0.0.1 port 58994
May 9 04:49:55.004066 sshd-session[4170]: pam_unix(sshd:session): session closed for user core
May 9 04:49:55.007246 systemd[1]: sshd@18-10.0.0.27:22-10.0.0.1:58994.service: Deactivated successfully.
May 9 04:49:55.008890 systemd[1]: session-19.scope: Deactivated successfully.
May 9 04:49:55.009869 systemd-logind[1503]: Session 19 logged out. Waiting for processes to exit.
May 9 04:49:55.010758 systemd-logind[1503]: Removed session 19.
May 9 04:50:00.016324 systemd[1]: Started sshd@19-10.0.0.27:22-10.0.0.1:59004.service - OpenSSH per-connection server daemon (10.0.0.1:59004).
May 9 04:50:00.066543 sshd[4192]: Accepted publickey for core from 10.0.0.1 port 59004 ssh2: RSA SHA256:cGfwtCSR41ihX2TEzmGxMnuwv2fv9xnRwxIIs+hE9lQ
May 9 04:50:00.067761 sshd-session[4192]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 9 04:50:00.072750 systemd-logind[1503]: New session 20 of user core.
May 9 04:50:00.086819 systemd[1]: Started session-20.scope - Session 20 of User core.
May 9 04:50:00.206260 sshd[4194]: Connection closed by 10.0.0.1 port 59004
May 9 04:50:00.206592 sshd-session[4192]: pam_unix(sshd:session): session closed for user core
May 9 04:50:00.210202 systemd[1]: sshd@19-10.0.0.27:22-10.0.0.1:59004.service: Deactivated successfully.
May 9 04:50:00.213167 systemd[1]: session-20.scope: Deactivated successfully.
May 9 04:50:00.214062 systemd-logind[1503]: Session 20 logged out. Waiting for processes to exit.
May 9 04:50:00.215090 systemd-logind[1503]: Removed session 20.
May 9 04:50:05.219877 systemd[1]: Started sshd@20-10.0.0.27:22-10.0.0.1:46842.service - OpenSSH per-connection server daemon (10.0.0.1:46842).
May 9 04:50:05.265694 sshd[4210]: Accepted publickey for core from 10.0.0.1 port 46842 ssh2: RSA SHA256:cGfwtCSR41ihX2TEzmGxMnuwv2fv9xnRwxIIs+hE9lQ
May 9 04:50:05.266577 sshd-session[4210]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 9 04:50:05.271258 systemd-logind[1503]: New session 21 of user core.
May 9 04:50:05.284863 systemd[1]: Started session-21.scope - Session 21 of User core.
May 9 04:50:05.391104 sshd[4212]: Connection closed by 10.0.0.1 port 46842
May 9 04:50:05.391419 sshd-session[4210]: pam_unix(sshd:session): session closed for user core
May 9 04:50:05.394910 systemd[1]: sshd@20-10.0.0.27:22-10.0.0.1:46842.service: Deactivated successfully.
May 9 04:50:05.396614 systemd[1]: session-21.scope: Deactivated successfully.
May 9 04:50:05.398553 systemd-logind[1503]: Session 21 logged out. Waiting for processes to exit.
May 9 04:50:05.399478 systemd-logind[1503]: Removed session 21.
May 9 04:50:10.403276 systemd[1]: Started sshd@21-10.0.0.27:22-10.0.0.1:46844.service - OpenSSH per-connection server daemon (10.0.0.1:46844).
May 9 04:50:10.448781 sshd[4226]: Accepted publickey for core from 10.0.0.1 port 46844 ssh2: RSA SHA256:cGfwtCSR41ihX2TEzmGxMnuwv2fv9xnRwxIIs+hE9lQ
May 9 04:50:10.449866 sshd-session[4226]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 9 04:50:10.454762 systemd-logind[1503]: New session 22 of user core.
May 9 04:50:10.463799 systemd[1]: Started session-22.scope - Session 22 of User core.
May 9 04:50:10.573731 sshd[4228]: Connection closed by 10.0.0.1 port 46844
May 9 04:50:10.574042 sshd-session[4226]: pam_unix(sshd:session): session closed for user core
May 9 04:50:10.587130 systemd[1]: sshd@21-10.0.0.27:22-10.0.0.1:46844.service: Deactivated successfully.
May 9 04:50:10.588640 systemd[1]: session-22.scope: Deactivated successfully.
May 9 04:50:10.589835 systemd-logind[1503]: Session 22 logged out. Waiting for processes to exit.
May 9 04:50:10.591601 systemd[1]: Started sshd@22-10.0.0.27:22-10.0.0.1:46848.service - OpenSSH per-connection server daemon (10.0.0.1:46848).
May 9 04:50:10.592939 systemd-logind[1503]: Removed session 22.
May 9 04:50:10.634981 sshd[4241]: Accepted publickey for core from 10.0.0.1 port 46848 ssh2: RSA SHA256:cGfwtCSR41ihX2TEzmGxMnuwv2fv9xnRwxIIs+hE9lQ
May 9 04:50:10.636216 sshd-session[4241]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 9 04:50:10.643605 systemd-logind[1503]: New session 23 of user core.
May 9 04:50:10.650804 systemd[1]: Started session-23.scope - Session 23 of User core.
May 9 04:50:12.962748 containerd[1513]: time="2025-05-09T04:50:12.962630314Z" level=info msg="StopContainer for \"8622a0f59abd90a974ec5db64beb0ed5ead8f43833bdd38ab8d55d1c97b66e8b\" with timeout 30 (s)"
May 9 04:50:12.965760 containerd[1513]: time="2025-05-09T04:50:12.965725437Z" level=info msg="Stop container \"8622a0f59abd90a974ec5db64beb0ed5ead8f43833bdd38ab8d55d1c97b66e8b\" with signal terminated"
May 9 04:50:12.990088 systemd[1]: cri-containerd-8622a0f59abd90a974ec5db64beb0ed5ead8f43833bdd38ab8d55d1c97b66e8b.scope: Deactivated successfully.
May 9 04:50:12.997131 containerd[1513]: time="2025-05-09T04:50:12.997085203Z" level=info msg="received exit event container_id:\"8622a0f59abd90a974ec5db64beb0ed5ead8f43833bdd38ab8d55d1c97b66e8b\" id:\"8622a0f59abd90a974ec5db64beb0ed5ead8f43833bdd38ab8d55d1c97b66e8b\" pid:3178 exited_at:{seconds:1746766212 nanos:996623265}"
May 9 04:50:12.997246 containerd[1513]: time="2025-05-09T04:50:12.997118565Z" level=info msg="TaskExit event in podsandbox handler container_id:\"8622a0f59abd90a974ec5db64beb0ed5ead8f43833bdd38ab8d55d1c97b66e8b\" id:\"8622a0f59abd90a974ec5db64beb0ed5ead8f43833bdd38ab8d55d1c97b66e8b\" pid:3178 exited_at:{seconds:1746766212 nanos:996623265}"
May 9 04:50:13.009183 containerd[1513]: time="2025-05-09T04:50:13.009125632Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
May 9 04:50:13.013548 containerd[1513]: time="2025-05-09T04:50:13.013506321Z" level=info msg="TaskExit event in podsandbox handler container_id:\"cae12d7fd6346e82d38de66bc53c45c957f31722226ea2cd3cb42ff461aab9f9\" id:\"712404440a0f2fd5955e79b4b622a48faf0a7389709a09e46133f1c9aae17822\" pid:4270 exited_at:{seconds:1746766213 nanos:13206910}"
May 9 04:50:13.016492 containerd[1513]: time="2025-05-09T04:50:13.016440234Z" level=info msg="StopContainer for \"cae12d7fd6346e82d38de66bc53c45c957f31722226ea2cd3cb42ff461aab9f9\" with timeout 2 (s)"
May 9 04:50:13.016959 containerd[1513]: time="2025-05-09T04:50:13.016937733Z" level=info msg="Stop container \"cae12d7fd6346e82d38de66bc53c45c957f31722226ea2cd3cb42ff461aab9f9\" with signal terminated"
May 9 04:50:13.019617 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8622a0f59abd90a974ec5db64beb0ed5ead8f43833bdd38ab8d55d1c97b66e8b-rootfs.mount: Deactivated successfully.
May 9 04:50:13.023873 systemd-networkd[1438]: lxc_health: Link DOWN
May 9 04:50:13.023880 systemd-networkd[1438]: lxc_health: Lost carrier
May 9 04:50:13.031289 containerd[1513]: time="2025-05-09T04:50:13.031252324Z" level=info msg="StopContainer for \"8622a0f59abd90a974ec5db64beb0ed5ead8f43833bdd38ab8d55d1c97b66e8b\" returns successfully"
May 9 04:50:13.036346 containerd[1513]: time="2025-05-09T04:50:13.036290478Z" level=info msg="StopPodSandbox for \"2b3abca3ab5be6607c0e82ef2c51e3ddd3d0c0213d2a444fee265df7de143d67\""
May 9 04:50:13.037221 systemd[1]: cri-containerd-cae12d7fd6346e82d38de66bc53c45c957f31722226ea2cd3cb42ff461aab9f9.scope: Deactivated successfully.
May 9 04:50:13.037608 systemd[1]: cri-containerd-cae12d7fd6346e82d38de66bc53c45c957f31722226ea2cd3cb42ff461aab9f9.scope: Consumed 6.367s CPU time, 121.8M memory peak, 1.4M read from disk, 12.9M written to disk.
May 9 04:50:13.038071 containerd[1513]: time="2025-05-09T04:50:13.038030265Z" level=info msg="TaskExit event in podsandbox handler container_id:\"cae12d7fd6346e82d38de66bc53c45c957f31722226ea2cd3cb42ff461aab9f9\" id:\"cae12d7fd6346e82d38de66bc53c45c957f31722226ea2cd3cb42ff461aab9f9\" pid:3291 exited_at:{seconds:1746766213 nanos:37695252}"
May 9 04:50:13.038262 containerd[1513]: time="2025-05-09T04:50:13.038237433Z" level=info msg="received exit event container_id:\"cae12d7fd6346e82d38de66bc53c45c957f31722226ea2cd3cb42ff461aab9f9\" id:\"cae12d7fd6346e82d38de66bc53c45c957f31722226ea2cd3cb42ff461aab9f9\" pid:3291 exited_at:{seconds:1746766213 nanos:37695252}"
May 9 04:50:13.044138 containerd[1513]: time="2025-05-09T04:50:13.044096098Z" level=info msg="Container to stop \"8622a0f59abd90a974ec5db64beb0ed5ead8f43833bdd38ab8d55d1c97b66e8b\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 9 04:50:13.055558 systemd[1]: cri-containerd-2b3abca3ab5be6607c0e82ef2c51e3ddd3d0c0213d2a444fee265df7de143d67.scope: Deactivated successfully.
May 9 04:50:13.056955 containerd[1513]: time="2025-05-09T04:50:13.056921712Z" level=info msg="TaskExit event in podsandbox handler container_id:\"2b3abca3ab5be6607c0e82ef2c51e3ddd3d0c0213d2a444fee265df7de143d67\" id:\"2b3abca3ab5be6607c0e82ef2c51e3ddd3d0c0213d2a444fee265df7de143d67\" pid:2876 exit_status:137 exited_at:{seconds:1746766213 nanos:56373491}"
May 9 04:50:13.062952 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-cae12d7fd6346e82d38de66bc53c45c957f31722226ea2cd3cb42ff461aab9f9-rootfs.mount: Deactivated successfully.
May 9 04:50:13.074432 containerd[1513]: time="2025-05-09T04:50:13.074386904Z" level=info msg="StopContainer for \"cae12d7fd6346e82d38de66bc53c45c957f31722226ea2cd3cb42ff461aab9f9\" returns successfully"
May 9 04:50:13.075842 containerd[1513]: time="2025-05-09T04:50:13.075811519Z" level=info msg="StopPodSandbox for \"cd266fde76e2a929a55aca9446368d31b37499227282ada169f2c80a8104616e\""
May 9 04:50:13.075939 containerd[1513]: time="2025-05-09T04:50:13.075915443Z" level=info msg="Container to stop \"b9a004e69bb0adbe16cb5e09f1e41cf0ba2c9acca3d167361932664b82ae7bba\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 9 04:50:13.075939 containerd[1513]: time="2025-05-09T04:50:13.075930684Z" level=info msg="Container to stop \"cae12d7fd6346e82d38de66bc53c45c957f31722226ea2cd3cb42ff461aab9f9\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 9 04:50:13.075992 containerd[1513]: time="2025-05-09T04:50:13.075939764Z" level=info msg="Container to stop \"56d6b4a7f08c361f2d65a7cf595ff6cf922e44758f9f8f458313c2f1c24abfc7\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 9 04:50:13.075992 containerd[1513]: time="2025-05-09T04:50:13.075948204Z" level=info msg="Container to stop \"9491de7f26c5bcc3200552b308291c34c1dc942b80ddba419d3fc210b4cefc7a\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 9 04:50:13.075992 containerd[1513]: time="2025-05-09T04:50:13.075957485Z" level=info msg="Container to stop \"98914f20149a323bba469a193df5df04503966951d78099b4517899739ea98b0\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 9 04:50:13.084886 systemd[1]: cri-containerd-cd266fde76e2a929a55aca9446368d31b37499227282ada169f2c80a8104616e.scope: Deactivated successfully.
May 9 04:50:13.091440 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2b3abca3ab5be6607c0e82ef2c51e3ddd3d0c0213d2a444fee265df7de143d67-rootfs.mount: Deactivated successfully.
May 9 04:50:13.102506 containerd[1513]: time="2025-05-09T04:50:13.102428184Z" level=info msg="shim disconnected" id=2b3abca3ab5be6607c0e82ef2c51e3ddd3d0c0213d2a444fee265df7de143d67 namespace=k8s.io
May 9 04:50:13.104649 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-cd266fde76e2a929a55aca9446368d31b37499227282ada169f2c80a8104616e-rootfs.mount: Deactivated successfully.
May 9 04:50:13.116468 containerd[1513]: time="2025-05-09T04:50:13.102468705Z" level=warning msg="cleaning up after shim disconnected" id=2b3abca3ab5be6607c0e82ef2c51e3ddd3d0c0213d2a444fee265df7de143d67 namespace=k8s.io
May 9 04:50:13.116468 containerd[1513]: time="2025-05-09T04:50:13.116462084Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 9 04:50:13.116681 containerd[1513]: time="2025-05-09T04:50:13.109938633Z" level=info msg="shim disconnected" id=cd266fde76e2a929a55aca9446368d31b37499227282ada169f2c80a8104616e namespace=k8s.io
May 9 04:50:13.116718 containerd[1513]: time="2025-05-09T04:50:13.116682332Z" level=warning msg="cleaning up after shim disconnected" id=cd266fde76e2a929a55aca9446368d31b37499227282ada169f2c80a8104616e namespace=k8s.io
May 9 04:50:13.116718 containerd[1513]: time="2025-05-09T04:50:13.116713773Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 9 04:50:13.133679 containerd[1513]: time="2025-05-09T04:50:13.132298493Z" level=info msg="TaskExit event in podsandbox handler container_id:\"cd266fde76e2a929a55aca9446368d31b37499227282ada169f2c80a8104616e\" id:\"cd266fde76e2a929a55aca9446368d31b37499227282ada169f2c80a8104616e\" pid:2777 exit_status:137 exited_at:{seconds:1746766213 nanos:85828865}"
May 9 04:50:13.133679 containerd[1513]: time="2025-05-09T04:50:13.132367656Z" level=info msg="received exit event sandbox_id:\"2b3abca3ab5be6607c0e82ef2c51e3ddd3d0c0213d2a444fee265df7de143d67\" exit_status:137 exited_at:{seconds:1746766213 nanos:56373491}"
May 9 04:50:13.133679 containerd[1513]: time="2025-05-09T04:50:13.132448459Z" level=info msg="TearDown network for sandbox \"2b3abca3ab5be6607c0e82ef2c51e3ddd3d0c0213d2a444fee265df7de143d67\" successfully"
May 9 04:50:13.133679 containerd[1513]: time="2025-05-09T04:50:13.132468500Z" level=info msg="StopPodSandbox for \"2b3abca3ab5be6607c0e82ef2c51e3ddd3d0c0213d2a444fee265df7de143d67\" returns successfully"
May 9 04:50:13.135252 containerd[1513]: time="2025-05-09T04:50:13.135199405Z" level=info msg="TearDown network for sandbox \"cd266fde76e2a929a55aca9446368d31b37499227282ada169f2c80a8104616e\" successfully"
May 9 04:50:13.135252 containerd[1513]: time="2025-05-09T04:50:13.135231846Z" level=info msg="StopPodSandbox for \"cd266fde76e2a929a55aca9446368d31b37499227282ada169f2c80a8104616e\" returns successfully"
May 9 04:50:13.135418 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-2b3abca3ab5be6607c0e82ef2c51e3ddd3d0c0213d2a444fee265df7de143d67-shm.mount: Deactivated successfully.
May 9 04:50:13.135556 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-cd266fde76e2a929a55aca9446368d31b37499227282ada169f2c80a8104616e-shm.mount: Deactivated successfully.
May 9 04:50:13.141192 containerd[1513]: time="2025-05-09T04:50:13.141111313Z" level=info msg="received exit event sandbox_id:\"cd266fde76e2a929a55aca9446368d31b37499227282ada169f2c80a8104616e\" exit_status:137 exited_at:{seconds:1746766213 nanos:85828865}"
May 9 04:50:13.196791 kubelet[2635]: I0509 04:50:13.196594 2635 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/762745fc-5d54-4e21-9564-4da41c1a05c2-etc-cni-netd\") pod \"762745fc-5d54-4e21-9564-4da41c1a05c2\" (UID: \"762745fc-5d54-4e21-9564-4da41c1a05c2\") "
May 9 04:50:13.196791 kubelet[2635]: I0509 04:50:13.196639 2635 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/762745fc-5d54-4e21-9564-4da41c1a05c2-hostproc\") pod \"762745fc-5d54-4e21-9564-4da41c1a05c2\" (UID: \"762745fc-5d54-4e21-9564-4da41c1a05c2\") "
May 9 04:50:13.196791 kubelet[2635]: I0509 04:50:13.196679 2635 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e7d1c837-ad9a-430e-bb45-3fc7237ab9c4-cilium-config-path\") pod \"e7d1c837-ad9a-430e-bb45-3fc7237ab9c4\" (UID: \"e7d1c837-ad9a-430e-bb45-3fc7237ab9c4\") "
May 9 04:50:13.196791 kubelet[2635]: I0509 04:50:13.196703 2635 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/762745fc-5d54-4e21-9564-4da41c1a05c2-clustermesh-secrets\") pod \"762745fc-5d54-4e21-9564-4da41c1a05c2\" (UID: \"762745fc-5d54-4e21-9564-4da41c1a05c2\") "
May 9 04:50:13.196791 kubelet[2635]: I0509 04:50:13.196722 2635 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/762745fc-5d54-4e21-9564-4da41c1a05c2-lib-modules\") pod \"762745fc-5d54-4e21-9564-4da41c1a05c2\" (UID: \"762745fc-5d54-4e21-9564-4da41c1a05c2\") "
May 9 04:50:13.196791 kubelet[2635]: I0509 04:50:13.196759 2635 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/762745fc-5d54-4e21-9564-4da41c1a05c2-cilium-cgroup\") pod \"762745fc-5d54-4e21-9564-4da41c1a05c2\" (UID: \"762745fc-5d54-4e21-9564-4da41c1a05c2\") "
May 9 04:50:13.197967 kubelet[2635]: I0509 04:50:13.196774 2635 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/762745fc-5d54-4e21-9564-4da41c1a05c2-cilium-run\") pod \"762745fc-5d54-4e21-9564-4da41c1a05c2\" (UID: \"762745fc-5d54-4e21-9564-4da41c1a05c2\") "
May 9 04:50:13.197967 kubelet[2635]: I0509 04:50:13.196794 2635 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2fgpn\" (UniqueName: \"kubernetes.io/projected/762745fc-5d54-4e21-9564-4da41c1a05c2-kube-api-access-2fgpn\") pod \"762745fc-5d54-4e21-9564-4da41c1a05c2\" (UID: \"762745fc-5d54-4e21-9564-4da41c1a05c2\") "
May 9 04:50:13.197967 kubelet[2635]: I0509 04:50:13.196809 2635 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/762745fc-5d54-4e21-9564-4da41c1a05c2-bpf-maps\") pod \"762745fc-5d54-4e21-9564-4da41c1a05c2\" (UID: \"762745fc-5d54-4e21-9564-4da41c1a05c2\") "
May 9 04:50:13.197967 kubelet[2635]: I0509 04:50:13.196824 2635 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/762745fc-5d54-4e21-9564-4da41c1a05c2-host-proc-sys-net\") pod \"762745fc-5d54-4e21-9564-4da41c1a05c2\" (UID: \"762745fc-5d54-4e21-9564-4da41c1a05c2\") "
May 9 04:50:13.197967 kubelet[2635]: I0509 04:50:13.196839 2635 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/762745fc-5d54-4e21-9564-4da41c1a05c2-cni-path\") pod \"762745fc-5d54-4e21-9564-4da41c1a05c2\" (UID: \"762745fc-5d54-4e21-9564-4da41c1a05c2\") "
May 9 04:50:13.197967 kubelet[2635]: I0509 04:50:13.196857 2635 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/762745fc-5d54-4e21-9564-4da41c1a05c2-xtables-lock\") pod \"762745fc-5d54-4e21-9564-4da41c1a05c2\" (UID: \"762745fc-5d54-4e21-9564-4da41c1a05c2\") "
May 9 04:50:13.198105 kubelet[2635]: I0509 04:50:13.196874 2635 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/762745fc-5d54-4e21-9564-4da41c1a05c2-host-proc-sys-kernel\") pod \"762745fc-5d54-4e21-9564-4da41c1a05c2\" (UID: \"762745fc-5d54-4e21-9564-4da41c1a05c2\") "
May 9 04:50:13.198105 kubelet[2635]: I0509 04:50:13.196891 2635 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/762745fc-5d54-4e21-9564-4da41c1a05c2-cilium-config-path\") pod \"762745fc-5d54-4e21-9564-4da41c1a05c2\" (UID: \"762745fc-5d54-4e21-9564-4da41c1a05c2\") "
May 9 04:50:13.198105 kubelet[2635]: I0509 04:50:13.196907 2635 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zk7r9\" (UniqueName: \"kubernetes.io/projected/e7d1c837-ad9a-430e-bb45-3fc7237ab9c4-kube-api-access-zk7r9\") pod \"e7d1c837-ad9a-430e-bb45-3fc7237ab9c4\" (UID: \"e7d1c837-ad9a-430e-bb45-3fc7237ab9c4\") "
May 9 04:50:13.198105 kubelet[2635]: I0509 04:50:13.196925 2635 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/762745fc-5d54-4e21-9564-4da41c1a05c2-hubble-tls\") pod \"762745fc-5d54-4e21-9564-4da41c1a05c2\" (UID: \"762745fc-5d54-4e21-9564-4da41c1a05c2\") "
May 9 04:50:13.201369 kubelet[2635]: I0509 04:50:13.200760 2635 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/762745fc-5d54-4e21-9564-4da41c1a05c2-hostproc" (OuterVolumeSpecName: "hostproc") pod "762745fc-5d54-4e21-9564-4da41c1a05c2" (UID: "762745fc-5d54-4e21-9564-4da41c1a05c2"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 9 04:50:13.201369 kubelet[2635]: I0509 04:50:13.200851 2635 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/762745fc-5d54-4e21-9564-4da41c1a05c2-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "762745fc-5d54-4e21-9564-4da41c1a05c2" (UID: "762745fc-5d54-4e21-9564-4da41c1a05c2"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 9 04:50:13.201369 kubelet[2635]: I0509 04:50:13.201168 2635 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7d1c837-ad9a-430e-bb45-3fc7237ab9c4-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "e7d1c837-ad9a-430e-bb45-3fc7237ab9c4" (UID: "e7d1c837-ad9a-430e-bb45-3fc7237ab9c4"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
May 9 04:50:13.201369 kubelet[2635]: I0509 04:50:13.201218 2635 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/762745fc-5d54-4e21-9564-4da41c1a05c2-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "762745fc-5d54-4e21-9564-4da41c1a05c2" (UID: "762745fc-5d54-4e21-9564-4da41c1a05c2"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 9 04:50:13.201369 kubelet[2635]: I0509 04:50:13.201235 2635 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/762745fc-5d54-4e21-9564-4da41c1a05c2-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "762745fc-5d54-4e21-9564-4da41c1a05c2" (UID: "762745fc-5d54-4e21-9564-4da41c1a05c2"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 9 04:50:13.201567 kubelet[2635]: I0509 04:50:13.201250 2635 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/762745fc-5d54-4e21-9564-4da41c1a05c2-cni-path" (OuterVolumeSpecName: "cni-path") pod "762745fc-5d54-4e21-9564-4da41c1a05c2" (UID: "762745fc-5d54-4e21-9564-4da41c1a05c2"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 9 04:50:13.202916 kubelet[2635]: I0509 04:50:13.202881 2635 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/762745fc-5d54-4e21-9564-4da41c1a05c2-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "762745fc-5d54-4e21-9564-4da41c1a05c2" (UID: "762745fc-5d54-4e21-9564-4da41c1a05c2"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 9 04:50:13.203334 kubelet[2635]: I0509 04:50:13.202943 2635 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/762745fc-5d54-4e21-9564-4da41c1a05c2-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "762745fc-5d54-4e21-9564-4da41c1a05c2" (UID: "762745fc-5d54-4e21-9564-4da41c1a05c2"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
May 9 04:50:13.203425 kubelet[2635]: I0509 04:50:13.202967 2635 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/762745fc-5d54-4e21-9564-4da41c1a05c2-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "762745fc-5d54-4e21-9564-4da41c1a05c2" (UID: "762745fc-5d54-4e21-9564-4da41c1a05c2"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 9 04:50:13.203515 kubelet[2635]: I0509 04:50:13.203498 2635 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/762745fc-5d54-4e21-9564-4da41c1a05c2-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "762745fc-5d54-4e21-9564-4da41c1a05c2" (UID: "762745fc-5d54-4e21-9564-4da41c1a05c2"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 9 04:50:13.203614 kubelet[2635]: I0509 04:50:13.203600 2635 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/762745fc-5d54-4e21-9564-4da41c1a05c2-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "762745fc-5d54-4e21-9564-4da41c1a05c2" (UID: "762745fc-5d54-4e21-9564-4da41c1a05c2"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 9 04:50:13.203705 kubelet[2635]: I0509 04:50:13.203691 2635 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/762745fc-5d54-4e21-9564-4da41c1a05c2-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "762745fc-5d54-4e21-9564-4da41c1a05c2" (UID: "762745fc-5d54-4e21-9564-4da41c1a05c2"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 9 04:50:13.204134 kubelet[2635]: I0509 04:50:13.204106 2635 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/762745fc-5d54-4e21-9564-4da41c1a05c2-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "762745fc-5d54-4e21-9564-4da41c1a05c2" (UID: "762745fc-5d54-4e21-9564-4da41c1a05c2"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
May 9 04:50:13.205209 kubelet[2635]: I0509 04:50:13.205174 2635 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7d1c837-ad9a-430e-bb45-3fc7237ab9c4-kube-api-access-zk7r9" (OuterVolumeSpecName: "kube-api-access-zk7r9") pod "e7d1c837-ad9a-430e-bb45-3fc7237ab9c4" (UID: "e7d1c837-ad9a-430e-bb45-3fc7237ab9c4"). InnerVolumeSpecName "kube-api-access-zk7r9". PluginName "kubernetes.io/projected", VolumeGidValue ""
May 9 04:50:13.205555 kubelet[2635]: I0509 04:50:13.205532 2635 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/762745fc-5d54-4e21-9564-4da41c1a05c2-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "762745fc-5d54-4e21-9564-4da41c1a05c2" (UID: "762745fc-5d54-4e21-9564-4da41c1a05c2"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
May 9 04:50:13.205888 kubelet[2635]: I0509 04:50:13.205857 2635 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/762745fc-5d54-4e21-9564-4da41c1a05c2-kube-api-access-2fgpn" (OuterVolumeSpecName: "kube-api-access-2fgpn") pod "762745fc-5d54-4e21-9564-4da41c1a05c2" (UID: "762745fc-5d54-4e21-9564-4da41c1a05c2"). InnerVolumeSpecName "kube-api-access-2fgpn".
PluginName "kubernetes.io/projected", VolumeGidValue "" May 9 04:50:13.297490 kubelet[2635]: I0509 04:50:13.297318 2635 reconciler_common.go:288] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/762745fc-5d54-4e21-9564-4da41c1a05c2-lib-modules\") on node \"localhost\" DevicePath \"\"" May 9 04:50:13.297490 kubelet[2635]: I0509 04:50:13.297357 2635 reconciler_common.go:288] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/762745fc-5d54-4e21-9564-4da41c1a05c2-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" May 9 04:50:13.297490 kubelet[2635]: I0509 04:50:13.297368 2635 reconciler_common.go:288] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/762745fc-5d54-4e21-9564-4da41c1a05c2-cilium-run\") on node \"localhost\" DevicePath \"\"" May 9 04:50:13.297490 kubelet[2635]: I0509 04:50:13.297376 2635 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-2fgpn\" (UniqueName: \"kubernetes.io/projected/762745fc-5d54-4e21-9564-4da41c1a05c2-kube-api-access-2fgpn\") on node \"localhost\" DevicePath \"\"" May 9 04:50:13.297490 kubelet[2635]: I0509 04:50:13.297386 2635 reconciler_common.go:288] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/762745fc-5d54-4e21-9564-4da41c1a05c2-bpf-maps\") on node \"localhost\" DevicePath \"\"" May 9 04:50:13.297490 kubelet[2635]: I0509 04:50:13.297394 2635 reconciler_common.go:288] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/762745fc-5d54-4e21-9564-4da41c1a05c2-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" May 9 04:50:13.297490 kubelet[2635]: I0509 04:50:13.297402 2635 reconciler_common.go:288] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/762745fc-5d54-4e21-9564-4da41c1a05c2-cni-path\") on node \"localhost\" DevicePath \"\"" May 9 04:50:13.297490 kubelet[2635]: I0509 04:50:13.297408 2635 
reconciler_common.go:288] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/762745fc-5d54-4e21-9564-4da41c1a05c2-xtables-lock\") on node \"localhost\" DevicePath \"\"" May 9 04:50:13.297806 kubelet[2635]: I0509 04:50:13.297415 2635 reconciler_common.go:288] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/762745fc-5d54-4e21-9564-4da41c1a05c2-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" May 9 04:50:13.297806 kubelet[2635]: I0509 04:50:13.297423 2635 reconciler_common.go:288] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/762745fc-5d54-4e21-9564-4da41c1a05c2-cilium-config-path\") on node \"localhost\" DevicePath \"\"" May 9 04:50:13.297806 kubelet[2635]: I0509 04:50:13.297431 2635 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-zk7r9\" (UniqueName: \"kubernetes.io/projected/e7d1c837-ad9a-430e-bb45-3fc7237ab9c4-kube-api-access-zk7r9\") on node \"localhost\" DevicePath \"\"" May 9 04:50:13.297806 kubelet[2635]: I0509 04:50:13.297439 2635 reconciler_common.go:288] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/762745fc-5d54-4e21-9564-4da41c1a05c2-hubble-tls\") on node \"localhost\" DevicePath \"\"" May 9 04:50:13.297806 kubelet[2635]: I0509 04:50:13.297448 2635 reconciler_common.go:288] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/762745fc-5d54-4e21-9564-4da41c1a05c2-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" May 9 04:50:13.297806 kubelet[2635]: I0509 04:50:13.297455 2635 reconciler_common.go:288] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/762745fc-5d54-4e21-9564-4da41c1a05c2-hostproc\") on node \"localhost\" DevicePath \"\"" May 9 04:50:13.297806 kubelet[2635]: I0509 04:50:13.297462 2635 reconciler_common.go:288] "Volume detached for volume \"cilium-config-path\" (UniqueName: 
\"kubernetes.io/configmap/e7d1c837-ad9a-430e-bb45-3fc7237ab9c4-cilium-config-path\") on node \"localhost\" DevicePath \"\"" May 9 04:50:13.297806 kubelet[2635]: I0509 04:50:13.297469 2635 reconciler_common.go:288] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/762745fc-5d54-4e21-9564-4da41c1a05c2-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" May 9 04:50:13.543477 systemd[1]: Removed slice kubepods-besteffort-pode7d1c837_ad9a_430e_bb45_3fc7237ab9c4.slice - libcontainer container kubepods-besteffort-pode7d1c837_ad9a_430e_bb45_3fc7237ab9c4.slice. May 9 04:50:13.544943 systemd[1]: Removed slice kubepods-burstable-pod762745fc_5d54_4e21_9564_4da41c1a05c2.slice - libcontainer container kubepods-burstable-pod762745fc_5d54_4e21_9564_4da41c1a05c2.slice. May 9 04:50:13.545075 systemd[1]: kubepods-burstable-pod762745fc_5d54_4e21_9564_4da41c1a05c2.slice: Consumed 6.513s CPU time, 122.1M memory peak, 1.4M read from disk, 12.9M written to disk. May 9 04:50:13.739524 kubelet[2635]: I0509 04:50:13.739071 2635 scope.go:117] "RemoveContainer" containerID="8622a0f59abd90a974ec5db64beb0ed5ead8f43833bdd38ab8d55d1c97b66e8b" May 9 04:50:13.741963 containerd[1513]: time="2025-05-09T04:50:13.740784434Z" level=info msg="RemoveContainer for \"8622a0f59abd90a974ec5db64beb0ed5ead8f43833bdd38ab8d55d1c97b66e8b\"" May 9 04:50:13.867111 containerd[1513]: time="2025-05-09T04:50:13.867070254Z" level=info msg="RemoveContainer for \"8622a0f59abd90a974ec5db64beb0ed5ead8f43833bdd38ab8d55d1c97b66e8b\" returns successfully" May 9 04:50:13.867449 kubelet[2635]: I0509 04:50:13.867419 2635 scope.go:117] "RemoveContainer" containerID="8622a0f59abd90a974ec5db64beb0ed5ead8f43833bdd38ab8d55d1c97b66e8b" May 9 04:50:13.867786 containerd[1513]: time="2025-05-09T04:50:13.867750441Z" level=error msg="ContainerStatus for \"8622a0f59abd90a974ec5db64beb0ed5ead8f43833bdd38ab8d55d1c97b66e8b\" failed" error="rpc error: code = NotFound desc = an error occurred when try 
to find container \"8622a0f59abd90a974ec5db64beb0ed5ead8f43833bdd38ab8d55d1c97b66e8b\": not found" May 9 04:50:13.879469 kubelet[2635]: E0509 04:50:13.879415 2635 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"8622a0f59abd90a974ec5db64beb0ed5ead8f43833bdd38ab8d55d1c97b66e8b\": not found" containerID="8622a0f59abd90a974ec5db64beb0ed5ead8f43833bdd38ab8d55d1c97b66e8b" May 9 04:50:13.880869 kubelet[2635]: I0509 04:50:13.880761 2635 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"8622a0f59abd90a974ec5db64beb0ed5ead8f43833bdd38ab8d55d1c97b66e8b"} err="failed to get container status \"8622a0f59abd90a974ec5db64beb0ed5ead8f43833bdd38ab8d55d1c97b66e8b\": rpc error: code = NotFound desc = an error occurred when try to find container \"8622a0f59abd90a974ec5db64beb0ed5ead8f43833bdd38ab8d55d1c97b66e8b\": not found" May 9 04:50:13.880869 kubelet[2635]: I0509 04:50:13.880864 2635 scope.go:117] "RemoveContainer" containerID="cae12d7fd6346e82d38de66bc53c45c957f31722226ea2cd3cb42ff461aab9f9" May 9 04:50:13.885483 containerd[1513]: time="2025-05-09T04:50:13.885450522Z" level=info msg="RemoveContainer for \"cae12d7fd6346e82d38de66bc53c45c957f31722226ea2cd3cb42ff461aab9f9\"" May 9 04:50:13.895313 containerd[1513]: time="2025-05-09T04:50:13.895270780Z" level=info msg="RemoveContainer for \"cae12d7fd6346e82d38de66bc53c45c957f31722226ea2cd3cb42ff461aab9f9\" returns successfully" May 9 04:50:13.895504 kubelet[2635]: I0509 04:50:13.895477 2635 scope.go:117] "RemoveContainer" containerID="b9a004e69bb0adbe16cb5e09f1e41cf0ba2c9acca3d167361932664b82ae7bba" May 9 04:50:13.896876 containerd[1513]: time="2025-05-09T04:50:13.896845120Z" level=info msg="RemoveContainer for \"b9a004e69bb0adbe16cb5e09f1e41cf0ba2c9acca3d167361932664b82ae7bba\"" May 9 04:50:13.900830 containerd[1513]: time="2025-05-09T04:50:13.900780872Z" level=info msg="RemoveContainer for 
\"b9a004e69bb0adbe16cb5e09f1e41cf0ba2c9acca3d167361932664b82ae7bba\" returns successfully" May 9 04:50:13.901050 kubelet[2635]: I0509 04:50:13.901025 2635 scope.go:117] "RemoveContainer" containerID="9491de7f26c5bcc3200552b308291c34c1dc942b80ddba419d3fc210b4cefc7a" May 9 04:50:13.903221 containerd[1513]: time="2025-05-09T04:50:13.903195485Z" level=info msg="RemoveContainer for \"9491de7f26c5bcc3200552b308291c34c1dc942b80ddba419d3fc210b4cefc7a\"" May 9 04:50:13.911376 containerd[1513]: time="2025-05-09T04:50:13.911147391Z" level=info msg="RemoveContainer for \"9491de7f26c5bcc3200552b308291c34c1dc942b80ddba419d3fc210b4cefc7a\" returns successfully" May 9 04:50:13.911458 kubelet[2635]: I0509 04:50:13.911342 2635 scope.go:117] "RemoveContainer" containerID="56d6b4a7f08c361f2d65a7cf595ff6cf922e44758f9f8f458313c2f1c24abfc7" May 9 04:50:13.914580 containerd[1513]: time="2025-05-09T04:50:13.914552802Z" level=info msg="RemoveContainer for \"56d6b4a7f08c361f2d65a7cf595ff6cf922e44758f9f8f458313c2f1c24abfc7\"" May 9 04:50:13.920487 containerd[1513]: time="2025-05-09T04:50:13.920454149Z" level=info msg="RemoveContainer for \"56d6b4a7f08c361f2d65a7cf595ff6cf922e44758f9f8f458313c2f1c24abfc7\" returns successfully" May 9 04:50:13.920742 kubelet[2635]: I0509 04:50:13.920638 2635 scope.go:117] "RemoveContainer" containerID="98914f20149a323bba469a193df5df04503966951d78099b4517899739ea98b0" May 9 04:50:13.922126 containerd[1513]: time="2025-05-09T04:50:13.922102813Z" level=info msg="RemoveContainer for \"98914f20149a323bba469a193df5df04503966951d78099b4517899739ea98b0\"" May 9 04:50:13.938217 containerd[1513]: time="2025-05-09T04:50:13.938186072Z" level=info msg="RemoveContainer for \"98914f20149a323bba469a193df5df04503966951d78099b4517899739ea98b0\" returns successfully" May 9 04:50:13.938511 kubelet[2635]: I0509 04:50:13.938389 2635 scope.go:117] "RemoveContainer" containerID="cae12d7fd6346e82d38de66bc53c45c957f31722226ea2cd3cb42ff461aab9f9" May 9 04:50:13.938668 containerd[1513]: 
time="2025-05-09T04:50:13.938623368Z" level=error msg="ContainerStatus for \"cae12d7fd6346e82d38de66bc53c45c957f31722226ea2cd3cb42ff461aab9f9\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"cae12d7fd6346e82d38de66bc53c45c957f31722226ea2cd3cb42ff461aab9f9\": not found" May 9 04:50:13.938837 kubelet[2635]: E0509 04:50:13.938792 2635 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"cae12d7fd6346e82d38de66bc53c45c957f31722226ea2cd3cb42ff461aab9f9\": not found" containerID="cae12d7fd6346e82d38de66bc53c45c957f31722226ea2cd3cb42ff461aab9f9" May 9 04:50:13.938875 kubelet[2635]: I0509 04:50:13.938842 2635 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"cae12d7fd6346e82d38de66bc53c45c957f31722226ea2cd3cb42ff461aab9f9"} err="failed to get container status \"cae12d7fd6346e82d38de66bc53c45c957f31722226ea2cd3cb42ff461aab9f9\": rpc error: code = NotFound desc = an error occurred when try to find container \"cae12d7fd6346e82d38de66bc53c45c957f31722226ea2cd3cb42ff461aab9f9\": not found" May 9 04:50:13.938875 kubelet[2635]: I0509 04:50:13.938867 2635 scope.go:117] "RemoveContainer" containerID="b9a004e69bb0adbe16cb5e09f1e41cf0ba2c9acca3d167361932664b82ae7bba" May 9 04:50:13.941679 containerd[1513]: time="2025-05-09T04:50:13.939350396Z" level=error msg="ContainerStatus for \"b9a004e69bb0adbe16cb5e09f1e41cf0ba2c9acca3d167361932664b82ae7bba\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"b9a004e69bb0adbe16cb5e09f1e41cf0ba2c9acca3d167361932664b82ae7bba\": not found" May 9 04:50:13.941679 containerd[1513]: time="2025-05-09T04:50:13.939828895Z" level=error msg="ContainerStatus for \"9491de7f26c5bcc3200552b308291c34c1dc942b80ddba419d3fc210b4cefc7a\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container 
\"9491de7f26c5bcc3200552b308291c34c1dc942b80ddba419d3fc210b4cefc7a\": not found" May 9 04:50:13.941679 containerd[1513]: time="2025-05-09T04:50:13.940215470Z" level=error msg="ContainerStatus for \"56d6b4a7f08c361f2d65a7cf595ff6cf922e44758f9f8f458313c2f1c24abfc7\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"56d6b4a7f08c361f2d65a7cf595ff6cf922e44758f9f8f458313c2f1c24abfc7\": not found" May 9 04:50:13.941679 containerd[1513]: time="2025-05-09T04:50:13.940484880Z" level=error msg="ContainerStatus for \"98914f20149a323bba469a193df5df04503966951d78099b4517899739ea98b0\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"98914f20149a323bba469a193df5df04503966951d78099b4517899739ea98b0\": not found" May 9 04:50:13.941823 kubelet[2635]: E0509 04:50:13.939478 2635 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"b9a004e69bb0adbe16cb5e09f1e41cf0ba2c9acca3d167361932664b82ae7bba\": not found" containerID="b9a004e69bb0adbe16cb5e09f1e41cf0ba2c9acca3d167361932664b82ae7bba" May 9 04:50:13.941823 kubelet[2635]: I0509 04:50:13.939517 2635 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"b9a004e69bb0adbe16cb5e09f1e41cf0ba2c9acca3d167361932664b82ae7bba"} err="failed to get container status \"b9a004e69bb0adbe16cb5e09f1e41cf0ba2c9acca3d167361932664b82ae7bba\": rpc error: code = NotFound desc = an error occurred when try to find container \"b9a004e69bb0adbe16cb5e09f1e41cf0ba2c9acca3d167361932664b82ae7bba\": not found" May 9 04:50:13.941823 kubelet[2635]: I0509 04:50:13.939534 2635 scope.go:117] "RemoveContainer" containerID="9491de7f26c5bcc3200552b308291c34c1dc942b80ddba419d3fc210b4cefc7a" May 9 04:50:13.941823 kubelet[2635]: E0509 04:50:13.940016 2635 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an 
error occurred when try to find container \"9491de7f26c5bcc3200552b308291c34c1dc942b80ddba419d3fc210b4cefc7a\": not found" containerID="9491de7f26c5bcc3200552b308291c34c1dc942b80ddba419d3fc210b4cefc7a" May 9 04:50:13.941823 kubelet[2635]: I0509 04:50:13.940041 2635 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"9491de7f26c5bcc3200552b308291c34c1dc942b80ddba419d3fc210b4cefc7a"} err="failed to get container status \"9491de7f26c5bcc3200552b308291c34c1dc942b80ddba419d3fc210b4cefc7a\": rpc error: code = NotFound desc = an error occurred when try to find container \"9491de7f26c5bcc3200552b308291c34c1dc942b80ddba419d3fc210b4cefc7a\": not found" May 9 04:50:13.941823 kubelet[2635]: I0509 04:50:13.940054 2635 scope.go:117] "RemoveContainer" containerID="56d6b4a7f08c361f2d65a7cf595ff6cf922e44758f9f8f458313c2f1c24abfc7" May 9 04:50:13.941995 kubelet[2635]: E0509 04:50:13.940327 2635 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"56d6b4a7f08c361f2d65a7cf595ff6cf922e44758f9f8f458313c2f1c24abfc7\": not found" containerID="56d6b4a7f08c361f2d65a7cf595ff6cf922e44758f9f8f458313c2f1c24abfc7" May 9 04:50:13.941995 kubelet[2635]: I0509 04:50:13.940344 2635 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"56d6b4a7f08c361f2d65a7cf595ff6cf922e44758f9f8f458313c2f1c24abfc7"} err="failed to get container status \"56d6b4a7f08c361f2d65a7cf595ff6cf922e44758f9f8f458313c2f1c24abfc7\": rpc error: code = NotFound desc = an error occurred when try to find container \"56d6b4a7f08c361f2d65a7cf595ff6cf922e44758f9f8f458313c2f1c24abfc7\": not found" May 9 04:50:13.941995 kubelet[2635]: I0509 04:50:13.940359 2635 scope.go:117] "RemoveContainer" containerID="98914f20149a323bba469a193df5df04503966951d78099b4517899739ea98b0" May 9 04:50:13.941995 kubelet[2635]: E0509 04:50:13.940731 2635 log.go:32] 
"ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"98914f20149a323bba469a193df5df04503966951d78099b4517899739ea98b0\": not found" containerID="98914f20149a323bba469a193df5df04503966951d78099b4517899739ea98b0" May 9 04:50:13.941995 kubelet[2635]: I0509 04:50:13.940776 2635 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"98914f20149a323bba469a193df5df04503966951d78099b4517899739ea98b0"} err="failed to get container status \"98914f20149a323bba469a193df5df04503966951d78099b4517899739ea98b0\": rpc error: code = NotFound desc = an error occurred when try to find container \"98914f20149a323bba469a193df5df04503966951d78099b4517899739ea98b0\": not found" May 9 04:50:14.017593 systemd[1]: var-lib-kubelet-pods-e7d1c837\x2dad9a\x2d430e\x2dbb45\x2d3fc7237ab9c4-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dzk7r9.mount: Deactivated successfully. May 9 04:50:14.017733 systemd[1]: var-lib-kubelet-pods-762745fc\x2d5d54\x2d4e21\x2d9564\x2d4da41c1a05c2-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d2fgpn.mount: Deactivated successfully. May 9 04:50:14.017794 systemd[1]: var-lib-kubelet-pods-762745fc\x2d5d54\x2d4e21\x2d9564\x2d4da41c1a05c2-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. May 9 04:50:14.018076 systemd[1]: var-lib-kubelet-pods-762745fc\x2d5d54\x2d4e21\x2d9564\x2d4da41c1a05c2-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. May 9 04:50:14.928354 sshd[4244]: Connection closed by 10.0.0.1 port 46848 May 9 04:50:14.930505 sshd-session[4241]: pam_unix(sshd:session): session closed for user core May 9 04:50:14.940298 systemd[1]: sshd@22-10.0.0.27:22-10.0.0.1:46848.service: Deactivated successfully. May 9 04:50:14.942239 systemd[1]: session-23.scope: Deactivated successfully. 
May 9 04:50:14.942537 systemd[1]: session-23.scope: Consumed 1.654s CPU time, 27.7M memory peak. May 9 04:50:14.943927 systemd-logind[1503]: Session 23 logged out. Waiting for processes to exit. May 9 04:50:14.946067 systemd[1]: Started sshd@23-10.0.0.27:22-10.0.0.1:46486.service - OpenSSH per-connection server daemon (10.0.0.1:46486). May 9 04:50:14.947602 systemd-logind[1503]: Removed session 23. May 9 04:50:14.993681 sshd[4392]: Accepted publickey for core from 10.0.0.1 port 46486 ssh2: RSA SHA256:cGfwtCSR41ihX2TEzmGxMnuwv2fv9xnRwxIIs+hE9lQ May 9 04:50:14.994719 sshd-session[4392]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 9 04:50:14.999478 systemd-logind[1503]: New session 24 of user core. May 9 04:50:15.007838 systemd[1]: Started session-24.scope - Session 24 of User core. May 9 04:50:15.538599 kubelet[2635]: I0509 04:50:15.538554 2635 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="762745fc-5d54-4e21-9564-4da41c1a05c2" path="/var/lib/kubelet/pods/762745fc-5d54-4e21-9564-4da41c1a05c2/volumes" May 9 04:50:15.539159 kubelet[2635]: I0509 04:50:15.539130 2635 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7d1c837-ad9a-430e-bb45-3fc7237ab9c4" path="/var/lib/kubelet/pods/e7d1c837-ad9a-430e-bb45-3fc7237ab9c4/volumes" May 9 04:50:15.879791 sshd[4395]: Connection closed by 10.0.0.1 port 46486 May 9 04:50:15.880328 sshd-session[4392]: pam_unix(sshd:session): session closed for user core May 9 04:50:15.894513 systemd[1]: sshd@23-10.0.0.27:22-10.0.0.1:46486.service: Deactivated successfully. May 9 04:50:15.896410 systemd[1]: session-24.scope: Deactivated successfully. 
May 9 04:50:15.898154 kubelet[2635]: E0509 04:50:15.898091 2635 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="762745fc-5d54-4e21-9564-4da41c1a05c2" containerName="mount-bpf-fs" May 9 04:50:15.898154 kubelet[2635]: E0509 04:50:15.898137 2635 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="762745fc-5d54-4e21-9564-4da41c1a05c2" containerName="cilium-agent" May 9 04:50:15.898154 kubelet[2635]: E0509 04:50:15.898146 2635 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="e7d1c837-ad9a-430e-bb45-3fc7237ab9c4" containerName="cilium-operator" May 9 04:50:15.898154 kubelet[2635]: E0509 04:50:15.898152 2635 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="762745fc-5d54-4e21-9564-4da41c1a05c2" containerName="mount-cgroup" May 9 04:50:15.898154 kubelet[2635]: E0509 04:50:15.898158 2635 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="762745fc-5d54-4e21-9564-4da41c1a05c2" containerName="apply-sysctl-overwrites" May 9 04:50:15.898154 kubelet[2635]: E0509 04:50:15.898164 2635 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="762745fc-5d54-4e21-9564-4da41c1a05c2" containerName="clean-cilium-state" May 9 04:50:15.898337 kubelet[2635]: I0509 04:50:15.898193 2635 memory_manager.go:354] "RemoveStaleState removing state" podUID="e7d1c837-ad9a-430e-bb45-3fc7237ab9c4" containerName="cilium-operator" May 9 04:50:15.898337 kubelet[2635]: I0509 04:50:15.898199 2635 memory_manager.go:354] "RemoveStaleState removing state" podUID="762745fc-5d54-4e21-9564-4da41c1a05c2" containerName="cilium-agent" May 9 04:50:15.901024 systemd-logind[1503]: Session 24 logged out. Waiting for processes to exit. May 9 04:50:15.905578 systemd[1]: Started sshd@24-10.0.0.27:22-10.0.0.1:46492.service - OpenSSH per-connection server daemon (10.0.0.1:46492). May 9 04:50:15.913843 systemd-logind[1503]: Removed session 24. 
May 9 04:50:15.929593 systemd[1]: Created slice kubepods-burstable-pod21bff87f_98ba_4b72_8010_9e92de6e5f6b.slice - libcontainer container kubepods-burstable-pod21bff87f_98ba_4b72_8010_9e92de6e5f6b.slice. May 9 04:50:15.961545 sshd[4406]: Accepted publickey for core from 10.0.0.1 port 46492 ssh2: RSA SHA256:cGfwtCSR41ihX2TEzmGxMnuwv2fv9xnRwxIIs+hE9lQ May 9 04:50:15.962805 sshd-session[4406]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 9 04:50:15.966919 systemd-logind[1503]: New session 25 of user core. May 9 04:50:15.975825 systemd[1]: Started session-25.scope - Session 25 of User core. May 9 04:50:16.013130 kubelet[2635]: I0509 04:50:16.013086 2635 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/21bff87f-98ba-4b72-8010-9e92de6e5f6b-cilium-ipsec-secrets\") pod \"cilium-x868j\" (UID: \"21bff87f-98ba-4b72-8010-9e92de6e5f6b\") " pod="kube-system/cilium-x868j" May 9 04:50:16.013350 kubelet[2635]: I0509 04:50:16.013306 2635 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/21bff87f-98ba-4b72-8010-9e92de6e5f6b-bpf-maps\") pod \"cilium-x868j\" (UID: \"21bff87f-98ba-4b72-8010-9e92de6e5f6b\") " pod="kube-system/cilium-x868j" May 9 04:50:16.013498 kubelet[2635]: I0509 04:50:16.013335 2635 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/21bff87f-98ba-4b72-8010-9e92de6e5f6b-cilium-run\") pod \"cilium-x868j\" (UID: \"21bff87f-98ba-4b72-8010-9e92de6e5f6b\") " pod="kube-system/cilium-x868j" May 9 04:50:16.013498 kubelet[2635]: I0509 04:50:16.013457 2635 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: 
\"kubernetes.io/host-path/21bff87f-98ba-4b72-8010-9e92de6e5f6b-cilium-cgroup\") pod \"cilium-x868j\" (UID: \"21bff87f-98ba-4b72-8010-9e92de6e5f6b\") " pod="kube-system/cilium-x868j" May 9 04:50:16.013665 kubelet[2635]: I0509 04:50:16.013579 2635 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/21bff87f-98ba-4b72-8010-9e92de6e5f6b-clustermesh-secrets\") pod \"cilium-x868j\" (UID: \"21bff87f-98ba-4b72-8010-9e92de6e5f6b\") " pod="kube-system/cilium-x868j" May 9 04:50:16.013665 kubelet[2635]: I0509 04:50:16.013605 2635 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6fjtj\" (UniqueName: \"kubernetes.io/projected/21bff87f-98ba-4b72-8010-9e92de6e5f6b-kube-api-access-6fjtj\") pod \"cilium-x868j\" (UID: \"21bff87f-98ba-4b72-8010-9e92de6e5f6b\") " pod="kube-system/cilium-x868j" May 9 04:50:16.013665 kubelet[2635]: I0509 04:50:16.013626 2635 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/21bff87f-98ba-4b72-8010-9e92de6e5f6b-hubble-tls\") pod \"cilium-x868j\" (UID: \"21bff87f-98ba-4b72-8010-9e92de6e5f6b\") " pod="kube-system/cilium-x868j" May 9 04:50:16.013918 kubelet[2635]: I0509 04:50:16.013641 2635 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/21bff87f-98ba-4b72-8010-9e92de6e5f6b-hostproc\") pod \"cilium-x868j\" (UID: \"21bff87f-98ba-4b72-8010-9e92de6e5f6b\") " pod="kube-system/cilium-x868j" May 9 04:50:16.013918 kubelet[2635]: I0509 04:50:16.013789 2635 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/21bff87f-98ba-4b72-8010-9e92de6e5f6b-cilium-config-path\") pod \"cilium-x868j\" (UID: 
\"21bff87f-98ba-4b72-8010-9e92de6e5f6b\") " pod="kube-system/cilium-x868j" May 9 04:50:16.013918 kubelet[2635]: I0509 04:50:16.013808 2635 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/21bff87f-98ba-4b72-8010-9e92de6e5f6b-cni-path\") pod \"cilium-x868j\" (UID: \"21bff87f-98ba-4b72-8010-9e92de6e5f6b\") " pod="kube-system/cilium-x868j" May 9 04:50:16.013918 kubelet[2635]: I0509 04:50:16.013822 2635 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/21bff87f-98ba-4b72-8010-9e92de6e5f6b-etc-cni-netd\") pod \"cilium-x868j\" (UID: \"21bff87f-98ba-4b72-8010-9e92de6e5f6b\") " pod="kube-system/cilium-x868j" May 9 04:50:16.013918 kubelet[2635]: I0509 04:50:16.013853 2635 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/21bff87f-98ba-4b72-8010-9e92de6e5f6b-host-proc-sys-net\") pod \"cilium-x868j\" (UID: \"21bff87f-98ba-4b72-8010-9e92de6e5f6b\") " pod="kube-system/cilium-x868j" May 9 04:50:16.013918 kubelet[2635]: I0509 04:50:16.013871 2635 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/21bff87f-98ba-4b72-8010-9e92de6e5f6b-host-proc-sys-kernel\") pod \"cilium-x868j\" (UID: \"21bff87f-98ba-4b72-8010-9e92de6e5f6b\") " pod="kube-system/cilium-x868j" May 9 04:50:16.014061 kubelet[2635]: I0509 04:50:16.013885 2635 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/21bff87f-98ba-4b72-8010-9e92de6e5f6b-xtables-lock\") pod \"cilium-x868j\" (UID: \"21bff87f-98ba-4b72-8010-9e92de6e5f6b\") " pod="kube-system/cilium-x868j" May 9 04:50:16.014061 kubelet[2635]: I0509 
04:50:16.013900 2635 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/21bff87f-98ba-4b72-8010-9e92de6e5f6b-lib-modules\") pod \"cilium-x868j\" (UID: \"21bff87f-98ba-4b72-8010-9e92de6e5f6b\") " pod="kube-system/cilium-x868j" May 9 04:50:16.027878 sshd[4410]: Connection closed by 10.0.0.1 port 46492 May 9 04:50:16.027015 sshd-session[4406]: pam_unix(sshd:session): session closed for user core May 9 04:50:16.038179 systemd[1]: sshd@24-10.0.0.27:22-10.0.0.1:46492.service: Deactivated successfully. May 9 04:50:16.040619 systemd[1]: session-25.scope: Deactivated successfully. May 9 04:50:16.041554 systemd-logind[1503]: Session 25 logged out. Waiting for processes to exit. May 9 04:50:16.044149 systemd[1]: Started sshd@25-10.0.0.27:22-10.0.0.1:46502.service - OpenSSH per-connection server daemon (10.0.0.1:46502). May 9 04:50:16.045319 systemd-logind[1503]: Removed session 25. May 9 04:50:16.094637 sshd[4416]: Accepted publickey for core from 10.0.0.1 port 46502 ssh2: RSA SHA256:cGfwtCSR41ihX2TEzmGxMnuwv2fv9xnRwxIIs+hE9lQ May 9 04:50:16.095927 sshd-session[4416]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 9 04:50:16.100113 systemd-logind[1503]: New session 26 of user core. May 9 04:50:16.111811 systemd[1]: Started session-26.scope - Session 26 of User core. 
May 9 04:50:16.235599 containerd[1513]: time="2025-05-09T04:50:16.235433343Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-x868j,Uid:21bff87f-98ba-4b72-8010-9e92de6e5f6b,Namespace:kube-system,Attempt:0,}"
May 9 04:50:16.249864 containerd[1513]: time="2025-05-09T04:50:16.249804566Z" level=info msg="connecting to shim 88b975c58cb3813c4a2d729900010ce9b4e5d488bb6bf274f7390fef69f5c048" address="unix:///run/containerd/s/1474be8ea88829a7e8a6f5ccad5ea6d9c8cb05f3a0f2898fb30310470f32e531" namespace=k8s.io protocol=ttrpc version=3
May 9 04:50:16.273825 systemd[1]: Started cri-containerd-88b975c58cb3813c4a2d729900010ce9b4e5d488bb6bf274f7390fef69f5c048.scope - libcontainer container 88b975c58cb3813c4a2d729900010ce9b4e5d488bb6bf274f7390fef69f5c048.
May 9 04:50:16.297254 containerd[1513]: time="2025-05-09T04:50:16.297206985Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-x868j,Uid:21bff87f-98ba-4b72-8010-9e92de6e5f6b,Namespace:kube-system,Attempt:0,} returns sandbox id \"88b975c58cb3813c4a2d729900010ce9b4e5d488bb6bf274f7390fef69f5c048\""
May 9 04:50:16.300665 containerd[1513]: time="2025-05-09T04:50:16.300621464Z" level=info msg="CreateContainer within sandbox \"88b975c58cb3813c4a2d729900010ce9b4e5d488bb6bf274f7390fef69f5c048\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
May 9 04:50:16.312517 containerd[1513]: time="2025-05-09T04:50:16.312462799Z" level=info msg="Container f2a96cfb75b568eb7dd5a32ad45bdd5b38b61846c2ea263c0ce055659322b339: CDI devices from CRI Config.CDIDevices: []"
May 9 04:50:16.319770 containerd[1513]: time="2025-05-09T04:50:16.319729453Z" level=info msg="CreateContainer within sandbox \"88b975c58cb3813c4a2d729900010ce9b4e5d488bb6bf274f7390fef69f5c048\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"f2a96cfb75b568eb7dd5a32ad45bdd5b38b61846c2ea263c0ce055659322b339\""
May 9 04:50:16.320616 containerd[1513]: time="2025-05-09T04:50:16.320209670Z" level=info msg="StartContainer for \"f2a96cfb75b568eb7dd5a32ad45bdd5b38b61846c2ea263c0ce055659322b339\""
May 9 04:50:16.321158 containerd[1513]: time="2025-05-09T04:50:16.321116581Z" level=info msg="connecting to shim f2a96cfb75b568eb7dd5a32ad45bdd5b38b61846c2ea263c0ce055659322b339" address="unix:///run/containerd/s/1474be8ea88829a7e8a6f5ccad5ea6d9c8cb05f3a0f2898fb30310470f32e531" protocol=ttrpc version=3
May 9 04:50:16.341839 systemd[1]: Started cri-containerd-f2a96cfb75b568eb7dd5a32ad45bdd5b38b61846c2ea263c0ce055659322b339.scope - libcontainer container f2a96cfb75b568eb7dd5a32ad45bdd5b38b61846c2ea263c0ce055659322b339.
May 9 04:50:16.367688 containerd[1513]: time="2025-05-09T04:50:16.367637049Z" level=info msg="StartContainer for \"f2a96cfb75b568eb7dd5a32ad45bdd5b38b61846c2ea263c0ce055659322b339\" returns successfully"
May 9 04:50:16.376936 systemd[1]: cri-containerd-f2a96cfb75b568eb7dd5a32ad45bdd5b38b61846c2ea263c0ce055659322b339.scope: Deactivated successfully.
May 9 04:50:16.378397 containerd[1513]: time="2025-05-09T04:50:16.378365425Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f2a96cfb75b568eb7dd5a32ad45bdd5b38b61846c2ea263c0ce055659322b339\" id:\"f2a96cfb75b568eb7dd5a32ad45bdd5b38b61846c2ea263c0ce055659322b339\" pid:4488 exited_at:{seconds:1746766216 nanos:378013412}"
May 9 04:50:16.378451 containerd[1513]: time="2025-05-09T04:50:16.378436467Z" level=info msg="received exit event container_id:\"f2a96cfb75b568eb7dd5a32ad45bdd5b38b61846c2ea263c0ce055659322b339\" id:\"f2a96cfb75b568eb7dd5a32ad45bdd5b38b61846c2ea263c0ce055659322b339\" pid:4488 exited_at:{seconds:1746766216 nanos:378013412}"
May 9 04:50:16.755378 containerd[1513]: time="2025-05-09T04:50:16.754949122Z" level=info msg="CreateContainer within sandbox \"88b975c58cb3813c4a2d729900010ce9b4e5d488bb6bf274f7390fef69f5c048\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
May 9 04:50:16.760288 containerd[1513]: time="2025-05-09T04:50:16.760195426Z" level=info msg="Container c1467aa79bfcdbf2829863547edfba80b85cefe964a113ba6e2ac22da24b42f8: CDI devices from CRI Config.CDIDevices: []"
May 9 04:50:16.767058 containerd[1513]: time="2025-05-09T04:50:16.766904021Z" level=info msg="CreateContainer within sandbox \"88b975c58cb3813c4a2d729900010ce9b4e5d488bb6bf274f7390fef69f5c048\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"c1467aa79bfcdbf2829863547edfba80b85cefe964a113ba6e2ac22da24b42f8\""
May 9 04:50:16.768456 containerd[1513]: time="2025-05-09T04:50:16.768431914Z" level=info msg="StartContainer for \"c1467aa79bfcdbf2829863547edfba80b85cefe964a113ba6e2ac22da24b42f8\""
May 9 04:50:16.769244 containerd[1513]: time="2025-05-09T04:50:16.769211742Z" level=info msg="connecting to shim c1467aa79bfcdbf2829863547edfba80b85cefe964a113ba6e2ac22da24b42f8" address="unix:///run/containerd/s/1474be8ea88829a7e8a6f5ccad5ea6d9c8cb05f3a0f2898fb30310470f32e531" protocol=ttrpc version=3
May 9 04:50:16.791825 systemd[1]: Started cri-containerd-c1467aa79bfcdbf2829863547edfba80b85cefe964a113ba6e2ac22da24b42f8.scope - libcontainer container c1467aa79bfcdbf2829863547edfba80b85cefe964a113ba6e2ac22da24b42f8.
May 9 04:50:16.823460 containerd[1513]: time="2025-05-09T04:50:16.821750940Z" level=info msg="StartContainer for \"c1467aa79bfcdbf2829863547edfba80b85cefe964a113ba6e2ac22da24b42f8\" returns successfully"
May 9 04:50:16.826539 systemd[1]: cri-containerd-c1467aa79bfcdbf2829863547edfba80b85cefe964a113ba6e2ac22da24b42f8.scope: Deactivated successfully.
May 9 04:50:16.827000 containerd[1513]: time="2025-05-09T04:50:16.826972403Z" level=info msg="received exit event container_id:\"c1467aa79bfcdbf2829863547edfba80b85cefe964a113ba6e2ac22da24b42f8\" id:\"c1467aa79bfcdbf2829863547edfba80b85cefe964a113ba6e2ac22da24b42f8\" pid:4533 exited_at:{seconds:1746766216 nanos:826733714}"
May 9 04:50:16.827768 containerd[1513]: time="2025-05-09T04:50:16.827736069Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c1467aa79bfcdbf2829863547edfba80b85cefe964a113ba6e2ac22da24b42f8\" id:\"c1467aa79bfcdbf2829863547edfba80b85cefe964a113ba6e2ac22da24b42f8\" pid:4533 exited_at:{seconds:1746766216 nanos:826733714}"
May 9 04:50:17.576273 kubelet[2635]: E0509 04:50:17.576233 2635 kubelet.go:2901] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
May 9 04:50:17.759868 containerd[1513]: time="2025-05-09T04:50:17.759825737Z" level=info msg="CreateContainer within sandbox \"88b975c58cb3813c4a2d729900010ce9b4e5d488bb6bf274f7390fef69f5c048\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
May 9 04:50:17.800409 containerd[1513]: time="2025-05-09T04:50:17.800356311Z" level=info msg="Container 81157aad6afac704305212a265394c6710576b64c589ae50100f7b2f6b896946: CDI devices from CRI Config.CDIDevices: []"
May 9 04:50:17.808250 containerd[1513]: time="2025-05-09T04:50:17.808205257Z" level=info msg="CreateContainer within sandbox \"88b975c58cb3813c4a2d729900010ce9b4e5d488bb6bf274f7390fef69f5c048\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"81157aad6afac704305212a265394c6710576b64c589ae50100f7b2f6b896946\""
May 9 04:50:17.808870 containerd[1513]: time="2025-05-09T04:50:17.808830918Z" level=info msg="StartContainer for \"81157aad6afac704305212a265394c6710576b64c589ae50100f7b2f6b896946\""
May 9 04:50:17.810362 containerd[1513]: time="2025-05-09T04:50:17.810331129Z" level=info msg="connecting to shim 81157aad6afac704305212a265394c6710576b64c589ae50100f7b2f6b896946" address="unix:///run/containerd/s/1474be8ea88829a7e8a6f5ccad5ea6d9c8cb05f3a0f2898fb30310470f32e531" protocol=ttrpc version=3
May 9 04:50:17.831913 systemd[1]: Started cri-containerd-81157aad6afac704305212a265394c6710576b64c589ae50100f7b2f6b896946.scope - libcontainer container 81157aad6afac704305212a265394c6710576b64c589ae50100f7b2f6b896946.
May 9 04:50:17.883045 systemd[1]: cri-containerd-81157aad6afac704305212a265394c6710576b64c589ae50100f7b2f6b896946.scope: Deactivated successfully.
May 9 04:50:17.883613 containerd[1513]: time="2025-05-09T04:50:17.883572372Z" level=info msg="StartContainer for \"81157aad6afac704305212a265394c6710576b64c589ae50100f7b2f6b896946\" returns successfully"
May 9 04:50:17.885358 containerd[1513]: time="2025-05-09T04:50:17.885303870Z" level=info msg="received exit event container_id:\"81157aad6afac704305212a265394c6710576b64c589ae50100f7b2f6b896946\" id:\"81157aad6afac704305212a265394c6710576b64c589ae50100f7b2f6b896946\" pid:4577 exited_at:{seconds:1746766217 nanos:885095263}"
May 9 04:50:17.885431 containerd[1513]: time="2025-05-09T04:50:17.885415634Z" level=info msg="TaskExit event in podsandbox handler container_id:\"81157aad6afac704305212a265394c6710576b64c589ae50100f7b2f6b896946\" id:\"81157aad6afac704305212a265394c6710576b64c589ae50100f7b2f6b896946\" pid:4577 exited_at:{seconds:1746766217 nanos:885095263}"
May 9 04:50:17.903899 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-81157aad6afac704305212a265394c6710576b64c589ae50100f7b2f6b896946-rootfs.mount: Deactivated successfully.
May 9 04:50:18.763513 containerd[1513]: time="2025-05-09T04:50:18.763463193Z" level=info msg="CreateContainer within sandbox \"88b975c58cb3813c4a2d729900010ce9b4e5d488bb6bf274f7390fef69f5c048\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
May 9 04:50:18.769551 containerd[1513]: time="2025-05-09T04:50:18.769493911Z" level=info msg="Container 9f39b4cc251d642bc35a113d9f0a3e062ce5a5485e3d8b008c555e2db36b12a9: CDI devices from CRI Config.CDIDevices: []"
May 9 04:50:18.778466 containerd[1513]: time="2025-05-09T04:50:18.778289920Z" level=info msg="CreateContainer within sandbox \"88b975c58cb3813c4a2d729900010ce9b4e5d488bb6bf274f7390fef69f5c048\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"9f39b4cc251d642bc35a113d9f0a3e062ce5a5485e3d8b008c555e2db36b12a9\""
May 9 04:50:18.778878 containerd[1513]: time="2025-05-09T04:50:18.778845138Z" level=info msg="StartContainer for \"9f39b4cc251d642bc35a113d9f0a3e062ce5a5485e3d8b008c555e2db36b12a9\""
May 9 04:50:18.779722 containerd[1513]: time="2025-05-09T04:50:18.779680885Z" level=info msg="connecting to shim 9f39b4cc251d642bc35a113d9f0a3e062ce5a5485e3d8b008c555e2db36b12a9" address="unix:///run/containerd/s/1474be8ea88829a7e8a6f5ccad5ea6d9c8cb05f3a0f2898fb30310470f32e531" protocol=ttrpc version=3
May 9 04:50:18.799809 systemd[1]: Started cri-containerd-9f39b4cc251d642bc35a113d9f0a3e062ce5a5485e3d8b008c555e2db36b12a9.scope - libcontainer container 9f39b4cc251d642bc35a113d9f0a3e062ce5a5485e3d8b008c555e2db36b12a9.
May 9 04:50:18.825086 systemd[1]: cri-containerd-9f39b4cc251d642bc35a113d9f0a3e062ce5a5485e3d8b008c555e2db36b12a9.scope: Deactivated successfully.
May 9 04:50:18.826507 containerd[1513]: time="2025-05-09T04:50:18.826467102Z" level=info msg="TaskExit event in podsandbox handler container_id:\"9f39b4cc251d642bc35a113d9f0a3e062ce5a5485e3d8b008c555e2db36b12a9\" id:\"9f39b4cc251d642bc35a113d9f0a3e062ce5a5485e3d8b008c555e2db36b12a9\" pid:4617 exited_at:{seconds:1746766218 nanos:826238934}"
May 9 04:50:18.828688 containerd[1513]: time="2025-05-09T04:50:18.827846867Z" level=info msg="received exit event container_id:\"9f39b4cc251d642bc35a113d9f0a3e062ce5a5485e3d8b008c555e2db36b12a9\" id:\"9f39b4cc251d642bc35a113d9f0a3e062ce5a5485e3d8b008c555e2db36b12a9\" pid:4617 exited_at:{seconds:1746766218 nanos:826238934}"
May 9 04:50:18.829664 containerd[1513]: time="2025-05-09T04:50:18.829598165Z" level=info msg="StartContainer for \"9f39b4cc251d642bc35a113d9f0a3e062ce5a5485e3d8b008c555e2db36b12a9\" returns successfully"
May 9 04:50:18.844525 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9f39b4cc251d642bc35a113d9f0a3e062ce5a5485e3d8b008c555e2db36b12a9-rootfs.mount: Deactivated successfully.
May 9 04:50:19.370403 kubelet[2635]: I0509 04:50:19.370055 2635 setters.go:600] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-05-09T04:50:19Z","lastTransitionTime":"2025-05-09T04:50:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
May 9 04:50:19.768994 containerd[1513]: time="2025-05-09T04:50:19.768880184Z" level=info msg="CreateContainer within sandbox \"88b975c58cb3813c4a2d729900010ce9b4e5d488bb6bf274f7390fef69f5c048\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
May 9 04:50:19.779964 containerd[1513]: time="2025-05-09T04:50:19.779233833Z" level=info msg="Container a743ecfabca2fd701ef0c7e397dac698cfa50eb06627980e989c7485f276c54c: CDI devices from CRI Config.CDIDevices: []"
May 9 04:50:19.782726 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2429636036.mount: Deactivated successfully.
May 9 04:50:19.788154 containerd[1513]: time="2025-05-09T04:50:19.788110195Z" level=info msg="CreateContainer within sandbox \"88b975c58cb3813c4a2d729900010ce9b4e5d488bb6bf274f7390fef69f5c048\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"a743ecfabca2fd701ef0c7e397dac698cfa50eb06627980e989c7485f276c54c\""
May 9 04:50:19.789549 containerd[1513]: time="2025-05-09T04:50:19.788750336Z" level=info msg="StartContainer for \"a743ecfabca2fd701ef0c7e397dac698cfa50eb06627980e989c7485f276c54c\""
May 9 04:50:19.789754 containerd[1513]: time="2025-05-09T04:50:19.789724767Z" level=info msg="connecting to shim a743ecfabca2fd701ef0c7e397dac698cfa50eb06627980e989c7485f276c54c" address="unix:///run/containerd/s/1474be8ea88829a7e8a6f5ccad5ea6d9c8cb05f3a0f2898fb30310470f32e531" protocol=ttrpc version=3
May 9 04:50:19.807805 systemd[1]: Started cri-containerd-a743ecfabca2fd701ef0c7e397dac698cfa50eb06627980e989c7485f276c54c.scope - libcontainer container a743ecfabca2fd701ef0c7e397dac698cfa50eb06627980e989c7485f276c54c.
May 9 04:50:19.836846 containerd[1513]: time="2025-05-09T04:50:19.836800784Z" level=info msg="StartContainer for \"a743ecfabca2fd701ef0c7e397dac698cfa50eb06627980e989c7485f276c54c\" returns successfully"
May 9 04:50:19.889147 containerd[1513]: time="2025-05-09T04:50:19.889085048Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a743ecfabca2fd701ef0c7e397dac698cfa50eb06627980e989c7485f276c54c\" id:\"e854870fb2d0fd8134780d86e4f85bd8377b8c5f64f1e957446c9cde30aee63a\" pid:4686 exited_at:{seconds:1746766219 nanos:888785318}"
May 9 04:50:20.103696 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce))
May 9 04:50:20.786923 kubelet[2635]: I0509 04:50:20.785818 2635 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-x868j" podStartSLOduration=5.785800236 podStartE2EDuration="5.785800236s" podCreationTimestamp="2025-05-09 04:50:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-09 04:50:20.785387664 +0000 UTC m=+83.342469282" watchObservedRunningTime="2025-05-09 04:50:20.785800236 +0000 UTC m=+83.342881814"
May 9 04:50:22.471584 containerd[1513]: time="2025-05-09T04:50:22.471544975Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a743ecfabca2fd701ef0c7e397dac698cfa50eb06627980e989c7485f276c54c\" id:\"8026523589294020f5045b31ec85567dd03c0ecaf29cfc8b17c88f79c9b3948f\" pid:5066 exit_status:1 exited_at:{seconds:1746766222 nanos:471159804}"
May 9 04:50:22.987094 systemd-networkd[1438]: lxc_health: Link UP
May 9 04:50:22.987332 systemd-networkd[1438]: lxc_health: Gained carrier
May 9 04:50:24.605732 containerd[1513]: time="2025-05-09T04:50:24.605619040Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a743ecfabca2fd701ef0c7e397dac698cfa50eb06627980e989c7485f276c54c\" id:\"e31177d5b740b55c2c4c1720d656d4f0fbff861d03218a07ebe2db4da0bfc0e0\" pid:5224 exited_at:{seconds:1746766224 nanos:604791138}"
May 9 04:50:24.966877 systemd-networkd[1438]: lxc_health: Gained IPv6LL
May 9 04:50:26.700963 containerd[1513]: time="2025-05-09T04:50:26.700916378Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a743ecfabca2fd701ef0c7e397dac698cfa50eb06627980e989c7485f276c54c\" id:\"f139c00bbb50510e4b933371559c821c4b8882c9c24bdbca424492b49d5d1066\" pid:5257 exited_at:{seconds:1746766226 nanos:700609250}"
May 9 04:50:28.832757 containerd[1513]: time="2025-05-09T04:50:28.831778843Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a743ecfabca2fd701ef0c7e397dac698cfa50eb06627980e989c7485f276c54c\" id:\"0a71442fc2f22b48a71ad990bc46738e33caf4f35cd624a6777e5005c20e83ca\" pid:5281 exited_at:{seconds:1746766228 nanos:831320832}"
May 9 04:50:28.837219 sshd[4419]: Connection closed by 10.0.0.1 port 46502
May 9 04:50:28.838184 sshd-session[4416]: pam_unix(sshd:session): session closed for user core
May 9 04:50:28.842202 systemd[1]: sshd@25-10.0.0.27:22-10.0.0.1:46502.service: Deactivated successfully.
May 9 04:50:28.844018 systemd[1]: session-26.scope: Deactivated successfully.
May 9 04:50:28.845565 systemd-logind[1503]: Session 26 logged out. Waiting for processes to exit.
May 9 04:50:28.847153 systemd-logind[1503]: Removed session 26.