May 7 23:37:02.928435 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
May 7 23:37:02.928457 kernel: Linux version 6.6.88-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.43 p3) 2.43.1) #1 SMP PREEMPT Wed May 7 22:21:35 -00 2025
May 7 23:37:02.928466 kernel: KASLR enabled
May 7 23:37:02.928472 kernel: efi: EFI v2.7 by EDK II
May 7 23:37:02.928477 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdbbae018 ACPI 2.0=0xd9b43018 RNG=0xd9b43a18 MEMRESERVE=0xd9b40218
May 7 23:37:02.928483 kernel: random: crng init done
May 7 23:37:02.928489 kernel: secureboot: Secure boot disabled
May 7 23:37:02.928495 kernel: ACPI: Early table checksum verification disabled
May 7 23:37:02.928501 kernel: ACPI: RSDP 0x00000000D9B43018 000024 (v02 BOCHS )
May 7 23:37:02.928508 kernel: ACPI: XSDT 0x00000000D9B43F18 000064 (v01 BOCHS BXPC 00000001 01000013)
May 7 23:37:02.928514 kernel: ACPI: FACP 0x00000000D9B43B18 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
May 7 23:37:02.928520 kernel: ACPI: DSDT 0x00000000D9B41018 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
May 7 23:37:02.928526 kernel: ACPI: APIC 0x00000000D9B43C98 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
May 7 23:37:02.928531 kernel: ACPI: PPTT 0x00000000D9B43098 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
May 7 23:37:02.928538 kernel: ACPI: GTDT 0x00000000D9B43818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
May 7 23:37:02.928546 kernel: ACPI: MCFG 0x00000000D9B43A98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 7 23:37:02.928552 kernel: ACPI: SPCR 0x00000000D9B43918 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
May 7 23:37:02.928558 kernel: ACPI: DBG2 0x00000000D9B43998 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
May 7 23:37:02.928564 kernel: ACPI: IORT 0x00000000D9B43198 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
May 7 23:37:02.928570 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
May 7 23:37:02.928576 kernel: NUMA: Failed to initialise from firmware
May 7 23:37:02.928582 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
May 7 23:37:02.928588 kernel: NUMA: NODE_DATA [mem 0xdc958800-0xdc95dfff]
May 7 23:37:02.928594 kernel: Zone ranges:
May 7 23:37:02.928606 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff]
May 7 23:37:02.928613 kernel: DMA32 empty
May 7 23:37:02.928619 kernel: Normal empty
May 7 23:37:02.928625 kernel: Movable zone start for each node
May 7 23:37:02.928631 kernel: Early memory node ranges
May 7 23:37:02.928637 kernel: node 0: [mem 0x0000000040000000-0x00000000d967ffff]
May 7 23:37:02.928643 kernel: node 0: [mem 0x00000000d9680000-0x00000000d968ffff]
May 7 23:37:02.928649 kernel: node 0: [mem 0x00000000d9690000-0x00000000d976ffff]
May 7 23:37:02.928655 kernel: node 0: [mem 0x00000000d9770000-0x00000000d9b3ffff]
May 7 23:37:02.928661 kernel: node 0: [mem 0x00000000d9b40000-0x00000000dce1ffff]
May 7 23:37:02.928667 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff]
May 7 23:37:02.928673 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff]
May 7 23:37:02.928679 kernel: node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff]
May 7 23:37:02.928686 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff]
May 7 23:37:02.928692 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
May 7 23:37:02.928698 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
May 7 23:37:02.928707 kernel: psci: probing for conduit method from ACPI.
May 7 23:37:02.928713 kernel: psci: PSCIv1.1 detected in firmware.
May 7 23:37:02.928720 kernel: psci: Using standard PSCI v0.2 function IDs
May 7 23:37:02.928727 kernel: psci: Trusted OS migration not required
May 7 23:37:02.928734 kernel: psci: SMC Calling Convention v1.1
May 7 23:37:02.928740 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
May 7 23:37:02.928747 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976
May 7 23:37:02.928753 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096
May 7 23:37:02.928760 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
May 7 23:37:02.928766 kernel: Detected PIPT I-cache on CPU0
May 7 23:37:02.928772 kernel: CPU features: detected: GIC system register CPU interface
May 7 23:37:02.928779 kernel: CPU features: detected: Hardware dirty bit management
May 7 23:37:02.928785 kernel: CPU features: detected: Spectre-v4
May 7 23:37:02.928793 kernel: CPU features: detected: Spectre-BHB
May 7 23:37:02.928799 kernel: CPU features: kernel page table isolation forced ON by KASLR
May 7 23:37:02.928806 kernel: CPU features: detected: Kernel page table isolation (KPTI)
May 7 23:37:02.928812 kernel: CPU features: detected: ARM erratum 1418040
May 7 23:37:02.928819 kernel: CPU features: detected: SSBS not fully self-synchronizing
May 7 23:37:02.928825 kernel: alternatives: applying boot alternatives
May 7 23:37:02.928832 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=82f9441f083668f7b43f8fe99c3dc9ee441b8a3ef2f63ecd1e548de4dde5b207
May 7 23:37:02.928839 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
May 7 23:37:02.928846 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
May 7 23:37:02.928853 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
May 7 23:37:02.928859 kernel: Fallback order for Node 0: 0
May 7 23:37:02.928867 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024
May 7 23:37:02.928873 kernel: Policy zone: DMA
May 7 23:37:02.928880 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
May 7 23:37:02.928886 kernel: software IO TLB: area num 4.
May 7 23:37:02.928892 kernel: software IO TLB: mapped [mem 0x00000000d2e00000-0x00000000d6e00000] (64MB)
May 7 23:37:02.928899 kernel: Memory: 2387476K/2572288K available (10368K kernel code, 2186K rwdata, 8100K rodata, 38336K init, 897K bss, 184812K reserved, 0K cma-reserved)
May 7 23:37:02.928906 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
May 7 23:37:02.928912 kernel: rcu: Preemptible hierarchical RCU implementation.
May 7 23:37:02.928919 kernel: rcu: RCU event tracing is enabled.
May 7 23:37:02.928926 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
May 7 23:37:02.928932 kernel: Trampoline variant of Tasks RCU enabled.
May 7 23:37:02.928939 kernel: Tracing variant of Tasks RCU enabled.
May 7 23:37:02.928947 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
May 7 23:37:02.928954 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
May 7 23:37:02.928960 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
May 7 23:37:02.928966 kernel: GICv3: 256 SPIs implemented
May 7 23:37:02.928973 kernel: GICv3: 0 Extended SPIs implemented
May 7 23:37:02.928979 kernel: Root IRQ handler: gic_handle_irq
May 7 23:37:02.928985 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
May 7 23:37:02.928992 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
May 7 23:37:02.928998 kernel: ITS [mem 0x08080000-0x0809ffff]
May 7 23:37:02.929005 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400c0000 (indirect, esz 8, psz 64K, shr 1)
May 7 23:37:02.929012 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400d0000 (flat, esz 8, psz 64K, shr 1)
May 7 23:37:02.929019 kernel: GICv3: using LPI property table @0x00000000400f0000
May 7 23:37:02.929026 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040100000
May 7 23:37:02.929033 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
May 7 23:37:02.929039 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 7 23:37:02.929046 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
May 7 23:37:02.929052 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
May 7 23:37:02.929059 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
May 7 23:37:02.929065 kernel: arm-pv: using stolen time PV
May 7 23:37:02.929072 kernel: Console: colour dummy device 80x25
May 7 23:37:02.929079 kernel: ACPI: Core revision 20230628
May 7 23:37:02.929086 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
May 7 23:37:02.929094 kernel: pid_max: default: 32768 minimum: 301
May 7 23:37:02.929100 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
May 7 23:37:02.929107 kernel: landlock: Up and running.
May 7 23:37:02.929114 kernel: SELinux: Initializing.
May 7 23:37:02.929120 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
May 7 23:37:02.929127 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
May 7 23:37:02.929142 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
May 7 23:37:02.929150 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
May 7 23:37:02.929157 kernel: rcu: Hierarchical SRCU implementation.
May 7 23:37:02.929166 kernel: rcu: Max phase no-delay instances is 400.
May 7 23:37:02.929172 kernel: Platform MSI: ITS@0x8080000 domain created
May 7 23:37:02.929179 kernel: PCI/MSI: ITS@0x8080000 domain created
May 7 23:37:02.929186 kernel: Remapping and enabling EFI services.
May 7 23:37:02.929192 kernel: smp: Bringing up secondary CPUs ...
May 7 23:37:02.929199 kernel: Detected PIPT I-cache on CPU1
May 7 23:37:02.929205 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
May 7 23:37:02.929212 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040110000
May 7 23:37:02.929219 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 7 23:37:02.929227 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
May 7 23:37:02.929234 kernel: Detected PIPT I-cache on CPU2
May 7 23:37:02.929246 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
May 7 23:37:02.929254 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040120000
May 7 23:37:02.929261 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 7 23:37:02.929268 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
May 7 23:37:02.929275 kernel: Detected PIPT I-cache on CPU3
May 7 23:37:02.929281 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
May 7 23:37:02.929289 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040130000
May 7 23:37:02.929297 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 7 23:37:02.929304 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
May 7 23:37:02.929311 kernel: smp: Brought up 1 node, 4 CPUs
May 7 23:37:02.929318 kernel: SMP: Total of 4 processors activated.
May 7 23:37:02.929325 kernel: CPU features: detected: 32-bit EL0 Support
May 7 23:37:02.929337 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
May 7 23:37:02.929345 kernel: CPU features: detected: Common not Private translations
May 7 23:37:02.929352 kernel: CPU features: detected: CRC32 instructions
May 7 23:37:02.929360 kernel: CPU features: detected: Enhanced Virtualization Traps
May 7 23:37:02.929367 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
May 7 23:37:02.929374 kernel: CPU features: detected: LSE atomic instructions
May 7 23:37:02.929381 kernel: CPU features: detected: Privileged Access Never
May 7 23:37:02.929388 kernel: CPU features: detected: RAS Extension Support
May 7 23:37:02.929395 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
May 7 23:37:02.929402 kernel: CPU: All CPU(s) started at EL1
May 7 23:37:02.929409 kernel: alternatives: applying system-wide alternatives
May 7 23:37:02.929416 kernel: devtmpfs: initialized
May 7 23:37:02.929424 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
May 7 23:37:02.929431 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
May 7 23:37:02.929438 kernel: pinctrl core: initialized pinctrl subsystem
May 7 23:37:02.929445 kernel: SMBIOS 3.0.0 present.
May 7 23:37:02.929452 kernel: DMI: QEMU KVM Virtual Machine, BIOS unknown 02/02/2022
May 7 23:37:02.929459 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
May 7 23:37:02.929466 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
May 7 23:37:02.929473 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
May 7 23:37:02.929480 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
May 7 23:37:02.929488 kernel: audit: initializing netlink subsys (disabled)
May 7 23:37:02.929495 kernel: audit: type=2000 audit(0.017:1): state=initialized audit_enabled=0 res=1
May 7 23:37:02.929502 kernel: thermal_sys: Registered thermal governor 'step_wise'
May 7 23:37:02.929509 kernel: cpuidle: using governor menu
May 7 23:37:02.929516 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
May 7 23:37:02.929523 kernel: ASID allocator initialised with 32768 entries
May 7 23:37:02.929530 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
May 7 23:37:02.929537 kernel: Serial: AMBA PL011 UART driver
May 7 23:37:02.929544 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
May 7 23:37:02.929552 kernel: Modules: 0 pages in range for non-PLT usage
May 7 23:37:02.929560 kernel: Modules: 509264 pages in range for PLT usage
May 7 23:37:02.929566 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
May 7 23:37:02.929573 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
May 7 23:37:02.929580 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
May 7 23:37:02.929587 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
May 7 23:37:02.929594 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
May 7 23:37:02.929601 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
May 7 23:37:02.929608 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
May 7 23:37:02.929616 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
May 7 23:37:02.929623 kernel: ACPI: Added _OSI(Module Device)
May 7 23:37:02.929630 kernel: ACPI: Added _OSI(Processor Device)
May 7 23:37:02.929637 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
May 7 23:37:02.929644 kernel: ACPI: Added _OSI(Processor Aggregator Device)
May 7 23:37:02.929651 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
May 7 23:37:02.929658 kernel: ACPI: Interpreter enabled
May 7 23:37:02.929665 kernel: ACPI: Using GIC for interrupt routing
May 7 23:37:02.929671 kernel: ACPI: MCFG table detected, 1 entries
May 7 23:37:02.929678 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
May 7 23:37:02.929687 kernel: printk: console [ttyAMA0] enabled
May 7 23:37:02.929694 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
May 7 23:37:02.929828 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
May 7 23:37:02.929899 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
May 7 23:37:02.929963 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
May 7 23:37:02.930025 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
May 7 23:37:02.930086 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
May 7 23:37:02.930097 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
May 7 23:37:02.930105 kernel: PCI host bridge to bus 0000:00
May 7 23:37:02.930206 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
May 7 23:37:02.930272 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
May 7 23:37:02.930329 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
May 7 23:37:02.930399 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
May 7 23:37:02.930481 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000
May 7 23:37:02.930565 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00
May 7 23:37:02.930632 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f]
May 7 23:37:02.930696 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff]
May 7 23:37:02.930760 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref]
May 7 23:37:02.930823 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref]
May 7 23:37:02.930888 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff]
May 7 23:37:02.930955 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f]
May 7 23:37:02.931014 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
May 7 23:37:02.931071 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
May 7 23:37:02.931129 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
May 7 23:37:02.931202 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
May 7 23:37:02.931211 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
May 7 23:37:02.931218 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
May 7 23:37:02.931225 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
May 7 23:37:02.931236 kernel: iommu: Default domain type: Translated
May 7 23:37:02.931244 kernel: iommu: DMA domain TLB invalidation policy: strict mode
May 7 23:37:02.931251 kernel: efivars: Registered efivars operations
May 7 23:37:02.931257 kernel: vgaarb: loaded
May 7 23:37:02.931265 kernel: clocksource: Switched to clocksource arch_sys_counter
May 7 23:37:02.931272 kernel: VFS: Disk quotas dquot_6.6.0
May 7 23:37:02.931279 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
May 7 23:37:02.931286 kernel: pnp: PnP ACPI init
May 7 23:37:02.931377 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
May 7 23:37:02.931393 kernel: pnp: PnP ACPI: found 1 devices
May 7 23:37:02.931400 kernel: NET: Registered PF_INET protocol family
May 7 23:37:02.931407 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
May 7 23:37:02.931414 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
May 7 23:37:02.931421 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
May 7 23:37:02.931428 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
May 7 23:37:02.931435 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
May 7 23:37:02.931442 kernel: TCP: Hash tables configured (established 32768 bind 32768)
May 7 23:37:02.931451 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
May 7 23:37:02.931459 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
May 7 23:37:02.931466 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
May 7 23:37:02.931473 kernel: PCI: CLS 0 bytes, default 64
May 7 23:37:02.931480 kernel: kvm [1]: HYP mode not available
May 7 23:37:02.931487 kernel: Initialise system trusted keyrings
May 7 23:37:02.931494 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
May 7 23:37:02.931501 kernel: Key type asymmetric registered
May 7 23:37:02.931508 kernel: Asymmetric key parser 'x509' registered
May 7 23:37:02.931516 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
May 7 23:37:02.931523 kernel: io scheduler mq-deadline registered
May 7 23:37:02.931530 kernel: io scheduler kyber registered
May 7 23:37:02.931537 kernel: io scheduler bfq registered
May 7 23:37:02.931544 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
May 7 23:37:02.931551 kernel: ACPI: button: Power Button [PWRB]
May 7 23:37:02.931558 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
May 7 23:37:02.931630 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
May 7 23:37:02.931641 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
May 7 23:37:02.931650 kernel: thunder_xcv, ver 1.0
May 7 23:37:02.931657 kernel: thunder_bgx, ver 1.0
May 7 23:37:02.931665 kernel: nicpf, ver 1.0
May 7 23:37:02.931672 kernel: nicvf, ver 1.0
May 7 23:37:02.931756 kernel: rtc-efi rtc-efi.0: registered as rtc0
May 7 23:37:02.931822 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-05-07T23:37:02 UTC (1746661022)
May 7 23:37:02.931831 kernel: hid: raw HID events driver (C) Jiri Kosina
May 7 23:37:02.931839 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available
May 7 23:37:02.931846 kernel: watchdog: Delayed init of the lockup detector failed: -19
May 7 23:37:02.931856 kernel: watchdog: Hard watchdog permanently disabled
May 7 23:37:02.931863 kernel: NET: Registered PF_INET6 protocol family
May 7 23:37:02.931869 kernel: Segment Routing with IPv6
May 7 23:37:02.931876 kernel: In-situ OAM (IOAM) with IPv6
May 7 23:37:02.931883 kernel: NET: Registered PF_PACKET protocol family
May 7 23:37:02.931890 kernel: Key type dns_resolver registered
May 7 23:37:02.931898 kernel: registered taskstats version 1
May 7 23:37:02.931905 kernel: Loading compiled-in X.509 certificates
May 7 23:37:02.931912 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.88-flatcar: f45666b1b2057b901dda15e57012558a26abdeb0'
May 7 23:37:02.931920 kernel: Key type .fscrypt registered
May 7 23:37:02.931927 kernel: Key type fscrypt-provisioning registered
May 7 23:37:02.931934 kernel: ima: No TPM chip found, activating TPM-bypass!
May 7 23:37:02.931941 kernel: ima: Allocated hash algorithm: sha1
May 7 23:37:02.931948 kernel: ima: No architecture policies found
May 7 23:37:02.931955 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
May 7 23:37:02.931962 kernel: clk: Disabling unused clocks
May 7 23:37:02.931969 kernel: Freeing unused kernel memory: 38336K
May 7 23:37:02.931978 kernel: Run /init as init process
May 7 23:37:02.931985 kernel: with arguments:
May 7 23:37:02.931992 kernel: /init
May 7 23:37:02.931999 kernel: with environment:
May 7 23:37:02.932006 kernel: HOME=/
May 7 23:37:02.932013 kernel: TERM=linux
May 7 23:37:02.932020 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
May 7 23:37:02.932028 systemd[1]: Successfully made /usr/ read-only.
May 7 23:37:02.932037 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
May 7 23:37:02.932047 systemd[1]: Detected virtualization kvm.
May 7 23:37:02.932054 systemd[1]: Detected architecture arm64.
May 7 23:37:02.932061 systemd[1]: Running in initrd.
May 7 23:37:02.932068 systemd[1]: No hostname configured, using default hostname.
May 7 23:37:02.932076 systemd[1]: Hostname set to .
May 7 23:37:02.932083 systemd[1]: Initializing machine ID from VM UUID.
May 7 23:37:02.932091 systemd[1]: Queued start job for default target initrd.target.
May 7 23:37:02.932100 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 7 23:37:02.932108 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 7 23:37:02.932116 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
May 7 23:37:02.932124 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
May 7 23:37:02.932131 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
May 7 23:37:02.932149 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
May 7 23:37:02.932158 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
May 7 23:37:02.932167 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
May 7 23:37:02.932175 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 7 23:37:02.932183 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
May 7 23:37:02.932190 systemd[1]: Reached target paths.target - Path Units.
May 7 23:37:02.932198 systemd[1]: Reached target slices.target - Slice Units.
May 7 23:37:02.932205 systemd[1]: Reached target swap.target - Swaps.
May 7 23:37:02.932212 systemd[1]: Reached target timers.target - Timer Units.
May 7 23:37:02.932220 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
May 7 23:37:02.932227 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
May 7 23:37:02.932237 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
May 7 23:37:02.932244 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
May 7 23:37:02.932252 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
May 7 23:37:02.932260 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
May 7 23:37:02.932267 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
May 7 23:37:02.932275 systemd[1]: Reached target sockets.target - Socket Units.
May 7 23:37:02.932283 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
May 7 23:37:02.932290 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
May 7 23:37:02.932299 systemd[1]: Finished network-cleanup.service - Network Cleanup.
May 7 23:37:02.932307 systemd[1]: Starting systemd-fsck-usr.service...
May 7 23:37:02.932314 systemd[1]: Starting systemd-journald.service - Journal Service...
May 7 23:37:02.932322 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
May 7 23:37:02.932329 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 7 23:37:02.932343 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
May 7 23:37:02.932350 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
May 7 23:37:02.932360 systemd[1]: Finished systemd-fsck-usr.service.
May 7 23:37:02.932368 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
May 7 23:37:02.932395 systemd-journald[238]: Collecting audit messages is disabled.
May 7 23:37:02.932416 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 7 23:37:02.932424 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
May 7 23:37:02.932433 systemd-journald[238]: Journal started
May 7 23:37:02.932451 systemd-journald[238]: Runtime Journal (/run/log/journal/6f9d8021dde645d9aa55a33d927ffb70) is 5.9M, max 47.3M, 41.4M free.
May 7 23:37:02.919890 systemd-modules-load[240]: Inserted module 'overlay'
May 7 23:37:02.934027 systemd[1]: Started systemd-journald.service - Journal Service.
May 7 23:37:02.934869 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
May 7 23:37:02.937962 systemd-modules-load[240]: Inserted module 'br_netfilter'
May 7 23:37:02.938705 kernel: Bridge firewalling registered
May 7 23:37:02.938712 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
May 7 23:37:02.940453 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
May 7 23:37:02.942901 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
May 7 23:37:02.946747 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
May 7 23:37:02.949372 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 7 23:37:02.952318 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
May 7 23:37:02.953487 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
May 7 23:37:02.961179 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 7 23:37:02.970314 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
May 7 23:37:02.971475 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
May 7 23:37:02.975614 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
May 7 23:37:02.983635 dracut-cmdline[275]: dracut-dracut-053
May 7 23:37:02.986083 dracut-cmdline[275]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=82f9441f083668f7b43f8fe99c3dc9ee441b8a3ef2f63ecd1e548de4dde5b207
May 7 23:37:03.012234 systemd-resolved[279]: Positive Trust Anchors:
May 7 23:37:03.012250 systemd-resolved[279]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
May 7 23:37:03.012281 systemd-resolved[279]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
May 7 23:37:03.017410 systemd-resolved[279]: Defaulting to hostname 'linux'.
May 7 23:37:03.018776 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
May 7 23:37:03.021681 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
May 7 23:37:03.060160 kernel: SCSI subsystem initialized
May 7 23:37:03.065158 kernel: Loading iSCSI transport class v2.0-870.
May 7 23:37:03.074160 kernel: iscsi: registered transport (tcp)
May 7 23:37:03.086176 kernel: iscsi: registered transport (qla4xxx)
May 7 23:37:03.086214 kernel: QLogic iSCSI HBA Driver
May 7 23:37:03.128745 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
May 7 23:37:03.144290 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
May 7 23:37:03.161674 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
May 7 23:37:03.161731 kernel: device-mapper: uevent: version 1.0.3
May 7 23:37:03.161742 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
May 7 23:37:03.208184 kernel: raid6: neonx8 gen() 15777 MB/s
May 7 23:37:03.225161 kernel: raid6: neonx4 gen() 15818 MB/s
May 7 23:37:03.242164 kernel: raid6: neonx2 gen() 13208 MB/s
May 7 23:37:03.259152 kernel: raid6: neonx1 gen() 10523 MB/s
May 7 23:37:03.276154 kernel: raid6: int64x8 gen() 6795 MB/s
May 7 23:37:03.293160 kernel: raid6: int64x4 gen() 7347 MB/s
May 7 23:37:03.310151 kernel: raid6: int64x2 gen() 6111 MB/s
May 7 23:37:03.327152 kernel: raid6: int64x1 gen() 5056 MB/s
May 7 23:37:03.327167 kernel: raid6: using algorithm neonx4 gen() 15818 MB/s
May 7 23:37:03.344164 kernel: raid6: .... xor() 12391 MB/s, rmw enabled
May 7 23:37:03.344189 kernel: raid6: using neon recovery algorithm
May 7 23:37:03.349468 kernel: xor: measuring software checksum speed
May 7 23:37:03.349491 kernel: 8regs : 21601 MB/sec
May 7 23:37:03.349509 kernel: 32regs : 21704 MB/sec
May 7 23:37:03.350397 kernel: arm64_neon : 27823 MB/sec
May 7 23:37:03.350408 kernel: xor: using function: arm64_neon (27823 MB/sec)
May 7 23:37:03.400502 kernel: Btrfs loaded, zoned=no, fsverity=no
May 7 23:37:03.412028 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
May 7 23:37:03.420319 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 7 23:37:03.435298 systemd-udevd[462]: Using default interface naming scheme 'v255'.
May 7 23:37:03.441608 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 7 23:37:03.452396 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
May 7 23:37:03.464066 dracut-pre-trigger[467]: rd.md=0: removing MD RAID activation
May 7 23:37:03.488857 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
May 7 23:37:03.509290 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
May 7 23:37:03.549311 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
May 7 23:37:03.556325 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
May 7 23:37:03.568077 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
May 7 23:37:03.569321 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
May 7 23:37:03.570847 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
May 7 23:37:03.572687 systemd[1]: Reached target remote-fs.target - Remote File Systems.
May 7 23:37:03.584357 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
May 7 23:37:03.591176 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues
May 7 23:37:03.607465 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
May 7 23:37:03.607554 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
May 7 23:37:03.607565 kernel: GPT:9289727 != 19775487
May 7 23:37:03.607574 kernel: GPT:Alternate GPT header not at the end of the disk.
May 7 23:37:03.607590 kernel: GPT:9289727 != 19775487
May 7 23:37:03.607598 kernel: GPT: Use GNU Parted to correct GPT errors.
May 7 23:37:03.607607 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 7 23:37:03.599164 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
May 7 23:37:03.609507 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
May 7 23:37:03.609577 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 7 23:37:03.611360 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
May 7 23:37:03.614200 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 7 23:37:03.614259 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
May 7 23:37:03.620795 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
May 7 23:37:03.625156 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by (udev-worker) (508)
May 7 23:37:03.625189 kernel: BTRFS: device fsid a4d66dad-2d34-4ed0-87a7-f6519531b08f devid 1 transid 42 /dev/vda3 scanned by (udev-worker) (511)
May 7 23:37:03.629274 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 7 23:37:03.642160 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 7 23:37:03.649883 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
May 7 23:37:03.661640 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
May 7 23:37:03.667896 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
May 7 23:37:03.669164 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
May 7 23:37:03.677839 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
May 7 23:37:03.690274 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
May 7 23:37:03.692033 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
May 7 23:37:03.697889 disk-uuid[551]: Primary Header is updated.
May 7 23:37:03.697889 disk-uuid[551]: Secondary Entries is updated.
May 7 23:37:03.697889 disk-uuid[551]: Secondary Header is updated.
May 7 23:37:03.700751 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 7 23:37:03.715879 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 7 23:37:04.716157 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 7 23:37:04.718348 disk-uuid[552]: The operation has completed successfully.
May 7 23:37:04.741497 systemd[1]: disk-uuid.service: Deactivated successfully.
May 7 23:37:04.741592 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
May 7 23:37:04.775353 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
May 7 23:37:04.778141 sh[575]: Success
May 7 23:37:04.794178 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
May 7 23:37:04.823875 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
May 7 23:37:04.832406 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
May 7 23:37:04.833787 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
May 7 23:37:04.842548 kernel: BTRFS info (device dm-0): first mount of filesystem a4d66dad-2d34-4ed0-87a7-f6519531b08f
May 7 23:37:04.842586 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
May 7 23:37:04.842597 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
May 7 23:37:04.844423 kernel: BTRFS info (device dm-0): disabling log replay at mount time
May 7 23:37:04.844449 kernel: BTRFS info (device dm-0): using free space tree
May 7 23:37:04.847614 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
May 7 23:37:04.848706 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
May 7 23:37:04.859266 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
May 7 23:37:04.860838 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
May 7 23:37:04.874785 kernel: BTRFS info (device vda6): first mount of filesystem 28594331-30e6-4c58-8ddc-9d8448a320bb
May 7 23:37:04.874824 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
May 7 23:37:04.874835 kernel: BTRFS info (device vda6): using free space tree
May 7 23:37:04.878185 kernel: BTRFS info (device vda6): auto enabling async discard
May 7 23:37:04.883170 kernel: BTRFS info (device vda6): last unmount of filesystem 28594331-30e6-4c58-8ddc-9d8448a320bb
May 7 23:37:04.885966 systemd[1]: Finished ignition-setup.service - Ignition (setup).
May 7 23:37:04.891307 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
May 7 23:37:04.955795 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
May 7 23:37:04.969297 systemd[1]: Starting systemd-networkd.service - Network Configuration...
May 7 23:37:04.990312 ignition[664]: Ignition 2.20.0
May 7 23:37:04.990333 ignition[664]: Stage: fetch-offline
May 7 23:37:04.990368 ignition[664]: no configs at "/usr/lib/ignition/base.d"
May 7 23:37:04.990376 ignition[664]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 7 23:37:04.990570 ignition[664]: parsed url from cmdline: ""
May 7 23:37:04.990573 ignition[664]: no config URL provided
May 7 23:37:04.990578 ignition[664]: reading system config file "/usr/lib/ignition/user.ign"
May 7 23:37:04.990584 ignition[664]: no config at "/usr/lib/ignition/user.ign"
May 7 23:37:04.990606 ignition[664]: op(1): [started] loading QEMU firmware config module
May 7 23:37:04.990610 ignition[664]: op(1): executing: "modprobe" "qemu_fw_cfg"
May 7 23:37:04.999476 systemd-networkd[762]: lo: Link UP
May 7 23:37:04.999486 systemd-networkd[762]: lo: Gained carrier
May 7 23:37:05.000289 systemd-networkd[762]: Enumeration completed
May 7 23:37:05.000307 ignition[664]: op(1): [finished] loading QEMU firmware config module
May 7 23:37:05.000448 systemd[1]: Started systemd-networkd.service - Network Configuration.
May 7 23:37:05.000671 systemd-networkd[762]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 7 23:37:05.000675 systemd-networkd[762]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
May 7 23:37:05.001360 systemd-networkd[762]: eth0: Link UP
May 7 23:37:05.001363 systemd-networkd[762]: eth0: Gained carrier
May 7 23:37:05.001369 systemd-networkd[762]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 7 23:37:05.002282 systemd[1]: Reached target network.target - Network.
May 7 23:37:05.035199 systemd-networkd[762]: eth0: DHCPv4 address 10.0.0.15/16, gateway 10.0.0.1 acquired from 10.0.0.1
May 7 23:37:05.047944 ignition[664]: parsing config with SHA512: c5c4b0ce6f2814449440cb0673b479e954e94a8377b8d3a6e9bf674854122ec2086e26512c2ad65c6941b1d91e7c2bc3091e85b215c28d78c66f6696ddbfdc82
May 7 23:37:05.053034 unknown[664]: fetched base config from "system"
May 7 23:37:05.053044 unknown[664]: fetched user config from "qemu"
May 7 23:37:05.054262 ignition[664]: fetch-offline: fetch-offline passed
May 7 23:37:05.054412 ignition[664]: Ignition finished successfully
May 7 23:37:05.056607 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
May 7 23:37:05.057672 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
May 7 23:37:05.069281 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
May 7 23:37:05.081625 ignition[770]: Ignition 2.20.0
May 7 23:37:05.081636 ignition[770]: Stage: kargs
May 7 23:37:05.081814 ignition[770]: no configs at "/usr/lib/ignition/base.d"
May 7 23:37:05.081825 ignition[770]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 7 23:37:05.082735 ignition[770]: kargs: kargs passed
May 7 23:37:05.082783 ignition[770]: Ignition finished successfully
May 7 23:37:05.085702 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
May 7 23:37:05.097276 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
May 7 23:37:05.106682 ignition[779]: Ignition 2.20.0
May 7 23:37:05.106693 ignition[779]: Stage: disks
May 7 23:37:05.106842 ignition[779]: no configs at "/usr/lib/ignition/base.d"
May 7 23:37:05.106851 ignition[779]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 7 23:37:05.107734 ignition[779]: disks: disks passed
May 7 23:37:05.107778 ignition[779]: Ignition finished successfully
May 7 23:37:05.112174 systemd[1]: Finished ignition-disks.service - Ignition (disks).
May 7 23:37:05.113077 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
May 7 23:37:05.114274 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
May 7 23:37:05.115768 systemd[1]: Reached target local-fs.target - Local File Systems.
May 7 23:37:05.117186 systemd[1]: Reached target sysinit.target - System Initialization.
May 7 23:37:05.118518 systemd[1]: Reached target basic.target - Basic System.
May 7 23:37:05.130273 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
May 7 23:37:05.140458 systemd-fsck[790]: ROOT: clean, 14/553520 files, 52654/553472 blocks
May 7 23:37:05.144109 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
May 7 23:37:05.145857 systemd[1]: Mounting sysroot.mount - /sysroot...
May 7 23:37:05.192160 kernel: EXT4-fs (vda9): mounted filesystem f291ddc8-664e-45dc-bbf9-8344dca1a297 r/w with ordered data mode. Quota mode: none.
May 7 23:37:05.192395 systemd[1]: Mounted sysroot.mount - /sysroot.
May 7 23:37:05.193408 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
May 7 23:37:05.204229 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
May 7 23:37:05.206293 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
May 7 23:37:05.207088 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
May 7 23:37:05.207128 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
May 7 23:37:05.207183 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
May 7 23:37:05.212561 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
May 7 23:37:05.215080 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
May 7 23:37:05.218168 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 scanned by mount (798)
May 7 23:37:05.220448 kernel: BTRFS info (device vda6): first mount of filesystem 28594331-30e6-4c58-8ddc-9d8448a320bb
May 7 23:37:05.220500 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
May 7 23:37:05.220514 kernel: BTRFS info (device vda6): using free space tree
May 7 23:37:05.222149 kernel: BTRFS info (device vda6): auto enabling async discard
May 7 23:37:05.223086 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
May 7 23:37:05.258439 initrd-setup-root[822]: cut: /sysroot/etc/passwd: No such file or directory
May 7 23:37:05.262111 initrd-setup-root[829]: cut: /sysroot/etc/group: No such file or directory
May 7 23:37:05.266126 initrd-setup-root[836]: cut: /sysroot/etc/shadow: No such file or directory
May 7 23:37:05.270082 initrd-setup-root[843]: cut: /sysroot/etc/gshadow: No such file or directory
May 7 23:37:05.338480 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
May 7 23:37:05.354236 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
May 7 23:37:05.356557 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
May 7 23:37:05.361162 kernel: BTRFS info (device vda6): last unmount of filesystem 28594331-30e6-4c58-8ddc-9d8448a320bb
May 7 23:37:05.377252 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
May 7 23:37:05.378616 ignition[911]: INFO : Ignition 2.20.0
May 7 23:37:05.378616 ignition[911]: INFO : Stage: mount
May 7 23:37:05.378616 ignition[911]: INFO : no configs at "/usr/lib/ignition/base.d"
May 7 23:37:05.378616 ignition[911]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 7 23:37:05.378616 ignition[911]: INFO : mount: mount passed
May 7 23:37:05.378616 ignition[911]: INFO : Ignition finished successfully
May 7 23:37:05.380390 systemd[1]: Finished ignition-mount.service - Ignition (mount).
May 7 23:37:05.388245 systemd[1]: Starting ignition-files.service - Ignition (files)...
May 7 23:37:05.972741 systemd[1]: sysroot-oem.mount: Deactivated successfully.
May 7 23:37:05.982303 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
May 7 23:37:05.988837 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/vda6 scanned by mount (925)
May 7 23:37:05.988869 kernel: BTRFS info (device vda6): first mount of filesystem 28594331-30e6-4c58-8ddc-9d8448a320bb
May 7 23:37:05.988880 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
May 7 23:37:05.989556 kernel: BTRFS info (device vda6): using free space tree
May 7 23:37:05.992155 kernel: BTRFS info (device vda6): auto enabling async discard
May 7 23:37:05.992927 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
May 7 23:37:06.008432 ignition[942]: INFO : Ignition 2.20.0
May 7 23:37:06.008432 ignition[942]: INFO : Stage: files
May 7 23:37:06.009635 ignition[942]: INFO : no configs at "/usr/lib/ignition/base.d"
May 7 23:37:06.009635 ignition[942]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 7 23:37:06.009635 ignition[942]: DEBUG : files: compiled without relabeling support, skipping
May 7 23:37:06.012195 ignition[942]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
May 7 23:37:06.012195 ignition[942]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
May 7 23:37:06.014211 ignition[942]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
May 7 23:37:06.014211 ignition[942]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
May 7 23:37:06.014211 ignition[942]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
May 7 23:37:06.014211 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
May 7 23:37:06.014211 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1
May 7 23:37:06.012731 unknown[942]: wrote ssh authorized keys file for user: core
May 7 23:37:06.626306 systemd-networkd[762]: eth0: Gained IPv6LL
May 7 23:37:07.129476 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
May 7 23:37:10.280658 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
May 7 23:37:10.280658 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
May 7 23:37:10.284245 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1
May 7 23:37:10.656406 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
May 7 23:37:10.868717 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
May 7 23:37:10.870553 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
May 7 23:37:10.870553 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
May 7 23:37:10.870553 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
May 7 23:37:10.870553 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
May 7 23:37:10.870553 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
May 7 23:37:10.870553 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
May 7 23:37:10.870553 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
May 7 23:37:10.870553 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
May 7 23:37:10.870553 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
May 7 23:37:10.870553 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
May 7 23:37:10.870553 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
May 7 23:37:10.870553 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
May 7 23:37:10.870553 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
May 7 23:37:10.870553 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-arm64.raw: attempt #1
May 7 23:37:11.164670 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
May 7 23:37:12.120683 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
May 7 23:37:12.120683 ignition[942]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
May 7 23:37:12.124430 ignition[942]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
May 7 23:37:12.124430 ignition[942]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
May 7 23:37:12.124430 ignition[942]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
May 7 23:37:12.124430 ignition[942]: INFO : files: op(e): [started] processing unit "coreos-metadata.service"
May 7 23:37:12.124430 ignition[942]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
May 7 23:37:12.124430 ignition[942]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
May 7 23:37:12.124430 ignition[942]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service"
May 7 23:37:12.124430 ignition[942]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service"
May 7 23:37:12.142461 ignition[942]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service"
May 7 23:37:12.146545 ignition[942]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service"
May 7 23:37:12.147649 ignition[942]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service"
May 7 23:37:12.147649 ignition[942]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service"
May 7 23:37:12.147649 ignition[942]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service"
May 7 23:37:12.147649 ignition[942]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json"
May 7 23:37:12.147649 ignition[942]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json"
May 7 23:37:12.147649 ignition[942]: INFO : files: files passed
May 7 23:37:12.147649 ignition[942]: INFO : Ignition finished successfully
May 7 23:37:12.148313 systemd[1]: Finished ignition-files.service - Ignition (files).
May 7 23:37:12.157357 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
May 7 23:37:12.159776 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
May 7 23:37:12.163715 systemd[1]: ignition-quench.service: Deactivated successfully.
May 7 23:37:12.163800 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
May 7 23:37:12.169456 initrd-setup-root-after-ignition[971]: grep: /sysroot/oem/oem-release: No such file or directory
May 7 23:37:12.174501 initrd-setup-root-after-ignition[973]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
May 7 23:37:12.174501 initrd-setup-root-after-ignition[973]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
May 7 23:37:12.179855 initrd-setup-root-after-ignition[977]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
May 7 23:37:12.178516 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
May 7 23:37:12.181251 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
May 7 23:37:12.192355 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
May 7 23:37:12.212908 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
May 7 23:37:12.213027 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
May 7 23:37:12.214975 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
May 7 23:37:12.216619 systemd[1]: Reached target initrd.target - Initrd Default Target.
May 7 23:37:12.218171 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
May 7 23:37:12.218975 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
May 7 23:37:12.237531 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
May 7 23:37:12.239977 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
May 7 23:37:12.251502 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
May 7 23:37:12.252827 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
May 7 23:37:12.254942 systemd[1]: Stopped target timers.target - Timer Units.
May 7 23:37:12.256731 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
May 7 23:37:12.256848 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
May 7 23:37:12.259322 systemd[1]: Stopped target initrd.target - Initrd Default Target.
May 7 23:37:12.261302 systemd[1]: Stopped target basic.target - Basic System.
May 7 23:37:12.262895 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
May 7 23:37:12.264629 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
May 7 23:37:12.266551 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
May 7 23:37:12.268499 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
May 7 23:37:12.270269 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
May 7 23:37:12.272221 systemd[1]: Stopped target sysinit.target - System Initialization.
May 7 23:37:12.274242 systemd[1]: Stopped target local-fs.target - Local File Systems.
May 7 23:37:12.275963 systemd[1]: Stopped target swap.target - Swaps.
May 7 23:37:12.277599 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
May 7 23:37:12.277732 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
May 7 23:37:12.280037 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
May 7 23:37:12.282052 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 7 23:37:12.284043 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
May 7 23:37:12.287212 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 7 23:37:12.288635 systemd[1]: dracut-initqueue.service: Deactivated successfully.
May 7 23:37:12.288763 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
May 7 23:37:12.291674 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
May 7 23:37:12.291795 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
May 7 23:37:12.293799 systemd[1]: Stopped target paths.target - Path Units.
May 7 23:37:12.295452 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
May 7 23:37:12.296236 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 7 23:37:12.297343 systemd[1]: Stopped target slices.target - Slice Units.
May 7 23:37:12.298828 systemd[1]: Stopped target sockets.target - Socket Units.
May 7 23:37:12.300649 systemd[1]: iscsid.socket: Deactivated successfully.
May 7 23:37:12.300771 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
May 7 23:37:12.302914 systemd[1]: iscsiuio.socket: Deactivated successfully.
May 7 23:37:12.303027 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
May 7 23:37:12.304641 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
May 7 23:37:12.304810 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
May 7 23:37:12.306468 systemd[1]: ignition-files.service: Deactivated successfully.
May 7 23:37:12.306617 systemd[1]: Stopped ignition-files.service - Ignition (files).
May 7 23:37:12.315475 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
May 7 23:37:12.316217 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
May 7 23:37:12.316394 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
May 7 23:37:12.321219 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
May 7 23:37:12.322901 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
May 7 23:37:12.323033 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
May 7 23:37:12.324256 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
May 7 23:37:12.324360 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
May 7 23:37:12.330950 systemd[1]: initrd-cleanup.service: Deactivated successfully.
May 7 23:37:12.332164 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
May 7 23:37:12.336915 ignition[998]: INFO : Ignition 2.20.0
May 7 23:37:12.336915 ignition[998]: INFO : Stage: umount
May 7 23:37:12.338332 ignition[998]: INFO : no configs at "/usr/lib/ignition/base.d"
May 7 23:37:12.338332 ignition[998]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 7 23:37:12.338332 ignition[998]: INFO : umount: umount passed
May 7 23:37:12.338332 ignition[998]: INFO : Ignition finished successfully
May 7 23:37:12.339054 systemd[1]: sysroot-boot.mount: Deactivated successfully.
May 7 23:37:12.340929 systemd[1]: ignition-mount.service: Deactivated successfully.
May 7 23:37:12.342206 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
May 7 23:37:12.343379 systemd[1]: sysroot-boot.service: Deactivated successfully.
May 7 23:37:12.343453 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
May 7 23:37:12.344760 systemd[1]: Stopped target network.target - Network.
May 7 23:37:12.345607 systemd[1]: ignition-disks.service: Deactivated successfully.
May 7 23:37:12.345670 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
May 7 23:37:12.346830 systemd[1]: ignition-kargs.service: Deactivated successfully.
May 7 23:37:12.346870 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
May 7 23:37:12.347945 systemd[1]: ignition-setup.service: Deactivated successfully.
May 7 23:37:12.347980 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
May 7 23:37:12.349150 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
May 7 23:37:12.349184 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
May 7 23:37:12.350564 systemd[1]: initrd-setup-root.service: Deactivated successfully.
May 7 23:37:12.350607 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
May 7 23:37:12.352083 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
May 7 23:37:12.353568 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
May 7 23:37:12.361189 systemd[1]: systemd-resolved.service: Deactivated successfully.
May 7 23:37:12.361302 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
May 7 23:37:12.364556 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
May 7 23:37:12.364825 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
May 7 23:37:12.364859 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
May 7 23:37:12.367738 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
May 7 23:37:12.367927 systemd[1]: systemd-networkd.service: Deactivated successfully.
May 7 23:37:12.368010 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
May 7 23:37:12.370880 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
May 7 23:37:12.371321 systemd[1]: systemd-networkd.socket: Deactivated successfully.
May 7 23:37:12.371384 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
May 7 23:37:12.387272 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
May 7 23:37:12.388012 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. May 7 23:37:12.388078 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. May 7 23:37:12.389582 systemd[1]: systemd-sysctl.service: Deactivated successfully. May 7 23:37:12.389625 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. May 7 23:37:12.391897 systemd[1]: systemd-modules-load.service: Deactivated successfully. May 7 23:37:12.391937 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. May 7 23:37:12.392869 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... May 7 23:37:12.395020 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. May 7 23:37:12.400784 systemd[1]: systemd-udevd.service: Deactivated successfully. May 7 23:37:12.400946 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. May 7 23:37:12.402780 systemd[1]: network-cleanup.service: Deactivated successfully. May 7 23:37:12.402865 systemd[1]: Stopped network-cleanup.service - Network Cleanup. May 7 23:37:12.404092 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. May 7 23:37:12.404184 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. May 7 23:37:12.405725 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. May 7 23:37:12.405756 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. May 7 23:37:12.406989 systemd[1]: dracut-pre-udev.service: Deactivated successfully. May 7 23:37:12.407033 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. May 7 23:37:12.409013 systemd[1]: dracut-cmdline.service: Deactivated successfully. May 7 23:37:12.409057 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. May 7 23:37:12.412694 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. 
May 7 23:37:12.412742 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 7 23:37:12.422325 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... May 7 23:37:12.423096 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. May 7 23:37:12.423174 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 7 23:37:12.425627 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 7 23:37:12.425677 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. May 7 23:37:12.428388 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. May 7 23:37:12.428474 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. May 7 23:37:12.430052 systemd[1]: Reached target initrd-switch-root.target - Switch Root. May 7 23:37:12.431852 systemd[1]: Starting initrd-switch-root.service - Switch Root... May 7 23:37:12.441763 systemd[1]: Switching root. May 7 23:37:12.470148 systemd-journald[238]: Received SIGTERM from PID 1 (systemd). 
May 7 23:37:12.470210 systemd-journald[238]: Journal stopped May 7 23:37:13.478537 kernel: SELinux: policy capability network_peer_controls=1 May 7 23:37:13.478598 kernel: SELinux: policy capability open_perms=1 May 7 23:37:13.478609 kernel: SELinux: policy capability extended_socket_class=1 May 7 23:37:13.478619 kernel: SELinux: policy capability always_check_network=0 May 7 23:37:13.478628 kernel: SELinux: policy capability cgroup_seclabel=1 May 7 23:37:13.478642 kernel: SELinux: policy capability nnp_nosuid_transition=1 May 7 23:37:13.478651 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 May 7 23:37:13.478661 kernel: SELinux: policy capability ioctl_skip_cloexec=0 May 7 23:37:13.478677 kernel: audit: type=1403 audit(1746661032.875:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 May 7 23:37:13.478688 systemd[1]: Successfully loaded SELinux policy in 31.082ms. May 7 23:37:13.478710 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 9.768ms. May 7 23:37:13.478721 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) May 7 23:37:13.478731 systemd[1]: Detected virtualization kvm. May 7 23:37:13.478744 systemd[1]: Detected architecture arm64. May 7 23:37:13.478755 systemd[1]: Detected first boot. May 7 23:37:13.478764 systemd[1]: Initializing machine ID from VM UUID. May 7 23:37:13.478774 kernel: NET: Registered PF_VSOCK protocol family May 7 23:37:13.478786 zram_generator::config[1044]: No configuration found. May 7 23:37:13.478796 systemd[1]: Populated /etc with preset unit settings. May 7 23:37:13.478807 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. 
May 7 23:37:13.478817 systemd[1]: initrd-switch-root.service: Deactivated successfully. May 7 23:37:13.478827 systemd[1]: Stopped initrd-switch-root.service - Switch Root. May 7 23:37:13.478837 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. May 7 23:37:13.478847 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. May 7 23:37:13.478858 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. May 7 23:37:13.478869 systemd[1]: Created slice system-getty.slice - Slice /system/getty. May 7 23:37:13.478879 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. May 7 23:37:13.478889 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. May 7 23:37:13.478899 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. May 7 23:37:13.478909 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. May 7 23:37:13.478920 systemd[1]: Created slice user.slice - User and Session Slice. May 7 23:37:13.478929 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 7 23:37:13.478939 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 7 23:37:13.478950 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. May 7 23:37:13.478963 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. May 7 23:37:13.478973 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. May 7 23:37:13.478984 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... May 7 23:37:13.478994 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... 
May 7 23:37:13.479003 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 7 23:37:13.479013 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. May 7 23:37:13.479023 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. May 7 23:37:13.479041 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. May 7 23:37:13.479057 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. May 7 23:37:13.479068 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. May 7 23:37:13.479078 systemd[1]: Reached target remote-fs.target - Remote File Systems. May 7 23:37:13.479088 systemd[1]: Reached target slices.target - Slice Units. May 7 23:37:13.479098 systemd[1]: Reached target swap.target - Swaps. May 7 23:37:13.479108 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. May 7 23:37:13.479118 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. May 7 23:37:13.479128 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. May 7 23:37:13.479159 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. May 7 23:37:13.479174 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. May 7 23:37:13.479185 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. May 7 23:37:13.479195 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. May 7 23:37:13.479205 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... May 7 23:37:13.479216 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... May 7 23:37:13.479226 systemd[1]: Mounting media.mount - External Media Directory... May 7 23:37:13.479236 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... 
May 7 23:37:13.479245 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... May 7 23:37:13.479257 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... May 7 23:37:13.479268 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). May 7 23:37:13.479280 systemd[1]: Reached target machines.target - Containers. May 7 23:37:13.479302 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... May 7 23:37:13.479314 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 7 23:37:13.479325 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... May 7 23:37:13.479335 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... May 7 23:37:13.479345 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 7 23:37:13.479356 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... May 7 23:37:13.479368 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 7 23:37:13.479378 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... May 7 23:37:13.479389 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 7 23:37:13.479400 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). May 7 23:37:13.479410 systemd[1]: systemd-fsck-root.service: Deactivated successfully. May 7 23:37:13.479421 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. May 7 23:37:13.479431 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. May 7 23:37:13.479441 systemd[1]: Stopped systemd-fsck-usr.service. 
May 7 23:37:13.479453 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). May 7 23:37:13.479462 kernel: fuse: init (API version 7.39) May 7 23:37:13.479472 kernel: loop: module loaded May 7 23:37:13.479481 systemd[1]: Starting systemd-journald.service - Journal Service... May 7 23:37:13.479491 kernel: ACPI: bus type drm_connector registered May 7 23:37:13.479501 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... May 7 23:37:13.479555 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... May 7 23:37:13.479572 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... May 7 23:37:13.479582 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... May 7 23:37:13.479595 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... May 7 23:37:13.479629 systemd-journald[1113]: Collecting audit messages is disabled. May 7 23:37:13.479654 systemd[1]: verity-setup.service: Deactivated successfully. May 7 23:37:13.479665 systemd[1]: Stopped verity-setup.service. May 7 23:37:13.479678 systemd-journald[1113]: Journal started May 7 23:37:13.479697 systemd-journald[1113]: Runtime Journal (/run/log/journal/6f9d8021dde645d9aa55a33d927ffb70) is 5.9M, max 47.3M, 41.4M free. May 7 23:37:13.298655 systemd[1]: Queued start job for default target multi-user.target. May 7 23:37:13.313026 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. May 7 23:37:13.313456 systemd[1]: systemd-journald.service: Deactivated successfully. May 7 23:37:13.482164 systemd[1]: Started systemd-journald.service - Journal Service. May 7 23:37:13.483532 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. 
May 7 23:37:13.484510 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. May 7 23:37:13.485527 systemd[1]: Mounted media.mount - External Media Directory. May 7 23:37:13.486474 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. May 7 23:37:13.487544 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. May 7 23:37:13.488484 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. May 7 23:37:13.491169 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. May 7 23:37:13.492457 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. May 7 23:37:13.493663 systemd[1]: modprobe@configfs.service: Deactivated successfully. May 7 23:37:13.493828 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. May 7 23:37:13.495036 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 7 23:37:13.495226 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 7 23:37:13.496355 systemd[1]: modprobe@drm.service: Deactivated successfully. May 7 23:37:13.496509 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. May 7 23:37:13.497595 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 7 23:37:13.497746 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 7 23:37:13.498994 systemd[1]: modprobe@fuse.service: Deactivated successfully. May 7 23:37:13.499429 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. May 7 23:37:13.500488 systemd[1]: modprobe@loop.service: Deactivated successfully. May 7 23:37:13.500639 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 7 23:37:13.501884 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. May 7 23:37:13.503216 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. 
May 7 23:37:13.504483 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. May 7 23:37:13.505752 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. May 7 23:37:13.517531 systemd[1]: Reached target network-pre.target - Preparation for Network. May 7 23:37:13.528293 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... May 7 23:37:13.530109 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... May 7 23:37:13.530964 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). May 7 23:37:13.530997 systemd[1]: Reached target local-fs.target - Local File Systems. May 7 23:37:13.532781 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. May 7 23:37:13.534761 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... May 7 23:37:13.536649 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... May 7 23:37:13.537578 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 7 23:37:13.539101 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... May 7 23:37:13.540800 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... May 7 23:37:13.541774 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 7 23:37:13.545347 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... May 7 23:37:13.546246 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. 
May 7 23:37:13.547268 systemd-journald[1113]: Time spent on flushing to /var/log/journal/6f9d8021dde645d9aa55a33d927ffb70 is 27.919ms for 867 entries. May 7 23:37:13.547268 systemd-journald[1113]: System Journal (/var/log/journal/6f9d8021dde645d9aa55a33d927ffb70) is 8M, max 195.6M, 187.6M free. May 7 23:37:13.592455 systemd-journald[1113]: Received client request to flush runtime journal. May 7 23:37:13.592518 kernel: loop0: detected capacity change from 0 to 113512 May 7 23:37:13.592538 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher May 7 23:37:13.547329 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... May 7 23:37:13.552897 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... May 7 23:37:13.556724 systemd[1]: Starting systemd-sysusers.service - Create System Users... May 7 23:37:13.559615 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. May 7 23:37:13.560714 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. May 7 23:37:13.565407 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. May 7 23:37:13.567343 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. May 7 23:37:13.568836 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. May 7 23:37:13.572347 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. May 7 23:37:13.576123 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. May 7 23:37:13.593368 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... May 7 23:37:13.595763 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... May 7 23:37:13.597525 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. 
May 7 23:37:13.604628 kernel: loop1: detected capacity change from 0 to 123192 May 7 23:37:13.606034 systemd[1]: Finished systemd-sysusers.service - Create System Users. May 7 23:37:13.619514 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... May 7 23:37:13.622486 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. May 7 23:37:13.625226 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. May 7 23:37:13.627123 udevadm[1175]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. May 7 23:37:13.639656 kernel: loop2: detected capacity change from 0 to 194096 May 7 23:37:13.644270 systemd-tmpfiles[1180]: ACLs are not supported, ignoring. May 7 23:37:13.644294 systemd-tmpfiles[1180]: ACLs are not supported, ignoring. May 7 23:37:13.650438 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 7 23:37:13.686187 kernel: loop3: detected capacity change from 0 to 113512 May 7 23:37:13.691223 kernel: loop4: detected capacity change from 0 to 123192 May 7 23:37:13.697159 kernel: loop5: detected capacity change from 0 to 194096 May 7 23:37:13.704566 (sd-merge)[1188]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. May 7 23:37:13.705033 (sd-merge)[1188]: Merged extensions into '/usr'. May 7 23:37:13.709097 systemd[1]: Reload requested from client PID 1161 ('systemd-sysext') (unit systemd-sysext.service)... May 7 23:37:13.709266 systemd[1]: Reloading... May 7 23:37:13.779222 zram_generator::config[1221]: No configuration found. May 7 23:37:13.845807 ldconfig[1156]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. 
May 7 23:37:13.889291 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 7 23:37:13.938618 systemd[1]: Reloading finished in 228 ms. May 7 23:37:13.958949 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. May 7 23:37:13.960336 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. May 7 23:37:13.972380 systemd[1]: Starting ensure-sysext.service... May 7 23:37:13.975690 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... May 7 23:37:13.989436 systemd-tmpfiles[1253]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. May 7 23:37:13.989638 systemd-tmpfiles[1253]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. May 7 23:37:13.990277 systemd-tmpfiles[1253]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. May 7 23:37:13.990488 systemd-tmpfiles[1253]: ACLs are not supported, ignoring. May 7 23:37:13.990535 systemd-tmpfiles[1253]: ACLs are not supported, ignoring. May 7 23:37:13.993076 systemd-tmpfiles[1253]: Detected autofs mount point /boot during canonicalization of boot. May 7 23:37:13.993094 systemd-tmpfiles[1253]: Skipping /boot May 7 23:37:13.996872 systemd[1]: Reload requested from client PID 1252 ('systemctl') (unit ensure-sysext.service)... May 7 23:37:13.996886 systemd[1]: Reloading... May 7 23:37:14.001956 systemd-tmpfiles[1253]: Detected autofs mount point /boot during canonicalization of boot. May 7 23:37:14.001971 systemd-tmpfiles[1253]: Skipping /boot May 7 23:37:14.043189 zram_generator::config[1285]: No configuration found. 
May 7 23:37:14.125579 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 7 23:37:14.175402 systemd[1]: Reloading finished in 178 ms. May 7 23:37:14.188947 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. May 7 23:37:14.210479 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. May 7 23:37:14.218568 systemd[1]: Starting audit-rules.service - Load Audit Rules... May 7 23:37:14.221415 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... May 7 23:37:14.223789 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... May 7 23:37:14.227467 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... May 7 23:37:14.234257 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... May 7 23:37:14.238825 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... May 7 23:37:14.243915 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 7 23:37:14.247453 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 7 23:37:14.252993 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 7 23:37:14.256423 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 7 23:37:14.257411 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. 
May 7 23:37:14.258313 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). May 7 23:37:14.262268 systemd[1]: Starting systemd-userdbd.service - User Database Manager... May 7 23:37:14.264225 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. May 7 23:37:14.265621 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 7 23:37:14.267178 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 7 23:37:14.268693 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 7 23:37:14.268843 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 7 23:37:14.270720 systemd[1]: modprobe@loop.service: Deactivated successfully. May 7 23:37:14.270879 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 7 23:37:14.277398 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 7 23:37:14.287481 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 7 23:37:14.289441 systemd-udevd[1323]: Using default interface naming scheme 'v255'. May 7 23:37:14.289596 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 7 23:37:14.295398 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 7 23:37:14.296322 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 7 23:37:14.296495 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). 
May 7 23:37:14.300485 systemd[1]: Starting systemd-update-done.service - Update is Completed... May 7 23:37:14.303028 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 7 23:37:14.303225 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 7 23:37:14.304818 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 7 23:37:14.306183 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 7 23:37:14.307823 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. May 7 23:37:14.309343 systemd[1]: modprobe@loop.service: Deactivated successfully. May 7 23:37:14.309510 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 7 23:37:14.318375 systemd[1]: Finished systemd-update-done.service - Update is Completed. May 7 23:37:14.319958 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. May 7 23:37:14.323767 systemd[1]: Started systemd-userdbd.service - User Database Manager. May 7 23:37:14.325852 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. May 7 23:37:14.332415 systemd[1]: Finished ensure-sysext.service. May 7 23:37:14.336868 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 7 23:37:14.340591 augenrules[1384]: No rules May 7 23:37:14.349415 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 7 23:37:14.352126 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... May 7 23:37:14.355455 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 7 23:37:14.357363 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 7 23:37:14.358240 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. 
May 7 23:37:14.358299 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). May 7 23:37:14.361405 systemd[1]: Starting systemd-networkd.service - Network Configuration... May 7 23:37:14.365681 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... May 7 23:37:14.366571 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). May 7 23:37:14.368354 systemd[1]: audit-rules.service: Deactivated successfully. May 7 23:37:14.368706 systemd[1]: Finished audit-rules.service - Load Audit Rules. May 7 23:37:14.369807 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 7 23:37:14.369967 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 7 23:37:14.373897 systemd[1]: modprobe@drm.service: Deactivated successfully. May 7 23:37:14.374094 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. May 7 23:37:14.377846 systemd[1]: modprobe@loop.service: Deactivated successfully. May 7 23:37:14.378052 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 7 23:37:14.381228 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. May 7 23:37:14.381380 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. May 7 23:37:14.382934 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 7 23:37:14.383187 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. 
May 7 23:37:14.384764 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 7 23:37:14.388374 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 42 scanned by (udev-worker) (1381) May 7 23:37:14.449304 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. May 7 23:37:14.452661 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. May 7 23:37:14.453881 systemd[1]: Reached target time-set.target - System Time Set. May 7 23:37:14.460644 systemd-resolved[1322]: Positive Trust Anchors: May 7 23:37:14.460808 systemd-resolved[1322]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d May 7 23:37:14.460845 systemd-resolved[1322]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test May 7 23:37:14.462003 systemd-networkd[1395]: lo: Link UP May 7 23:37:14.462007 systemd-networkd[1395]: lo: Gained carrier May 7 23:37:14.466341 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... May 7 23:37:14.466964 systemd-networkd[1395]: Enumeration completed May 7 23:37:14.467464 systemd[1]: Started systemd-networkd.service - Network Configuration. May 7 23:37:14.469801 systemd-networkd[1395]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
May 7 23:37:14.469812 systemd-networkd[1395]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. May 7 23:37:14.470403 systemd-networkd[1395]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 7 23:37:14.470436 systemd-networkd[1395]: eth0: Link UP May 7 23:37:14.470439 systemd-networkd[1395]: eth0: Gained carrier May 7 23:37:14.470645 systemd-networkd[1395]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 7 23:37:14.471960 systemd-resolved[1322]: Defaulting to hostname 'linux'. May 7 23:37:14.476246 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... May 7 23:37:14.478376 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... May 7 23:37:14.479447 systemd[1]: Started systemd-resolved.service - Network Name Resolution. May 7 23:37:14.480330 systemd[1]: Reached target network.target - Network. May 7 23:37:14.480958 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. May 7 23:37:14.488492 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. May 7 23:37:14.497226 systemd-networkd[1395]: eth0: DHCPv4 address 10.0.0.15/16, gateway 10.0.0.1 acquired from 10.0.0.1 May 7 23:37:14.497824 systemd-timesyncd[1396]: Network configuration changed, trying to establish connection. May 7 23:37:14.498841 systemd-timesyncd[1396]: Contacted time server 10.0.0.1:123 (10.0.0.1). May 7 23:37:14.498891 systemd-timesyncd[1396]: Initial clock synchronization to Wed 2025-05-07 23:37:14.356003 UTC. May 7 23:37:14.505750 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. 
May 7 23:37:14.531383 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 7 23:37:14.544327 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
May 7 23:37:14.546926 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
May 7 23:37:14.568313 lvm[1423]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
May 7 23:37:14.572201 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 7 23:37:14.600739 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
May 7 23:37:14.602360 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
May 7 23:37:14.603481 systemd[1]: Reached target sysinit.target - System Initialization.
May 7 23:37:14.604677 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
May 7 23:37:14.605927 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
May 7 23:37:14.607356 systemd[1]: Started logrotate.timer - Daily rotation of log files.
May 7 23:37:14.608521 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
May 7 23:37:14.609804 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
May 7 23:37:14.611062 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
May 7 23:37:14.611107 systemd[1]: Reached target paths.target - Path Units.
May 7 23:37:14.612058 systemd[1]: Reached target timers.target - Timer Units.
May 7 23:37:14.613914 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
May 7 23:37:14.616374 systemd[1]: Starting docker.socket - Docker Socket for the API...
May 7 23:37:14.619536 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
May 7 23:37:14.620701 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
May 7 23:37:14.621684 systemd[1]: Reached target ssh-access.target - SSH Access Available.
May 7 23:37:14.627402 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
May 7 23:37:14.628868 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
May 7 23:37:14.631316 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
May 7 23:37:14.632960 systemd[1]: Listening on docker.socket - Docker Socket for the API.
May 7 23:37:14.634115 systemd[1]: Reached target sockets.target - Socket Units.
May 7 23:37:14.635217 systemd[1]: Reached target basic.target - Basic System.
May 7 23:37:14.636124 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
May 7 23:37:14.636170 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
May 7 23:37:14.637095 systemd[1]: Starting containerd.service - containerd container runtime...
May 7 23:37:14.639032 lvm[1430]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
May 7 23:37:14.639111 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
May 7 23:37:14.642363 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
May 7 23:37:14.647922 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
May 7 23:37:14.649050 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
May 7 23:37:14.652448 jq[1433]: false
May 7 23:37:14.653231 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
May 7 23:37:14.655213 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
May 7 23:37:14.660333 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
May 7 23:37:14.665310 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
May 7 23:37:14.665681 extend-filesystems[1434]: Found loop3
May 7 23:37:14.666533 extend-filesystems[1434]: Found loop4
May 7 23:37:14.666533 extend-filesystems[1434]: Found loop5
May 7 23:37:14.666533 extend-filesystems[1434]: Found vda
May 7 23:37:14.666533 extend-filesystems[1434]: Found vda1
May 7 23:37:14.666533 extend-filesystems[1434]: Found vda2
May 7 23:37:14.666533 extend-filesystems[1434]: Found vda3
May 7 23:37:14.666533 extend-filesystems[1434]: Found usr
May 7 23:37:14.666533 extend-filesystems[1434]: Found vda4
May 7 23:37:14.666533 extend-filesystems[1434]: Found vda6
May 7 23:37:14.666533 extend-filesystems[1434]: Found vda7
May 7 23:37:14.666533 extend-filesystems[1434]: Found vda9
May 7 23:37:14.666533 extend-filesystems[1434]: Checking size of /dev/vda9
May 7 23:37:14.683071 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
May 7 23:37:14.683192 extend-filesystems[1434]: Resized partition /dev/vda9
May 7 23:37:14.668894 systemd[1]: Starting systemd-logind.service - User Login Management...
May 7 23:37:14.685365 extend-filesystems[1451]: resize2fs 1.47.1 (20-May-2024)
May 7 23:37:14.693193 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 42 scanned by (udev-worker) (1365)
May 7 23:37:14.670918 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
May 7 23:37:14.686870 dbus-daemon[1432]: [system] SELinux support is enabled
May 7 23:37:14.671488 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
May 7 23:37:14.674835 systemd[1]: Starting update-engine.service - Update Engine...
May 7 23:37:14.681523 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
May 7 23:37:14.684015 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
May 7 23:37:14.687239 systemd[1]: Started dbus.service - D-Bus System Message Bus.
May 7 23:37:14.699868 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
May 7 23:37:14.701712 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
May 7 23:37:14.702054 systemd[1]: motdgen.service: Deactivated successfully.
May 7 23:37:14.702417 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
May 7 23:37:14.706109 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
May 7 23:37:14.706358 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
May 7 23:37:14.712239 jq[1454]: true
May 7 23:37:14.723821 (ntainerd)[1459]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
May 7 23:37:14.738221 jq[1460]: true
May 7 23:37:14.756478 kernel: EXT4-fs (vda9): resized filesystem to 1864699
May 7 23:37:14.770115 update_engine[1448]: I20250507 23:37:14.755829 1448 main.cc:92] Flatcar Update Engine starting
May 7 23:37:14.770115 update_engine[1448]: I20250507 23:37:14.759227 1448 update_check_scheduler.cc:74] Next update check in 10m35s
May 7 23:37:14.763703 systemd[1]: Started update-engine.service - Update Engine.
May 7 23:37:14.777504 tar[1457]: linux-arm64/helm
May 7 23:37:14.777672 extend-filesystems[1451]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
May 7 23:37:14.777672 extend-filesystems[1451]: old_desc_blocks = 1, new_desc_blocks = 1
May 7 23:37:14.777672 extend-filesystems[1451]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
May 7 23:37:14.764900 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
May 7 23:37:14.780525 extend-filesystems[1434]: Resized filesystem in /dev/vda9
May 7 23:37:14.764930 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
May 7 23:37:14.766417 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
May 7 23:37:14.766435 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
May 7 23:37:14.780172 systemd[1]: Started locksmithd.service - Cluster reboot manager.
May 7 23:37:14.781554 systemd[1]: extend-filesystems.service: Deactivated successfully.
May 7 23:37:14.781731 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
May 7 23:37:14.789498 systemd-logind[1447]: Watching system buttons on /dev/input/event0 (Power Button)
May 7 23:37:14.793365 systemd-logind[1447]: New seat seat0.
May 7 23:37:14.797920 systemd[1]: Started systemd-logind.service - User Login Management.
May 7 23:37:14.839234 bash[1487]: Updated "/home/core/.ssh/authorized_keys"
May 7 23:37:14.843199 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
May 7 23:37:14.844753 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
May 7 23:37:14.848115 locksmithd[1482]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
May 7 23:37:14.969483 containerd[1459]: time="2025-05-07T23:37:14.969341720Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23
May 7 23:37:15.000869 containerd[1459]: time="2025-05-07T23:37:15.000772300Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
May 7 23:37:15.002305 containerd[1459]: time="2025-05-07T23:37:15.002240126Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.88-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
May 7 23:37:15.002305 containerd[1459]: time="2025-05-07T23:37:15.002282504Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
May 7 23:37:15.002305 containerd[1459]: time="2025-05-07T23:37:15.002299273Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
May 7 23:37:15.002483 containerd[1459]: time="2025-05-07T23:37:15.002456062Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
May 7 23:37:15.002483 containerd[1459]: time="2025-05-07T23:37:15.002478857Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
May 7 23:37:15.002557 containerd[1459]: time="2025-05-07T23:37:15.002540977Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
May 7 23:37:15.002584 containerd[1459]: time="2025-05-07T23:37:15.002557509Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
May 7 23:37:15.002785 containerd[1459]: time="2025-05-07T23:37:15.002756636Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
May 7 23:37:15.002785 containerd[1459]: time="2025-05-07T23:37:15.002778202Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
May 7 23:37:15.002825 containerd[1459]: time="2025-05-07T23:37:15.002792196Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
May 7 23:37:15.002825 containerd[1459]: time="2025-05-07T23:37:15.002801829Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
May 7 23:37:15.002998 containerd[1459]: time="2025-05-07T23:37:15.002980659Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
May 7 23:37:15.003217 containerd[1459]: time="2025-05-07T23:37:15.003200916Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
May 7 23:37:15.003372 containerd[1459]: time="2025-05-07T23:37:15.003354097Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
May 7 23:37:15.003394 containerd[1459]: time="2025-05-07T23:37:15.003373602Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
May 7 23:37:15.003462 containerd[1459]: time="2025-05-07T23:37:15.003448289Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
May 7 23:37:15.003511 containerd[1459]: time="2025-05-07T23:37:15.003498477Z" level=info msg="metadata content store policy set" policy=shared
May 7 23:37:15.006980 containerd[1459]: time="2025-05-07T23:37:15.006951353Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
May 7 23:37:15.007056 containerd[1459]: time="2025-05-07T23:37:15.006999836Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
May 7 23:37:15.007056 containerd[1459]: time="2025-05-07T23:37:15.007015971Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
May 7 23:37:15.007056 containerd[1459]: time="2025-05-07T23:37:15.007031233Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
May 7 23:37:15.007056 containerd[1459]: time="2025-05-07T23:37:15.007045148Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
May 7 23:37:15.007224 containerd[1459]: time="2025-05-07T23:37:15.007196347Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
May 7 23:37:15.007449 containerd[1459]: time="2025-05-07T23:37:15.007432263Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
May 7 23:37:15.007555 containerd[1459]: time="2025-05-07T23:37:15.007535851Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
May 7 23:37:15.007555 containerd[1459]: time="2025-05-07T23:37:15.007557218Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
May 7 23:37:15.007614 containerd[1459]: time="2025-05-07T23:37:15.007571926Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
May 7 23:37:15.007614 containerd[1459]: time="2025-05-07T23:37:15.007584810Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
May 7 23:37:15.007614 containerd[1459]: time="2025-05-07T23:37:15.007597971Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
May 7 23:37:15.007614 containerd[1459]: time="2025-05-07T23:37:15.007609825Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
May 7 23:37:15.007676 containerd[1459]: time="2025-05-07T23:37:15.007623977Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
May 7 23:37:15.007676 containerd[1459]: time="2025-05-07T23:37:15.007638249Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
May 7 23:37:15.007676 containerd[1459]: time="2025-05-07T23:37:15.007652956Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
May 7 23:37:15.007676 containerd[1459]: time="2025-05-07T23:37:15.007664889Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
May 7 23:37:15.007676 containerd[1459]: time="2025-05-07T23:37:15.007675712Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
May 7 23:37:15.007812 containerd[1459]: time="2025-05-07T23:37:15.007694780Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
May 7 23:37:15.007812 containerd[1459]: time="2025-05-07T23:37:15.007708536Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
May 7 23:37:15.007812 containerd[1459]: time="2025-05-07T23:37:15.007719914Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
May 7 23:37:15.007812 containerd[1459]: time="2025-05-07T23:37:15.007732401Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
May 7 23:37:15.007812 containerd[1459]: time="2025-05-07T23:37:15.007744056Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
May 7 23:37:15.007812 containerd[1459]: time="2025-05-07T23:37:15.007756147Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
May 7 23:37:15.007812 containerd[1459]: time="2025-05-07T23:37:15.007767327Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
May 7 23:37:15.007812 containerd[1459]: time="2025-05-07T23:37:15.007778863Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
May 7 23:37:15.007812 containerd[1459]: time="2025-05-07T23:37:15.007791232Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
May 7 23:37:15.007812 containerd[1459]: time="2025-05-07T23:37:15.007805027Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
May 7 23:37:15.007812 containerd[1459]: time="2025-05-07T23:37:15.007816445Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
May 7 23:37:15.008104 containerd[1459]: time="2025-05-07T23:37:15.007828774Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
May 7 23:37:15.008104 containerd[1459]: time="2025-05-07T23:37:15.007840468Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
May 7 23:37:15.008104 containerd[1459]: time="2025-05-07T23:37:15.007855334Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
May 7 23:37:15.008104 containerd[1459]: time="2025-05-07T23:37:15.007880746Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
May 7 23:37:15.008104 containerd[1459]: time="2025-05-07T23:37:15.007893709Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
May 7 23:37:15.008104 containerd[1459]: time="2025-05-07T23:37:15.007903739Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
May 7 23:37:15.008306 containerd[1459]: time="2025-05-07T23:37:15.008282608Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
May 7 23:37:15.008306 containerd[1459]: time="2025-05-07T23:37:15.008302588Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
May 7 23:37:15.008344 containerd[1459]: time="2025-05-07T23:37:15.008313014Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
May 7 23:37:15.008344 containerd[1459]: time="2025-05-07T23:37:15.008324788Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
May 7 23:37:15.008344 containerd[1459]: time="2025-05-07T23:37:15.008333668Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
May 7 23:37:15.008396 containerd[1459]: time="2025-05-07T23:37:15.008346711Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
May 7 23:37:15.008396 containerd[1459]: time="2025-05-07T23:37:15.008356701Z" level=info msg="NRI interface is disabled by configuration."
May 7 23:37:15.008396 containerd[1459]: time="2025-05-07T23:37:15.008366929Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
May 7 23:37:15.008839 containerd[1459]: time="2025-05-07T23:37:15.008785401Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
May 7 23:37:15.008839 containerd[1459]: time="2025-05-07T23:37:15.008838365Z" level=info msg="Connect containerd service"
May 7 23:37:15.009055 containerd[1459]: time="2025-05-07T23:37:15.008869405Z" level=info msg="using legacy CRI server"
May 7 23:37:15.009055 containerd[1459]: time="2025-05-07T23:37:15.008876263Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
May 7 23:37:15.009340 containerd[1459]: time="2025-05-07T23:37:15.009322447Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
May 7 23:37:15.011744 containerd[1459]: time="2025-05-07T23:37:15.011705989Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
May 7 23:37:15.012226 containerd[1459]: time="2025-05-07T23:37:15.012192687Z" level=info msg="Start subscribing containerd event"
May 7 23:37:15.012388 containerd[1459]: time="2025-05-07T23:37:15.012368385Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
May 7 23:37:15.012422 containerd[1459]: time="2025-05-07T23:37:15.012416274Z" level=info msg=serving... address=/run/containerd/containerd.sock
May 7 23:37:15.012579 containerd[1459]: time="2025-05-07T23:37:15.012563667Z" level=info msg="Start recovering state"
May 7 23:37:15.012654 containerd[1459]: time="2025-05-07T23:37:15.012636531Z" level=info msg="Start event monitor"
May 7 23:37:15.012702 containerd[1459]: time="2025-05-07T23:37:15.012665154Z" level=info msg="Start snapshots syncer"
May 7 23:37:15.012702 containerd[1459]: time="2025-05-07T23:37:15.012675818Z" level=info msg="Start cni network conf syncer for default"
May 7 23:37:15.012702 containerd[1459]: time="2025-05-07T23:37:15.012683707Z" level=info msg="Start streaming server"
May 7 23:37:15.012898 systemd[1]: Started containerd.service - containerd container runtime.
May 7 23:37:15.014218 containerd[1459]: time="2025-05-07T23:37:15.014176309Z" level=info msg="containerd successfully booted in 0.046715s"
May 7 23:37:15.114707 tar[1457]: linux-arm64/LICENSE
May 7 23:37:15.116376 tar[1457]: linux-arm64/README.md
May 7 23:37:15.130185 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
May 7 23:37:15.507017 sshd_keygen[1456]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
May 7 23:37:15.527189 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
May 7 23:37:15.538442 systemd[1]: Starting issuegen.service - Generate /run/issue...
May 7 23:37:15.543755 systemd[1]: issuegen.service: Deactivated successfully.
May 7 23:37:15.543988 systemd[1]: Finished issuegen.service - Generate /run/issue.
May 7 23:37:15.546581 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
May 7 23:37:15.558026 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
May 7 23:37:15.563188 systemd[1]: Started getty@tty1.service - Getty on tty1.
May 7 23:37:15.565249 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0.
May 7 23:37:15.566354 systemd[1]: Reached target getty.target - Login Prompts.
May 7 23:37:15.621019 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
May 7 23:37:15.631379 systemd[1]: Started sshd@0-10.0.0.15:22-10.0.0.1:40320.service - OpenSSH per-connection server daemon (10.0.0.1:40320).
May 7 23:37:15.650231 systemd-networkd[1395]: eth0: Gained IPv6LL
May 7 23:37:15.657226 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
May 7 23:37:15.658935 systemd[1]: Reached target network-online.target - Network is Online.
May 7 23:37:15.665413 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent...
May 7 23:37:15.667683 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 7 23:37:15.669476 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
May 7 23:37:15.684429 systemd[1]: coreos-metadata.service: Deactivated successfully.
May 7 23:37:15.684663 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent.
May 7 23:37:15.688626 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
May 7 23:37:15.693709 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
May 7 23:37:15.705686 sshd[1526]: Accepted publickey for core from 10.0.0.1 port 40320 ssh2: RSA SHA256:7X/0GL6Wfz1DCN37/GHlHSZkyG3w/l6TYxwfIGZEYGQ
May 7 23:37:15.707541 sshd-session[1526]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 7 23:37:15.717858 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
May 7 23:37:15.731438 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
May 7 23:37:15.734653 systemd-logind[1447]: New session 1 of user core.
May 7 23:37:15.742209 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
May 7 23:37:15.750476 systemd[1]: Starting user@500.service - User Manager for UID 500...
May 7 23:37:15.756494 (systemd)[1548]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
May 7 23:37:15.759047 systemd-logind[1447]: New session c1 of user core.
May 7 23:37:15.872402 systemd[1548]: Queued start job for default target default.target.
May 7 23:37:15.887095 systemd[1548]: Created slice app.slice - User Application Slice.
May 7 23:37:15.887295 systemd[1548]: Reached target paths.target - Paths.
May 7 23:37:15.887399 systemd[1548]: Reached target timers.target - Timers.
May 7 23:37:15.888642 systemd[1548]: Starting dbus.socket - D-Bus User Message Bus Socket...
May 7 23:37:15.899117 systemd[1548]: Listening on dbus.socket - D-Bus User Message Bus Socket.
May 7 23:37:15.899218 systemd[1548]: Reached target sockets.target - Sockets.
May 7 23:37:15.899259 systemd[1548]: Reached target basic.target - Basic System.
May 7 23:37:15.899286 systemd[1548]: Reached target default.target - Main User Target.
May 7 23:37:15.899311 systemd[1548]: Startup finished in 132ms.
May 7 23:37:15.899689 systemd[1]: Started user@500.service - User Manager for UID 500.
May 7 23:37:15.902984 systemd[1]: Started session-1.scope - Session 1 of User core.
May 7 23:37:15.971460 systemd[1]: Started sshd@1-10.0.0.15:22-10.0.0.1:40324.service - OpenSSH per-connection server daemon (10.0.0.1:40324).
May 7 23:37:16.013168 sshd[1559]: Accepted publickey for core from 10.0.0.1 port 40324 ssh2: RSA SHA256:7X/0GL6Wfz1DCN37/GHlHSZkyG3w/l6TYxwfIGZEYGQ
May 7 23:37:16.014380 sshd-session[1559]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 7 23:37:16.018536 systemd-logind[1447]: New session 2 of user core.
May 7 23:37:16.032292 systemd[1]: Started session-2.scope - Session 2 of User core.
May 7 23:37:16.085844 sshd[1561]: Connection closed by 10.0.0.1 port 40324
May 7 23:37:16.086507 sshd-session[1559]: pam_unix(sshd:session): session closed for user core
May 7 23:37:16.098452 systemd[1]: sshd@1-10.0.0.15:22-10.0.0.1:40324.service: Deactivated successfully.
May 7 23:37:16.099865 systemd[1]: session-2.scope: Deactivated successfully.
May 7 23:37:16.102759 systemd-logind[1447]: Session 2 logged out. Waiting for processes to exit.
May 7 23:37:16.115607 systemd[1]: Started sshd@2-10.0.0.15:22-10.0.0.1:40332.service - OpenSSH per-connection server daemon (10.0.0.1:40332).
May 7 23:37:16.117890 systemd-logind[1447]: Removed session 2.
May 7 23:37:16.157948 sshd[1566]: Accepted publickey for core from 10.0.0.1 port 40332 ssh2: RSA SHA256:7X/0GL6Wfz1DCN37/GHlHSZkyG3w/l6TYxwfIGZEYGQ
May 7 23:37:16.159062 sshd-session[1566]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 7 23:37:16.165294 systemd-logind[1447]: New session 3 of user core.
May 7 23:37:16.177328 systemd[1]: Started session-3.scope - Session 3 of User core.
May 7 23:37:16.216271 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 7 23:37:16.217865 systemd[1]: Reached target multi-user.target - Multi-User System.
May 7 23:37:16.220203 systemd[1]: Startup finished in 546ms (kernel) + 10.178s (initrd) + 3.380s (userspace) = 14.106s.
May 7 23:37:16.220439 (kubelet)[1575]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
May 7 23:37:16.241102 sshd[1569]: Connection closed by 10.0.0.1 port 40332
May 7 23:37:16.242532 sshd-session[1566]: pam_unix(sshd:session): session closed for user core
May 7 23:37:16.251990 systemd-logind[1447]: Session 3 logged out. Waiting for processes to exit.
May 7 23:37:16.252181 systemd[1]: sshd@2-10.0.0.15:22-10.0.0.1:40332.service: Deactivated successfully.
May 7 23:37:16.253948 systemd[1]: session-3.scope: Deactivated successfully.
May 7 23:37:16.256937 systemd-logind[1447]: Removed session 3.
May 7 23:37:16.712951 kubelet[1575]: E0507 23:37:16.712875 1575 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 7 23:37:16.715427 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 7 23:37:16.715573 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 7 23:37:16.717209 systemd[1]: kubelet.service: Consumed 824ms CPU time, 243M memory peak.
May 7 23:37:26.190370 systemd[1]: Started sshd@3-10.0.0.15:22-10.0.0.1:34800.service - OpenSSH per-connection server daemon (10.0.0.1:34800).
May 7 23:37:26.234560 sshd[1593]: Accepted publickey for core from 10.0.0.1 port 34800 ssh2: RSA SHA256:7X/0GL6Wfz1DCN37/GHlHSZkyG3w/l6TYxwfIGZEYGQ
May 7 23:37:26.235612 sshd-session[1593]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 7 23:37:26.239523 systemd-logind[1447]: New session 4 of user core.
May 7 23:37:26.250287 systemd[1]: Started session-4.scope - Session 4 of User core.
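[Editor's note] The kubelet exits above because /var/lib/kubelet/config.yaml does not exist yet; on a kubeadm-provisioned node this file is written during `kubeadm init` or `kubeadm join`, so the failures are expected until the node is bootstrapped. A minimal sketch of what such a config file contains (field values are illustrative assumptions, not taken from this host, though the cgroup driver, static pod path, and CA path do match settings logged later in this boot):

```yaml
# Hypothetical /var/lib/kubelet/config.yaml — normally generated by kubeadm;
# shown only to illustrate the file the kubelet is failing to load.
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd                     # matches "CgroupDriver":"systemd" in the later kubelet output
staticPodPath: /etc/kubernetes/manifests  # matches "Adding static pod path" later in this log
authentication:
  x509:
    clientCAFile: /etc/kubernetes/pki/ca.crt
```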
May 7 23:37:26.300595 sshd[1595]: Connection closed by 10.0.0.1 port 34800
May 7 23:37:26.301258 sshd-session[1593]: pam_unix(sshd:session): session closed for user core
May 7 23:37:26.314564 systemd[1]: sshd@3-10.0.0.15:22-10.0.0.1:34800.service: Deactivated successfully.
May 7 23:37:26.315821 systemd[1]: session-4.scope: Deactivated successfully.
May 7 23:37:26.316535 systemd-logind[1447]: Session 4 logged out. Waiting for processes to exit.
May 7 23:37:26.318191 systemd[1]: Started sshd@4-10.0.0.15:22-10.0.0.1:34816.service - OpenSSH per-connection server daemon (10.0.0.1:34816).
May 7 23:37:26.319474 systemd-logind[1447]: Removed session 4.
May 7 23:37:26.362111 sshd[1600]: Accepted publickey for core from 10.0.0.1 port 34816 ssh2: RSA SHA256:7X/0GL6Wfz1DCN37/GHlHSZkyG3w/l6TYxwfIGZEYGQ
May 7 23:37:26.363401 sshd-session[1600]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 7 23:37:26.367696 systemd-logind[1447]: New session 5 of user core.
May 7 23:37:26.382356 systemd[1]: Started session-5.scope - Session 5 of User core.
May 7 23:37:26.429760 sshd[1603]: Connection closed by 10.0.0.1 port 34816
May 7 23:37:26.430222 sshd-session[1600]: pam_unix(sshd:session): session closed for user core
May 7 23:37:26.444559 systemd[1]: sshd@4-10.0.0.15:22-10.0.0.1:34816.service: Deactivated successfully.
May 7 23:37:26.445856 systemd[1]: session-5.scope: Deactivated successfully.
May 7 23:37:26.446553 systemd-logind[1447]: Session 5 logged out. Waiting for processes to exit.
May 7 23:37:26.457449 systemd[1]: Started sshd@5-10.0.0.15:22-10.0.0.1:34820.service - OpenSSH per-connection server daemon (10.0.0.1:34820).
May 7 23:37:26.458327 systemd-logind[1447]: Removed session 5.
May 7 23:37:26.498128 sshd[1608]: Accepted publickey for core from 10.0.0.1 port 34820 ssh2: RSA SHA256:7X/0GL6Wfz1DCN37/GHlHSZkyG3w/l6TYxwfIGZEYGQ
May 7 23:37:26.499308 sshd-session[1608]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 7 23:37:26.503587 systemd-logind[1447]: New session 6 of user core.
May 7 23:37:26.509293 systemd[1]: Started session-6.scope - Session 6 of User core.
May 7 23:37:26.559489 sshd[1611]: Connection closed by 10.0.0.1 port 34820
May 7 23:37:26.559874 sshd-session[1608]: pam_unix(sshd:session): session closed for user core
May 7 23:37:26.569217 systemd[1]: sshd@5-10.0.0.15:22-10.0.0.1:34820.service: Deactivated successfully.
May 7 23:37:26.570586 systemd[1]: session-6.scope: Deactivated successfully.
May 7 23:37:26.571797 systemd-logind[1447]: Session 6 logged out. Waiting for processes to exit.
May 7 23:37:26.572898 systemd[1]: Started sshd@6-10.0.0.15:22-10.0.0.1:34828.service - OpenSSH per-connection server daemon (10.0.0.1:34828).
May 7 23:37:26.573689 systemd-logind[1447]: Removed session 6.
May 7 23:37:26.617559 sshd[1616]: Accepted publickey for core from 10.0.0.1 port 34828 ssh2: RSA SHA256:7X/0GL6Wfz1DCN37/GHlHSZkyG3w/l6TYxwfIGZEYGQ
May 7 23:37:26.618696 sshd-session[1616]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 7 23:37:26.622896 systemd-logind[1447]: New session 7 of user core.
May 7 23:37:26.634288 systemd[1]: Started session-7.scope - Session 7 of User core.
May 7 23:37:26.695410 sudo[1620]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
May 7 23:37:26.695676 sudo[1620]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
May 7 23:37:26.709050 sudo[1620]: pam_unix(sudo:session): session closed for user root
May 7 23:37:26.712304 sshd[1619]: Connection closed by 10.0.0.1 port 34828
May 7 23:37:26.712713 sshd-session[1616]: pam_unix(sshd:session): session closed for user core
May 7 23:37:26.722326 systemd[1]: sshd@6-10.0.0.15:22-10.0.0.1:34828.service: Deactivated successfully.
May 7 23:37:26.723696 systemd[1]: session-7.scope: Deactivated successfully.
May 7 23:37:26.724677 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
May 7 23:37:26.726325 systemd-logind[1447]: Session 7 logged out. Waiting for processes to exit.
May 7 23:37:26.736350 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 7 23:37:26.737545 systemd[1]: Started sshd@7-10.0.0.15:22-10.0.0.1:34838.service - OpenSSH per-connection server daemon (10.0.0.1:34838).
May 7 23:37:26.739905 systemd-logind[1447]: Removed session 7.
May 7 23:37:26.782554 sshd[1626]: Accepted publickey for core from 10.0.0.1 port 34838 ssh2: RSA SHA256:7X/0GL6Wfz1DCN37/GHlHSZkyG3w/l6TYxwfIGZEYGQ
May 7 23:37:26.783809 sshd-session[1626]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 7 23:37:26.788064 systemd-logind[1447]: New session 8 of user core.
May 7 23:37:26.799348 systemd[1]: Started session-8.scope - Session 8 of User core.
May 7 23:37:26.829504 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 7 23:37:26.832805 (kubelet)[1637]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
May 7 23:37:26.850033 sudo[1647]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
May 7 23:37:26.850322 sudo[1647]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
May 7 23:37:26.853264 sudo[1647]: pam_unix(sudo:session): session closed for user root
May 7 23:37:26.857986 sudo[1644]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
May 7 23:37:26.858272 sudo[1644]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
May 7 23:37:26.871483 systemd[1]: Starting audit-rules.service - Load Audit Rules...
May 7 23:37:26.874302 kubelet[1637]: E0507 23:37:26.874220 1637 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 7 23:37:26.878719 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 7 23:37:26.878848 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 7 23:37:26.879336 systemd[1]: kubelet.service: Consumed 128ms CPU time, 97.4M memory peak.
May 7 23:37:26.894786 augenrules[1670]: No rules
May 7 23:37:26.895635 systemd[1]: audit-rules.service: Deactivated successfully.
May 7 23:37:26.895846 systemd[1]: Finished audit-rules.service - Load Audit Rules.
May 7 23:37:26.897362 sudo[1644]: pam_unix(sudo:session): session closed for user root
May 7 23:37:26.898520 sshd[1631]: Connection closed by 10.0.0.1 port 34838
May 7 23:37:26.898980 sshd-session[1626]: pam_unix(sshd:session): session closed for user core
May 7 23:37:26.913267 systemd[1]: sshd@7-10.0.0.15:22-10.0.0.1:34838.service: Deactivated successfully.
May 7 23:37:26.914702 systemd[1]: session-8.scope: Deactivated successfully.
May 7 23:37:26.916215 systemd-logind[1447]: Session 8 logged out. Waiting for processes to exit.
May 7 23:37:26.923491 systemd[1]: Started sshd@8-10.0.0.15:22-10.0.0.1:34840.service - OpenSSH per-connection server daemon (10.0.0.1:34840).
May 7 23:37:26.924391 systemd-logind[1447]: Removed session 8.
May 7 23:37:26.964271 sshd[1678]: Accepted publickey for core from 10.0.0.1 port 34840 ssh2: RSA SHA256:7X/0GL6Wfz1DCN37/GHlHSZkyG3w/l6TYxwfIGZEYGQ
May 7 23:37:26.965409 sshd-session[1678]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 7 23:37:26.970209 systemd-logind[1447]: New session 9 of user core.
May 7 23:37:26.978271 systemd[1]: Started session-9.scope - Session 9 of User core.
May 7 23:37:27.028181 sudo[1682]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
May 7 23:37:27.028455 sudo[1682]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
May 7 23:37:27.369398 systemd[1]: Starting docker.service - Docker Application Container Engine...
May 7 23:37:27.369587 (dockerd)[1703]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
May 7 23:37:27.606005 dockerd[1703]: time="2025-05-07T23:37:27.605946279Z" level=info msg="Starting up"
May 7 23:37:27.759724 dockerd[1703]: time="2025-05-07T23:37:27.759583601Z" level=info msg="Loading containers: start."
May 7 23:37:27.911171 kernel: Initializing XFRM netlink socket
May 7 23:37:27.973608 systemd-networkd[1395]: docker0: Link UP
May 7 23:37:28.104461 dockerd[1703]: time="2025-05-07T23:37:28.104299554Z" level=info msg="Loading containers: done."
May 7 23:37:28.117067 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck442048144-merged.mount: Deactivated successfully.
May 7 23:37:28.222484 dockerd[1703]: time="2025-05-07T23:37:28.222402026Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
May 7 23:37:28.222613 dockerd[1703]: time="2025-05-07T23:37:28.222547503Z" level=info msg="Docker daemon" commit=41ca978a0a5400cc24b274137efa9f25517fcc0b containerd-snapshotter=false storage-driver=overlay2 version=27.3.1
May 7 23:37:28.222770 dockerd[1703]: time="2025-05-07T23:37:28.222750876Z" level=info msg="Daemon has completed initialization"
May 7 23:37:28.387284 dockerd[1703]: time="2025-05-07T23:37:28.387055319Z" level=info msg="API listen on /run/docker.sock"
May 7 23:37:28.387267 systemd[1]: Started docker.service - Docker Application Container Engine.
May 7 23:37:29.304901 containerd[1459]: time="2025-05-07T23:37:29.304850707Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.12\""
May 7 23:37:30.192801 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2691316187.mount: Deactivated successfully.
May 7 23:37:32.254291 containerd[1459]: time="2025-05-07T23:37:32.254229179Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.30.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 7 23:37:32.255712 containerd[1459]: time="2025-05-07T23:37:32.255647582Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.30.12: active requests=0, bytes read=29794152"
May 7 23:37:32.256774 containerd[1459]: time="2025-05-07T23:37:32.256742192Z" level=info msg="ImageCreate event name:\"sha256:afbe230ec4abc2c9e87f7fbe7814bde21dbe30f03252c8861c4ca9510cb43ec6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 7 23:37:32.259889 containerd[1459]: time="2025-05-07T23:37:32.259844524Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:4878682f7a044274d42399a6316ef452c5411aafd4ad99cc57de7235ca490e4e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 7 23:37:32.260875 containerd[1459]: time="2025-05-07T23:37:32.260842224Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.30.12\" with image id \"sha256:afbe230ec4abc2c9e87f7fbe7814bde21dbe30f03252c8861c4ca9510cb43ec6\", repo tag \"registry.k8s.io/kube-apiserver:v1.30.12\", repo digest \"registry.k8s.io/kube-apiserver@sha256:4878682f7a044274d42399a6316ef452c5411aafd4ad99cc57de7235ca490e4e\", size \"29790950\" in 2.955949903s"
May 7 23:37:32.260931 containerd[1459]: time="2025-05-07T23:37:32.260876800Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.12\" returns image reference \"sha256:afbe230ec4abc2c9e87f7fbe7814bde21dbe30f03252c8861c4ca9510cb43ec6\""
May 7 23:37:32.279055 containerd[1459]: time="2025-05-07T23:37:32.278856250Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.12\""
May 7 23:37:34.586892 containerd[1459]: time="2025-05-07T23:37:34.586831769Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.30.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 7 23:37:34.587857 containerd[1459]: time="2025-05-07T23:37:34.587631869Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.30.12: active requests=0, bytes read=26855552"
May 7 23:37:34.588768 containerd[1459]: time="2025-05-07T23:37:34.588710638Z" level=info msg="ImageCreate event name:\"sha256:3df23260c56ff58d759f8a841c67846184e97ce81a269549ca8d14b36da14c14\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 7 23:37:34.592227 containerd[1459]: time="2025-05-07T23:37:34.592166647Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:3a36711d0409d565b370a18d0c19339e93d4f1b1f2b3fd382eb31c714c463b74\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 7 23:37:34.592859 containerd[1459]: time="2025-05-07T23:37:34.592830225Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.30.12\" with image id \"sha256:3df23260c56ff58d759f8a841c67846184e97ce81a269549ca8d14b36da14c14\", repo tag \"registry.k8s.io/kube-controller-manager:v1.30.12\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:3a36711d0409d565b370a18d0c19339e93d4f1b1f2b3fd382eb31c714c463b74\", size \"28297111\" in 2.313940552s"
May 7 23:37:34.592905 containerd[1459]: time="2025-05-07T23:37:34.592859907Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.12\" returns image reference \"sha256:3df23260c56ff58d759f8a841c67846184e97ce81a269549ca8d14b36da14c14\""
May 7 23:37:34.611811 containerd[1459]: time="2025-05-07T23:37:34.611768519Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.12\""
May 7 23:37:36.195501 containerd[1459]: time="2025-05-07T23:37:36.195451668Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.30.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 7 23:37:36.196529 containerd[1459]: time="2025-05-07T23:37:36.196442224Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.30.12: active requests=0, bytes read=16263947"
May 7 23:37:36.197182 containerd[1459]: time="2025-05-07T23:37:36.197151907Z" level=info msg="ImageCreate event name:\"sha256:fb0f5dac5fa74463b801d11598454c00462609b582d17052195012e5f682c2ba\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 7 23:37:36.200641 containerd[1459]: time="2025-05-07T23:37:36.200579443Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:521c843d01025be7d4e246ddee8cde74556eb9813c606d6db9f0f03236f6d029\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 7 23:37:36.201308 containerd[1459]: time="2025-05-07T23:37:36.201272525Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.30.12\" with image id \"sha256:fb0f5dac5fa74463b801d11598454c00462609b582d17052195012e5f682c2ba\", repo tag \"registry.k8s.io/kube-scheduler:v1.30.12\", repo digest \"registry.k8s.io/kube-scheduler@sha256:521c843d01025be7d4e246ddee8cde74556eb9813c606d6db9f0f03236f6d029\", size \"17705524\" in 1.589463468s"
May 7 23:37:36.201308 containerd[1459]: time="2025-05-07T23:37:36.201306885Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.12\" returns image reference \"sha256:fb0f5dac5fa74463b801d11598454c00462609b582d17052195012e5f682c2ba\""
May 7 23:37:36.219667 containerd[1459]: time="2025-05-07T23:37:36.219624604Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.12\""
May 7 23:37:37.129324 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
May 7 23:37:37.139386 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 7 23:37:37.241914 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 7 23:37:37.245580 (kubelet)[2002]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
May 7 23:37:37.298476 kubelet[2002]: E0507 23:37:37.298373 2002 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 7 23:37:37.302906 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 7 23:37:37.303066 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 7 23:37:37.303475 systemd[1]: kubelet.service: Consumed 137ms CPU time, 95.3M memory peak.
May 7 23:37:37.440060 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2551293564.mount: Deactivated successfully.
May 7 23:37:37.808872 containerd[1459]: time="2025-05-07T23:37:37.808752291Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 7 23:37:37.809802 containerd[1459]: time="2025-05-07T23:37:37.809762946Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.12: active requests=0, bytes read=25775707"
May 7 23:37:37.810636 containerd[1459]: time="2025-05-07T23:37:37.810612031Z" level=info msg="ImageCreate event name:\"sha256:b4250a9efcae16f8d20358e204a159844e2b7e854edad08aee8791774acbdaed\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 7 23:37:37.812908 containerd[1459]: time="2025-05-07T23:37:37.812875538Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:ea8c7d5392acf6b0c11ebba78301e1a6c2dc6abcd7544102ed578e49d1c82f15\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 7 23:37:37.813948 containerd[1459]: time="2025-05-07T23:37:37.813924750Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.12\" with image id \"sha256:b4250a9efcae16f8d20358e204a159844e2b7e854edad08aee8791774acbdaed\", repo tag \"registry.k8s.io/kube-proxy:v1.30.12\", repo digest \"registry.k8s.io/kube-proxy@sha256:ea8c7d5392acf6b0c11ebba78301e1a6c2dc6abcd7544102ed578e49d1c82f15\", size \"25774724\" in 1.594262033s"
May 7 23:37:37.814003 containerd[1459]: time="2025-05-07T23:37:37.813951891Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.12\" returns image reference \"sha256:b4250a9efcae16f8d20358e204a159844e2b7e854edad08aee8791774acbdaed\""
May 7 23:37:37.832677 containerd[1459]: time="2025-05-07T23:37:37.832634466Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\""
May 7 23:37:38.496347 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1026742696.mount: Deactivated successfully.
May 7 23:37:39.331695 containerd[1459]: time="2025-05-07T23:37:39.331639983Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 7 23:37:39.332163 containerd[1459]: time="2025-05-07T23:37:39.332107814Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=16485383"
May 7 23:37:39.332987 containerd[1459]: time="2025-05-07T23:37:39.332937079Z" level=info msg="ImageCreate event name:\"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 7 23:37:39.335957 containerd[1459]: time="2025-05-07T23:37:39.335907637Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 7 23:37:39.337161 containerd[1459]: time="2025-05-07T23:37:39.337071506Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"16482581\" in 1.504394689s"
May 7 23:37:39.337161 containerd[1459]: time="2025-05-07T23:37:39.337106839Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\""
May 7 23:37:39.356600 containerd[1459]: time="2025-05-07T23:37:39.356565520Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\""
May 7 23:37:39.846462 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3973960154.mount: Deactivated successfully.
May 7 23:37:39.853716 containerd[1459]: time="2025-05-07T23:37:39.853667740Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 7 23:37:39.854691 containerd[1459]: time="2025-05-07T23:37:39.854645004Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=268823"
May 7 23:37:39.855540 containerd[1459]: time="2025-05-07T23:37:39.855494191Z" level=info msg="ImageCreate event name:\"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 7 23:37:39.857625 containerd[1459]: time="2025-05-07T23:37:39.857577155Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 7 23:37:39.858597 containerd[1459]: time="2025-05-07T23:37:39.858518886Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"268051\" in 501.915678ms"
May 7 23:37:39.858597 containerd[1459]: time="2025-05-07T23:37:39.858551504Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\""
May 7 23:37:39.877510 containerd[1459]: time="2025-05-07T23:37:39.877461467Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\""
May 7 23:37:40.557363 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount842452374.mount: Deactivated successfully.
May 7 23:37:43.162995 containerd[1459]: time="2025-05-07T23:37:43.162942591Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.12-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 7 23:37:43.164338 containerd[1459]: time="2025-05-07T23:37:43.164295647Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.12-0: active requests=0, bytes read=66191474"
May 7 23:37:43.165261 containerd[1459]: time="2025-05-07T23:37:43.165208628Z" level=info msg="ImageCreate event name:\"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 7 23:37:43.172111 containerd[1459]: time="2025-05-07T23:37:43.172062896Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 7 23:37:43.173819 containerd[1459]: time="2025-05-07T23:37:43.173471989Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.12-0\" with image id \"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\", repo tag \"registry.k8s.io/etcd:3.5.12-0\", repo digest \"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\", size \"66189079\" in 3.295970274s"
May 7 23:37:43.173819 containerd[1459]: time="2025-05-07T23:37:43.173519799Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\""
May 7 23:37:47.422543 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
May 7 23:37:47.434400 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 7 23:37:47.527529 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 7 23:37:47.530440 (kubelet)[2216]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
May 7 23:37:47.565554 kubelet[2216]: E0507 23:37:47.565505 2216 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 7 23:37:47.568191 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 7 23:37:47.568425 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 7 23:37:47.568821 systemd[1]: kubelet.service: Consumed 119ms CPU time, 98.7M memory peak.
May 7 23:37:47.903100 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
May 7 23:37:47.903488 systemd[1]: kubelet.service: Consumed 119ms CPU time, 98.7M memory peak.
May 7 23:37:47.914548 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 7 23:37:47.935563 systemd[1]: Reload requested from client PID 2231 ('systemctl') (unit session-9.scope)...
May 7 23:37:47.935583 systemd[1]: Reloading...
May 7 23:37:48.008831 zram_generator::config[2275]: No configuration found.
May 7 23:37:48.178830 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 7 23:37:48.251350 systemd[1]: Reloading finished in 315 ms.
May 7 23:37:48.294791 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 7 23:37:48.297852 (kubelet)[2311]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
May 7 23:37:48.298217 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
May 7 23:37:48.298476 systemd[1]: kubelet.service: Deactivated successfully.
May 7 23:37:48.298654 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
May 7 23:37:48.298696 systemd[1]: kubelet.service: Consumed 78ms CPU time, 82.4M memory peak.
May 7 23:37:48.300817 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 7 23:37:48.387023 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 7 23:37:48.391074 (kubelet)[2323]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
May 7 23:37:48.426146 kubelet[2323]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
May 7 23:37:48.426146 kubelet[2323]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
May 7 23:37:48.426146 kubelet[2323]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
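[Editor's note] The /var/run warning above is systemd rewriting docker.socket's ListenStream= path on the fly; it can be silenced permanently by overriding the unit to use /run directly. A sketch of such a drop-in (the override file itself is an assumption; the unit name and corrected path are taken from the log line):

```ini
# Hypothetical /etc/systemd/system/docker.socket.d/10-run-path.conf
[Socket]
# An empty assignment clears the inherited ListenStream list before re-adding it.
ListenStream=
ListenStream=/run/docker.sock
```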
May 7 23:37:48.426443 kubelet[2323]: I0507 23:37:48.426308 2323 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
May 7 23:37:49.015504 kubelet[2323]: I0507 23:37:49.015469 2323 server.go:484] "Kubelet version" kubeletVersion="v1.30.1"
May 7 23:37:49.017159 kubelet[2323]: I0507 23:37:49.015687 2323 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
May 7 23:37:49.017159 kubelet[2323]: I0507 23:37:49.015887 2323 server.go:927] "Client rotation is on, will bootstrap in background"
May 7 23:37:49.046868 kubelet[2323]: I0507 23:37:49.046836 2323 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
May 7 23:37:49.046955 kubelet[2323]: E0507 23:37:49.046913 2323 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.15:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.15:6443: connect: connection refused
May 7 23:37:49.059357 kubelet[2323]: I0507 23:37:49.059303 2323 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
May 7 23:37:49.060407 kubelet[2323]: I0507 23:37:49.060356 2323 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
May 7 23:37:49.060574 kubelet[2323]: I0507 23:37:49.060400 2323 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
May 7 23:37:49.060659 kubelet[2323]: I0507 23:37:49.060633 2323 topology_manager.go:138] "Creating topology manager with none policy"
May 7 23:37:49.060659 kubelet[2323]: I0507 23:37:49.060644 2323 container_manager_linux.go:301] "Creating device plugin manager"
May 7 23:37:49.060918 kubelet[2323]: I0507 23:37:49.060892 2323 state_mem.go:36] "Initialized new in-memory state store"
May 7 23:37:49.061761 kubelet[2323]: I0507 23:37:49.061738 2323 kubelet.go:400] "Attempting to sync node with API server"
May 7 23:37:49.061761 kubelet[2323]: I0507 23:37:49.061760 2323 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
May 7 23:37:49.062087 kubelet[2323]: I0507 23:37:49.062072 2323 kubelet.go:312] "Adding apiserver pod source"
May 7 23:37:49.062226 kubelet[2323]: I0507 23:37:49.062208 2323 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
May 7 23:37:49.062518 kubelet[2323]: W0507 23:37:49.062461 2323 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.15:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.15:6443: connect: connection refused
May 7 23:37:49.062551 kubelet[2323]: E0507 23:37:49.062531 2323 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.15:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.15:6443: connect: connection refused
May 7 23:37:49.062797 kubelet[2323]: W0507 23:37:49.062761 2323 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.15:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.15:6443: connect: connection refused
May 7 23:37:49.062842 kubelet[2323]: E0507 23:37:49.062804 2323 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.15:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.15:6443: connect: connection refused
May 7 23:37:49.064901
kubelet[2323]: I0507 23:37:49.063162 2323 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" May 7 23:37:49.064901 kubelet[2323]: I0507 23:37:49.063540 2323 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 7 23:37:49.064901 kubelet[2323]: W0507 23:37:49.063637 2323 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. May 7 23:37:49.064901 kubelet[2323]: I0507 23:37:49.064641 2323 server.go:1264] "Started kubelet" May 7 23:37:49.065198 kubelet[2323]: I0507 23:37:49.065166 2323 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 May 7 23:37:49.065544 kubelet[2323]: I0507 23:37:49.065486 2323 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 7 23:37:49.065818 kubelet[2323]: I0507 23:37:49.065797 2323 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 7 23:37:49.066191 kubelet[2323]: I0507 23:37:49.066124 2323 server.go:455] "Adding debug handlers to kubelet server" May 7 23:37:49.068685 kubelet[2323]: I0507 23:37:49.068228 2323 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 7 23:37:49.074554 kubelet[2323]: E0507 23:37:49.068372 2323 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.15:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.15:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.183d62fcf499d1fe default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-05-07 23:37:49.06462259 +0000 UTC 
m=+0.670037263,LastTimestamp:2025-05-07 23:37:49.06462259 +0000 UTC m=+0.670037263,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" May 7 23:37:49.074554 kubelet[2323]: I0507 23:37:49.074221 2323 volume_manager.go:291] "Starting Kubelet Volume Manager" May 7 23:37:49.074554 kubelet[2323]: I0507 23:37:49.074295 2323 desired_state_of_world_populator.go:149] "Desired state populator starts to run" May 7 23:37:49.074554 kubelet[2323]: I0507 23:37:49.074532 2323 reconciler.go:26] "Reconciler: start to sync state" May 7 23:37:49.074835 kubelet[2323]: W0507 23:37:49.074775 2323 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.15:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.15:6443: connect: connection refused May 7 23:37:49.074835 kubelet[2323]: E0507 23:37:49.074825 2323 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.15:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.15:6443: connect: connection refused May 7 23:37:49.075317 kubelet[2323]: E0507 23:37:49.075270 2323 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.15:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.15:6443: connect: connection refused" interval="200ms" May 7 23:37:49.075898 kubelet[2323]: I0507 23:37:49.075878 2323 factory.go:221] Registration of the systemd container factory successfully May 7 23:37:49.076285 kubelet[2323]: E0507 23:37:49.076254 2323 kubelet.go:1467] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 7 23:37:49.076397 kubelet[2323]: I0507 23:37:49.076375 2323 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 7 23:37:49.077710 kubelet[2323]: I0507 23:37:49.077682 2323 factory.go:221] Registration of the containerd container factory successfully May 7 23:37:49.087042 kubelet[2323]: I0507 23:37:49.086973 2323 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 7 23:37:49.088253 kubelet[2323]: I0507 23:37:49.088232 2323 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" May 7 23:37:49.088350 kubelet[2323]: I0507 23:37:49.088338 2323 status_manager.go:217] "Starting to sync pod status with apiserver" May 7 23:37:49.088437 kubelet[2323]: I0507 23:37:49.088425 2323 kubelet.go:2337] "Starting kubelet main sync loop" May 7 23:37:49.088540 kubelet[2323]: E0507 23:37:49.088522 2323 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 7 23:37:49.089019 kubelet[2323]: W0507 23:37:49.088973 2323 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.15:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.15:6443: connect: connection refused May 7 23:37:49.089019 kubelet[2323]: E0507 23:37:49.089009 2323 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.15:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.15:6443: connect: connection refused May 7 23:37:49.092332 kubelet[2323]: I0507 23:37:49.092315 2323 cpu_manager.go:214] "Starting CPU manager" 
policy="none" May 7 23:37:49.092644 kubelet[2323]: I0507 23:37:49.092414 2323 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" May 7 23:37:49.092644 kubelet[2323]: I0507 23:37:49.092434 2323 state_mem.go:36] "Initialized new in-memory state store" May 7 23:37:49.157427 kubelet[2323]: I0507 23:37:49.157390 2323 policy_none.go:49] "None policy: Start" May 7 23:37:49.158374 kubelet[2323]: I0507 23:37:49.158290 2323 memory_manager.go:170] "Starting memorymanager" policy="None" May 7 23:37:49.158374 kubelet[2323]: I0507 23:37:49.158319 2323 state_mem.go:35] "Initializing new in-memory state store" May 7 23:37:49.163406 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. May 7 23:37:49.176010 kubelet[2323]: I0507 23:37:49.175977 2323 kubelet_node_status.go:73] "Attempting to register node" node="localhost" May 7 23:37:49.176311 kubelet[2323]: E0507 23:37:49.176273 2323 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.15:6443/api/v1/nodes\": dial tcp 10.0.0.15:6443: connect: connection refused" node="localhost" May 7 23:37:49.177914 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. May 7 23:37:49.180730 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
May 7 23:37:49.189542 kubelet[2323]: E0507 23:37:49.189515 2323 kubelet.go:2361] "Skipping pod synchronization" err="container runtime status check may not have completed yet" May 7 23:37:49.194033 kubelet[2323]: I0507 23:37:49.194005 2323 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 7 23:37:49.194358 kubelet[2323]: I0507 23:37:49.194199 2323 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 7 23:37:49.194358 kubelet[2323]: I0507 23:37:49.194298 2323 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 7 23:37:49.195879 kubelet[2323]: E0507 23:37:49.195821 2323 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" May 7 23:37:49.276838 kubelet[2323]: E0507 23:37:49.276744 2323 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.15:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.15:6443: connect: connection refused" interval="400ms" May 7 23:37:49.378110 kubelet[2323]: I0507 23:37:49.378091 2323 kubelet_node_status.go:73] "Attempting to register node" node="localhost" May 7 23:37:49.378415 kubelet[2323]: E0507 23:37:49.378391 2323 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.15:6443/api/v1/nodes\": dial tcp 10.0.0.15:6443: connect: connection refused" node="localhost" May 7 23:37:49.390613 kubelet[2323]: I0507 23:37:49.390516 2323 topology_manager.go:215] "Topology Admit Handler" podUID="1ecbf0a38dce867f5e6f9f9c6b1c2012" podNamespace="kube-system" podName="kube-apiserver-localhost" May 7 23:37:49.391323 kubelet[2323]: I0507 23:37:49.391291 2323 topology_manager.go:215] "Topology Admit Handler" podUID="b20b39a8540dba87b5883a6f0f602dba" podNamespace="kube-system" 
podName="kube-controller-manager-localhost" May 7 23:37:49.392148 kubelet[2323]: I0507 23:37:49.392096 2323 topology_manager.go:215] "Topology Admit Handler" podUID="6ece95f10dbffa04b25ec3439a115512" podNamespace="kube-system" podName="kube-scheduler-localhost" May 7 23:37:49.396839 systemd[1]: Created slice kubepods-burstable-pod1ecbf0a38dce867f5e6f9f9c6b1c2012.slice - libcontainer container kubepods-burstable-pod1ecbf0a38dce867f5e6f9f9c6b1c2012.slice. May 7 23:37:49.415188 systemd[1]: Created slice kubepods-burstable-podb20b39a8540dba87b5883a6f0f602dba.slice - libcontainer container kubepods-burstable-podb20b39a8540dba87b5883a6f0f602dba.slice. May 7 23:37:49.432649 systemd[1]: Created slice kubepods-burstable-pod6ece95f10dbffa04b25ec3439a115512.slice - libcontainer container kubepods-burstable-pod6ece95f10dbffa04b25ec3439a115512.slice. May 7 23:37:49.476234 kubelet[2323]: I0507 23:37:49.476178 2323 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 7 23:37:49.476234 kubelet[2323]: I0507 23:37:49.476215 2323 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 7 23:37:49.476566 kubelet[2323]: I0507 23:37:49.476242 2323 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/1ecbf0a38dce867f5e6f9f9c6b1c2012-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: 
\"1ecbf0a38dce867f5e6f9f9c6b1c2012\") " pod="kube-system/kube-apiserver-localhost" May 7 23:37:49.476566 kubelet[2323]: I0507 23:37:49.476267 2323 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 7 23:37:49.476566 kubelet[2323]: I0507 23:37:49.476285 2323 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 7 23:37:49.476566 kubelet[2323]: I0507 23:37:49.476305 2323 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 7 23:37:49.476566 kubelet[2323]: I0507 23:37:49.476332 2323 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6ece95f10dbffa04b25ec3439a115512-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"6ece95f10dbffa04b25ec3439a115512\") " pod="kube-system/kube-scheduler-localhost" May 7 23:37:49.476686 kubelet[2323]: I0507 23:37:49.476378 2323 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/1ecbf0a38dce867f5e6f9f9c6b1c2012-ca-certs\") pod \"kube-apiserver-localhost\" (UID: 
\"1ecbf0a38dce867f5e6f9f9c6b1c2012\") " pod="kube-system/kube-apiserver-localhost" May 7 23:37:49.476686 kubelet[2323]: I0507 23:37:49.476423 2323 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/1ecbf0a38dce867f5e6f9f9c6b1c2012-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"1ecbf0a38dce867f5e6f9f9c6b1c2012\") " pod="kube-system/kube-apiserver-localhost" May 7 23:37:49.677444 kubelet[2323]: E0507 23:37:49.677406 2323 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.15:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.15:6443: connect: connection refused" interval="800ms" May 7 23:37:49.713588 kubelet[2323]: E0507 23:37:49.713553 2323 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 7 23:37:49.714188 containerd[1459]: time="2025-05-07T23:37:49.714155346Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:1ecbf0a38dce867f5e6f9f9c6b1c2012,Namespace:kube-system,Attempt:0,}" May 7 23:37:49.731495 kubelet[2323]: E0507 23:37:49.731463 2323 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 7 23:37:49.731970 containerd[1459]: time="2025-05-07T23:37:49.731757268Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:b20b39a8540dba87b5883a6f0f602dba,Namespace:kube-system,Attempt:0,}" May 7 23:37:49.735021 kubelet[2323]: E0507 23:37:49.734999 2323 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 7 23:37:49.735455 
containerd[1459]: time="2025-05-07T23:37:49.735279482Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:6ece95f10dbffa04b25ec3439a115512,Namespace:kube-system,Attempt:0,}" May 7 23:37:49.780587 kubelet[2323]: I0507 23:37:49.780556 2323 kubelet_node_status.go:73] "Attempting to register node" node="localhost" May 7 23:37:49.780855 kubelet[2323]: E0507 23:37:49.780829 2323 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.15:6443/api/v1/nodes\": dial tcp 10.0.0.15:6443: connect: connection refused" node="localhost" May 7 23:37:50.087264 kubelet[2323]: W0507 23:37:50.087110 2323 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.15:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.15:6443: connect: connection refused May 7 23:37:50.087264 kubelet[2323]: E0507 23:37:50.087199 2323 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.15:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.15:6443: connect: connection refused May 7 23:37:50.282380 kubelet[2323]: W0507 23:37:50.282316 2323 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.15:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.15:6443: connect: connection refused May 7 23:37:50.282380 kubelet[2323]: E0507 23:37:50.282383 2323 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.15:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.15:6443: connect: connection refused May 7 23:37:50.372302 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3931599428.mount: Deactivated successfully. 
May 7 23:37:50.376076 containerd[1459]: time="2025-05-07T23:37:50.376000672Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 7 23:37:50.378450 containerd[1459]: time="2025-05-07T23:37:50.378340169Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269175" May 7 23:37:50.380513 containerd[1459]: time="2025-05-07T23:37:50.380476736Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 7 23:37:50.382374 containerd[1459]: time="2025-05-07T23:37:50.382347031Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 7 23:37:50.382624 containerd[1459]: time="2025-05-07T23:37:50.382589605Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" May 7 23:37:50.383461 containerd[1459]: time="2025-05-07T23:37:50.383403405Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 7 23:37:50.384164 containerd[1459]: time="2025-05-07T23:37:50.384035855Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" May 7 23:37:50.384755 containerd[1459]: time="2025-05-07T23:37:50.384717260Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 7 23:37:50.386665 
containerd[1459]: time="2025-05-07T23:37:50.386616328Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 651.285736ms" May 7 23:37:50.387949 containerd[1459]: time="2025-05-07T23:37:50.387901769Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 673.672895ms" May 7 23:37:50.390524 containerd[1459]: time="2025-05-07T23:37:50.390479923Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 658.667749ms" May 7 23:37:50.398462 kubelet[2323]: W0507 23:37:50.398396 2323 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.15:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.15:6443: connect: connection refused May 7 23:37:50.398462 kubelet[2323]: E0507 23:37:50.398460 2323 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.15:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.15:6443: connect: connection refused May 7 23:37:50.478596 kubelet[2323]: E0507 23:37:50.478533 2323 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://10.0.0.15:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.15:6443: connect: connection refused" interval="1.6s" May 7 23:37:50.518753 kubelet[2323]: W0507 23:37:50.518702 2323 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.15:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.15:6443: connect: connection refused May 7 23:37:50.518753 kubelet[2323]: E0507 23:37:50.518757 2323 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.15:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.15:6443: connect: connection refused May 7 23:37:50.520530 containerd[1459]: time="2025-05-07T23:37:50.520222555Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 7 23:37:50.520956 containerd[1459]: time="2025-05-07T23:37:50.520825952Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 7 23:37:50.520956 containerd[1459]: time="2025-05-07T23:37:50.520855964Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 7 23:37:50.521265 containerd[1459]: time="2025-05-07T23:37:50.521128430Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 7 23:37:50.521363 containerd[1459]: time="2025-05-07T23:37:50.520990879Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 7 23:37:50.521363 containerd[1459]: time="2025-05-07T23:37:50.521351462Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 7 23:37:50.521427 containerd[1459]: time="2025-05-07T23:37:50.521364050Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 7 23:37:50.521451 containerd[1459]: time="2025-05-07T23:37:50.521430269Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 7 23:37:50.524125 containerd[1459]: time="2025-05-07T23:37:50.523902362Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 7 23:37:50.524125 containerd[1459]: time="2025-05-07T23:37:50.523959269Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 7 23:37:50.524125 containerd[1459]: time="2025-05-07T23:37:50.523974255Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 7 23:37:50.524125 containerd[1459]: time="2025-05-07T23:37:50.524041353Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 7 23:37:50.540358 systemd[1]: Started cri-containerd-4c57e36ebe015aa67d7f4b8fa045b3cb313e181da7e278ce7841facb5fca132f.scope - libcontainer container 4c57e36ebe015aa67d7f4b8fa045b3cb313e181da7e278ce7841facb5fca132f. May 7 23:37:50.541467 systemd[1]: Started cri-containerd-744cbdab3828220eb89d3d993cfe3f68c14915eafe9bb1d50b0b14f3ffcd86dc.scope - libcontainer container 744cbdab3828220eb89d3d993cfe3f68c14915eafe9bb1d50b0b14f3ffcd86dc. 
May 7 23:37:50.543840 systemd[1]: Started cri-containerd-7061e7639b13a86b016f26ce481a05d180f87815afc42bcd4c0c9838e9904e8e.scope - libcontainer container 7061e7639b13a86b016f26ce481a05d180f87815afc42bcd4c0c9838e9904e8e. May 7 23:37:50.571615 containerd[1459]: time="2025-05-07T23:37:50.571480253Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:b20b39a8540dba87b5883a6f0f602dba,Namespace:kube-system,Attempt:0,} returns sandbox id \"4c57e36ebe015aa67d7f4b8fa045b3cb313e181da7e278ce7841facb5fca132f\"" May 7 23:37:50.573016 kubelet[2323]: E0507 23:37:50.572989 2323 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 7 23:37:50.574635 containerd[1459]: time="2025-05-07T23:37:50.574401328Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:6ece95f10dbffa04b25ec3439a115512,Namespace:kube-system,Attempt:0,} returns sandbox id \"744cbdab3828220eb89d3d993cfe3f68c14915eafe9bb1d50b0b14f3ffcd86dc\"" May 7 23:37:50.575959 containerd[1459]: time="2025-05-07T23:37:50.575928223Z" level=info msg="CreateContainer within sandbox \"4c57e36ebe015aa67d7f4b8fa045b3cb313e181da7e278ce7841facb5fca132f\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" May 7 23:37:50.576261 containerd[1459]: time="2025-05-07T23:37:50.576234657Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:1ecbf0a38dce867f5e6f9f9c6b1c2012,Namespace:kube-system,Attempt:0,} returns sandbox id \"7061e7639b13a86b016f26ce481a05d180f87815afc42bcd4c0c9838e9904e8e\"" May 7 23:37:50.576343 kubelet[2323]: E0507 23:37:50.576325 2323 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 7 23:37:50.577521 kubelet[2323]: E0507 23:37:50.577504 2323 
dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 7 23:37:50.578842 containerd[1459]: time="2025-05-07T23:37:50.578682174Z" level=info msg="CreateContainer within sandbox \"744cbdab3828220eb89d3d993cfe3f68c14915eafe9bb1d50b0b14f3ffcd86dc\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" May 7 23:37:50.579600 containerd[1459]: time="2025-05-07T23:37:50.579455972Z" level=info msg="CreateContainer within sandbox \"7061e7639b13a86b016f26ce481a05d180f87815afc42bcd4c0c9838e9904e8e\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" May 7 23:37:50.582806 kubelet[2323]: I0507 23:37:50.582763 2323 kubelet_node_status.go:73] "Attempting to register node" node="localhost" May 7 23:37:50.583111 kubelet[2323]: E0507 23:37:50.583090 2323 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.15:6443/api/v1/nodes\": dial tcp 10.0.0.15:6443: connect: connection refused" node="localhost" May 7 23:37:50.591598 containerd[1459]: time="2025-05-07T23:37:50.591559479Z" level=info msg="CreateContainer within sandbox \"4c57e36ebe015aa67d7f4b8fa045b3cb313e181da7e278ce7841facb5fca132f\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"2deeaef949ed46f8fe09a3d257041f325f4040082bc601b19c30e4bbceadf352\"" May 7 23:37:50.592165 containerd[1459]: time="2025-05-07T23:37:50.592121475Z" level=info msg="StartContainer for \"2deeaef949ed46f8fe09a3d257041f325f4040082bc601b19c30e4bbceadf352\"" May 7 23:37:50.596342 containerd[1459]: time="2025-05-07T23:37:50.596293662Z" level=info msg="CreateContainer within sandbox \"744cbdab3828220eb89d3d993cfe3f68c14915eafe9bb1d50b0b14f3ffcd86dc\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"a7eea8b8f50d86c2c2b0a3bf96736f2eee3aa03b98648b6d4b450cc3b6de1b8e\"" May 7 23:37:50.596736 containerd[1459]: 
time="2025-05-07T23:37:50.596712512Z" level=info msg="StartContainer for \"a7eea8b8f50d86c2c2b0a3bf96736f2eee3aa03b98648b6d4b450cc3b6de1b8e\"" May 7 23:37:50.597673 containerd[1459]: time="2025-05-07T23:37:50.597648878Z" level=info msg="CreateContainer within sandbox \"7061e7639b13a86b016f26ce481a05d180f87815afc42bcd4c0c9838e9904e8e\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"f0b359e0d7471d09fba55d945e623f80ef2c0cdb8b5ff16a4238e8c090b11a4d\"" May 7 23:37:50.598021 containerd[1459]: time="2025-05-07T23:37:50.597995395Z" level=info msg="StartContainer for \"f0b359e0d7471d09fba55d945e623f80ef2c0cdb8b5ff16a4238e8c090b11a4d\"" May 7 23:37:50.614312 systemd[1]: Started cri-containerd-2deeaef949ed46f8fe09a3d257041f325f4040082bc601b19c30e4bbceadf352.scope - libcontainer container 2deeaef949ed46f8fe09a3d257041f325f4040082bc601b19c30e4bbceadf352. May 7 23:37:50.616732 systemd[1]: Started cri-containerd-a7eea8b8f50d86c2c2b0a3bf96736f2eee3aa03b98648b6d4b450cc3b6de1b8e.scope - libcontainer container a7eea8b8f50d86c2c2b0a3bf96736f2eee3aa03b98648b6d4b450cc3b6de1b8e. May 7 23:37:50.620540 systemd[1]: Started cri-containerd-f0b359e0d7471d09fba55d945e623f80ef2c0cdb8b5ff16a4238e8c090b11a4d.scope - libcontainer container f0b359e0d7471d09fba55d945e623f80ef2c0cdb8b5ff16a4238e8c090b11a4d. 
May 7 23:37:50.652678 containerd[1459]: time="2025-05-07T23:37:50.651654292Z" level=info msg="StartContainer for \"a7eea8b8f50d86c2c2b0a3bf96736f2eee3aa03b98648b6d4b450cc3b6de1b8e\" returns successfully" May 7 23:37:50.692224 containerd[1459]: time="2025-05-07T23:37:50.687612144Z" level=info msg="StartContainer for \"f0b359e0d7471d09fba55d945e623f80ef2c0cdb8b5ff16a4238e8c090b11a4d\" returns successfully" May 7 23:37:50.692224 containerd[1459]: time="2025-05-07T23:37:50.687629767Z" level=info msg="StartContainer for \"2deeaef949ed46f8fe09a3d257041f325f4040082bc601b19c30e4bbceadf352\" returns successfully" May 7 23:37:51.097996 kubelet[2323]: E0507 23:37:51.096896 2323 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 7 23:37:51.098683 kubelet[2323]: E0507 23:37:51.098660 2323 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 7 23:37:51.105919 kubelet[2323]: E0507 23:37:51.105470 2323 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 7 23:37:52.102909 kubelet[2323]: E0507 23:37:52.102873 2323 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 7 23:37:52.184824 kubelet[2323]: I0507 23:37:52.184793 2323 kubelet_node_status.go:73] "Attempting to register node" node="localhost" May 7 23:37:52.334603 kubelet[2323]: E0507 23:37:52.334566 2323 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" May 7 23:37:52.436114 kubelet[2323]: I0507 23:37:52.436035 2323 kubelet_node_status.go:76] 
"Successfully registered node" node="localhost" May 7 23:37:52.455132 kubelet[2323]: E0507 23:37:52.455101 2323 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" May 7 23:37:53.064351 kubelet[2323]: I0507 23:37:53.064306 2323 apiserver.go:52] "Watching apiserver" May 7 23:37:53.074944 kubelet[2323]: I0507 23:37:53.074919 2323 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" May 7 23:37:53.559049 kubelet[2323]: E0507 23:37:53.558980 2323 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 7 23:37:54.103475 kubelet[2323]: E0507 23:37:54.103446 2323 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 7 23:37:54.356123 systemd[1]: Reload requested from client PID 2605 ('systemctl') (unit session-9.scope)... May 7 23:37:54.356149 systemd[1]: Reloading... May 7 23:37:54.433236 zram_generator::config[2650]: No configuration found. May 7 23:37:54.511627 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 7 23:37:54.594798 systemd[1]: Reloading finished in 238 ms. May 7 23:37:54.619043 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... May 7 23:37:54.624274 systemd[1]: kubelet.service: Deactivated successfully. May 7 23:37:54.624518 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 7 23:37:54.624573 systemd[1]: kubelet.service: Consumed 1.022s CPU time, 117.5M memory peak. May 7 23:37:54.631435 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
May 7 23:37:54.731580 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 7 23:37:54.735302 (kubelet)[2691]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 7 23:37:54.772332 kubelet[2691]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 7 23:37:54.772332 kubelet[2691]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. May 7 23:37:54.772332 kubelet[2691]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 7 23:37:54.772678 kubelet[2691]: I0507 23:37:54.772365 2691 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 7 23:37:54.779260 kubelet[2691]: I0507 23:37:54.779227 2691 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" May 7 23:37:54.779260 kubelet[2691]: I0507 23:37:54.779254 2691 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 7 23:37:54.779941 kubelet[2691]: I0507 23:37:54.779763 2691 server.go:927] "Client rotation is on, will bootstrap in background" May 7 23:37:54.781089 kubelet[2691]: I0507 23:37:54.781064 2691 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
May 7 23:37:54.782412 kubelet[2691]: I0507 23:37:54.782259 2691 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 7 23:37:54.786871 kubelet[2691]: I0507 23:37:54.786848 2691 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" May 7 23:37:54.787067 kubelet[2691]: I0507 23:37:54.787027 2691 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 7 23:37:54.787269 kubelet[2691]: I0507 23:37:54.787055 2691 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
May 7 23:37:54.787349 kubelet[2691]: I0507 23:37:54.787274 2691 topology_manager.go:138] "Creating topology manager with none policy" May 7 23:37:54.787349 kubelet[2691]: I0507 23:37:54.787283 2691 container_manager_linux.go:301] "Creating device plugin manager" May 7 23:37:54.787349 kubelet[2691]: I0507 23:37:54.787315 2691 state_mem.go:36] "Initialized new in-memory state store" May 7 23:37:54.787426 kubelet[2691]: I0507 23:37:54.787412 2691 kubelet.go:400] "Attempting to sync node with API server" May 7 23:37:54.787426 kubelet[2691]: I0507 23:37:54.787425 2691 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" May 7 23:37:54.787470 kubelet[2691]: I0507 23:37:54.787451 2691 kubelet.go:312] "Adding apiserver pod source" May 7 23:37:54.787470 kubelet[2691]: I0507 23:37:54.787464 2691 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 7 23:37:54.789661 kubelet[2691]: I0507 23:37:54.788239 2691 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" May 7 23:37:54.789661 kubelet[2691]: I0507 23:37:54.788387 2691 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 7 23:37:54.789661 kubelet[2691]: I0507 23:37:54.788742 2691 server.go:1264] "Started kubelet" May 7 23:37:54.789777 kubelet[2691]: I0507 23:37:54.789618 2691 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 7 23:37:54.789948 kubelet[2691]: I0507 23:37:54.789917 2691 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 7 23:37:54.790001 kubelet[2691]: I0507 23:37:54.789979 2691 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 May 7 23:37:54.790215
kubelet[2691]: I0507 23:37:54.790192 2691 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 7 23:37:54.791340 kubelet[2691]: I0507 23:37:54.791185 2691 server.go:455] "Adding debug handlers to kubelet server" May 7 23:37:54.794294 kubelet[2691]: I0507 23:37:54.794251 2691 volume_manager.go:291] "Starting Kubelet Volume Manager" May 7 23:37:54.795003 kubelet[2691]: I0507 23:37:54.794885 2691 desired_state_of_world_populator.go:149] "Desired state populator starts to run" May 7 23:37:54.795176 kubelet[2691]: I0507 23:37:54.795157 2691 reconciler.go:26] "Reconciler: start to sync state" May 7 23:37:54.805339 kubelet[2691]: I0507 23:37:54.805306 2691 factory.go:221] Registration of the systemd container factory successfully May 7 23:37:54.805817 kubelet[2691]: I0507 23:37:54.805777 2691 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 7 23:37:54.813721 kubelet[2691]: I0507 23:37:54.813695 2691 factory.go:221] Registration of the containerd container factory successfully May 7 23:37:54.817188 kubelet[2691]: E0507 23:37:54.817115 2691 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 7 23:37:54.817188 kubelet[2691]: I0507 23:37:54.817152 2691 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 7 23:37:54.818297 kubelet[2691]: I0507 23:37:54.818273 2691 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" May 7 23:37:54.818297 kubelet[2691]: I0507 23:37:54.818302 2691 status_manager.go:217] "Starting to sync pod status with apiserver" May 7 23:37:54.818381 kubelet[2691]: I0507 23:37:54.818317 2691 kubelet.go:2337] "Starting kubelet main sync loop" May 7 23:37:54.818381 kubelet[2691]: E0507 23:37:54.818353 2691 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 7 23:37:54.849071 kubelet[2691]: I0507 23:37:54.849050 2691 cpu_manager.go:214] "Starting CPU manager" policy="none" May 7 23:37:54.849071 kubelet[2691]: I0507 23:37:54.849067 2691 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" May 7 23:37:54.849210 kubelet[2691]: I0507 23:37:54.849085 2691 state_mem.go:36] "Initialized new in-memory state store" May 7 23:37:54.849254 kubelet[2691]: I0507 23:37:54.849237 2691 state_mem.go:88] "Updated default CPUSet" cpuSet="" May 7 23:37:54.849276 kubelet[2691]: I0507 23:37:54.849252 2691 state_mem.go:96] "Updated CPUSet assignments" assignments={} May 7 23:37:54.849276 kubelet[2691]: I0507 23:37:54.849270 2691 policy_none.go:49] "None policy: Start" May 7 23:37:54.849842 kubelet[2691]: I0507 23:37:54.849822 2691 memory_manager.go:170] "Starting memorymanager" policy="None" May 7 23:37:54.849842 kubelet[2691]: I0507 23:37:54.849852 2691 state_mem.go:35] "Initializing new in-memory state store" May 7 23:37:54.850006 kubelet[2691]: I0507 23:37:54.849974 2691 state_mem.go:75] "Updated machine memory state" May 7 23:37:54.853595 kubelet[2691]: I0507 23:37:54.853573 2691 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 7 23:37:54.853753 kubelet[2691]: I0507 23:37:54.853719 2691 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 7 23:37:54.853836 kubelet[2691]: I0507 23:37:54.853825 2691 
plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 7 23:37:54.900556 kubelet[2691]: I0507 23:37:54.900452 2691 kubelet_node_status.go:73] "Attempting to register node" node="localhost" May 7 23:37:54.906469 kubelet[2691]: I0507 23:37:54.906431 2691 kubelet_node_status.go:112] "Node was previously registered" node="localhost" May 7 23:37:54.906572 kubelet[2691]: I0507 23:37:54.906521 2691 kubelet_node_status.go:76] "Successfully registered node" node="localhost" May 7 23:37:54.919021 kubelet[2691]: I0507 23:37:54.918987 2691 topology_manager.go:215] "Topology Admit Handler" podUID="1ecbf0a38dce867f5e6f9f9c6b1c2012" podNamespace="kube-system" podName="kube-apiserver-localhost" May 7 23:37:54.919126 kubelet[2691]: I0507 23:37:54.919085 2691 topology_manager.go:215] "Topology Admit Handler" podUID="b20b39a8540dba87b5883a6f0f602dba" podNamespace="kube-system" podName="kube-controller-manager-localhost" May 7 23:37:54.919126 kubelet[2691]: I0507 23:37:54.919122 2691 topology_manager.go:215] "Topology Admit Handler" podUID="6ece95f10dbffa04b25ec3439a115512" podNamespace="kube-system" podName="kube-scheduler-localhost" May 7 23:37:54.924471 kubelet[2691]: E0507 23:37:54.924429 2691 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" May 7 23:37:55.097232 kubelet[2691]: I0507 23:37:55.096987 2691 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 7 23:37:55.097232 kubelet[2691]: I0507 23:37:55.097031 2691 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: 
\"kubernetes.io/host-path/1ecbf0a38dce867f5e6f9f9c6b1c2012-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"1ecbf0a38dce867f5e6f9f9c6b1c2012\") " pod="kube-system/kube-apiserver-localhost" May 7 23:37:55.097232 kubelet[2691]: I0507 23:37:55.097052 2691 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/1ecbf0a38dce867f5e6f9f9c6b1c2012-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"1ecbf0a38dce867f5e6f9f9c6b1c2012\") " pod="kube-system/kube-apiserver-localhost" May 7 23:37:55.097232 kubelet[2691]: I0507 23:37:55.097067 2691 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 7 23:37:55.097232 kubelet[2691]: I0507 23:37:55.097084 2691 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 7 23:37:55.097446 kubelet[2691]: I0507 23:37:55.097099 2691 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6ece95f10dbffa04b25ec3439a115512-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"6ece95f10dbffa04b25ec3439a115512\") " pod="kube-system/kube-scheduler-localhost" May 7 23:37:55.097446 kubelet[2691]: I0507 23:37:55.097112 2691 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: 
\"kubernetes.io/host-path/1ecbf0a38dce867f5e6f9f9c6b1c2012-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"1ecbf0a38dce867f5e6f9f9c6b1c2012\") " pod="kube-system/kube-apiserver-localhost" May 7 23:37:55.097446 kubelet[2691]: I0507 23:37:55.097126 2691 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 7 23:37:55.097446 kubelet[2691]: I0507 23:37:55.097171 2691 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 7 23:37:55.225409 kubelet[2691]: E0507 23:37:55.225373 2691 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 7 23:37:55.225693 kubelet[2691]: E0507 23:37:55.225553 2691 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 7 23:37:55.226003 kubelet[2691]: E0507 23:37:55.225983 2691 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 7 23:37:55.360115 sudo[2727]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin May 7 23:37:55.360416 sudo[2727]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) May 7 23:37:55.784304 sudo[2727]: 
pam_unix(sudo:session): session closed for user root May 7 23:37:55.787902 kubelet[2691]: I0507 23:37:55.787859 2691 apiserver.go:52] "Watching apiserver" May 7 23:37:55.796024 kubelet[2691]: I0507 23:37:55.795988 2691 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" May 7 23:37:55.831849 kubelet[2691]: E0507 23:37:55.831814 2691 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 7 23:37:55.832482 kubelet[2691]: E0507 23:37:55.832428 2691 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 7 23:37:55.832710 kubelet[2691]: E0507 23:37:55.832691 2691 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 7 23:37:55.857385 kubelet[2691]: I0507 23:37:55.857048 2691 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.857031923 podStartE2EDuration="1.857031923s" podCreationTimestamp="2025-05-07 23:37:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-07 23:37:55.851008872 +0000 UTC m=+1.112729119" watchObservedRunningTime="2025-05-07 23:37:55.857031923 +0000 UTC m=+1.118752130" May 7 23:37:55.865248 kubelet[2691]: I0507 23:37:55.865200 2691 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=2.8651858949999998 podStartE2EDuration="2.865185895s" podCreationTimestamp="2025-05-07 23:37:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2025-05-07 23:37:55.857589267 +0000 UTC m=+1.119309514" watchObservedRunningTime="2025-05-07 23:37:55.865185895 +0000 UTC m=+1.126906102" May 7 23:37:55.872935 kubelet[2691]: I0507 23:37:55.872887 2691 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.8728753409999999 podStartE2EDuration="1.872875341s" podCreationTimestamp="2025-05-07 23:37:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-07 23:37:55.865362056 +0000 UTC m=+1.127082303" watchObservedRunningTime="2025-05-07 23:37:55.872875341 +0000 UTC m=+1.134595588" May 7 23:37:56.833731 kubelet[2691]: E0507 23:37:56.833693 2691 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 7 23:37:57.394942 sudo[1682]: pam_unix(sudo:session): session closed for user root May 7 23:37:57.396454 sshd[1681]: Connection closed by 10.0.0.1 port 34840 May 7 23:37:57.396868 sshd-session[1678]: pam_unix(sshd:session): session closed for user core May 7 23:37:57.400763 systemd[1]: sshd@8-10.0.0.15:22-10.0.0.1:34840.service: Deactivated successfully. May 7 23:37:57.402898 systemd[1]: session-9.scope: Deactivated successfully. May 7 23:37:57.403178 systemd[1]: session-9.scope: Consumed 6.804s CPU time, 286.9M memory peak. May 7 23:37:57.404210 systemd-logind[1447]: Session 9 logged out. Waiting for processes to exit. May 7 23:37:57.405004 systemd-logind[1447]: Removed session 9. May 7 23:38:00.117171 update_engine[1448]: I20250507 23:38:00.116919 1448 update_attempter.cc:509] Updating boot flags... 
May 7 23:38:00.180591 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 42 scanned by (udev-worker) (2778) May 7 23:38:00.225190 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 42 scanned by (udev-worker) (2777) May 7 23:38:00.260352 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 42 scanned by (udev-worker) (2777) May 7 23:38:00.699889 kubelet[2691]: E0507 23:38:00.699793 2691 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 7 23:38:00.839611 kubelet[2691]: E0507 23:38:00.839511 2691 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 7 23:38:04.922438 kubelet[2691]: E0507 23:38:04.922368 2691 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 7 23:38:05.191501 kubelet[2691]: E0507 23:38:05.191335 2691 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 7 23:38:05.846187 kubelet[2691]: E0507 23:38:05.845438 2691 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 7 23:38:05.846187 kubelet[2691]: E0507 23:38:05.845485 2691 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 7 23:38:06.847513 kubelet[2691]: E0507 23:38:06.847476 2691 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver 
line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 7 23:38:10.337679 kubelet[2691]: I0507 23:38:10.337558 2691 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" May 7 23:38:10.343689 containerd[1459]: time="2025-05-07T23:38:10.343427556Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." May 7 23:38:10.344041 kubelet[2691]: I0507 23:38:10.343656 2691 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" May 7 23:38:11.013778 kubelet[2691]: I0507 23:38:11.013735 2691 topology_manager.go:215] "Topology Admit Handler" podUID="96ec5bb1-f150-4c3e-9c53-d9c3e9cb6050" podNamespace="kube-system" podName="kube-proxy-8qb8z" May 7 23:38:11.013966 kubelet[2691]: I0507 23:38:11.013893 2691 topology_manager.go:215] "Topology Admit Handler" podUID="b8b730ca-7ee2-4c70-bd2a-a61be61f0768" podNamespace="kube-system" podName="cilium-t6rnj" May 7 23:38:11.031308 systemd[1]: Created slice kubepods-burstable-podb8b730ca_7ee2_4c70_bd2a_a61be61f0768.slice - libcontainer container kubepods-burstable-podb8b730ca_7ee2_4c70_bd2a_a61be61f0768.slice. May 7 23:38:11.038940 systemd[1]: Created slice kubepods-besteffort-pod96ec5bb1_f150_4c3e_9c53_d9c3e9cb6050.slice - libcontainer container kubepods-besteffort-pod96ec5bb1_f150_4c3e_9c53_d9c3e9cb6050.slice. 
May 7 23:38:11.203480 kubelet[2691]: I0507 23:38:11.203433 2691 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/96ec5bb1-f150-4c3e-9c53-d9c3e9cb6050-kube-proxy\") pod \"kube-proxy-8qb8z\" (UID: \"96ec5bb1-f150-4c3e-9c53-d9c3e9cb6050\") " pod="kube-system/kube-proxy-8qb8z" May 7 23:38:11.203480 kubelet[2691]: I0507 23:38:11.203481 2691 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b8b730ca-7ee2-4c70-bd2a-a61be61f0768-xtables-lock\") pod \"cilium-t6rnj\" (UID: \"b8b730ca-7ee2-4c70-bd2a-a61be61f0768\") " pod="kube-system/cilium-t6rnj" May 7 23:38:11.203653 kubelet[2691]: I0507 23:38:11.203500 2691 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/b8b730ca-7ee2-4c70-bd2a-a61be61f0768-host-proc-sys-kernel\") pod \"cilium-t6rnj\" (UID: \"b8b730ca-7ee2-4c70-bd2a-a61be61f0768\") " pod="kube-system/cilium-t6rnj" May 7 23:38:11.203653 kubelet[2691]: I0507 23:38:11.203517 2691 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/96ec5bb1-f150-4c3e-9c53-d9c3e9cb6050-xtables-lock\") pod \"kube-proxy-8qb8z\" (UID: \"96ec5bb1-f150-4c3e-9c53-d9c3e9cb6050\") " pod="kube-system/kube-proxy-8qb8z" May 7 23:38:11.203653 kubelet[2691]: I0507 23:38:11.203543 2691 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/b8b730ca-7ee2-4c70-bd2a-a61be61f0768-cilium-run\") pod \"cilium-t6rnj\" (UID: \"b8b730ca-7ee2-4c70-bd2a-a61be61f0768\") " pod="kube-system/cilium-t6rnj" May 7 23:38:11.203653 kubelet[2691]: I0507 23:38:11.203559 2691 reconciler_common.go:247] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b8b730ca-7ee2-4c70-bd2a-a61be61f0768-etc-cni-netd\") pod \"cilium-t6rnj\" (UID: \"b8b730ca-7ee2-4c70-bd2a-a61be61f0768\") " pod="kube-system/cilium-t6rnj"
May 7 23:38:11.203653 kubelet[2691]: I0507 23:38:11.203573 2691 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/b8b730ca-7ee2-4c70-bd2a-a61be61f0768-clustermesh-secrets\") pod \"cilium-t6rnj\" (UID: \"b8b730ca-7ee2-4c70-bd2a-a61be61f0768\") " pod="kube-system/cilium-t6rnj"
May 7 23:38:11.203859 kubelet[2691]: I0507 23:38:11.203588 2691 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/b8b730ca-7ee2-4c70-bd2a-a61be61f0768-host-proc-sys-net\") pod \"cilium-t6rnj\" (UID: \"b8b730ca-7ee2-4c70-bd2a-a61be61f0768\") " pod="kube-system/cilium-t6rnj"
May 7 23:38:11.203859 kubelet[2691]: I0507 23:38:11.203611 2691 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/b8b730ca-7ee2-4c70-bd2a-a61be61f0768-hostproc\") pod \"cilium-t6rnj\" (UID: \"b8b730ca-7ee2-4c70-bd2a-a61be61f0768\") " pod="kube-system/cilium-t6rnj"
May 7 23:38:11.203859 kubelet[2691]: I0507 23:38:11.203640 2691 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/b8b730ca-7ee2-4c70-bd2a-a61be61f0768-cilium-cgroup\") pod \"cilium-t6rnj\" (UID: \"b8b730ca-7ee2-4c70-bd2a-a61be61f0768\") " pod="kube-system/cilium-t6rnj"
May 7 23:38:11.203859 kubelet[2691]: I0507 23:38:11.203658 2691 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tz76b\" (UniqueName: \"kubernetes.io/projected/96ec5bb1-f150-4c3e-9c53-d9c3e9cb6050-kube-api-access-tz76b\") pod \"kube-proxy-8qb8z\" (UID: \"96ec5bb1-f150-4c3e-9c53-d9c3e9cb6050\") " pod="kube-system/kube-proxy-8qb8z"
May 7 23:38:11.203859 kubelet[2691]: I0507 23:38:11.203683 2691 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/b8b730ca-7ee2-4c70-bd2a-a61be61f0768-bpf-maps\") pod \"cilium-t6rnj\" (UID: \"b8b730ca-7ee2-4c70-bd2a-a61be61f0768\") " pod="kube-system/cilium-t6rnj"
May 7 23:38:11.203859 kubelet[2691]: I0507 23:38:11.203702 2691 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b8b730ca-7ee2-4c70-bd2a-a61be61f0768-cilium-config-path\") pod \"cilium-t6rnj\" (UID: \"b8b730ca-7ee2-4c70-bd2a-a61be61f0768\") " pod="kube-system/cilium-t6rnj"
May 7 23:38:11.204050 kubelet[2691]: I0507 23:38:11.203725 2691 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/96ec5bb1-f150-4c3e-9c53-d9c3e9cb6050-lib-modules\") pod \"kube-proxy-8qb8z\" (UID: \"96ec5bb1-f150-4c3e-9c53-d9c3e9cb6050\") " pod="kube-system/kube-proxy-8qb8z"
May 7 23:38:11.204050 kubelet[2691]: I0507 23:38:11.203741 2691 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b8b730ca-7ee2-4c70-bd2a-a61be61f0768-lib-modules\") pod \"cilium-t6rnj\" (UID: \"b8b730ca-7ee2-4c70-bd2a-a61be61f0768\") " pod="kube-system/cilium-t6rnj"
May 7 23:38:11.204050 kubelet[2691]: I0507 23:38:11.203765 2691 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/b8b730ca-7ee2-4c70-bd2a-a61be61f0768-cni-path\") pod \"cilium-t6rnj\" (UID: \"b8b730ca-7ee2-4c70-bd2a-a61be61f0768\") " pod="kube-system/cilium-t6rnj"
May 7 23:38:11.204050 kubelet[2691]: I0507 23:38:11.203780 2691 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tnjpt\" (UniqueName: \"kubernetes.io/projected/b8b730ca-7ee2-4c70-bd2a-a61be61f0768-kube-api-access-tnjpt\") pod \"cilium-t6rnj\" (UID: \"b8b730ca-7ee2-4c70-bd2a-a61be61f0768\") " pod="kube-system/cilium-t6rnj"
May 7 23:38:11.204050 kubelet[2691]: I0507 23:38:11.203796 2691 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/b8b730ca-7ee2-4c70-bd2a-a61be61f0768-hubble-tls\") pod \"cilium-t6rnj\" (UID: \"b8b730ca-7ee2-4c70-bd2a-a61be61f0768\") " pod="kube-system/cilium-t6rnj"
May 7 23:38:11.335623 kubelet[2691]: E0507 23:38:11.335517 2691 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 7 23:38:11.337093 containerd[1459]: time="2025-05-07T23:38:11.336972103Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-t6rnj,Uid:b8b730ca-7ee2-4c70-bd2a-a61be61f0768,Namespace:kube-system,Attempt:0,}"
May 7 23:38:11.354869 kubelet[2691]: E0507 23:38:11.354807 2691 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 7 23:38:11.355864 containerd[1459]: time="2025-05-07T23:38:11.355229754Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-8qb8z,Uid:96ec5bb1-f150-4c3e-9c53-d9c3e9cb6050,Namespace:kube-system,Attempt:0,}"
May 7 23:38:11.356171 containerd[1459]: time="2025-05-07T23:38:11.355987692Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 7 23:38:11.356171 containerd[1459]: time="2025-05-07T23:38:11.356057195Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 7 23:38:11.356171 containerd[1459]: time="2025-05-07T23:38:11.356072711Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 7 23:38:11.356298 containerd[1459]: time="2025-05-07T23:38:11.356251668Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 7 23:38:11.377289 systemd[1]: Started cri-containerd-c7fa7b54a07bde41e294d42f593ad2a0925fecc511568d4fe196e3ef6563c628.scope - libcontainer container c7fa7b54a07bde41e294d42f593ad2a0925fecc511568d4fe196e3ef6563c628.
May 7 23:38:11.385598 containerd[1459]: time="2025-05-07T23:38:11.383268933Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 7 23:38:11.385598 containerd[1459]: time="2025-05-07T23:38:11.383325639Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 7 23:38:11.385598 containerd[1459]: time="2025-05-07T23:38:11.383336676Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 7 23:38:11.385598 containerd[1459]: time="2025-05-07T23:38:11.383402181Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 7 23:38:11.405673 systemd[1]: Started cri-containerd-3a3bc56c1a35a2f2fff7aae397db32c964954f1050dce938e31e850cecbbee7a.scope - libcontainer container 3a3bc56c1a35a2f2fff7aae397db32c964954f1050dce938e31e850cecbbee7a.
May 7 23:38:11.410241 containerd[1459]: time="2025-05-07T23:38:11.410127035Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-t6rnj,Uid:b8b730ca-7ee2-4c70-bd2a-a61be61f0768,Namespace:kube-system,Attempt:0,} returns sandbox id \"c7fa7b54a07bde41e294d42f593ad2a0925fecc511568d4fe196e3ef6563c628\""
May 7 23:38:11.418629 kubelet[2691]: E0507 23:38:11.418359 2691 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 7 23:38:11.425662 containerd[1459]: time="2025-05-07T23:38:11.425611952Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\""
May 7 23:38:11.441708 containerd[1459]: time="2025-05-07T23:38:11.441652176Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-8qb8z,Uid:96ec5bb1-f150-4c3e-9c53-d9c3e9cb6050,Namespace:kube-system,Attempt:0,} returns sandbox id \"3a3bc56c1a35a2f2fff7aae397db32c964954f1050dce938e31e850cecbbee7a\""
May 7 23:38:11.442243 kubelet[2691]: E0507 23:38:11.442220 2691 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 7 23:38:11.443240 kubelet[2691]: I0507 23:38:11.443216 2691 topology_manager.go:215] "Topology Admit Handler" podUID="00f5f149-a9c3-486a-b933-e2e96be0e46f" podNamespace="kube-system" podName="cilium-operator-599987898-c7rmk"
May 7 23:38:11.449276 containerd[1459]: time="2025-05-07T23:38:11.449234593Z" level=info msg="CreateContainer within sandbox \"3a3bc56c1a35a2f2fff7aae397db32c964954f1050dce938e31e850cecbbee7a\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
May 7 23:38:11.456247 systemd[1]: Created slice kubepods-besteffort-pod00f5f149_a9c3_486a_b933_e2e96be0e46f.slice - libcontainer container kubepods-besteffort-pod00f5f149_a9c3_486a_b933_e2e96be0e46f.slice.
May 7 23:38:11.461527 containerd[1459]: time="2025-05-07T23:38:11.461493086Z" level=info msg="CreateContainer within sandbox \"3a3bc56c1a35a2f2fff7aae397db32c964954f1050dce938e31e850cecbbee7a\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"a33eac315a91312615f678ccfc0e258c19a7eb3d3565874140c9379bc11a82a4\""
May 7 23:38:11.464483 containerd[1459]: time="2025-05-07T23:38:11.462931620Z" level=info msg="StartContainer for \"a33eac315a91312615f678ccfc0e258c19a7eb3d3565874140c9379bc11a82a4\""
May 7 23:38:11.487291 systemd[1]: Started cri-containerd-a33eac315a91312615f678ccfc0e258c19a7eb3d3565874140c9379bc11a82a4.scope - libcontainer container a33eac315a91312615f678ccfc0e258c19a7eb3d3565874140c9379bc11a82a4.
May 7 23:38:11.508793 kubelet[2691]: I0507 23:38:11.507200 2691 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2phvx\" (UniqueName: \"kubernetes.io/projected/00f5f149-a9c3-486a-b933-e2e96be0e46f-kube-api-access-2phvx\") pod \"cilium-operator-599987898-c7rmk\" (UID: \"00f5f149-a9c3-486a-b933-e2e96be0e46f\") " pod="kube-system/cilium-operator-599987898-c7rmk"
May 7 23:38:11.508793 kubelet[2691]: I0507 23:38:11.507241 2691 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/00f5f149-a9c3-486a-b933-e2e96be0e46f-cilium-config-path\") pod \"cilium-operator-599987898-c7rmk\" (UID: \"00f5f149-a9c3-486a-b933-e2e96be0e46f\") " pod="kube-system/cilium-operator-599987898-c7rmk"
May 7 23:38:11.515553 containerd[1459]: time="2025-05-07T23:38:11.515516258Z" level=info msg="StartContainer for \"a33eac315a91312615f678ccfc0e258c19a7eb3d3565874140c9379bc11a82a4\" returns successfully"
May 7 23:38:11.758616 kubelet[2691]: E0507 23:38:11.758588 2691 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 7 23:38:11.759097 containerd[1459]: time="2025-05-07T23:38:11.759033071Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-c7rmk,Uid:00f5f149-a9c3-486a-b933-e2e96be0e46f,Namespace:kube-system,Attempt:0,}"
May 7 23:38:11.781198 containerd[1459]: time="2025-05-07T23:38:11.781059496Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 7 23:38:11.781198 containerd[1459]: time="2025-05-07T23:38:11.781123640Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 7 23:38:11.781542 containerd[1459]: time="2025-05-07T23:38:11.781379819Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 7 23:38:11.781614 containerd[1459]: time="2025-05-07T23:38:11.781504549Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 7 23:38:11.799347 systemd[1]: Started cri-containerd-4a65224c3275ab8315056583f941ba684bdf5823969b058f8b025739e3900a4b.scope - libcontainer container 4a65224c3275ab8315056583f941ba684bdf5823969b058f8b025739e3900a4b.
May 7 23:38:11.832247 containerd[1459]: time="2025-05-07T23:38:11.832194642Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-c7rmk,Uid:00f5f149-a9c3-486a-b933-e2e96be0e46f,Namespace:kube-system,Attempt:0,} returns sandbox id \"4a65224c3275ab8315056583f941ba684bdf5823969b058f8b025739e3900a4b\""
May 7 23:38:11.833165 kubelet[2691]: E0507 23:38:11.833096 2691 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 7 23:38:11.859523 kubelet[2691]: E0507 23:38:11.859363 2691 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 7 23:38:20.201097 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1802086150.mount: Deactivated successfully.
May 7 23:38:22.352882 systemd[1]: Started sshd@9-10.0.0.15:22-10.0.0.1:52424.service - OpenSSH per-connection server daemon (10.0.0.1:52424).
May 7 23:38:22.416166 sshd[3093]: Accepted publickey for core from 10.0.0.1 port 52424 ssh2: RSA SHA256:7X/0GL6Wfz1DCN37/GHlHSZkyG3w/l6TYxwfIGZEYGQ
May 7 23:38:22.417672 sshd-session[3093]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 7 23:38:22.421591 systemd-logind[1447]: New session 10 of user core.
May 7 23:38:22.433308 systemd[1]: Started session-10.scope - Session 10 of User core.
May 7 23:38:22.561338 sshd[3095]: Connection closed by 10.0.0.1 port 52424
May 7 23:38:22.561770 sshd-session[3093]: pam_unix(sshd:session): session closed for user core
May 7 23:38:22.565730 systemd[1]: sshd@9-10.0.0.15:22-10.0.0.1:52424.service: Deactivated successfully.
May 7 23:38:22.567465 systemd[1]: session-10.scope: Deactivated successfully.
May 7 23:38:22.568166 systemd-logind[1447]: Session 10 logged out. Waiting for processes to exit.
May 7 23:38:22.569007 systemd-logind[1447]: Removed session 10.
May 7 23:38:25.742392 containerd[1459]: time="2025-05-07T23:38:25.742330842Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 7 23:38:25.742899 containerd[1459]: time="2025-05-07T23:38:25.742835181Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157646710"
May 7 23:38:25.743606 containerd[1459]: time="2025-05-07T23:38:25.743570309Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 7 23:38:25.745180 containerd[1459]: time="2025-05-07T23:38:25.745129923Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 14.319332652s"
May 7 23:38:25.745225 containerd[1459]: time="2025-05-07T23:38:25.745183160Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\""
May 7 23:38:25.756851 containerd[1459]: time="2025-05-07T23:38:25.756687149Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\""
May 7 23:38:25.758589 containerd[1459]: time="2025-05-07T23:38:25.758556550Z" level=info msg="CreateContainer within sandbox \"c7fa7b54a07bde41e294d42f593ad2a0925fecc511568d4fe196e3ef6563c628\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
May 7 23:38:25.771738 containerd[1459]: time="2025-05-07T23:38:25.771688389Z" level=info msg="CreateContainer within sandbox \"c7fa7b54a07bde41e294d42f593ad2a0925fecc511568d4fe196e3ef6563c628\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"6102021bb03d6e7fa43017463606e78697acaa45566f6b5a79f7ed7f35a53065\""
May 7 23:38:25.772695 containerd[1459]: time="2025-05-07T23:38:25.772132010Z" level=info msg="StartContainer for \"6102021bb03d6e7fa43017463606e78697acaa45566f6b5a79f7ed7f35a53065\""
May 7 23:38:25.806357 systemd[1]: Started cri-containerd-6102021bb03d6e7fa43017463606e78697acaa45566f6b5a79f7ed7f35a53065.scope - libcontainer container 6102021bb03d6e7fa43017463606e78697acaa45566f6b5a79f7ed7f35a53065.
May 7 23:38:25.888252 systemd[1]: cri-containerd-6102021bb03d6e7fa43017463606e78697acaa45566f6b5a79f7ed7f35a53065.scope: Deactivated successfully.
May 7 23:38:25.888736 systemd[1]: cri-containerd-6102021bb03d6e7fa43017463606e78697acaa45566f6b5a79f7ed7f35a53065.scope: Consumed 59ms CPU time, 6.7M memory peak, 3.1M written to disk.
May 7 23:38:25.910956 containerd[1459]: time="2025-05-07T23:38:25.910893087Z" level=info msg="StartContainer for \"6102021bb03d6e7fa43017463606e78697acaa45566f6b5a79f7ed7f35a53065\" returns successfully"
May 7 23:38:25.943060 kubelet[2691]: I0507 23:38:25.942684 2691 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-8qb8z" podStartSLOduration=15.942649691 podStartE2EDuration="15.942649691s" podCreationTimestamp="2025-05-07 23:38:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-07 23:38:11.867819677 +0000 UTC m=+17.129539884" watchObservedRunningTime="2025-05-07 23:38:25.942649691 +0000 UTC m=+31.204369938"
May 7 23:38:25.968689 containerd[1459]: time="2025-05-07T23:38:25.963906224Z" level=info msg="shim disconnected" id=6102021bb03d6e7fa43017463606e78697acaa45566f6b5a79f7ed7f35a53065 namespace=k8s.io
May 7 23:38:25.968689 containerd[1459]: time="2025-05-07T23:38:25.968680180Z" level=warning msg="cleaning up after shim disconnected" id=6102021bb03d6e7fa43017463606e78697acaa45566f6b5a79f7ed7f35a53065 namespace=k8s.io
May 7 23:38:25.968689 containerd[1459]: time="2025-05-07T23:38:25.968691659Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 7 23:38:26.769406 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6102021bb03d6e7fa43017463606e78697acaa45566f6b5a79f7ed7f35a53065-rootfs.mount: Deactivated successfully.
May 7 23:38:26.919525 containerd[1459]: time="2025-05-07T23:38:26.919364901Z" level=info msg="CreateContainer within sandbox \"c7fa7b54a07bde41e294d42f593ad2a0925fecc511568d4fe196e3ef6563c628\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
May 7 23:38:26.943609 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1223742132.mount: Deactivated successfully.
May 7 23:38:26.959018 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2544489998.mount: Deactivated successfully.
May 7 23:38:26.961379 containerd[1459]: time="2025-05-07T23:38:26.961282200Z" level=info msg="CreateContainer within sandbox \"c7fa7b54a07bde41e294d42f593ad2a0925fecc511568d4fe196e3ef6563c628\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"643db7dca0395cb4149041a04eac8cfb7bd2acb0390b29531e61e2aa25aea7e2\""
May 7 23:38:26.961834 containerd[1459]: time="2025-05-07T23:38:26.961788459Z" level=info msg="StartContainer for \"643db7dca0395cb4149041a04eac8cfb7bd2acb0390b29531e61e2aa25aea7e2\""
May 7 23:38:26.989304 systemd[1]: Started cri-containerd-643db7dca0395cb4149041a04eac8cfb7bd2acb0390b29531e61e2aa25aea7e2.scope - libcontainer container 643db7dca0395cb4149041a04eac8cfb7bd2acb0390b29531e61e2aa25aea7e2.
May 7 23:38:27.021355 containerd[1459]: time="2025-05-07T23:38:27.021184695Z" level=info msg="StartContainer for \"643db7dca0395cb4149041a04eac8cfb7bd2acb0390b29531e61e2aa25aea7e2\" returns successfully"
May 7 23:38:27.033011 systemd[1]: systemd-sysctl.service: Deactivated successfully.
May 7 23:38:27.033529 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
May 7 23:38:27.033893 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables...
May 7 23:38:27.040481 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
May 7 23:38:27.040662 systemd[1]: cri-containerd-643db7dca0395cb4149041a04eac8cfb7bd2acb0390b29531e61e2aa25aea7e2.scope: Deactivated successfully.
May 7 23:38:27.063461 containerd[1459]: time="2025-05-07T23:38:27.063277994Z" level=info msg="shim disconnected" id=643db7dca0395cb4149041a04eac8cfb7bd2acb0390b29531e61e2aa25aea7e2 namespace=k8s.io
May 7 23:38:27.063461 containerd[1459]: time="2025-05-07T23:38:27.063402669Z" level=warning msg="cleaning up after shim disconnected" id=643db7dca0395cb4149041a04eac8cfb7bd2acb0390b29531e61e2aa25aea7e2 namespace=k8s.io
May 7 23:38:27.063461 containerd[1459]: time="2025-05-07T23:38:27.063422708Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 7 23:38:27.065240 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
May 7 23:38:27.577678 systemd[1]: Started sshd@10-10.0.0.15:22-10.0.0.1:45010.service - OpenSSH per-connection server daemon (10.0.0.1:45010).
May 7 23:38:27.634677 sshd[3243]: Accepted publickey for core from 10.0.0.1 port 45010 ssh2: RSA SHA256:7X/0GL6Wfz1DCN37/GHlHSZkyG3w/l6TYxwfIGZEYGQ
May 7 23:38:27.636258 sshd-session[3243]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 7 23:38:27.641374 systemd-logind[1447]: New session 11 of user core.
May 7 23:38:27.651310 systemd[1]: Started session-11.scope - Session 11 of User core.
May 7 23:38:27.776693 sshd[3245]: Connection closed by 10.0.0.1 port 45010
May 7 23:38:27.777045 sshd-session[3243]: pam_unix(sshd:session): session closed for user core
May 7 23:38:27.780405 systemd[1]: sshd@10-10.0.0.15:22-10.0.0.1:45010.service: Deactivated successfully.
May 7 23:38:27.781992 systemd[1]: session-11.scope: Deactivated successfully.
May 7 23:38:27.784757 systemd-logind[1447]: Session 11 logged out. Waiting for processes to exit.
May 7 23:38:27.785824 systemd-logind[1447]: Removed session 11.
May 7 23:38:27.805811 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount754011053.mount: Deactivated successfully.
May 7 23:38:27.924731 containerd[1459]: time="2025-05-07T23:38:27.924686307Z" level=info msg="CreateContainer within sandbox \"c7fa7b54a07bde41e294d42f593ad2a0925fecc511568d4fe196e3ef6563c628\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
May 7 23:38:27.954166 containerd[1459]: time="2025-05-07T23:38:27.954034681Z" level=info msg="CreateContainer within sandbox \"c7fa7b54a07bde41e294d42f593ad2a0925fecc511568d4fe196e3ef6563c628\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"43c7173c5daba87d987b28026506e98203db7deff63a74e354980407f03302f3\""
May 7 23:38:27.954634 containerd[1459]: time="2025-05-07T23:38:27.954592898Z" level=info msg="StartContainer for \"43c7173c5daba87d987b28026506e98203db7deff63a74e354980407f03302f3\""
May 7 23:38:27.983413 systemd[1]: Started cri-containerd-43c7173c5daba87d987b28026506e98203db7deff63a74e354980407f03302f3.scope - libcontainer container 43c7173c5daba87d987b28026506e98203db7deff63a74e354980407f03302f3.
May 7 23:38:28.031352 systemd[1]: cri-containerd-43c7173c5daba87d987b28026506e98203db7deff63a74e354980407f03302f3.scope: Deactivated successfully.
May 7 23:38:28.035706 containerd[1459]: time="2025-05-07T23:38:28.035668260Z" level=info msg="StartContainer for \"43c7173c5daba87d987b28026506e98203db7deff63a74e354980407f03302f3\" returns successfully"
May 7 23:38:28.090904 containerd[1459]: time="2025-05-07T23:38:28.090658298Z" level=info msg="shim disconnected" id=43c7173c5daba87d987b28026506e98203db7deff63a74e354980407f03302f3 namespace=k8s.io
May 7 23:38:28.090904 containerd[1459]: time="2025-05-07T23:38:28.090719975Z" level=warning msg="cleaning up after shim disconnected" id=43c7173c5daba87d987b28026506e98203db7deff63a74e354980407f03302f3 namespace=k8s.io
May 7 23:38:28.090904 containerd[1459]: time="2025-05-07T23:38:28.090731335Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 7 23:38:28.114025 containerd[1459]: time="2025-05-07T23:38:28.113977621Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 7 23:38:28.114481 containerd[1459]: time="2025-05-07T23:38:28.114437683Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17135306"
May 7 23:38:28.115372 containerd[1459]: time="2025-05-07T23:38:28.115336127Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 7 23:38:28.116735 containerd[1459]: time="2025-05-07T23:38:28.116704474Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 2.359981326s"
May 7 23:38:28.116920 containerd[1459]: time="2025-05-07T23:38:28.116826829Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\""
May 7 23:38:28.119996 containerd[1459]: time="2025-05-07T23:38:28.119901388Z" level=info msg="CreateContainer within sandbox \"4a65224c3275ab8315056583f941ba684bdf5823969b058f8b025739e3900a4b\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
May 7 23:38:28.128922 containerd[1459]: time="2025-05-07T23:38:28.128879795Z" level=info msg="CreateContainer within sandbox \"4a65224c3275ab8315056583f941ba684bdf5823969b058f8b025739e3900a4b\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"9d1c15acbdadd698d4517ad5671b6b6d0766be5fbc2e47d157fc1441effccba5\""
May 7 23:38:28.129425 containerd[1459]: time="2025-05-07T23:38:28.129392735Z" level=info msg="StartContainer for \"9d1c15acbdadd698d4517ad5671b6b6d0766be5fbc2e47d157fc1441effccba5\""
May 7 23:38:28.155394 systemd[1]: Started cri-containerd-9d1c15acbdadd698d4517ad5671b6b6d0766be5fbc2e47d157fc1441effccba5.scope - libcontainer container 9d1c15acbdadd698d4517ad5671b6b6d0766be5fbc2e47d157fc1441effccba5.
May 7 23:38:28.181514 containerd[1459]: time="2025-05-07T23:38:28.181416209Z" level=info msg="StartContainer for \"9d1c15acbdadd698d4517ad5671b6b6d0766be5fbc2e47d157fc1441effccba5\" returns successfully"
May 7 23:38:28.940919 containerd[1459]: time="2025-05-07T23:38:28.940873628Z" level=info msg="CreateContainer within sandbox \"c7fa7b54a07bde41e294d42f593ad2a0925fecc511568d4fe196e3ef6563c628\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
May 7 23:38:28.963705 containerd[1459]: time="2025-05-07T23:38:28.963610454Z" level=info msg="CreateContainer within sandbox \"c7fa7b54a07bde41e294d42f593ad2a0925fecc511568d4fe196e3ef6563c628\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"9ae889fddb2c7f4377610f86836e153f46b2fd4265da92b8a2a0ad5b506b2abc\""
May 7 23:38:28.964637 containerd[1459]: time="2025-05-07T23:38:28.964610614Z" level=info msg="StartContainer for \"9ae889fddb2c7f4377610f86836e153f46b2fd4265da92b8a2a0ad5b506b2abc\""
May 7 23:38:28.967150 kubelet[2691]: I0507 23:38:28.967026 2691 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-599987898-c7rmk" podStartSLOduration=1.684852808 podStartE2EDuration="17.96700828s" podCreationTimestamp="2025-05-07 23:38:11 +0000 UTC" firstStartedPulling="2025-05-07 23:38:11.835316731 +0000 UTC m=+17.097036938" lastFinishedPulling="2025-05-07 23:38:28.117472203 +0000 UTC m=+33.379192410" observedRunningTime="2025-05-07 23:38:28.952208622 +0000 UTC m=+34.213928869" watchObservedRunningTime="2025-05-07 23:38:28.96700828 +0000 UTC m=+34.228728527"
May 7 23:38:28.992534 systemd[1]: Started cri-containerd-9ae889fddb2c7f4377610f86836e153f46b2fd4265da92b8a2a0ad5b506b2abc.scope - libcontainer container 9ae889fddb2c7f4377610f86836e153f46b2fd4265da92b8a2a0ad5b506b2abc.
May 7 23:38:29.013636 systemd[1]: cri-containerd-9ae889fddb2c7f4377610f86836e153f46b2fd4265da92b8a2a0ad5b506b2abc.scope: Deactivated successfully.
May 7 23:38:29.016076 containerd[1459]: time="2025-05-07T23:38:29.016043528Z" level=info msg="StartContainer for \"9ae889fddb2c7f4377610f86836e153f46b2fd4265da92b8a2a0ad5b506b2abc\" returns successfully"
May 7 23:38:29.050449 containerd[1459]: time="2025-05-07T23:38:29.050389174Z" level=info msg="shim disconnected" id=9ae889fddb2c7f4377610f86836e153f46b2fd4265da92b8a2a0ad5b506b2abc namespace=k8s.io
May 7 23:38:29.050449 containerd[1459]: time="2025-05-07T23:38:29.050444211Z" level=warning msg="cleaning up after shim disconnected" id=9ae889fddb2c7f4377610f86836e153f46b2fd4265da92b8a2a0ad5b506b2abc namespace=k8s.io
May 7 23:38:29.050449 containerd[1459]: time="2025-05-07T23:38:29.050452771Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 7 23:38:29.769710 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9ae889fddb2c7f4377610f86836e153f46b2fd4265da92b8a2a0ad5b506b2abc-rootfs.mount: Deactivated successfully.
May 7 23:38:29.944732 containerd[1459]: time="2025-05-07T23:38:29.944593636Z" level=info msg="CreateContainer within sandbox \"c7fa7b54a07bde41e294d42f593ad2a0925fecc511568d4fe196e3ef6563c628\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
May 7 23:38:29.962510 containerd[1459]: time="2025-05-07T23:38:29.962461072Z" level=info msg="CreateContainer within sandbox \"c7fa7b54a07bde41e294d42f593ad2a0925fecc511568d4fe196e3ef6563c628\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"81368ec221ae001c3e630ea8b16f97ca3d559b321d1ef99dac7a5d9f90025e1d\""
May 7 23:38:29.963290 containerd[1459]: time="2025-05-07T23:38:29.963239842Z" level=info msg="StartContainer for \"81368ec221ae001c3e630ea8b16f97ca3d559b321d1ef99dac7a5d9f90025e1d\""
May 7 23:38:29.989294 systemd[1]: Started cri-containerd-81368ec221ae001c3e630ea8b16f97ca3d559b321d1ef99dac7a5d9f90025e1d.scope - libcontainer container 81368ec221ae001c3e630ea8b16f97ca3d559b321d1ef99dac7a5d9f90025e1d.
May 7 23:38:30.015541 containerd[1459]: time="2025-05-07T23:38:30.015500257Z" level=info msg="StartContainer for \"81368ec221ae001c3e630ea8b16f97ca3d559b321d1ef99dac7a5d9f90025e1d\" returns successfully"
May 7 23:38:30.108391 kubelet[2691]: I0507 23:38:30.108236 2691 kubelet_node_status.go:497] "Fast updating node status as it just became ready"
May 7 23:38:30.127214 kubelet[2691]: I0507 23:38:30.127168 2691 topology_manager.go:215] "Topology Admit Handler" podUID="a1fc79ed-88cd-408c-9d8f-d569fd88c6a0" podNamespace="kube-system" podName="coredns-7db6d8ff4d-wxsvw"
May 7 23:38:30.128123 kubelet[2691]: I0507 23:38:30.127600 2691 topology_manager.go:215] "Topology Admit Handler" podUID="4738bb66-a444-472e-9a5f-3d4e1d36d42a" podNamespace="kube-system" podName="coredns-7db6d8ff4d-j87bj"
May 7 23:38:30.140940 systemd[1]: Created slice kubepods-burstable-poda1fc79ed_88cd_408c_9d8f_d569fd88c6a0.slice - libcontainer container kubepods-burstable-poda1fc79ed_88cd_408c_9d8f_d569fd88c6a0.slice.
May 7 23:38:30.148995 systemd[1]: Created slice kubepods-burstable-pod4738bb66_a444_472e_9a5f_3d4e1d36d42a.slice - libcontainer container kubepods-burstable-pod4738bb66_a444_472e_9a5f_3d4e1d36d42a.slice.
May 7 23:38:30.244472 kubelet[2691]: I0507 23:38:30.244434 2691 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hkqxd\" (UniqueName: \"kubernetes.io/projected/a1fc79ed-88cd-408c-9d8f-d569fd88c6a0-kube-api-access-hkqxd\") pod \"coredns-7db6d8ff4d-wxsvw\" (UID: \"a1fc79ed-88cd-408c-9d8f-d569fd88c6a0\") " pod="kube-system/coredns-7db6d8ff4d-wxsvw"
May 7 23:38:30.244730 kubelet[2691]: I0507 23:38:30.244640 2691 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a1fc79ed-88cd-408c-9d8f-d569fd88c6a0-config-volume\") pod \"coredns-7db6d8ff4d-wxsvw\" (UID: \"a1fc79ed-88cd-408c-9d8f-d569fd88c6a0\") " pod="kube-system/coredns-7db6d8ff4d-wxsvw"
May 7 23:38:30.244730 kubelet[2691]: I0507 23:38:30.244666 2691 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-czfx7\" (UniqueName: \"kubernetes.io/projected/4738bb66-a444-472e-9a5f-3d4e1d36d42a-kube-api-access-czfx7\") pod \"coredns-7db6d8ff4d-j87bj\" (UID: \"4738bb66-a444-472e-9a5f-3d4e1d36d42a\") " pod="kube-system/coredns-7db6d8ff4d-j87bj"
May 7 23:38:30.244730 kubelet[2691]: I0507 23:38:30.244700 2691 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4738bb66-a444-472e-9a5f-3d4e1d36d42a-config-volume\") pod \"coredns-7db6d8ff4d-j87bj\" (UID: \"4738bb66-a444-472e-9a5f-3d4e1d36d42a\") " pod="kube-system/coredns-7db6d8ff4d-j87bj"
May 7 23:38:30.452913 containerd[1459]: time="2025-05-07T23:38:30.452376906Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-j87bj,Uid:4738bb66-a444-472e-9a5f-3d4e1d36d42a,Namespace:kube-system,Attempt:0,}"
May 7 23:38:30.452913 containerd[1459]: time="2025-05-07T23:38:30.452798290Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-wxsvw,Uid:a1fc79ed-88cd-408c-9d8f-d569fd88c6a0,Namespace:kube-system,Attempt:0,}"
May 7 23:38:30.959291 kubelet[2691]: I0507 23:38:30.959180 2691 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-t6rnj" podStartSLOduration=6.627829086 podStartE2EDuration="20.95916499s" podCreationTimestamp="2025-05-07 23:38:10 +0000 UTC" firstStartedPulling="2025-05-07 23:38:11.425198732 +0000 UTC m=+16.686918979" lastFinishedPulling="2025-05-07 23:38:25.756534636 +0000 UTC m=+31.018254883" observedRunningTime="2025-05-07 23:38:30.958499895 +0000 UTC m=+36.220220182" watchObservedRunningTime="2025-05-07 23:38:30.95916499 +0000 UTC m=+36.220885237"
May 7 23:38:32.215686 systemd-networkd[1395]: cilium_host: Link UP
May 7 23:38:32.215804 systemd-networkd[1395]: cilium_net: Link UP
May 7 23:38:32.215806 systemd-networkd[1395]: cilium_net: Gained carrier
May 7 23:38:32.215928 systemd-networkd[1395]: cilium_host: Gained carrier
May 7 23:38:32.216050 systemd-networkd[1395]: cilium_host: Gained IPv6LL
May 7 23:38:32.297625 systemd-networkd[1395]: cilium_vxlan: Link UP
May 7 23:38:32.297632 systemd-networkd[1395]: cilium_vxlan: Gained carrier
May 7 23:38:32.451286 systemd-networkd[1395]: cilium_net: Gained IPv6LL
May 7 23:38:32.587268 kernel: NET: Registered PF_ALG protocol family
May 7 23:38:32.796044 systemd[1]: Started sshd@11-10.0.0.15:22-10.0.0.1:47146.service - OpenSSH per-connection server daemon (10.0.0.1:47146).
May 7 23:38:32.846233 sshd[3714]: Accepted publickey for core from 10.0.0.1 port 47146 ssh2: RSA SHA256:7X/0GL6Wfz1DCN37/GHlHSZkyG3w/l6TYxwfIGZEYGQ
May 7 23:38:32.847446 sshd-session[3714]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 7 23:38:32.852767 systemd-logind[1447]: New session 12 of user core.
May 7 23:38:32.862396 systemd[1]: Started session-12.scope - Session 12 of User core.
May 7 23:38:32.993354 sshd[3765]: Connection closed by 10.0.0.1 port 47146 May 7 23:38:32.994720 sshd-session[3714]: pam_unix(sshd:session): session closed for user core May 7 23:38:32.998437 systemd[1]: sshd@11-10.0.0.15:22-10.0.0.1:47146.service: Deactivated successfully. May 7 23:38:33.000782 systemd[1]: session-12.scope: Deactivated successfully. May 7 23:38:33.001582 systemd-logind[1447]: Session 12 logged out. Waiting for processes to exit. May 7 23:38:33.002678 systemd-logind[1447]: Removed session 12. May 7 23:38:33.196050 systemd-networkd[1395]: lxc_health: Link UP May 7 23:38:33.203801 systemd-networkd[1395]: lxc_health: Gained carrier May 7 23:38:33.474507 systemd-networkd[1395]: cilium_vxlan: Gained IPv6LL May 7 23:38:33.565847 systemd-networkd[1395]: lxc04847451182a: Link UP May 7 23:38:33.571175 kernel: eth0: renamed from tmpd6a34 May 7 23:38:33.580740 systemd-networkd[1395]: lxc04847451182a: Gained carrier May 7 23:38:33.584787 systemd-networkd[1395]: lxcc4cbde4200d2: Link UP May 7 23:38:33.589168 kernel: eth0: renamed from tmp7fc93 May 7 23:38:33.595814 systemd-networkd[1395]: lxcc4cbde4200d2: Gained carrier May 7 23:38:34.818304 systemd-networkd[1395]: lxc_health: Gained IPv6LL May 7 23:38:34.946302 systemd-networkd[1395]: lxcc4cbde4200d2: Gained IPv6LL May 7 23:38:35.330296 systemd-networkd[1395]: lxc04847451182a: Gained IPv6LL May 7 23:38:37.270796 containerd[1459]: time="2025-05-07T23:38:37.270558494Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 7 23:38:37.270796 containerd[1459]: time="2025-05-07T23:38:37.270627452Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 7 23:38:37.270796 containerd[1459]: time="2025-05-07T23:38:37.270644211Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 7 23:38:37.270796 containerd[1459]: time="2025-05-07T23:38:37.270732369Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 7 23:38:37.271483 containerd[1459]: time="2025-05-07T23:38:37.270431818Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 7 23:38:37.271483 containerd[1459]: time="2025-05-07T23:38:37.271116757Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 7 23:38:37.271483 containerd[1459]: time="2025-05-07T23:38:37.271148276Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 7 23:38:37.271483 containerd[1459]: time="2025-05-07T23:38:37.271256792Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 7 23:38:37.300410 systemd[1]: Started cri-containerd-7fc9337a115da3c8fd8bb54ec047f8a35e6e7fe3528886ed517f3223e2340052.scope - libcontainer container 7fc9337a115da3c8fd8bb54ec047f8a35e6e7fe3528886ed517f3223e2340052. May 7 23:38:37.301911 systemd[1]: Started cri-containerd-d6a34c9590f14721ed1f3f1b8e43d8b00377d92f171e032d62dd14fb74eb1afb.scope - libcontainer container d6a34c9590f14721ed1f3f1b8e43d8b00377d92f171e032d62dd14fb74eb1afb. 
May 7 23:38:37.311715 systemd-resolved[1322]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 7 23:38:37.314009 systemd-resolved[1322]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 7 23:38:37.336523 containerd[1459]: time="2025-05-07T23:38:37.336368298Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-j87bj,Uid:4738bb66-a444-472e-9a5f-3d4e1d36d42a,Namespace:kube-system,Attempt:0,} returns sandbox id \"d6a34c9590f14721ed1f3f1b8e43d8b00377d92f171e032d62dd14fb74eb1afb\"" May 7 23:38:37.336523 containerd[1459]: time="2025-05-07T23:38:37.336408617Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-wxsvw,Uid:a1fc79ed-88cd-408c-9d8f-d569fd88c6a0,Namespace:kube-system,Attempt:0,} returns sandbox id \"7fc9337a115da3c8fd8bb54ec047f8a35e6e7fe3528886ed517f3223e2340052\"" May 7 23:38:37.340352 containerd[1459]: time="2025-05-07T23:38:37.339072174Z" level=info msg="CreateContainer within sandbox \"d6a34c9590f14721ed1f3f1b8e43d8b00377d92f171e032d62dd14fb74eb1afb\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 7 23:38:37.340352 containerd[1459]: time="2025-05-07T23:38:37.339334766Z" level=info msg="CreateContainer within sandbox \"7fc9337a115da3c8fd8bb54ec047f8a35e6e7fe3528886ed517f3223e2340052\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 7 23:38:37.353862 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount206553472.mount: Deactivated successfully. 
May 7 23:38:37.355070 containerd[1459]: time="2025-05-07T23:38:37.354878365Z" level=info msg="CreateContainer within sandbox \"7fc9337a115da3c8fd8bb54ec047f8a35e6e7fe3528886ed517f3223e2340052\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"962b263341bd8d6f265963be63964312f3d94dc1b6b2d8f6d9cc164067418e2a\"" May 7 23:38:37.355528 containerd[1459]: time="2025-05-07T23:38:37.355500426Z" level=info msg="StartContainer for \"962b263341bd8d6f265963be63964312f3d94dc1b6b2d8f6d9cc164067418e2a\"" May 7 23:38:37.357894 containerd[1459]: time="2025-05-07T23:38:37.357827954Z" level=info msg="CreateContainer within sandbox \"d6a34c9590f14721ed1f3f1b8e43d8b00377d92f171e032d62dd14fb74eb1afb\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"5b8d7f29bd855c0d3e505c70fc351673f20ea658edb24c891c0e1c4de3049a23\"" May 7 23:38:37.358926 containerd[1459]: time="2025-05-07T23:38:37.358887481Z" level=info msg="StartContainer for \"5b8d7f29bd855c0d3e505c70fc351673f20ea658edb24c891c0e1c4de3049a23\"" May 7 23:38:37.380339 systemd[1]: Started cri-containerd-962b263341bd8d6f265963be63964312f3d94dc1b6b2d8f6d9cc164067418e2a.scope - libcontainer container 962b263341bd8d6f265963be63964312f3d94dc1b6b2d8f6d9cc164067418e2a. May 7 23:38:37.383458 systemd[1]: Started cri-containerd-5b8d7f29bd855c0d3e505c70fc351673f20ea658edb24c891c0e1c4de3049a23.scope - libcontainer container 5b8d7f29bd855c0d3e505c70fc351673f20ea658edb24c891c0e1c4de3049a23. 
May 7 23:38:37.415552 containerd[1459]: time="2025-05-07T23:38:37.415419452Z" level=info msg="StartContainer for \"5b8d7f29bd855c0d3e505c70fc351673f20ea658edb24c891c0e1c4de3049a23\" returns successfully" May 7 23:38:37.415552 containerd[1459]: time="2025-05-07T23:38:37.415501969Z" level=info msg="StartContainer for \"962b263341bd8d6f265963be63964312f3d94dc1b6b2d8f6d9cc164067418e2a\" returns successfully" May 7 23:38:37.972618 kubelet[2691]: I0507 23:38:37.972529 2691 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-wxsvw" podStartSLOduration=26.972512455 podStartE2EDuration="26.972512455s" podCreationTimestamp="2025-05-07 23:38:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-07 23:38:37.971775558 +0000 UTC m=+43.233495805" watchObservedRunningTime="2025-05-07 23:38:37.972512455 +0000 UTC m=+43.234232662" May 7 23:38:37.984452 kubelet[2691]: I0507 23:38:37.984383 2691 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-j87bj" podStartSLOduration=26.984362169 podStartE2EDuration="26.984362169s" podCreationTimestamp="2025-05-07 23:38:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-07 23:38:37.983944742 +0000 UTC m=+43.245664989" watchObservedRunningTime="2025-05-07 23:38:37.984362169 +0000 UTC m=+43.246082376" May 7 23:38:38.014532 systemd[1]: Started sshd@12-10.0.0.15:22-10.0.0.1:47158.service - OpenSSH per-connection server daemon (10.0.0.1:47158). 
May 7 23:38:38.064167 sshd[4155]: Accepted publickey for core from 10.0.0.1 port 47158 ssh2: RSA SHA256:7X/0GL6Wfz1DCN37/GHlHSZkyG3w/l6TYxwfIGZEYGQ May 7 23:38:38.065893 sshd-session[4155]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 7 23:38:38.070234 systemd-logind[1447]: New session 13 of user core. May 7 23:38:38.077335 systemd[1]: Started session-13.scope - Session 13 of User core. May 7 23:38:38.194182 sshd[4161]: Connection closed by 10.0.0.1 port 47158 May 7 23:38:38.194657 sshd-session[4155]: pam_unix(sshd:session): session closed for user core May 7 23:38:38.206062 systemd[1]: sshd@12-10.0.0.15:22-10.0.0.1:47158.service: Deactivated successfully. May 7 23:38:38.207717 systemd[1]: session-13.scope: Deactivated successfully. May 7 23:38:38.208440 systemd-logind[1447]: Session 13 logged out. Waiting for processes to exit. May 7 23:38:38.217442 systemd[1]: Started sshd@13-10.0.0.15:22-10.0.0.1:47160.service - OpenSSH per-connection server daemon (10.0.0.1:47160). May 7 23:38:38.218547 systemd-logind[1447]: Removed session 13. May 7 23:38:38.260411 sshd[4174]: Accepted publickey for core from 10.0.0.1 port 47160 ssh2: RSA SHA256:7X/0GL6Wfz1DCN37/GHlHSZkyG3w/l6TYxwfIGZEYGQ May 7 23:38:38.261838 sshd-session[4174]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 7 23:38:38.266031 systemd-logind[1447]: New session 14 of user core. May 7 23:38:38.276288 systemd[1]: Started session-14.scope - Session 14 of User core. May 7 23:38:38.445836 sshd[4177]: Connection closed by 10.0.0.1 port 47160 May 7 23:38:38.446240 sshd-session[4174]: pam_unix(sshd:session): session closed for user core May 7 23:38:38.459628 systemd[1]: sshd@13-10.0.0.15:22-10.0.0.1:47160.service: Deactivated successfully. May 7 23:38:38.462754 systemd[1]: session-14.scope: Deactivated successfully. May 7 23:38:38.466096 systemd-logind[1447]: Session 14 logged out. Waiting for processes to exit. 
May 7 23:38:38.473567 systemd[1]: Started sshd@14-10.0.0.15:22-10.0.0.1:47176.service - OpenSSH per-connection server daemon (10.0.0.1:47176). May 7 23:38:38.474459 systemd-logind[1447]: Removed session 14. May 7 23:38:38.522125 sshd[4188]: Accepted publickey for core from 10.0.0.1 port 47176 ssh2: RSA SHA256:7X/0GL6Wfz1DCN37/GHlHSZkyG3w/l6TYxwfIGZEYGQ May 7 23:38:38.523591 sshd-session[4188]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 7 23:38:38.528163 systemd-logind[1447]: New session 15 of user core. May 7 23:38:38.539351 systemd[1]: Started session-15.scope - Session 15 of User core. May 7 23:38:38.653744 sshd[4191]: Connection closed by 10.0.0.1 port 47176 May 7 23:38:38.654113 sshd-session[4188]: pam_unix(sshd:session): session closed for user core May 7 23:38:38.656873 systemd[1]: sshd@14-10.0.0.15:22-10.0.0.1:47176.service: Deactivated successfully. May 7 23:38:38.658932 systemd[1]: session-15.scope: Deactivated successfully. May 7 23:38:38.660517 systemd-logind[1447]: Session 15 logged out. Waiting for processes to exit. May 7 23:38:38.661588 systemd-logind[1447]: Removed session 15. May 7 23:38:43.668448 systemd[1]: Started sshd@15-10.0.0.15:22-10.0.0.1:32984.service - OpenSSH per-connection server daemon (10.0.0.1:32984). May 7 23:38:43.713991 sshd[4207]: Accepted publickey for core from 10.0.0.1 port 32984 ssh2: RSA SHA256:7X/0GL6Wfz1DCN37/GHlHSZkyG3w/l6TYxwfIGZEYGQ May 7 23:38:43.715326 sshd-session[4207]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 7 23:38:43.719343 systemd-logind[1447]: New session 16 of user core. May 7 23:38:43.731338 systemd[1]: Started session-16.scope - Session 16 of User core. May 7 23:38:43.861311 sshd[4209]: Connection closed by 10.0.0.1 port 32984 May 7 23:38:43.860398 sshd-session[4207]: pam_unix(sshd:session): session closed for user core May 7 23:38:43.865316 systemd[1]: sshd@15-10.0.0.15:22-10.0.0.1:32984.service: Deactivated successfully. 
May 7 23:38:43.867446 systemd[1]: session-16.scope: Deactivated successfully. May 7 23:38:43.868489 systemd-logind[1447]: Session 16 logged out. Waiting for processes to exit. May 7 23:38:43.869552 systemd-logind[1447]: Removed session 16. May 7 23:38:48.872045 systemd[1]: Started sshd@16-10.0.0.15:22-10.0.0.1:32996.service - OpenSSH per-connection server daemon (10.0.0.1:32996). May 7 23:38:48.923933 sshd[4222]: Accepted publickey for core from 10.0.0.1 port 32996 ssh2: RSA SHA256:7X/0GL6Wfz1DCN37/GHlHSZkyG3w/l6TYxwfIGZEYGQ May 7 23:38:48.925327 sshd-session[4222]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 7 23:38:48.929712 systemd-logind[1447]: New session 17 of user core. May 7 23:38:48.937315 systemd[1]: Started session-17.scope - Session 17 of User core. May 7 23:38:49.055394 sshd[4224]: Connection closed by 10.0.0.1 port 32996 May 7 23:38:49.056694 sshd-session[4222]: pam_unix(sshd:session): session closed for user core May 7 23:38:49.072366 systemd[1]: sshd@16-10.0.0.15:22-10.0.0.1:32996.service: Deactivated successfully. May 7 23:38:49.077108 systemd[1]: session-17.scope: Deactivated successfully. May 7 23:38:49.077962 systemd-logind[1447]: Session 17 logged out. Waiting for processes to exit. May 7 23:38:49.080493 systemd[1]: Started sshd@17-10.0.0.15:22-10.0.0.1:33004.service - OpenSSH per-connection server daemon (10.0.0.1:33004). May 7 23:38:49.081375 systemd-logind[1447]: Removed session 17. May 7 23:38:49.127552 sshd[4237]: Accepted publickey for core from 10.0.0.1 port 33004 ssh2: RSA SHA256:7X/0GL6Wfz1DCN37/GHlHSZkyG3w/l6TYxwfIGZEYGQ May 7 23:38:49.128772 sshd-session[4237]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 7 23:38:49.133211 systemd-logind[1447]: New session 18 of user core. May 7 23:38:49.144323 systemd[1]: Started session-18.scope - Session 18 of User core. 
May 7 23:38:49.345332 sshd[4240]: Connection closed by 10.0.0.1 port 33004 May 7 23:38:49.345954 sshd-session[4237]: pam_unix(sshd:session): session closed for user core May 7 23:38:49.360373 systemd[1]: sshd@17-10.0.0.15:22-10.0.0.1:33004.service: Deactivated successfully. May 7 23:38:49.361779 systemd[1]: session-18.scope: Deactivated successfully. May 7 23:38:49.362579 systemd-logind[1447]: Session 18 logged out. Waiting for processes to exit. May 7 23:38:49.369391 systemd[1]: Started sshd@18-10.0.0.15:22-10.0.0.1:33014.service - OpenSSH per-connection server daemon (10.0.0.1:33014). May 7 23:38:49.370737 systemd-logind[1447]: Removed session 18. May 7 23:38:49.417894 sshd[4250]: Accepted publickey for core from 10.0.0.1 port 33014 ssh2: RSA SHA256:7X/0GL6Wfz1DCN37/GHlHSZkyG3w/l6TYxwfIGZEYGQ May 7 23:38:49.419072 sshd-session[4250]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 7 23:38:49.423207 systemd-logind[1447]: New session 19 of user core. May 7 23:38:49.433269 systemd[1]: Started session-19.scope - Session 19 of User core. May 7 23:38:50.708652 sshd[4253]: Connection closed by 10.0.0.1 port 33014 May 7 23:38:50.709247 sshd-session[4250]: pam_unix(sshd:session): session closed for user core May 7 23:38:50.722500 systemd[1]: sshd@18-10.0.0.15:22-10.0.0.1:33014.service: Deactivated successfully. May 7 23:38:50.724440 systemd[1]: session-19.scope: Deactivated successfully. May 7 23:38:50.725825 systemd-logind[1447]: Session 19 logged out. Waiting for processes to exit. May 7 23:38:50.735052 systemd[1]: Started sshd@19-10.0.0.15:22-10.0.0.1:33020.service - OpenSSH per-connection server daemon (10.0.0.1:33020). May 7 23:38:50.737497 systemd-logind[1447]: Removed session 19. 
May 7 23:38:50.776477 sshd[4273]: Accepted publickey for core from 10.0.0.1 port 33020 ssh2: RSA SHA256:7X/0GL6Wfz1DCN37/GHlHSZkyG3w/l6TYxwfIGZEYGQ May 7 23:38:50.777637 sshd-session[4273]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 7 23:38:50.781732 systemd-logind[1447]: New session 20 of user core. May 7 23:38:50.797338 systemd[1]: Started session-20.scope - Session 20 of User core. May 7 23:38:51.006628 sshd[4277]: Connection closed by 10.0.0.1 port 33020 May 7 23:38:51.009328 sshd-session[4273]: pam_unix(sshd:session): session closed for user core May 7 23:38:51.029524 systemd[1]: Started sshd@20-10.0.0.15:22-10.0.0.1:33026.service - OpenSSH per-connection server daemon (10.0.0.1:33026). May 7 23:38:51.030052 systemd[1]: sshd@19-10.0.0.15:22-10.0.0.1:33020.service: Deactivated successfully. May 7 23:38:51.031714 systemd[1]: session-20.scope: Deactivated successfully. May 7 23:38:51.038392 systemd-logind[1447]: Session 20 logged out. Waiting for processes to exit. May 7 23:38:51.043738 systemd-logind[1447]: Removed session 20. May 7 23:38:51.079869 sshd[4287]: Accepted publickey for core from 10.0.0.1 port 33026 ssh2: RSA SHA256:7X/0GL6Wfz1DCN37/GHlHSZkyG3w/l6TYxwfIGZEYGQ May 7 23:38:51.081167 sshd-session[4287]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 7 23:38:51.085213 systemd-logind[1447]: New session 21 of user core. May 7 23:38:51.090306 systemd[1]: Started session-21.scope - Session 21 of User core. May 7 23:38:51.199677 sshd[4291]: Connection closed by 10.0.0.1 port 33026 May 7 23:38:51.200022 sshd-session[4287]: pam_unix(sshd:session): session closed for user core May 7 23:38:51.204508 systemd[1]: sshd@20-10.0.0.15:22-10.0.0.1:33026.service: Deactivated successfully. May 7 23:38:51.206240 systemd[1]: session-21.scope: Deactivated successfully. May 7 23:38:51.206856 systemd-logind[1447]: Session 21 logged out. Waiting for processes to exit. 
May 7 23:38:51.207698 systemd-logind[1447]: Removed session 21. May 7 23:38:56.212549 systemd[1]: Started sshd@21-10.0.0.15:22-10.0.0.1:59204.service - OpenSSH per-connection server daemon (10.0.0.1:59204). May 7 23:38:56.259031 sshd[4307]: Accepted publickey for core from 10.0.0.1 port 59204 ssh2: RSA SHA256:7X/0GL6Wfz1DCN37/GHlHSZkyG3w/l6TYxwfIGZEYGQ May 7 23:38:56.260089 sshd-session[4307]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 7 23:38:56.264078 systemd-logind[1447]: New session 22 of user core. May 7 23:38:56.280355 systemd[1]: Started session-22.scope - Session 22 of User core. May 7 23:38:56.384201 sshd[4309]: Connection closed by 10.0.0.1 port 59204 May 7 23:38:56.383926 sshd-session[4307]: pam_unix(sshd:session): session closed for user core May 7 23:38:56.387002 systemd[1]: sshd@21-10.0.0.15:22-10.0.0.1:59204.service: Deactivated successfully. May 7 23:38:56.390494 systemd[1]: session-22.scope: Deactivated successfully. May 7 23:38:56.391554 systemd-logind[1447]: Session 22 logged out. Waiting for processes to exit. May 7 23:38:56.392492 systemd-logind[1447]: Removed session 22. May 7 23:39:01.395520 systemd[1]: Started sshd@22-10.0.0.15:22-10.0.0.1:59214.service - OpenSSH per-connection server daemon (10.0.0.1:59214). May 7 23:39:01.474144 sshd[4325]: Accepted publickey for core from 10.0.0.1 port 59214 ssh2: RSA SHA256:7X/0GL6Wfz1DCN37/GHlHSZkyG3w/l6TYxwfIGZEYGQ May 7 23:39:01.475454 sshd-session[4325]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 7 23:39:01.479228 systemd-logind[1447]: New session 23 of user core. May 7 23:39:01.490332 systemd[1]: Started session-23.scope - Session 23 of User core. May 7 23:39:01.600172 sshd[4327]: Connection closed by 10.0.0.1 port 59214 May 7 23:39:01.600228 sshd-session[4325]: pam_unix(sshd:session): session closed for user core May 7 23:39:01.603928 systemd[1]: sshd@22-10.0.0.15:22-10.0.0.1:59214.service: Deactivated successfully. 
May 7 23:39:01.607620 systemd[1]: session-23.scope: Deactivated successfully. May 7 23:39:01.608364 systemd-logind[1447]: Session 23 logged out. Waiting for processes to exit. May 7 23:39:01.609381 systemd-logind[1447]: Removed session 23. May 7 23:39:06.612114 systemd[1]: Started sshd@23-10.0.0.15:22-10.0.0.1:40962.service - OpenSSH per-connection server daemon (10.0.0.1:40962). May 7 23:39:06.657409 sshd[4340]: Accepted publickey for core from 10.0.0.1 port 40962 ssh2: RSA SHA256:7X/0GL6Wfz1DCN37/GHlHSZkyG3w/l6TYxwfIGZEYGQ May 7 23:39:06.658721 sshd-session[4340]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 7 23:39:06.663097 systemd-logind[1447]: New session 24 of user core. May 7 23:39:06.675326 systemd[1]: Started session-24.scope - Session 24 of User core. May 7 23:39:06.784530 sshd[4342]: Connection closed by 10.0.0.1 port 40962 May 7 23:39:06.783958 sshd-session[4340]: pam_unix(sshd:session): session closed for user core May 7 23:39:06.787398 systemd[1]: sshd@23-10.0.0.15:22-10.0.0.1:40962.service: Deactivated successfully. May 7 23:39:06.790604 systemd[1]: session-24.scope: Deactivated successfully. May 7 23:39:06.791236 systemd-logind[1447]: Session 24 logged out. Waiting for processes to exit. May 7 23:39:06.791974 systemd-logind[1447]: Removed session 24. May 7 23:39:11.795438 systemd[1]: Started sshd@24-10.0.0.15:22-10.0.0.1:40966.service - OpenSSH per-connection server daemon (10.0.0.1:40966). May 7 23:39:11.840214 sshd[4359]: Accepted publickey for core from 10.0.0.1 port 40966 ssh2: RSA SHA256:7X/0GL6Wfz1DCN37/GHlHSZkyG3w/l6TYxwfIGZEYGQ May 7 23:39:11.841371 sshd-session[4359]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 7 23:39:11.845204 systemd-logind[1447]: New session 25 of user core. May 7 23:39:11.855300 systemd[1]: Started session-25.scope - Session 25 of User core. 
May 7 23:39:11.962757 sshd[4361]: Connection closed by 10.0.0.1 port 40966 May 7 23:39:11.963417 sshd-session[4359]: pam_unix(sshd:session): session closed for user core May 7 23:39:11.977992 systemd[1]: sshd@24-10.0.0.15:22-10.0.0.1:40966.service: Deactivated successfully. May 7 23:39:11.979480 systemd[1]: session-25.scope: Deactivated successfully. May 7 23:39:11.980159 systemd-logind[1447]: Session 25 logged out. Waiting for processes to exit. May 7 23:39:11.990462 systemd[1]: Started sshd@25-10.0.0.15:22-10.0.0.1:40968.service - OpenSSH per-connection server daemon (10.0.0.1:40968). May 7 23:39:11.991596 systemd-logind[1447]: Removed session 25. May 7 23:39:12.032063 sshd[4373]: Accepted publickey for core from 10.0.0.1 port 40968 ssh2: RSA SHA256:7X/0GL6Wfz1DCN37/GHlHSZkyG3w/l6TYxwfIGZEYGQ May 7 23:39:12.033373 sshd-session[4373]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 7 23:39:12.037836 systemd-logind[1447]: New session 26 of user core. May 7 23:39:12.043289 systemd[1]: Started session-26.scope - Session 26 of User core. May 7 23:39:14.635166 containerd[1459]: time="2025-05-07T23:39:14.635062564Z" level=info msg="StopContainer for \"9d1c15acbdadd698d4517ad5671b6b6d0766be5fbc2e47d157fc1441effccba5\" with timeout 30 (s)" May 7 23:39:14.635540 containerd[1459]: time="2025-05-07T23:39:14.635421919Z" level=info msg="Stop container \"9d1c15acbdadd698d4517ad5671b6b6d0766be5fbc2e47d157fc1441effccba5\" with signal terminated" May 7 23:39:14.647216 systemd[1]: cri-containerd-9d1c15acbdadd698d4517ad5671b6b6d0766be5fbc2e47d157fc1441effccba5.scope: Deactivated successfully. May 7 23:39:14.666374 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9d1c15acbdadd698d4517ad5671b6b6d0766be5fbc2e47d157fc1441effccba5-rootfs.mount: Deactivated successfully. 
May 7 23:39:14.679367 containerd[1459]: time="2025-05-07T23:39:14.679288330Z" level=info msg="shim disconnected" id=9d1c15acbdadd698d4517ad5671b6b6d0766be5fbc2e47d157fc1441effccba5 namespace=k8s.io May 7 23:39:14.679367 containerd[1459]: time="2025-05-07T23:39:14.679348649Z" level=warning msg="cleaning up after shim disconnected" id=9d1c15acbdadd698d4517ad5671b6b6d0766be5fbc2e47d157fc1441effccba5 namespace=k8s.io May 7 23:39:14.679367 containerd[1459]: time="2025-05-07T23:39:14.679357409Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 7 23:39:14.686790 containerd[1459]: time="2025-05-07T23:39:14.686754709Z" level=info msg="StopContainer for \"81368ec221ae001c3e630ea8b16f97ca3d559b321d1ef99dac7a5d9f90025e1d\" with timeout 2 (s)" May 7 23:39:14.687013 containerd[1459]: time="2025-05-07T23:39:14.686987546Z" level=info msg="Stop container \"81368ec221ae001c3e630ea8b16f97ca3d559b321d1ef99dac7a5d9f90025e1d\" with signal terminated" May 7 23:39:14.692775 systemd-networkd[1395]: lxc_health: Link DOWN May 7 23:39:14.692782 systemd-networkd[1395]: lxc_health: Lost carrier May 7 23:39:14.702166 containerd[1459]: time="2025-05-07T23:39:14.701933666Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 7 23:39:14.712053 systemd[1]: cri-containerd-81368ec221ae001c3e630ea8b16f97ca3d559b321d1ef99dac7a5d9f90025e1d.scope: Deactivated successfully. May 7 23:39:14.712375 systemd[1]: cri-containerd-81368ec221ae001c3e630ea8b16f97ca3d559b321d1ef99dac7a5d9f90025e1d.scope: Consumed 6.645s CPU time, 123.9M memory peak, 228K read from disk, 12.9M written to disk. May 7 23:39:14.731195 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-81368ec221ae001c3e630ea8b16f97ca3d559b321d1ef99dac7a5d9f90025e1d-rootfs.mount: Deactivated successfully. 
May 7 23:39:14.734016 containerd[1459]: time="2025-05-07T23:39:14.733975275Z" level=info msg="StopContainer for \"9d1c15acbdadd698d4517ad5671b6b6d0766be5fbc2e47d157fc1441effccba5\" returns successfully" May 7 23:39:14.734968 containerd[1459]: time="2025-05-07T23:39:14.734919023Z" level=info msg="StopPodSandbox for \"4a65224c3275ab8315056583f941ba684bdf5823969b058f8b025739e3900a4b\"" May 7 23:39:14.738843 containerd[1459]: time="2025-05-07T23:39:14.738789731Z" level=info msg="shim disconnected" id=81368ec221ae001c3e630ea8b16f97ca3d559b321d1ef99dac7a5d9f90025e1d namespace=k8s.io May 7 23:39:14.738843 containerd[1459]: time="2025-05-07T23:39:14.738833210Z" level=warning msg="cleaning up after shim disconnected" id=81368ec221ae001c3e630ea8b16f97ca3d559b321d1ef99dac7a5d9f90025e1d namespace=k8s.io May 7 23:39:14.738843 containerd[1459]: time="2025-05-07T23:39:14.738841010Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 7 23:39:14.740259 containerd[1459]: time="2025-05-07T23:39:14.740199112Z" level=info msg="Container to stop \"9d1c15acbdadd698d4517ad5671b6b6d0766be5fbc2e47d157fc1441effccba5\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 7 23:39:14.741845 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-4a65224c3275ab8315056583f941ba684bdf5823969b058f8b025739e3900a4b-shm.mount: Deactivated successfully. May 7 23:39:14.747954 systemd[1]: cri-containerd-4a65224c3275ab8315056583f941ba684bdf5823969b058f8b025739e3900a4b.scope: Deactivated successfully. May 7 23:39:14.768401 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4a65224c3275ab8315056583f941ba684bdf5823969b058f8b025739e3900a4b-rootfs.mount: Deactivated successfully. 
May 7 23:39:14.771482 containerd[1459]: time="2025-05-07T23:39:14.771439812Z" level=info msg="StopContainer for \"81368ec221ae001c3e630ea8b16f97ca3d559b321d1ef99dac7a5d9f90025e1d\" returns successfully" May 7 23:39:14.772073 containerd[1459]: time="2025-05-07T23:39:14.772047924Z" level=info msg="StopPodSandbox for \"c7fa7b54a07bde41e294d42f593ad2a0925fecc511568d4fe196e3ef6563c628\"" May 7 23:39:14.772197 containerd[1459]: time="2025-05-07T23:39:14.772168282Z" level=info msg="Container to stop \"43c7173c5daba87d987b28026506e98203db7deff63a74e354980407f03302f3\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 7 23:39:14.772230 containerd[1459]: time="2025-05-07T23:39:14.772198842Z" level=info msg="Container to stop \"9ae889fddb2c7f4377610f86836e153f46b2fd4265da92b8a2a0ad5b506b2abc\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 7 23:39:14.772230 containerd[1459]: time="2025-05-07T23:39:14.772210842Z" level=info msg="Container to stop \"81368ec221ae001c3e630ea8b16f97ca3d559b321d1ef99dac7a5d9f90025e1d\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 7 23:39:14.772230 containerd[1459]: time="2025-05-07T23:39:14.772220721Z" level=info msg="Container to stop \"6102021bb03d6e7fa43017463606e78697acaa45566f6b5a79f7ed7f35a53065\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 7 23:39:14.772297 containerd[1459]: time="2025-05-07T23:39:14.772241601Z" level=info msg="Container to stop \"643db7dca0395cb4149041a04eac8cfb7bd2acb0390b29531e61e2aa25aea7e2\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 7 23:39:14.777613 containerd[1459]: time="2025-05-07T23:39:14.777543410Z" level=info msg="shim disconnected" id=4a65224c3275ab8315056583f941ba684bdf5823969b058f8b025739e3900a4b namespace=k8s.io May 7 23:39:14.777613 containerd[1459]: time="2025-05-07T23:39:14.777596809Z" level=warning msg="cleaning up after shim disconnected" 
id=4a65224c3275ab8315056583f941ba684bdf5823969b058f8b025739e3900a4b namespace=k8s.io May 7 23:39:14.777613 containerd[1459]: time="2025-05-07T23:39:14.777604689Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 7 23:39:14.779407 systemd[1]: cri-containerd-c7fa7b54a07bde41e294d42f593ad2a0925fecc511568d4fe196e3ef6563c628.scope: Deactivated successfully. May 7 23:39:14.796925 containerd[1459]: time="2025-05-07T23:39:14.796383677Z" level=info msg="TearDown network for sandbox \"4a65224c3275ab8315056583f941ba684bdf5823969b058f8b025739e3900a4b\" successfully" May 7 23:39:14.796925 containerd[1459]: time="2025-05-07T23:39:14.796414196Z" level=info msg="StopPodSandbox for \"4a65224c3275ab8315056583f941ba684bdf5823969b058f8b025739e3900a4b\" returns successfully" May 7 23:39:14.846186 containerd[1459]: time="2025-05-07T23:39:14.846020010Z" level=info msg="shim disconnected" id=c7fa7b54a07bde41e294d42f593ad2a0925fecc511568d4fe196e3ef6563c628 namespace=k8s.io May 7 23:39:14.846186 containerd[1459]: time="2025-05-07T23:39:14.846185048Z" level=warning msg="cleaning up after shim disconnected" id=c7fa7b54a07bde41e294d42f593ad2a0925fecc511568d4fe196e3ef6563c628 namespace=k8s.io May 7 23:39:14.846423 containerd[1459]: time="2025-05-07T23:39:14.846196848Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 7 23:39:14.856301 containerd[1459]: time="2025-05-07T23:39:14.856256433Z" level=warning msg="cleanup warnings time=\"2025-05-07T23:39:14Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io May 7 23:39:14.857384 containerd[1459]: time="2025-05-07T23:39:14.857350538Z" level=info msg="TearDown network for sandbox \"c7fa7b54a07bde41e294d42f593ad2a0925fecc511568d4fe196e3ef6563c628\" successfully" May 7 23:39:14.857384 containerd[1459]: time="2025-05-07T23:39:14.857380337Z" level=info msg="StopPodSandbox for 
\"c7fa7b54a07bde41e294d42f593ad2a0925fecc511568d4fe196e3ef6563c628\" returns successfully" May 7 23:39:14.866988 kubelet[2691]: E0507 23:39:14.866906 2691 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" May 7 23:39:14.914370 kubelet[2691]: I0507 23:39:14.914187 2691 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2phvx\" (UniqueName: \"kubernetes.io/projected/00f5f149-a9c3-486a-b933-e2e96be0e46f-kube-api-access-2phvx\") pod \"00f5f149-a9c3-486a-b933-e2e96be0e46f\" (UID: \"00f5f149-a9c3-486a-b933-e2e96be0e46f\") " May 7 23:39:14.914370 kubelet[2691]: I0507 23:39:14.914236 2691 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/00f5f149-a9c3-486a-b933-e2e96be0e46f-cilium-config-path\") pod \"00f5f149-a9c3-486a-b933-e2e96be0e46f\" (UID: \"00f5f149-a9c3-486a-b933-e2e96be0e46f\") " May 7 23:39:14.924728 kubelet[2691]: I0507 23:39:14.924542 2691 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/00f5f149-a9c3-486a-b933-e2e96be0e46f-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "00f5f149-a9c3-486a-b933-e2e96be0e46f" (UID: "00f5f149-a9c3-486a-b933-e2e96be0e46f"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" May 7 23:39:14.925488 kubelet[2691]: I0507 23:39:14.925441 2691 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/00f5f149-a9c3-486a-b933-e2e96be0e46f-kube-api-access-2phvx" (OuterVolumeSpecName: "kube-api-access-2phvx") pod "00f5f149-a9c3-486a-b933-e2e96be0e46f" (UID: "00f5f149-a9c3-486a-b933-e2e96be0e46f"). InnerVolumeSpecName "kube-api-access-2phvx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" May 7 23:39:15.015185 kubelet[2691]: I0507 23:39:15.014651 2691 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/b8b730ca-7ee2-4c70-bd2a-a61be61f0768-hostproc\") pod \"b8b730ca-7ee2-4c70-bd2a-a61be61f0768\" (UID: \"b8b730ca-7ee2-4c70-bd2a-a61be61f0768\") " May 7 23:39:15.015185 kubelet[2691]: I0507 23:39:15.014691 2691 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/b8b730ca-7ee2-4c70-bd2a-a61be61f0768-host-proc-sys-kernel\") pod \"b8b730ca-7ee2-4c70-bd2a-a61be61f0768\" (UID: \"b8b730ca-7ee2-4c70-bd2a-a61be61f0768\") " May 7 23:39:15.015185 kubelet[2691]: I0507 23:39:15.014715 2691 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/b8b730ca-7ee2-4c70-bd2a-a61be61f0768-host-proc-sys-net\") pod \"b8b730ca-7ee2-4c70-bd2a-a61be61f0768\" (UID: \"b8b730ca-7ee2-4c70-bd2a-a61be61f0768\") " May 7 23:39:15.015185 kubelet[2691]: I0507 23:39:15.014729 2691 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/b8b730ca-7ee2-4c70-bd2a-a61be61f0768-cni-path\") pod \"b8b730ca-7ee2-4c70-bd2a-a61be61f0768\" (UID: \"b8b730ca-7ee2-4c70-bd2a-a61be61f0768\") " May 7 23:39:15.015185 kubelet[2691]: I0507 23:39:15.014718 2691 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b8b730ca-7ee2-4c70-bd2a-a61be61f0768-hostproc" (OuterVolumeSpecName: "hostproc") pod "b8b730ca-7ee2-4c70-bd2a-a61be61f0768" (UID: "b8b730ca-7ee2-4c70-bd2a-a61be61f0768"). InnerVolumeSpecName "hostproc". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" May 7 23:39:15.015185 kubelet[2691]: I0507 23:39:15.014755 2691 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/b8b730ca-7ee2-4c70-bd2a-a61be61f0768-hubble-tls\") pod \"b8b730ca-7ee2-4c70-bd2a-a61be61f0768\" (UID: \"b8b730ca-7ee2-4c70-bd2a-a61be61f0768\") " May 7 23:39:15.015444 kubelet[2691]: I0507 23:39:15.014771 2691 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/b8b730ca-7ee2-4c70-bd2a-a61be61f0768-cilium-cgroup\") pod \"b8b730ca-7ee2-4c70-bd2a-a61be61f0768\" (UID: \"b8b730ca-7ee2-4c70-bd2a-a61be61f0768\") " May 7 23:39:15.015444 kubelet[2691]: I0507 23:39:15.014773 2691 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b8b730ca-7ee2-4c70-bd2a-a61be61f0768-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "b8b730ca-7ee2-4c70-bd2a-a61be61f0768" (UID: "b8b730ca-7ee2-4c70-bd2a-a61be61f0768"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 7 23:39:15.015444 kubelet[2691]: I0507 23:39:15.014786 2691 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b8b730ca-7ee2-4c70-bd2a-a61be61f0768-xtables-lock\") pod \"b8b730ca-7ee2-4c70-bd2a-a61be61f0768\" (UID: \"b8b730ca-7ee2-4c70-bd2a-a61be61f0768\") " May 7 23:39:15.015444 kubelet[2691]: I0507 23:39:15.014792 2691 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b8b730ca-7ee2-4c70-bd2a-a61be61f0768-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "b8b730ca-7ee2-4c70-bd2a-a61be61f0768" (UID: "b8b730ca-7ee2-4c70-bd2a-a61be61f0768"). InnerVolumeSpecName "host-proc-sys-kernel". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" May 7 23:39:15.015444 kubelet[2691]: I0507 23:39:15.014802 2691 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/b8b730ca-7ee2-4c70-bd2a-a61be61f0768-cilium-run\") pod \"b8b730ca-7ee2-4c70-bd2a-a61be61f0768\" (UID: \"b8b730ca-7ee2-4c70-bd2a-a61be61f0768\") " May 7 23:39:15.015546 kubelet[2691]: I0507 23:39:15.014819 2691 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b8b730ca-7ee2-4c70-bd2a-a61be61f0768-cilium-config-path\") pod \"b8b730ca-7ee2-4c70-bd2a-a61be61f0768\" (UID: \"b8b730ca-7ee2-4c70-bd2a-a61be61f0768\") " May 7 23:39:15.015546 kubelet[2691]: I0507 23:39:15.014837 2691 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tnjpt\" (UniqueName: \"kubernetes.io/projected/b8b730ca-7ee2-4c70-bd2a-a61be61f0768-kube-api-access-tnjpt\") pod \"b8b730ca-7ee2-4c70-bd2a-a61be61f0768\" (UID: \"b8b730ca-7ee2-4c70-bd2a-a61be61f0768\") " May 7 23:39:15.015546 kubelet[2691]: I0507 23:39:15.014852 2691 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b8b730ca-7ee2-4c70-bd2a-a61be61f0768-etc-cni-netd\") pod \"b8b730ca-7ee2-4c70-bd2a-a61be61f0768\" (UID: \"b8b730ca-7ee2-4c70-bd2a-a61be61f0768\") " May 7 23:39:15.015546 kubelet[2691]: I0507 23:39:15.014869 2691 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/b8b730ca-7ee2-4c70-bd2a-a61be61f0768-clustermesh-secrets\") pod \"b8b730ca-7ee2-4c70-bd2a-a61be61f0768\" (UID: \"b8b730ca-7ee2-4c70-bd2a-a61be61f0768\") " May 7 23:39:15.015546 kubelet[2691]: I0507 23:39:15.014882 2691 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: 
\"kubernetes.io/host-path/b8b730ca-7ee2-4c70-bd2a-a61be61f0768-lib-modules\") pod \"b8b730ca-7ee2-4c70-bd2a-a61be61f0768\" (UID: \"b8b730ca-7ee2-4c70-bd2a-a61be61f0768\") " May 7 23:39:15.015546 kubelet[2691]: I0507 23:39:15.014895 2691 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/b8b730ca-7ee2-4c70-bd2a-a61be61f0768-bpf-maps\") pod \"b8b730ca-7ee2-4c70-bd2a-a61be61f0768\" (UID: \"b8b730ca-7ee2-4c70-bd2a-a61be61f0768\") " May 7 23:39:15.015668 kubelet[2691]: I0507 23:39:15.014924 2691 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/b8b730ca-7ee2-4c70-bd2a-a61be61f0768-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" May 7 23:39:15.015668 kubelet[2691]: I0507 23:39:15.014933 2691 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/b8b730ca-7ee2-4c70-bd2a-a61be61f0768-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" May 7 23:39:15.015668 kubelet[2691]: I0507 23:39:15.014942 2691 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-2phvx\" (UniqueName: \"kubernetes.io/projected/00f5f149-a9c3-486a-b933-e2e96be0e46f-kube-api-access-2phvx\") on node \"localhost\" DevicePath \"\"" May 7 23:39:15.015668 kubelet[2691]: I0507 23:39:15.014951 2691 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/00f5f149-a9c3-486a-b933-e2e96be0e46f-cilium-config-path\") on node \"localhost\" DevicePath \"\"" May 7 23:39:15.015668 kubelet[2691]: I0507 23:39:15.014959 2691 reconciler_common.go:289] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/b8b730ca-7ee2-4c70-bd2a-a61be61f0768-hostproc\") on node \"localhost\" DevicePath \"\"" May 7 23:39:15.015668 kubelet[2691]: I0507 23:39:15.014989 2691 operation_generator.go:887] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b8b730ca-7ee2-4c70-bd2a-a61be61f0768-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "b8b730ca-7ee2-4c70-bd2a-a61be61f0768" (UID: "b8b730ca-7ee2-4c70-bd2a-a61be61f0768"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 7 23:39:15.015668 kubelet[2691]: I0507 23:39:15.015010 2691 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b8b730ca-7ee2-4c70-bd2a-a61be61f0768-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "b8b730ca-7ee2-4c70-bd2a-a61be61f0768" (UID: "b8b730ca-7ee2-4c70-bd2a-a61be61f0768"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 7 23:39:15.015807 kubelet[2691]: I0507 23:39:15.015024 2691 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b8b730ca-7ee2-4c70-bd2a-a61be61f0768-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "b8b730ca-7ee2-4c70-bd2a-a61be61f0768" (UID: "b8b730ca-7ee2-4c70-bd2a-a61be61f0768"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 7 23:39:15.015807 kubelet[2691]: I0507 23:39:15.015038 2691 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b8b730ca-7ee2-4c70-bd2a-a61be61f0768-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "b8b730ca-7ee2-4c70-bd2a-a61be61f0768" (UID: "b8b730ca-7ee2-4c70-bd2a-a61be61f0768"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 7 23:39:15.015807 kubelet[2691]: I0507 23:39:15.015061 2691 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b8b730ca-7ee2-4c70-bd2a-a61be61f0768-cni-path" (OuterVolumeSpecName: "cni-path") pod "b8b730ca-7ee2-4c70-bd2a-a61be61f0768" (UID: "b8b730ca-7ee2-4c70-bd2a-a61be61f0768"). InnerVolumeSpecName "cni-path". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" May 7 23:39:15.015807 kubelet[2691]: I0507 23:39:15.015092 2691 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b8b730ca-7ee2-4c70-bd2a-a61be61f0768-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "b8b730ca-7ee2-4c70-bd2a-a61be61f0768" (UID: "b8b730ca-7ee2-4c70-bd2a-a61be61f0768"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 7 23:39:15.017203 kubelet[2691]: I0507 23:39:15.017159 2691 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b8b730ca-7ee2-4c70-bd2a-a61be61f0768-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "b8b730ca-7ee2-4c70-bd2a-a61be61f0768" (UID: "b8b730ca-7ee2-4c70-bd2a-a61be61f0768"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" May 7 23:39:15.017532 kubelet[2691]: I0507 23:39:15.017502 2691 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b8b730ca-7ee2-4c70-bd2a-a61be61f0768-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "b8b730ca-7ee2-4c70-bd2a-a61be61f0768" (UID: "b8b730ca-7ee2-4c70-bd2a-a61be61f0768"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 7 23:39:15.017635 kubelet[2691]: I0507 23:39:15.017611 2691 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b8b730ca-7ee2-4c70-bd2a-a61be61f0768-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "b8b730ca-7ee2-4c70-bd2a-a61be61f0768" (UID: "b8b730ca-7ee2-4c70-bd2a-a61be61f0768"). InnerVolumeSpecName "clustermesh-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" May 7 23:39:15.017963 kubelet[2691]: I0507 23:39:15.017924 2691 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b8b730ca-7ee2-4c70-bd2a-a61be61f0768-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "b8b730ca-7ee2-4c70-bd2a-a61be61f0768" (UID: "b8b730ca-7ee2-4c70-bd2a-a61be61f0768"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" May 7 23:39:15.018478 kubelet[2691]: I0507 23:39:15.018439 2691 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b8b730ca-7ee2-4c70-bd2a-a61be61f0768-kube-api-access-tnjpt" (OuterVolumeSpecName: "kube-api-access-tnjpt") pod "b8b730ca-7ee2-4c70-bd2a-a61be61f0768" (UID: "b8b730ca-7ee2-4c70-bd2a-a61be61f0768"). InnerVolumeSpecName "kube-api-access-tnjpt". PluginName "kubernetes.io/projected", VolumeGidValue "" May 7 23:39:15.029082 kubelet[2691]: I0507 23:39:15.029056 2691 scope.go:117] "RemoveContainer" containerID="9d1c15acbdadd698d4517ad5671b6b6d0766be5fbc2e47d157fc1441effccba5" May 7 23:39:15.031360 containerd[1459]: time="2025-05-07T23:39:15.031327088Z" level=info msg="RemoveContainer for \"9d1c15acbdadd698d4517ad5671b6b6d0766be5fbc2e47d157fc1441effccba5\"" May 7 23:39:15.037754 systemd[1]: Removed slice kubepods-besteffort-pod00f5f149_a9c3_486a_b933_e2e96be0e46f.slice - libcontainer container kubepods-besteffort-pod00f5f149_a9c3_486a_b933_e2e96be0e46f.slice. 
May 7 23:39:15.041093 containerd[1459]: time="2025-05-07T23:39:15.040471408Z" level=info msg="RemoveContainer for \"9d1c15acbdadd698d4517ad5671b6b6d0766be5fbc2e47d157fc1441effccba5\" returns successfully" May 7 23:39:15.041409 kubelet[2691]: I0507 23:39:15.041315 2691 scope.go:117] "RemoveContainer" containerID="9d1c15acbdadd698d4517ad5671b6b6d0766be5fbc2e47d157fc1441effccba5" May 7 23:39:15.041589 containerd[1459]: time="2025-05-07T23:39:15.041532514Z" level=error msg="ContainerStatus for \"9d1c15acbdadd698d4517ad5671b6b6d0766be5fbc2e47d157fc1441effccba5\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"9d1c15acbdadd698d4517ad5671b6b6d0766be5fbc2e47d157fc1441effccba5\": not found" May 7 23:39:15.041970 systemd[1]: Removed slice kubepods-burstable-podb8b730ca_7ee2_4c70_bd2a_a61be61f0768.slice - libcontainer container kubepods-burstable-podb8b730ca_7ee2_4c70_bd2a_a61be61f0768.slice. May 7 23:39:15.042060 systemd[1]: kubepods-burstable-podb8b730ca_7ee2_4c70_bd2a_a61be61f0768.slice: Consumed 6.787s CPU time, 124.2M memory peak, 264K read from disk, 16.1M written to disk. 
May 7 23:39:15.052659 kubelet[2691]: E0507 23:39:15.052598 2691 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"9d1c15acbdadd698d4517ad5671b6b6d0766be5fbc2e47d157fc1441effccba5\": not found" containerID="9d1c15acbdadd698d4517ad5671b6b6d0766be5fbc2e47d157fc1441effccba5" May 7 23:39:15.052890 kubelet[2691]: I0507 23:39:15.052635 2691 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"9d1c15acbdadd698d4517ad5671b6b6d0766be5fbc2e47d157fc1441effccba5"} err="failed to get container status \"9d1c15acbdadd698d4517ad5671b6b6d0766be5fbc2e47d157fc1441effccba5\": rpc error: code = NotFound desc = an error occurred when try to find container \"9d1c15acbdadd698d4517ad5671b6b6d0766be5fbc2e47d157fc1441effccba5\": not found" May 7 23:39:15.052890 kubelet[2691]: I0507 23:39:15.052718 2691 scope.go:117] "RemoveContainer" containerID="81368ec221ae001c3e630ea8b16f97ca3d559b321d1ef99dac7a5d9f90025e1d" May 7 23:39:15.053696 containerd[1459]: time="2025-05-07T23:39:15.053670714Z" level=info msg="RemoveContainer for \"81368ec221ae001c3e630ea8b16f97ca3d559b321d1ef99dac7a5d9f90025e1d\"" May 7 23:39:15.057273 containerd[1459]: time="2025-05-07T23:39:15.057239346Z" level=info msg="RemoveContainer for \"81368ec221ae001c3e630ea8b16f97ca3d559b321d1ef99dac7a5d9f90025e1d\" returns successfully" May 7 23:39:15.057514 kubelet[2691]: I0507 23:39:15.057490 2691 scope.go:117] "RemoveContainer" containerID="9ae889fddb2c7f4377610f86836e153f46b2fd4265da92b8a2a0ad5b506b2abc" May 7 23:39:15.058551 containerd[1459]: time="2025-05-07T23:39:15.058520810Z" level=info msg="RemoveContainer for \"9ae889fddb2c7f4377610f86836e153f46b2fd4265da92b8a2a0ad5b506b2abc\"" May 7 23:39:15.060879 containerd[1459]: time="2025-05-07T23:39:15.060850059Z" level=info msg="RemoveContainer for \"9ae889fddb2c7f4377610f86836e153f46b2fd4265da92b8a2a0ad5b506b2abc\" returns 
successfully" May 7 23:39:15.061030 kubelet[2691]: I0507 23:39:15.061006 2691 scope.go:117] "RemoveContainer" containerID="43c7173c5daba87d987b28026506e98203db7deff63a74e354980407f03302f3" May 7 23:39:15.062161 containerd[1459]: time="2025-05-07T23:39:15.062112602Z" level=info msg="RemoveContainer for \"43c7173c5daba87d987b28026506e98203db7deff63a74e354980407f03302f3\"" May 7 23:39:15.064537 containerd[1459]: time="2025-05-07T23:39:15.064491451Z" level=info msg="RemoveContainer for \"43c7173c5daba87d987b28026506e98203db7deff63a74e354980407f03302f3\" returns successfully" May 7 23:39:15.064774 kubelet[2691]: I0507 23:39:15.064665 2691 scope.go:117] "RemoveContainer" containerID="643db7dca0395cb4149041a04eac8cfb7bd2acb0390b29531e61e2aa25aea7e2" May 7 23:39:15.065615 containerd[1459]: time="2025-05-07T23:39:15.065590676Z" level=info msg="RemoveContainer for \"643db7dca0395cb4149041a04eac8cfb7bd2acb0390b29531e61e2aa25aea7e2\"" May 7 23:39:15.067787 containerd[1459]: time="2025-05-07T23:39:15.067751128Z" level=info msg="RemoveContainer for \"643db7dca0395cb4149041a04eac8cfb7bd2acb0390b29531e61e2aa25aea7e2\" returns successfully" May 7 23:39:15.067953 kubelet[2691]: I0507 23:39:15.067891 2691 scope.go:117] "RemoveContainer" containerID="6102021bb03d6e7fa43017463606e78697acaa45566f6b5a79f7ed7f35a53065" May 7 23:39:15.069101 containerd[1459]: time="2025-05-07T23:39:15.069073070Z" level=info msg="RemoveContainer for \"6102021bb03d6e7fa43017463606e78697acaa45566f6b5a79f7ed7f35a53065\"" May 7 23:39:15.071118 containerd[1459]: time="2025-05-07T23:39:15.071086084Z" level=info msg="RemoveContainer for \"6102021bb03d6e7fa43017463606e78697acaa45566f6b5a79f7ed7f35a53065\" returns successfully" May 7 23:39:15.071311 kubelet[2691]: I0507 23:39:15.071279 2691 scope.go:117] "RemoveContainer" containerID="81368ec221ae001c3e630ea8b16f97ca3d559b321d1ef99dac7a5d9f90025e1d" May 7 23:39:15.071505 containerd[1459]: time="2025-05-07T23:39:15.071463959Z" level=error msg="ContainerStatus for 
\"81368ec221ae001c3e630ea8b16f97ca3d559b321d1ef99dac7a5d9f90025e1d\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"81368ec221ae001c3e630ea8b16f97ca3d559b321d1ef99dac7a5d9f90025e1d\": not found" May 7 23:39:15.071640 kubelet[2691]: E0507 23:39:15.071614 2691 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"81368ec221ae001c3e630ea8b16f97ca3d559b321d1ef99dac7a5d9f90025e1d\": not found" containerID="81368ec221ae001c3e630ea8b16f97ca3d559b321d1ef99dac7a5d9f90025e1d" May 7 23:39:15.071673 kubelet[2691]: I0507 23:39:15.071644 2691 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"81368ec221ae001c3e630ea8b16f97ca3d559b321d1ef99dac7a5d9f90025e1d"} err="failed to get container status \"81368ec221ae001c3e630ea8b16f97ca3d559b321d1ef99dac7a5d9f90025e1d\": rpc error: code = NotFound desc = an error occurred when try to find container \"81368ec221ae001c3e630ea8b16f97ca3d559b321d1ef99dac7a5d9f90025e1d\": not found" May 7 23:39:15.071673 kubelet[2691]: I0507 23:39:15.071664 2691 scope.go:117] "RemoveContainer" containerID="9ae889fddb2c7f4377610f86836e153f46b2fd4265da92b8a2a0ad5b506b2abc" May 7 23:39:15.071866 containerd[1459]: time="2025-05-07T23:39:15.071823994Z" level=error msg="ContainerStatus for \"9ae889fddb2c7f4377610f86836e153f46b2fd4265da92b8a2a0ad5b506b2abc\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"9ae889fddb2c7f4377610f86836e153f46b2fd4265da92b8a2a0ad5b506b2abc\": not found" May 7 23:39:15.071981 kubelet[2691]: E0507 23:39:15.071952 2691 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"9ae889fddb2c7f4377610f86836e153f46b2fd4265da92b8a2a0ad5b506b2abc\": not found" 
containerID="9ae889fddb2c7f4377610f86836e153f46b2fd4265da92b8a2a0ad5b506b2abc" May 7 23:39:15.071981 kubelet[2691]: I0507 23:39:15.071968 2691 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"9ae889fddb2c7f4377610f86836e153f46b2fd4265da92b8a2a0ad5b506b2abc"} err="failed to get container status \"9ae889fddb2c7f4377610f86836e153f46b2fd4265da92b8a2a0ad5b506b2abc\": rpc error: code = NotFound desc = an error occurred when try to find container \"9ae889fddb2c7f4377610f86836e153f46b2fd4265da92b8a2a0ad5b506b2abc\": not found" May 7 23:39:15.072287 kubelet[2691]: I0507 23:39:15.071983 2691 scope.go:117] "RemoveContainer" containerID="43c7173c5daba87d987b28026506e98203db7deff63a74e354980407f03302f3" May 7 23:39:15.072338 containerd[1459]: time="2025-05-07T23:39:15.072182589Z" level=error msg="ContainerStatus for \"43c7173c5daba87d987b28026506e98203db7deff63a74e354980407f03302f3\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"43c7173c5daba87d987b28026506e98203db7deff63a74e354980407f03302f3\": not found" May 7 23:39:15.072585 kubelet[2691]: E0507 23:39:15.072454 2691 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"43c7173c5daba87d987b28026506e98203db7deff63a74e354980407f03302f3\": not found" containerID="43c7173c5daba87d987b28026506e98203db7deff63a74e354980407f03302f3" May 7 23:39:15.072585 kubelet[2691]: I0507 23:39:15.072521 2691 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"43c7173c5daba87d987b28026506e98203db7deff63a74e354980407f03302f3"} err="failed to get container status \"43c7173c5daba87d987b28026506e98203db7deff63a74e354980407f03302f3\": rpc error: code = NotFound desc = an error occurred when try to find container \"43c7173c5daba87d987b28026506e98203db7deff63a74e354980407f03302f3\": not found" May 7 
23:39:15.072585 kubelet[2691]: I0507 23:39:15.072545 2691 scope.go:117] "RemoveContainer" containerID="643db7dca0395cb4149041a04eac8cfb7bd2acb0390b29531e61e2aa25aea7e2" May 7 23:39:15.072882 containerd[1459]: time="2025-05-07T23:39:15.072850861Z" level=error msg="ContainerStatus for \"643db7dca0395cb4149041a04eac8cfb7bd2acb0390b29531e61e2aa25aea7e2\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"643db7dca0395cb4149041a04eac8cfb7bd2acb0390b29531e61e2aa25aea7e2\": not found" May 7 23:39:15.073054 kubelet[2691]: E0507 23:39:15.072989 2691 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"643db7dca0395cb4149041a04eac8cfb7bd2acb0390b29531e61e2aa25aea7e2\": not found" containerID="643db7dca0395cb4149041a04eac8cfb7bd2acb0390b29531e61e2aa25aea7e2" May 7 23:39:15.073054 kubelet[2691]: I0507 23:39:15.073028 2691 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"643db7dca0395cb4149041a04eac8cfb7bd2acb0390b29531e61e2aa25aea7e2"} err="failed to get container status \"643db7dca0395cb4149041a04eac8cfb7bd2acb0390b29531e61e2aa25aea7e2\": rpc error: code = NotFound desc = an error occurred when try to find container \"643db7dca0395cb4149041a04eac8cfb7bd2acb0390b29531e61e2aa25aea7e2\": not found" May 7 23:39:15.073054 kubelet[2691]: I0507 23:39:15.073044 2691 scope.go:117] "RemoveContainer" containerID="6102021bb03d6e7fa43017463606e78697acaa45566f6b5a79f7ed7f35a53065" May 7 23:39:15.073383 containerd[1459]: time="2025-05-07T23:39:15.073225816Z" level=error msg="ContainerStatus for \"6102021bb03d6e7fa43017463606e78697acaa45566f6b5a79f7ed7f35a53065\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"6102021bb03d6e7fa43017463606e78697acaa45566f6b5a79f7ed7f35a53065\": not found" May 7 23:39:15.073440 kubelet[2691]: E0507 23:39:15.073334 
2691 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"6102021bb03d6e7fa43017463606e78697acaa45566f6b5a79f7ed7f35a53065\": not found" containerID="6102021bb03d6e7fa43017463606e78697acaa45566f6b5a79f7ed7f35a53065" May 7 23:39:15.073440 kubelet[2691]: I0507 23:39:15.073362 2691 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"6102021bb03d6e7fa43017463606e78697acaa45566f6b5a79f7ed7f35a53065"} err="failed to get container status \"6102021bb03d6e7fa43017463606e78697acaa45566f6b5a79f7ed7f35a53065\": rpc error: code = NotFound desc = an error occurred when try to find container \"6102021bb03d6e7fa43017463606e78697acaa45566f6b5a79f7ed7f35a53065\": not found" May 7 23:39:15.115676 kubelet[2691]: I0507 23:39:15.115638 2691 reconciler_common.go:289] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/b8b730ca-7ee2-4c70-bd2a-a61be61f0768-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" May 7 23:39:15.115676 kubelet[2691]: I0507 23:39:15.115666 2691 reconciler_common.go:289] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b8b730ca-7ee2-4c70-bd2a-a61be61f0768-xtables-lock\") on node \"localhost\" DevicePath \"\"" May 7 23:39:15.115676 kubelet[2691]: I0507 23:39:15.115674 2691 reconciler_common.go:289] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/b8b730ca-7ee2-4c70-bd2a-a61be61f0768-cilium-run\") on node \"localhost\" DevicePath \"\"" May 7 23:39:15.115676 kubelet[2691]: I0507 23:39:15.115683 2691 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b8b730ca-7ee2-4c70-bd2a-a61be61f0768-cilium-config-path\") on node \"localhost\" DevicePath \"\"" May 7 23:39:15.115837 kubelet[2691]: I0507 23:39:15.115692 2691 reconciler_common.go:289] "Volume detached 
for volume \"kube-api-access-tnjpt\" (UniqueName: \"kubernetes.io/projected/b8b730ca-7ee2-4c70-bd2a-a61be61f0768-kube-api-access-tnjpt\") on node \"localhost\" DevicePath \"\"" May 7 23:39:15.115837 kubelet[2691]: I0507 23:39:15.115700 2691 reconciler_common.go:289] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/b8b730ca-7ee2-4c70-bd2a-a61be61f0768-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" May 7 23:39:15.115837 kubelet[2691]: I0507 23:39:15.115708 2691 reconciler_common.go:289] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b8b730ca-7ee2-4c70-bd2a-a61be61f0768-lib-modules\") on node \"localhost\" DevicePath \"\"" May 7 23:39:15.115837 kubelet[2691]: I0507 23:39:15.115715 2691 reconciler_common.go:289] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/b8b730ca-7ee2-4c70-bd2a-a61be61f0768-bpf-maps\") on node \"localhost\" DevicePath \"\"" May 7 23:39:15.115837 kubelet[2691]: I0507 23:39:15.115723 2691 reconciler_common.go:289] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b8b730ca-7ee2-4c70-bd2a-a61be61f0768-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" May 7 23:39:15.115837 kubelet[2691]: I0507 23:39:15.115730 2691 reconciler_common.go:289] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/b8b730ca-7ee2-4c70-bd2a-a61be61f0768-cni-path\") on node \"localhost\" DevicePath \"\"" May 7 23:39:15.115837 kubelet[2691]: I0507 23:39:15.115737 2691 reconciler_common.go:289] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/b8b730ca-7ee2-4c70-bd2a-a61be61f0768-hubble-tls\") on node \"localhost\" DevicePath \"\"" May 7 23:39:15.661903 systemd[1]: var-lib-kubelet-pods-00f5f149\x2da9c3\x2d486a\x2db933\x2de2e96be0e46f-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d2phvx.mount: Deactivated successfully. 
May 7 23:39:15.662011 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c7fa7b54a07bde41e294d42f593ad2a0925fecc511568d4fe196e3ef6563c628-rootfs.mount: Deactivated successfully. May 7 23:39:15.662067 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-c7fa7b54a07bde41e294d42f593ad2a0925fecc511568d4fe196e3ef6563c628-shm.mount: Deactivated successfully. May 7 23:39:15.662129 systemd[1]: var-lib-kubelet-pods-b8b730ca\x2d7ee2\x2d4c70\x2dbd2a\x2da61be61f0768-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dtnjpt.mount: Deactivated successfully. May 7 23:39:15.662209 systemd[1]: var-lib-kubelet-pods-b8b730ca\x2d7ee2\x2d4c70\x2dbd2a\x2da61be61f0768-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. May 7 23:39:15.662259 systemd[1]: var-lib-kubelet-pods-b8b730ca\x2d7ee2\x2d4c70\x2dbd2a\x2da61be61f0768-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. May 7 23:39:15.965273 kubelet[2691]: I0507 23:39:15.965211 2691 setters.go:580] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-05-07T23:39:15Z","lastTransitionTime":"2025-05-07T23:39:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} May 7 23:39:16.594744 sshd[4376]: Connection closed by 10.0.0.1 port 40968 May 7 23:39:16.596111 sshd-session[4373]: pam_unix(sshd:session): session closed for user core May 7 23:39:16.602844 systemd[1]: sshd@25-10.0.0.15:22-10.0.0.1:40968.service: Deactivated successfully. May 7 23:39:16.604613 systemd[1]: session-26.scope: Deactivated successfully. May 7 23:39:16.604893 systemd[1]: session-26.scope: Consumed 1.941s CPU time, 26.6M memory peak. May 7 23:39:16.605437 systemd-logind[1447]: Session 26 logged out. Waiting for processes to exit. 
May 7 23:39:16.614665 systemd[1]: Started sshd@26-10.0.0.15:22-10.0.0.1:38536.service - OpenSSH per-connection server daemon (10.0.0.1:38536). May 7 23:39:16.615813 systemd-logind[1447]: Removed session 26. May 7 23:39:16.656551 sshd[4538]: Accepted publickey for core from 10.0.0.1 port 38536 ssh2: RSA SHA256:7X/0GL6Wfz1DCN37/GHlHSZkyG3w/l6TYxwfIGZEYGQ May 7 23:39:16.658060 sshd-session[4538]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 7 23:39:16.662999 systemd-logind[1447]: New session 27 of user core. May 7 23:39:16.674312 systemd[1]: Started session-27.scope - Session 27 of User core. May 7 23:39:16.821826 kubelet[2691]: I0507 23:39:16.821777 2691 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="00f5f149-a9c3-486a-b933-e2e96be0e46f" path="/var/lib/kubelet/pods/00f5f149-a9c3-486a-b933-e2e96be0e46f/volumes" May 7 23:39:16.822227 kubelet[2691]: I0507 23:39:16.822207 2691 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b8b730ca-7ee2-4c70-bd2a-a61be61f0768" path="/var/lib/kubelet/pods/b8b730ca-7ee2-4c70-bd2a-a61be61f0768/volumes" May 7 23:39:17.444114 sshd[4541]: Connection closed by 10.0.0.1 port 38536 May 7 23:39:17.445344 sshd-session[4538]: pam_unix(sshd:session): session closed for user core May 7 23:39:17.452948 systemd[1]: sshd@26-10.0.0.15:22-10.0.0.1:38536.service: Deactivated successfully. May 7 23:39:17.458353 systemd[1]: session-27.scope: Deactivated successfully. 
May 7 23:39:17.460408 kubelet[2691]: I0507 23:39:17.460229 2691 topology_manager.go:215] "Topology Admit Handler" podUID="8422a953-e178-4bc3-b7f8-adaed4cff7a0" podNamespace="kube-system" podName="cilium-d9kmx" May 7 23:39:17.460408 kubelet[2691]: E0507 23:39:17.460358 2691 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="b8b730ca-7ee2-4c70-bd2a-a61be61f0768" containerName="mount-cgroup" May 7 23:39:17.460408 kubelet[2691]: E0507 23:39:17.460369 2691 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="b8b730ca-7ee2-4c70-bd2a-a61be61f0768" containerName="mount-bpf-fs" May 7 23:39:17.460408 kubelet[2691]: E0507 23:39:17.460376 2691 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="b8b730ca-7ee2-4c70-bd2a-a61be61f0768" containerName="clean-cilium-state" May 7 23:39:17.460408 kubelet[2691]: E0507 23:39:17.460382 2691 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="b8b730ca-7ee2-4c70-bd2a-a61be61f0768" containerName="apply-sysctl-overwrites" May 7 23:39:17.460408 kubelet[2691]: E0507 23:39:17.460388 2691 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="00f5f149-a9c3-486a-b933-e2e96be0e46f" containerName="cilium-operator" May 7 23:39:17.460408 kubelet[2691]: E0507 23:39:17.460396 2691 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="b8b730ca-7ee2-4c70-bd2a-a61be61f0768" containerName="cilium-agent" May 7 23:39:17.460408 kubelet[2691]: I0507 23:39:17.460419 2691 memory_manager.go:354] "RemoveStaleState removing state" podUID="b8b730ca-7ee2-4c70-bd2a-a61be61f0768" containerName="cilium-agent" May 7 23:39:17.460818 kubelet[2691]: I0507 23:39:17.460426 2691 memory_manager.go:354] "RemoveStaleState removing state" podUID="00f5f149-a9c3-486a-b933-e2e96be0e46f" containerName="cilium-operator" May 7 23:39:17.462380 systemd-logind[1447]: Session 27 logged out. Waiting for processes to exit. 
May 7 23:39:17.468832 systemd[1]: Started sshd@27-10.0.0.15:22-10.0.0.1:38552.service - OpenSSH per-connection server daemon (10.0.0.1:38552).
May 7 23:39:17.471720 systemd-logind[1447]: Removed session 27.
May 7 23:39:17.486250 systemd[1]: Created slice kubepods-burstable-pod8422a953_e178_4bc3_b7f8_adaed4cff7a0.slice - libcontainer container kubepods-burstable-pod8422a953_e178_4bc3_b7f8_adaed4cff7a0.slice.
May 7 23:39:17.531699 sshd[4552]: Accepted publickey for core from 10.0.0.1 port 38552 ssh2: RSA SHA256:7X/0GL6Wfz1DCN37/GHlHSZkyG3w/l6TYxwfIGZEYGQ
May 7 23:39:17.533079 sshd-session[4552]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 7 23:39:17.537103 systemd-logind[1447]: New session 28 of user core.
May 7 23:39:17.543315 systemd[1]: Started session-28.scope - Session 28 of User core.
May 7 23:39:17.592544 sshd[4556]: Connection closed by 10.0.0.1 port 38552
May 7 23:39:17.593049 sshd-session[4552]: pam_unix(sshd:session): session closed for user core
May 7 23:39:17.604516 systemd[1]: sshd@27-10.0.0.15:22-10.0.0.1:38552.service: Deactivated successfully.
May 7 23:39:17.606438 systemd[1]: session-28.scope: Deactivated successfully.
May 7 23:39:17.607977 systemd-logind[1447]: Session 28 logged out. Waiting for processes to exit.
May 7 23:39:17.613435 systemd[1]: Started sshd@28-10.0.0.15:22-10.0.0.1:38560.service - OpenSSH per-connection server daemon (10.0.0.1:38560).
May 7 23:39:17.614930 systemd-logind[1447]: Removed session 28.
May 7 23:39:17.629183 kubelet[2691]: I0507 23:39:17.628853 2691 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/8422a953-e178-4bc3-b7f8-adaed4cff7a0-cilium-cgroup\") pod \"cilium-d9kmx\" (UID: \"8422a953-e178-4bc3-b7f8-adaed4cff7a0\") " pod="kube-system/cilium-d9kmx"
May 7 23:39:17.629183 kubelet[2691]: I0507 23:39:17.628887 2691 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8422a953-e178-4bc3-b7f8-adaed4cff7a0-xtables-lock\") pod \"cilium-d9kmx\" (UID: \"8422a953-e178-4bc3-b7f8-adaed4cff7a0\") " pod="kube-system/cilium-d9kmx"
May 7 23:39:17.629183 kubelet[2691]: I0507 23:39:17.628906 2691 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/8422a953-e178-4bc3-b7f8-adaed4cff7a0-clustermesh-secrets\") pod \"cilium-d9kmx\" (UID: \"8422a953-e178-4bc3-b7f8-adaed4cff7a0\") " pod="kube-system/cilium-d9kmx"
May 7 23:39:17.629183 kubelet[2691]: I0507 23:39:17.628921 2691 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/8422a953-e178-4bc3-b7f8-adaed4cff7a0-hubble-tls\") pod \"cilium-d9kmx\" (UID: \"8422a953-e178-4bc3-b7f8-adaed4cff7a0\") " pod="kube-system/cilium-d9kmx"
May 7 23:39:17.629183 kubelet[2691]: I0507 23:39:17.628937 2691 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/8422a953-e178-4bc3-b7f8-adaed4cff7a0-bpf-maps\") pod \"cilium-d9kmx\" (UID: \"8422a953-e178-4bc3-b7f8-adaed4cff7a0\") " pod="kube-system/cilium-d9kmx"
May 7 23:39:17.629183 kubelet[2691]: I0507 23:39:17.628952 2691 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/8422a953-e178-4bc3-b7f8-adaed4cff7a0-cilium-config-path\") pod \"cilium-d9kmx\" (UID: \"8422a953-e178-4bc3-b7f8-adaed4cff7a0\") " pod="kube-system/cilium-d9kmx"
May 7 23:39:17.629373 kubelet[2691]: I0507 23:39:17.628966 2691 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/8422a953-e178-4bc3-b7f8-adaed4cff7a0-cilium-ipsec-secrets\") pod \"cilium-d9kmx\" (UID: \"8422a953-e178-4bc3-b7f8-adaed4cff7a0\") " pod="kube-system/cilium-d9kmx"
May 7 23:39:17.629373 kubelet[2691]: I0507 23:39:17.628982 2691 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/8422a953-e178-4bc3-b7f8-adaed4cff7a0-cni-path\") pod \"cilium-d9kmx\" (UID: \"8422a953-e178-4bc3-b7f8-adaed4cff7a0\") " pod="kube-system/cilium-d9kmx"
May 7 23:39:17.629373 kubelet[2691]: I0507 23:39:17.628996 2691 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ds2jc\" (UniqueName: \"kubernetes.io/projected/8422a953-e178-4bc3-b7f8-adaed4cff7a0-kube-api-access-ds2jc\") pod \"cilium-d9kmx\" (UID: \"8422a953-e178-4bc3-b7f8-adaed4cff7a0\") " pod="kube-system/cilium-d9kmx"
May 7 23:39:17.629373 kubelet[2691]: I0507 23:39:17.629014 2691 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8422a953-e178-4bc3-b7f8-adaed4cff7a0-lib-modules\") pod \"cilium-d9kmx\" (UID: \"8422a953-e178-4bc3-b7f8-adaed4cff7a0\") " pod="kube-system/cilium-d9kmx"
May 7 23:39:17.629373 kubelet[2691]: I0507 23:39:17.629032 2691 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/8422a953-e178-4bc3-b7f8-adaed4cff7a0-cilium-run\") pod \"cilium-d9kmx\" (UID: \"8422a953-e178-4bc3-b7f8-adaed4cff7a0\") " pod="kube-system/cilium-d9kmx"
May 7 23:39:17.629373 kubelet[2691]: I0507 23:39:17.629047 2691 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/8422a953-e178-4bc3-b7f8-adaed4cff7a0-etc-cni-netd\") pod \"cilium-d9kmx\" (UID: \"8422a953-e178-4bc3-b7f8-adaed4cff7a0\") " pod="kube-system/cilium-d9kmx"
May 7 23:39:17.629502 kubelet[2691]: I0507 23:39:17.629064 2691 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/8422a953-e178-4bc3-b7f8-adaed4cff7a0-host-proc-sys-net\") pod \"cilium-d9kmx\" (UID: \"8422a953-e178-4bc3-b7f8-adaed4cff7a0\") " pod="kube-system/cilium-d9kmx"
May 7 23:39:17.629502 kubelet[2691]: I0507 23:39:17.629078 2691 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/8422a953-e178-4bc3-b7f8-adaed4cff7a0-hostproc\") pod \"cilium-d9kmx\" (UID: \"8422a953-e178-4bc3-b7f8-adaed4cff7a0\") " pod="kube-system/cilium-d9kmx"
May 7 23:39:17.629502 kubelet[2691]: I0507 23:39:17.629092 2691 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/8422a953-e178-4bc3-b7f8-adaed4cff7a0-host-proc-sys-kernel\") pod \"cilium-d9kmx\" (UID: \"8422a953-e178-4bc3-b7f8-adaed4cff7a0\") " pod="kube-system/cilium-d9kmx"
May 7 23:39:17.654223 sshd[4563]: Accepted publickey for core from 10.0.0.1 port 38560 ssh2: RSA SHA256:7X/0GL6Wfz1DCN37/GHlHSZkyG3w/l6TYxwfIGZEYGQ
May 7 23:39:17.655432 sshd-session[4563]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 7 23:39:17.660845 systemd-logind[1447]: New session 29 of user core.
May 7 23:39:17.669295 systemd[1]: Started session-29.scope - Session 29 of User core.
May 7 23:39:17.790438 containerd[1459]: time="2025-05-07T23:39:17.790332186Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-d9kmx,Uid:8422a953-e178-4bc3-b7f8-adaed4cff7a0,Namespace:kube-system,Attempt:0,}"
May 7 23:39:17.809175 containerd[1459]: time="2025-05-07T23:39:17.808951869Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 7 23:39:17.809175 containerd[1459]: time="2025-05-07T23:39:17.809019108Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 7 23:39:17.809175 containerd[1459]: time="2025-05-07T23:39:17.809031548Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 7 23:39:17.809175 containerd[1459]: time="2025-05-07T23:39:17.809117387Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 7 23:39:17.825321 systemd[1]: Started cri-containerd-34a6cb702ad9d9dcbc9c425f1b5064b49c3bb26b61511ef843021ed9521c3eb9.scope - libcontainer container 34a6cb702ad9d9dcbc9c425f1b5064b49c3bb26b61511ef843021ed9521c3eb9.
May 7 23:39:17.856678 containerd[1459]: time="2025-05-07T23:39:17.856635623Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-d9kmx,Uid:8422a953-e178-4bc3-b7f8-adaed4cff7a0,Namespace:kube-system,Attempt:0,} returns sandbox id \"34a6cb702ad9d9dcbc9c425f1b5064b49c3bb26b61511ef843021ed9521c3eb9\""
May 7 23:39:17.859961 containerd[1459]: time="2025-05-07T23:39:17.859232350Z" level=info msg="CreateContainer within sandbox \"34a6cb702ad9d9dcbc9c425f1b5064b49c3bb26b61511ef843021ed9521c3eb9\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
May 7 23:39:17.868001 containerd[1459]: time="2025-05-07T23:39:17.867961239Z" level=info msg="CreateContainer within sandbox \"34a6cb702ad9d9dcbc9c425f1b5064b49c3bb26b61511ef843021ed9521c3eb9\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"1ddd67c6d3092055708975e2f68f4ff54e89ec7f7b0d0f8603de5d02be153f59\""
May 7 23:39:17.868440 containerd[1459]: time="2025-05-07T23:39:17.868405153Z" level=info msg="StartContainer for \"1ddd67c6d3092055708975e2f68f4ff54e89ec7f7b0d0f8603de5d02be153f59\""
May 7 23:39:17.888294 systemd[1]: Started cri-containerd-1ddd67c6d3092055708975e2f68f4ff54e89ec7f7b0d0f8603de5d02be153f59.scope - libcontainer container 1ddd67c6d3092055708975e2f68f4ff54e89ec7f7b0d0f8603de5d02be153f59.
May 7 23:39:17.911773 containerd[1459]: time="2025-05-07T23:39:17.911734162Z" level=info msg="StartContainer for \"1ddd67c6d3092055708975e2f68f4ff54e89ec7f7b0d0f8603de5d02be153f59\" returns successfully"
May 7 23:39:17.931977 systemd[1]: cri-containerd-1ddd67c6d3092055708975e2f68f4ff54e89ec7f7b0d0f8603de5d02be153f59.scope: Deactivated successfully.
May 7 23:39:17.959326 containerd[1459]: time="2025-05-07T23:39:17.959263197Z" level=info msg="shim disconnected" id=1ddd67c6d3092055708975e2f68f4ff54e89ec7f7b0d0f8603de5d02be153f59 namespace=k8s.io
May 7 23:39:17.959590 containerd[1459]: time="2025-05-07T23:39:17.959572513Z" level=warning msg="cleaning up after shim disconnected" id=1ddd67c6d3092055708975e2f68f4ff54e89ec7f7b0d0f8603de5d02be153f59 namespace=k8s.io
May 7 23:39:17.959659 containerd[1459]: time="2025-05-07T23:39:17.959646992Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 7 23:39:18.045881 containerd[1459]: time="2025-05-07T23:39:18.045482350Z" level=info msg="CreateContainer within sandbox \"34a6cb702ad9d9dcbc9c425f1b5064b49c3bb26b61511ef843021ed9521c3eb9\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
May 7 23:39:18.055926 containerd[1459]: time="2025-05-07T23:39:18.055811861Z" level=info msg="CreateContainer within sandbox \"34a6cb702ad9d9dcbc9c425f1b5064b49c3bb26b61511ef843021ed9521c3eb9\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"d163922c8e5254c91f9742acdf2297086827ffcbeda593d945291470f50e50bf\""
May 7 23:39:18.056302 containerd[1459]: time="2025-05-07T23:39:18.056271015Z" level=info msg="StartContainer for \"d163922c8e5254c91f9742acdf2297086827ffcbeda593d945291470f50e50bf\""
May 7 23:39:18.078296 systemd[1]: Started cri-containerd-d163922c8e5254c91f9742acdf2297086827ffcbeda593d945291470f50e50bf.scope - libcontainer container d163922c8e5254c91f9742acdf2297086827ffcbeda593d945291470f50e50bf.
May 7 23:39:18.099722 containerd[1459]: time="2025-05-07T23:39:18.099683473Z" level=info msg="StartContainer for \"d163922c8e5254c91f9742acdf2297086827ffcbeda593d945291470f50e50bf\" returns successfully"
May 7 23:39:18.109040 systemd[1]: cri-containerd-d163922c8e5254c91f9742acdf2297086827ffcbeda593d945291470f50e50bf.scope: Deactivated successfully.
May 7 23:39:18.136690 containerd[1459]: time="2025-05-07T23:39:18.136635051Z" level=info msg="shim disconnected" id=d163922c8e5254c91f9742acdf2297086827ffcbeda593d945291470f50e50bf namespace=k8s.io
May 7 23:39:18.137069 containerd[1459]: time="2025-05-07T23:39:18.136905247Z" level=warning msg="cleaning up after shim disconnected" id=d163922c8e5254c91f9742acdf2297086827ffcbeda593d945291470f50e50bf namespace=k8s.io
May 7 23:39:18.137069 containerd[1459]: time="2025-05-07T23:39:18.136922847Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 7 23:39:19.047861 containerd[1459]: time="2025-05-07T23:39:19.047807952Z" level=info msg="CreateContainer within sandbox \"34a6cb702ad9d9dcbc9c425f1b5064b49c3bb26b61511ef843021ed9521c3eb9\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
May 7 23:39:19.062464 containerd[1459]: time="2025-05-07T23:39:19.062342933Z" level=info msg="CreateContainer within sandbox \"34a6cb702ad9d9dcbc9c425f1b5064b49c3bb26b61511ef843021ed9521c3eb9\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"d782e0f062f416bd0f176467bf2dc31b8f88fcd3c51c42874aff33cdb33ed062\""
May 7 23:39:19.063383 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount971226907.mount: Deactivated successfully.
May 7 23:39:19.063677 containerd[1459]: time="2025-05-07T23:39:19.063635557Z" level=info msg="StartContainer for \"d782e0f062f416bd0f176467bf2dc31b8f88fcd3c51c42874aff33cdb33ed062\""
May 7 23:39:19.095310 systemd[1]: Started cri-containerd-d782e0f062f416bd0f176467bf2dc31b8f88fcd3c51c42874aff33cdb33ed062.scope - libcontainer container d782e0f062f416bd0f176467bf2dc31b8f88fcd3c51c42874aff33cdb33ed062.
May 7 23:39:19.121131 systemd[1]: cri-containerd-d782e0f062f416bd0f176467bf2dc31b8f88fcd3c51c42874aff33cdb33ed062.scope: Deactivated successfully.
May 7 23:39:19.122029 containerd[1459]: time="2025-05-07T23:39:19.121733763Z" level=info msg="StartContainer for \"d782e0f062f416bd0f176467bf2dc31b8f88fcd3c51c42874aff33cdb33ed062\" returns successfully"
May 7 23:39:19.143552 containerd[1459]: time="2025-05-07T23:39:19.143477456Z" level=info msg="shim disconnected" id=d782e0f062f416bd0f176467bf2dc31b8f88fcd3c51c42874aff33cdb33ed062 namespace=k8s.io
May 7 23:39:19.143883 containerd[1459]: time="2025-05-07T23:39:19.143606295Z" level=warning msg="cleaning up after shim disconnected" id=d782e0f062f416bd0f176467bf2dc31b8f88fcd3c51c42874aff33cdb33ed062 namespace=k8s.io
May 7 23:39:19.143883 containerd[1459]: time="2025-05-07T23:39:19.143617215Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 7 23:39:19.734172 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d782e0f062f416bd0f176467bf2dc31b8f88fcd3c51c42874aff33cdb33ed062-rootfs.mount: Deactivated successfully.
May 7 23:39:19.868521 kubelet[2691]: E0507 23:39:19.868484 2691 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
May 7 23:39:20.052919 containerd[1459]: time="2025-05-07T23:39:20.052818616Z" level=info msg="CreateContainer within sandbox \"34a6cb702ad9d9dcbc9c425f1b5064b49c3bb26b61511ef843021ed9521c3eb9\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
May 7 23:39:20.065151 containerd[1459]: time="2025-05-07T23:39:20.065076228Z" level=info msg="CreateContainer within sandbox \"34a6cb702ad9d9dcbc9c425f1b5064b49c3bb26b61511ef843021ed9521c3eb9\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"6ad1d5ef045da44297834d484227c737c8ce8e21950f2c5f52caf3529de2a7dc\""
May 7 23:39:20.066044 containerd[1459]: time="2025-05-07T23:39:20.065918098Z" level=info msg="StartContainer for \"6ad1d5ef045da44297834d484227c737c8ce8e21950f2c5f52caf3529de2a7dc\""
May 7 23:39:20.092290 systemd[1]: Started cri-containerd-6ad1d5ef045da44297834d484227c737c8ce8e21950f2c5f52caf3529de2a7dc.scope - libcontainer container 6ad1d5ef045da44297834d484227c737c8ce8e21950f2c5f52caf3529de2a7dc.
May 7 23:39:20.112255 systemd[1]: cri-containerd-6ad1d5ef045da44297834d484227c737c8ce8e21950f2c5f52caf3529de2a7dc.scope: Deactivated successfully.
May 7 23:39:20.114777 containerd[1459]: time="2025-05-07T23:39:20.114667030Z" level=info msg="StartContainer for \"6ad1d5ef045da44297834d484227c737c8ce8e21950f2c5f52caf3529de2a7dc\" returns successfully"
May 7 23:39:20.133458 containerd[1459]: time="2025-05-07T23:39:20.133392243Z" level=info msg="shim disconnected" id=6ad1d5ef045da44297834d484227c737c8ce8e21950f2c5f52caf3529de2a7dc namespace=k8s.io
May 7 23:39:20.133458 containerd[1459]: time="2025-05-07T23:39:20.133457123Z" level=warning msg="cleaning up after shim disconnected" id=6ad1d5ef045da44297834d484227c737c8ce8e21950f2c5f52caf3529de2a7dc namespace=k8s.io
May 7 23:39:20.133458 containerd[1459]: time="2025-05-07T23:39:20.133466123Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 7 23:39:20.734184 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6ad1d5ef045da44297834d484227c737c8ce8e21950f2c5f52caf3529de2a7dc-rootfs.mount: Deactivated successfully.
May 7 23:39:21.070737 containerd[1459]: time="2025-05-07T23:39:21.070633940Z" level=info msg="CreateContainer within sandbox \"34a6cb702ad9d9dcbc9c425f1b5064b49c3bb26b61511ef843021ed9521c3eb9\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
May 7 23:39:21.090107 containerd[1459]: time="2025-05-07T23:39:21.090052149Z" level=info msg="CreateContainer within sandbox \"34a6cb702ad9d9dcbc9c425f1b5064b49c3bb26b61511ef843021ed9521c3eb9\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"dff5ab123006ec87015c06bf6efd605f0fff7f7e2791716116eef4a3d8e69b99\""
May 7 23:39:21.091177 containerd[1459]: time="2025-05-07T23:39:21.090780181Z" level=info msg="StartContainer for \"dff5ab123006ec87015c06bf6efd605f0fff7f7e2791716116eef4a3d8e69b99\""
May 7 23:39:21.119362 systemd[1]: Started cri-containerd-dff5ab123006ec87015c06bf6efd605f0fff7f7e2791716116eef4a3d8e69b99.scope - libcontainer container dff5ab123006ec87015c06bf6efd605f0fff7f7e2791716116eef4a3d8e69b99.
May 7 23:39:21.147258 containerd[1459]: time="2025-05-07T23:39:21.147218110Z" level=info msg="StartContainer for \"dff5ab123006ec87015c06bf6efd605f0fff7f7e2791716116eef4a3d8e69b99\" returns successfully"
May 7 23:39:21.392283 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce))
May 7 23:39:22.074508 kubelet[2691]: I0507 23:39:22.074440 2691 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-d9kmx" podStartSLOduration=5.074423716 podStartE2EDuration="5.074423716s" podCreationTimestamp="2025-05-07 23:39:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-07 23:39:22.070871837 +0000 UTC m=+87.332592084" watchObservedRunningTime="2025-05-07 23:39:22.074423716 +0000 UTC m=+87.336143963"
May 7 23:39:24.191297 systemd-networkd[1395]: lxc_health: Link UP
May 7 23:39:24.191512 systemd-networkd[1395]: lxc_health: Gained carrier
May 7 23:39:25.890266 systemd-networkd[1395]: lxc_health: Gained IPv6LL
May 7 23:39:26.107976 systemd[1]: run-containerd-runc-k8s.io-dff5ab123006ec87015c06bf6efd605f0fff7f7e2791716116eef4a3d8e69b99-runc.tmJYuk.mount: Deactivated successfully.
May 7 23:39:28.272864 kubelet[2691]: E0507 23:39:28.272826 2691 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:49580->127.0.0.1:40141: write tcp 127.0.0.1:49580->127.0.0.1:40141: write: broken pipe
May 7 23:39:30.399196 sshd[4567]: Connection closed by 10.0.0.1 port 38560
May 7 23:39:30.400592 sshd-session[4563]: pam_unix(sshd:session): session closed for user core
May 7 23:39:30.403069 systemd[1]: sshd@28-10.0.0.15:22-10.0.0.1:38560.service: Deactivated successfully.
May 7 23:39:30.404834 systemd[1]: session-29.scope: Deactivated successfully.
May 7 23:39:30.406208 systemd-logind[1447]: Session 29 logged out. Waiting for processes to exit.
May 7 23:39:30.407300 systemd-logind[1447]: Removed session 29.