Jul 6 23:43:57.856005 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Jul 6 23:43:57.856030 kernel: Linux version 6.12.35-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT Sun Jul 6 21:57:11 -00 2025
Jul 6 23:43:57.856040 kernel: KASLR enabled
Jul 6 23:43:57.856046 kernel: efi: EFI v2.7 by EDK II
Jul 6 23:43:57.856052 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdb228018 ACPI 2.0=0xdb9b8018 RNG=0xdb9b8a18 MEMRESERVE=0xdb21fd18
Jul 6 23:43:57.856057 kernel: random: crng init done
Jul 6 23:43:57.856064 kernel: Kernel is locked down from EFI Secure Boot; see man kernel_lockdown.7
Jul 6 23:43:57.856070 kernel: secureboot: Secure boot enabled
Jul 6 23:43:57.856075 kernel: ACPI: Early table checksum verification disabled
Jul 6 23:43:57.856083 kernel: ACPI: RSDP 0x00000000DB9B8018 000024 (v02 BOCHS )
Jul 6 23:43:57.856089 kernel: ACPI: XSDT 0x00000000DB9B8F18 000064 (v01 BOCHS BXPC 00000001 01000013)
Jul 6 23:43:57.856094 kernel: ACPI: FACP 0x00000000DB9B8B18 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Jul 6 23:43:57.856100 kernel: ACPI: DSDT 0x00000000DB904018 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jul 6 23:43:57.856106 kernel: ACPI: APIC 0x00000000DB9B8C98 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Jul 6 23:43:57.856113 kernel: ACPI: PPTT 0x00000000DB9B8098 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jul 6 23:43:57.856132 kernel: ACPI: GTDT 0x00000000DB9B8818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jul 6 23:43:57.856138 kernel: ACPI: MCFG 0x00000000DB9B8A98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 6 23:43:57.856144 kernel: ACPI: SPCR 0x00000000DB9B8918 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jul 6 23:43:57.856151 kernel: ACPI: DBG2 0x00000000DB9B8998 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Jul 6 23:43:57.856157 kernel: ACPI: IORT 0x00000000DB9B8198 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jul 6 23:43:57.856163 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
Jul 6 23:43:57.856169 kernel: ACPI: Use ACPI SPCR as default console: Yes
Jul 6 23:43:57.856175 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
Jul 6 23:43:57.856181 kernel: NODE_DATA(0) allocated [mem 0xdc737a00-0xdc73efff]
Jul 6 23:43:57.856187 kernel: Zone ranges:
Jul 6 23:43:57.856195 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff]
Jul 6 23:43:57.856205 kernel: DMA32 empty
Jul 6 23:43:57.856212 kernel: Normal empty
Jul 6 23:43:57.856218 kernel: Device empty
Jul 6 23:43:57.856224 kernel: Movable zone start for each node
Jul 6 23:43:57.856230 kernel: Early memory node ranges
Jul 6 23:43:57.856236 kernel: node 0: [mem 0x0000000040000000-0x00000000dbb4ffff]
Jul 6 23:43:57.856242 kernel: node 0: [mem 0x00000000dbb50000-0x00000000dbe7ffff]
Jul 6 23:43:57.856248 kernel: node 0: [mem 0x00000000dbe80000-0x00000000dbe9ffff]
Jul 6 23:43:57.856255 kernel: node 0: [mem 0x00000000dbea0000-0x00000000dbedffff]
Jul 6 23:43:57.856261 kernel: node 0: [mem 0x00000000dbee0000-0x00000000dbf1ffff]
Jul 6 23:43:57.856266 kernel: node 0: [mem 0x00000000dbf20000-0x00000000dbf6ffff]
Jul 6 23:43:57.856275 kernel: node 0: [mem 0x00000000dbf70000-0x00000000dcbfffff]
Jul 6 23:43:57.856281 kernel: node 0: [mem 0x00000000dcc00000-0x00000000dcfdffff]
Jul 6 23:43:57.856287 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff]
Jul 6 23:43:57.856296 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
Jul 6 23:43:57.856302 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
Jul 6 23:43:57.856309 kernel: cma: Reserved 16 MiB at 0x00000000d7a00000 on node -1
Jul 6 23:43:57.856316 kernel: psci: probing for conduit method from ACPI.
Jul 6 23:43:57.856324 kernel: psci: PSCIv1.1 detected in firmware.
Jul 6 23:43:57.856330 kernel: psci: Using standard PSCI v0.2 function IDs
Jul 6 23:43:57.856336 kernel: psci: Trusted OS migration not required
Jul 6 23:43:57.856343 kernel: psci: SMC Calling Convention v1.1
Jul 6 23:43:57.856349 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Jul 6 23:43:57.856356 kernel: percpu: Embedded 33 pages/cpu s98200 r8192 d28776 u135168
Jul 6 23:43:57.856363 kernel: pcpu-alloc: s98200 r8192 d28776 u135168 alloc=33*4096
Jul 6 23:43:57.856369 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
Jul 6 23:43:57.856376 kernel: Detected PIPT I-cache on CPU0
Jul 6 23:43:57.856384 kernel: CPU features: detected: GIC system register CPU interface
Jul 6 23:43:57.856390 kernel: CPU features: detected: Spectre-v4
Jul 6 23:43:57.856396 kernel: CPU features: detected: Spectre-BHB
Jul 6 23:43:57.856403 kernel: CPU features: kernel page table isolation forced ON by KASLR
Jul 6 23:43:57.856409 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Jul 6 23:43:57.856416 kernel: CPU features: detected: ARM erratum 1418040
Jul 6 23:43:57.856422 kernel: CPU features: detected: SSBS not fully self-synchronizing
Jul 6 23:43:57.856429 kernel: alternatives: applying boot alternatives
Jul 6 23:43:57.856436 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=d1bbaf8ae8f23de11dc703e14022523825f85f007c0c35003d7559228cbdda22
Jul 6 23:43:57.856443 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jul 6 23:43:57.856450 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jul 6 23:43:57.856458 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jul 6 23:43:57.856464 kernel: Fallback order for Node 0: 0
Jul 6 23:43:57.856471 kernel: Built 1 zonelists, mobility grouping on. Total pages: 643072
Jul 6 23:43:57.856477 kernel: Policy zone: DMA
Jul 6 23:43:57.856484 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jul 6 23:43:57.856490 kernel: software IO TLB: SWIOTLB bounce buffer size adjusted to 2MB
Jul 6 23:43:57.856497 kernel: software IO TLB: area num 4.
Jul 6 23:43:57.856504 kernel: software IO TLB: SWIOTLB bounce buffer size roundup to 4MB
Jul 6 23:43:57.856510 kernel: software IO TLB: mapped [mem 0x00000000db504000-0x00000000db904000] (4MB)
Jul 6 23:43:57.856517 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Jul 6 23:43:57.856523 kernel: rcu: Preemptible hierarchical RCU implementation.
Jul 6 23:43:57.856530 kernel: rcu: RCU event tracing is enabled.
Jul 6 23:43:57.856538 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Jul 6 23:43:57.856545 kernel: Trampoline variant of Tasks RCU enabled.
Jul 6 23:43:57.856552 kernel: Tracing variant of Tasks RCU enabled.
Jul 6 23:43:57.856558 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jul 6 23:43:57.856573 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Jul 6 23:43:57.856580 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jul 6 23:43:57.856587 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jul 6 23:43:57.856593 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Jul 6 23:43:57.856600 kernel: GICv3: 256 SPIs implemented
Jul 6 23:43:57.856606 kernel: GICv3: 0 Extended SPIs implemented
Jul 6 23:43:57.856612 kernel: Root IRQ handler: gic_handle_irq
Jul 6 23:43:57.856620 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Jul 6 23:43:57.856627 kernel: GICv3: GICD_CTRL.DS=1, SCR_EL3.FIQ=0
Jul 6 23:43:57.856633 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Jul 6 23:43:57.856640 kernel: ITS [mem 0x08080000-0x0809ffff]
Jul 6 23:43:57.856646 kernel: ITS@0x0000000008080000: allocated 8192 Devices @40110000 (indirect, esz 8, psz 64K, shr 1)
Jul 6 23:43:57.856653 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @40120000 (flat, esz 8, psz 64K, shr 1)
Jul 6 23:43:57.856660 kernel: GICv3: using LPI property table @0x0000000040130000
Jul 6 23:43:57.856666 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040140000
Jul 6 23:43:57.856672 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jul 6 23:43:57.856679 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 6 23:43:57.856685 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Jul 6 23:43:57.856692 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Jul 6 23:43:57.856700 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Jul 6 23:43:57.856706 kernel: arm-pv: using stolen time PV
Jul 6 23:43:57.856713 kernel: Console: colour dummy device 80x25
Jul 6 23:43:57.856720 kernel: ACPI: Core revision 20240827
Jul 6 23:43:57.856727 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Jul 6 23:43:57.856734 kernel: pid_max: default: 32768 minimum: 301
Jul 6 23:43:57.856740 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Jul 6 23:43:57.856747 kernel: landlock: Up and running.
Jul 6 23:43:57.856754 kernel: SELinux: Initializing.
Jul 6 23:43:57.856762 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jul 6 23:43:57.856769 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jul 6 23:43:57.856776 kernel: rcu: Hierarchical SRCU implementation.
Jul 6 23:43:57.856782 kernel: rcu: Max phase no-delay instances is 400.
Jul 6 23:43:57.856789 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Jul 6 23:43:57.856796 kernel: Remapping and enabling EFI services.
Jul 6 23:43:57.856802 kernel: smp: Bringing up secondary CPUs ...
Jul 6 23:43:57.856809 kernel: Detected PIPT I-cache on CPU1
Jul 6 23:43:57.856816 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Jul 6 23:43:57.856824 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040150000
Jul 6 23:43:57.856836 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 6 23:43:57.856843 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Jul 6 23:43:57.856852 kernel: Detected PIPT I-cache on CPU2
Jul 6 23:43:57.856859 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
Jul 6 23:43:57.856866 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040160000
Jul 6 23:43:57.856873 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 6 23:43:57.856880 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
Jul 6 23:43:57.856887 kernel: Detected PIPT I-cache on CPU3
Jul 6 23:43:57.856896 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
Jul 6 23:43:57.856903 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040170000
Jul 6 23:43:57.856910 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 6 23:43:57.856917 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
Jul 6 23:43:57.856924 kernel: smp: Brought up 1 node, 4 CPUs
Jul 6 23:43:57.856931 kernel: SMP: Total of 4 processors activated.
Jul 6 23:43:57.856938 kernel: CPU: All CPU(s) started at EL1
Jul 6 23:43:57.856945 kernel: CPU features: detected: 32-bit EL0 Support
Jul 6 23:43:57.856952 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Jul 6 23:43:57.856961 kernel: CPU features: detected: Common not Private translations
Jul 6 23:43:57.856968 kernel: CPU features: detected: CRC32 instructions
Jul 6 23:43:57.856975 kernel: CPU features: detected: Enhanced Virtualization Traps
Jul 6 23:43:57.856982 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Jul 6 23:43:57.856988 kernel: CPU features: detected: LSE atomic instructions
Jul 6 23:43:57.856995 kernel: CPU features: detected: Privileged Access Never
Jul 6 23:43:57.857002 kernel: CPU features: detected: RAS Extension Support
Jul 6 23:43:57.857009 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Jul 6 23:43:57.857016 kernel: alternatives: applying system-wide alternatives
Jul 6 23:43:57.857025 kernel: CPU features: detected: Hardware dirty bit management on CPU0-3
Jul 6 23:43:57.857032 kernel: Memory: 2421860K/2572288K available (11136K kernel code, 2436K rwdata, 9076K rodata, 39488K init, 1038K bss, 128092K reserved, 16384K cma-reserved)
Jul 6 23:43:57.857040 kernel: devtmpfs: initialized
Jul 6 23:43:57.857047 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jul 6 23:43:57.857054 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Jul 6 23:43:57.857061 kernel: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Jul 6 23:43:57.857068 kernel: 0 pages in range for non-PLT usage
Jul 6 23:43:57.857075 kernel: 508432 pages in range for PLT usage
Jul 6 23:43:57.857082 kernel: pinctrl core: initialized pinctrl subsystem
Jul 6 23:43:57.857091 kernel: SMBIOS 3.0.0 present.
Jul 6 23:43:57.857098 kernel: DMI: QEMU KVM Virtual Machine, BIOS unknown 02/02/2022
Jul 6 23:43:57.857105 kernel: DMI: Memory slots populated: 1/1
Jul 6 23:43:57.857112 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jul 6 23:43:57.857119 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Jul 6 23:43:57.857131 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Jul 6 23:43:57.857138 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Jul 6 23:43:57.857145 kernel: audit: initializing netlink subsys (disabled)
Jul 6 23:43:57.857153 kernel: audit: type=2000 audit(0.027:1): state=initialized audit_enabled=0 res=1
Jul 6 23:43:57.857162 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jul 6 23:43:57.857169 kernel: cpuidle: using governor menu
Jul 6 23:43:57.857176 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Jul 6 23:43:57.857183 kernel: ASID allocator initialised with 32768 entries
Jul 6 23:43:57.857192 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jul 6 23:43:57.857200 kernel: Serial: AMBA PL011 UART driver
Jul 6 23:43:57.857209 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jul 6 23:43:57.857218 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Jul 6 23:43:57.857226 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Jul 6 23:43:57.857236 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Jul 6 23:43:57.857243 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jul 6 23:43:57.857250 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Jul 6 23:43:57.857258 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Jul 6 23:43:57.857265 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Jul 6 23:43:57.857272 kernel: ACPI: Added _OSI(Module Device)
Jul 6 23:43:57.857279 kernel: ACPI: Added _OSI(Processor Device)
Jul 6 23:43:57.857286 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jul 6 23:43:57.857293 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jul 6 23:43:57.857301 kernel: ACPI: Interpreter enabled
Jul 6 23:43:57.857308 kernel: ACPI: Using GIC for interrupt routing
Jul 6 23:43:57.857315 kernel: ACPI: MCFG table detected, 1 entries
Jul 6 23:43:57.857322 kernel: ACPI: CPU0 has been hot-added
Jul 6 23:43:57.857329 kernel: ACPI: CPU1 has been hot-added
Jul 6 23:43:57.857336 kernel: ACPI: CPU2 has been hot-added
Jul 6 23:43:57.857343 kernel: ACPI: CPU3 has been hot-added
Jul 6 23:43:57.857351 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Jul 6 23:43:57.857358 kernel: printk: legacy console [ttyAMA0] enabled
Jul 6 23:43:57.857366 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jul 6 23:43:57.857513 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jul 6 23:43:57.857593 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Jul 6 23:43:57.857656 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Jul 6 23:43:57.857726 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Jul 6 23:43:57.857786 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Jul 6 23:43:57.857795 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Jul 6 23:43:57.857805 kernel: PCI host bridge to bus 0000:00
Jul 6 23:43:57.857876 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Jul 6 23:43:57.857932 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Jul 6 23:43:57.857986 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Jul 6 23:43:57.858039 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jul 6 23:43:57.858131 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000 conventional PCI endpoint
Jul 6 23:43:57.858209 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Jul 6 23:43:57.858274 kernel: pci 0000:00:01.0: BAR 0 [io 0x0000-0x001f]
Jul 6 23:43:57.858335 kernel: pci 0000:00:01.0: BAR 1 [mem 0x10000000-0x10000fff]
Jul 6 23:43:57.858395 kernel: pci 0000:00:01.0: BAR 4 [mem 0x8000000000-0x8000003fff 64bit pref]
Jul 6 23:43:57.858455 kernel: pci 0000:00:01.0: BAR 4 [mem 0x8000000000-0x8000003fff 64bit pref]: assigned
Jul 6 23:43:57.858516 kernel: pci 0000:00:01.0: BAR 1 [mem 0x10000000-0x10000fff]: assigned
Jul 6 23:43:57.858672 kernel: pci 0000:00:01.0: BAR 0 [io 0x1000-0x101f]: assigned
Jul 6 23:43:57.858748 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Jul 6 23:43:57.858803 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Jul 6 23:43:57.858862 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Jul 6 23:43:57.858871 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Jul 6 23:43:57.858878 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Jul 6 23:43:57.858885 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Jul 6 23:43:57.858893 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Jul 6 23:43:57.858900 kernel: iommu: Default domain type: Translated
Jul 6 23:43:57.858907 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Jul 6 23:43:57.858916 kernel: efivars: Registered efivars operations
Jul 6 23:43:57.858923 kernel: vgaarb: loaded
Jul 6 23:43:57.858930 kernel: clocksource: Switched to clocksource arch_sys_counter
Jul 6 23:43:57.858937 kernel: VFS: Disk quotas dquot_6.6.0
Jul 6 23:43:57.858945 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jul 6 23:43:57.858952 kernel: pnp: PnP ACPI init
Jul 6 23:43:57.859025 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Jul 6 23:43:57.859037 kernel: pnp: PnP ACPI: found 1 devices
Jul 6 23:43:57.859046 kernel: NET: Registered PF_INET protocol family
Jul 6 23:43:57.859053 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jul 6 23:43:57.859060 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jul 6 23:43:57.859067 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jul 6 23:43:57.859075 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jul 6 23:43:57.859082 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jul 6 23:43:57.859089 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jul 6 23:43:57.859096 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jul 6 23:43:57.859103 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jul 6 23:43:57.859111 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jul 6 23:43:57.859118 kernel: PCI: CLS 0 bytes, default 64
Jul 6 23:43:57.859136 kernel: kvm [1]: HYP mode not available
Jul 6 23:43:57.859143 kernel: Initialise system trusted keyrings
Jul 6 23:43:57.859151 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jul 6 23:43:57.859158 kernel: Key type asymmetric registered
Jul 6 23:43:57.859165 kernel: Asymmetric key parser 'x509' registered
Jul 6 23:43:57.859172 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Jul 6 23:43:57.859180 kernel: io scheduler mq-deadline registered
Jul 6 23:43:57.859190 kernel: io scheduler kyber registered
Jul 6 23:43:57.859197 kernel: io scheduler bfq registered
Jul 6 23:43:57.859204 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Jul 6 23:43:57.859211 kernel: ACPI: button: Power Button [PWRB]
Jul 6 23:43:57.859219 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Jul 6 23:43:57.859292 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
Jul 6 23:43:57.859302 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jul 6 23:43:57.859309 kernel: thunder_xcv, ver 1.0
Jul 6 23:43:57.859316 kernel: thunder_bgx, ver 1.0
Jul 6 23:43:57.859325 kernel: nicpf, ver 1.0
Jul 6 23:43:57.859332 kernel: nicvf, ver 1.0
Jul 6 23:43:57.859406 kernel: rtc-efi rtc-efi.0: registered as rtc0
Jul 6 23:43:57.859478 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-07-06T23:43:57 UTC (1751845437)
Jul 6 23:43:57.859487 kernel: hid: raw HID events driver (C) Jiri Kosina
Jul 6 23:43:57.859495 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 (0,8000003f) counters available
Jul 6 23:43:57.859502 kernel: watchdog: NMI not fully supported
Jul 6 23:43:57.859509 kernel: watchdog: Hard watchdog permanently disabled
Jul 6 23:43:57.859518 kernel: NET: Registered PF_INET6 protocol family
Jul 6 23:43:57.859526 kernel: Segment Routing with IPv6
Jul 6 23:43:57.859533 kernel: In-situ OAM (IOAM) with IPv6
Jul 6 23:43:57.859540 kernel: NET: Registered PF_PACKET protocol family
Jul 6 23:43:57.859547 kernel: Key type dns_resolver registered
Jul 6 23:43:57.859554 kernel: registered taskstats version 1
Jul 6 23:43:57.859561 kernel: Loading compiled-in X.509 certificates
Jul 6 23:43:57.859652 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.35-flatcar: f8c1d02496b1c3f2ac4a0c4b5b2a55d3dc0ca718'
Jul 6 23:43:57.859660 kernel: Demotion targets for Node 0: null
Jul 6 23:43:57.859670 kernel: Key type .fscrypt registered
Jul 6 23:43:57.859677 kernel: Key type fscrypt-provisioning registered
Jul 6 23:43:57.859685 kernel: ima: No TPM chip found, activating TPM-bypass!
Jul 6 23:43:57.859692 kernel: ima: Allocated hash algorithm: sha1
Jul 6 23:43:57.859699 kernel: ima: No architecture policies found
Jul 6 23:43:57.859706 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Jul 6 23:43:57.859713 kernel: clk: Disabling unused clocks
Jul 6 23:43:57.859720 kernel: PM: genpd: Disabling unused power domains
Jul 6 23:43:57.859727 kernel: Warning: unable to open an initial console.
Jul 6 23:43:57.859736 kernel: Freeing unused kernel memory: 39488K
Jul 6 23:43:57.859743 kernel: Run /init as init process
Jul 6 23:43:57.859750 kernel: with arguments:
Jul 6 23:43:57.859757 kernel: /init
Jul 6 23:43:57.859764 kernel: with environment:
Jul 6 23:43:57.859771 kernel: HOME=/
Jul 6 23:43:57.859778 kernel: TERM=linux
Jul 6 23:43:57.859786 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jul 6 23:43:57.859793 systemd[1]: Successfully made /usr/ read-only.
Jul 6 23:43:57.859806 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Jul 6 23:43:57.859814 systemd[1]: Detected virtualization kvm.
Jul 6 23:43:57.859821 systemd[1]: Detected architecture arm64.
Jul 6 23:43:57.859828 systemd[1]: Running in initrd.
Jul 6 23:43:57.859836 systemd[1]: No hostname configured, using default hostname.
Jul 6 23:43:57.859844 systemd[1]: Hostname set to .
Jul 6 23:43:57.859851 systemd[1]: Initializing machine ID from VM UUID.
Jul 6 23:43:57.859860 systemd[1]: Queued start job for default target initrd.target.
Jul 6 23:43:57.859868 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 6 23:43:57.859876 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 6 23:43:57.859884 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jul 6 23:43:57.859892 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jul 6 23:43:57.859899 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jul 6 23:43:57.859908 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jul 6 23:43:57.859918 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jul 6 23:43:57.859926 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jul 6 23:43:57.859933 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 6 23:43:57.859941 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jul 6 23:43:57.859949 systemd[1]: Reached target paths.target - Path Units.
Jul 6 23:43:57.859957 systemd[1]: Reached target slices.target - Slice Units.
Jul 6 23:43:57.859964 systemd[1]: Reached target swap.target - Swaps.
Jul 6 23:43:57.859972 systemd[1]: Reached target timers.target - Timer Units.
Jul 6 23:43:57.859981 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jul 6 23:43:57.859989 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jul 6 23:43:57.859996 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jul 6 23:43:57.860004 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Jul 6 23:43:57.860012 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jul 6 23:43:57.860020 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jul 6 23:43:57.860027 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 6 23:43:57.860035 systemd[1]: Reached target sockets.target - Socket Units.
Jul 6 23:43:57.860043 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jul 6 23:43:57.860052 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jul 6 23:43:57.860060 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jul 6 23:43:57.860068 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Jul 6 23:43:57.860076 systemd[1]: Starting systemd-fsck-usr.service...
Jul 6 23:43:57.860084 systemd[1]: Starting systemd-journald.service - Journal Service...
Jul 6 23:43:57.860092 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jul 6 23:43:57.860099 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 6 23:43:57.860107 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jul 6 23:43:57.860117 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 6 23:43:57.860132 systemd[1]: Finished systemd-fsck-usr.service.
Jul 6 23:43:57.860164 systemd-journald[243]: Collecting audit messages is disabled.
Jul 6 23:43:57.860186 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jul 6 23:43:57.860195 systemd-journald[243]: Journal started
Jul 6 23:43:57.860214 systemd-journald[243]: Runtime Journal (/run/log/journal/cd4b57041da14abe82fd695838f0062e) is 6M, max 48.5M, 42.4M free.
Jul 6 23:43:57.853381 systemd-modules-load[246]: Inserted module 'overlay'
Jul 6 23:43:57.862240 systemd[1]: Started systemd-journald.service - Journal Service.
Jul 6 23:43:57.863849 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 6 23:43:57.867852 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jul 6 23:43:57.872276 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jul 6 23:43:57.871659 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jul 6 23:43:57.874665 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jul 6 23:43:57.877940 systemd-modules-load[246]: Inserted module 'br_netfilter'
Jul 6 23:43:57.878911 kernel: Bridge firewalling registered
Jul 6 23:43:57.879463 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jul 6 23:43:57.881278 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jul 6 23:43:57.884655 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jul 6 23:43:57.885373 systemd-tmpfiles[266]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Jul 6 23:43:57.890133 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jul 6 23:43:57.891811 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 6 23:43:57.898673 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jul 6 23:43:57.901928 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 6 23:43:57.904256 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jul 6 23:43:57.906949 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jul 6 23:43:57.933602 dracut-cmdline[289]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=d1bbaf8ae8f23de11dc703e14022523825f85f007c0c35003d7559228cbdda22
Jul 6 23:43:57.950238 systemd-resolved[290]: Positive Trust Anchors:
Jul 6 23:43:57.950257 systemd-resolved[290]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jul 6 23:43:57.950289 systemd-resolved[290]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jul 6 23:43:57.956416 systemd-resolved[290]: Defaulting to hostname 'linux'.
Jul 6 23:43:57.957850 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jul 6 23:43:57.960069 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jul 6 23:43:58.023582 kernel: SCSI subsystem initialized
Jul 6 23:43:58.027594 kernel: Loading iSCSI transport class v2.0-870.
Jul 6 23:43:58.035615 kernel: iscsi: registered transport (tcp)
Jul 6 23:43:58.049607 kernel: iscsi: registered transport (qla4xxx)
Jul 6 23:43:58.049669 kernel: QLogic iSCSI HBA Driver
Jul 6 23:43:58.070848 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jul 6 23:43:58.086436 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jul 6 23:43:58.088010 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jul 6 23:43:58.149056 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jul 6 23:43:58.151429 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jul 6 23:43:58.220608 kernel: raid6: neonx8 gen() 15641 MB/s
Jul 6 23:43:58.237597 kernel: raid6: neonx4 gen() 15795 MB/s
Jul 6 23:43:58.254591 kernel: raid6: neonx2 gen() 13019 MB/s
Jul 6 23:43:58.271607 kernel: raid6: neonx1 gen() 10311 MB/s
Jul 6 23:43:58.288628 kernel: raid6: int64x8 gen() 6795 MB/s
Jul 6 23:43:58.305607 kernel: raid6: int64x4 gen() 7117 MB/s
Jul 6 23:43:58.322595 kernel: raid6: int64x2 gen() 5961 MB/s
Jul 6 23:43:58.339824 kernel: raid6: int64x1 gen() 5043 MB/s
Jul 6 23:43:58.339847 kernel: raid6: using algorithm neonx4 gen() 15795 MB/s
Jul 6 23:43:58.357805 kernel: raid6: .... xor() 12276 MB/s, rmw enabled
Jul 6 23:43:58.357848 kernel: raid6: using neon recovery algorithm
Jul 6 23:43:58.364599 kernel: xor: measuring software checksum speed
Jul 6 23:43:58.365849 kernel: 8regs : 18082 MB/sec
Jul 6 23:43:58.365873 kernel: 32regs : 21653 MB/sec
Jul 6 23:43:58.367104 kernel: arm64_neon : 27851 MB/sec
Jul 6 23:43:58.367118 kernel: xor: using function: arm64_neon (27851 MB/sec)
Jul 6 23:43:58.424172 kernel: Btrfs loaded, zoned=no, fsverity=no
Jul 6 23:43:58.431056 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jul 6 23:43:58.433928 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 6 23:43:58.471209 systemd-udevd[499]: Using default interface naming scheme 'v255'.
Jul 6 23:43:58.479068 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 6 23:43:58.481285 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jul 6 23:43:58.508611 dracut-pre-trigger[504]: rd.md=0: removing MD RAID activation
Jul 6 23:43:58.536677 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jul 6 23:43:58.539503 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jul 6 23:43:58.589537 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 6 23:43:58.593368 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jul 6 23:43:58.642230 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues
Jul 6 23:43:58.642417 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Jul 6 23:43:58.645931 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jul 6 23:43:58.646059 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jul 6 23:43:58.657465 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jul 6 23:43:58.657491 kernel: GPT:9289727 != 19775487 Jul 6 23:43:58.657490 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jul 6 23:43:58.661323 kernel: GPT:Alternate GPT header not at the end of the disk. Jul 6 23:43:58.661344 kernel: GPT:9289727 != 19775487 Jul 6 23:43:58.661353 kernel: GPT: Use GNU Parted to correct GPT errors. Jul 6 23:43:58.661362 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jul 6 23:43:58.661918 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 6 23:43:58.690836 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Jul 6 23:43:58.692414 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jul 6 23:43:58.694527 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jul 6 23:43:58.713385 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Jul 6 23:43:58.719825 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Jul 6 23:43:58.721130 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Jul 6 23:43:58.730502 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jul 6 23:43:58.731835 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jul 6 23:43:58.733892 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 6 23:43:58.735988 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jul 6 23:43:58.738810 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jul 6 23:43:58.740905 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jul 6 23:43:58.765657 disk-uuid[592]: Primary Header is updated. 
Jul 6 23:43:58.765657 disk-uuid[592]: Secondary Entries is updated. Jul 6 23:43:58.765657 disk-uuid[592]: Secondary Header is updated. Jul 6 23:43:58.770493 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jul 6 23:43:58.771110 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jul 6 23:43:59.785624 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jul 6 23:43:59.788608 disk-uuid[597]: The operation has completed successfully. Jul 6 23:43:59.820516 systemd[1]: disk-uuid.service: Deactivated successfully. Jul 6 23:43:59.821737 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jul 6 23:43:59.842603 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jul 6 23:43:59.871153 sh[612]: Success Jul 6 23:43:59.886651 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jul 6 23:43:59.886710 kernel: device-mapper: uevent: version 1.0.3 Jul 6 23:43:59.888604 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Jul 6 23:43:59.902840 kernel: device-mapper: verity: sha256 using shash "sha256-ce" Jul 6 23:43:59.930243 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jul 6 23:43:59.934518 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jul 6 23:43:59.951939 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
Jul 6 23:43:59.959916 kernel: BTRFS info: 'norecovery' is for compatibility only, recommended to use 'rescue=nologreplay' Jul 6 23:43:59.959979 kernel: BTRFS: device fsid 2cfafe0a-eb24-4e1d-b9c9-dec7de7e4c4d devid 1 transid 38 /dev/mapper/usr (253:0) scanned by mount (624) Jul 6 23:43:59.961733 kernel: BTRFS info (device dm-0): first mount of filesystem 2cfafe0a-eb24-4e1d-b9c9-dec7de7e4c4d Jul 6 23:43:59.962746 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Jul 6 23:43:59.963582 kernel: BTRFS info (device dm-0): using free-space-tree Jul 6 23:43:59.967479 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jul 6 23:43:59.968946 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Jul 6 23:43:59.970445 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jul 6 23:43:59.971315 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jul 6 23:43:59.973065 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jul 6 23:44:00.006535 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 (254:6) scanned by mount (653) Jul 6 23:44:00.006600 kernel: BTRFS info (device vda6): first mount of filesystem f2591801-6ba1-4aa7-8261-bdb292e2060d Jul 6 23:44:00.006619 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Jul 6 23:44:00.008616 kernel: BTRFS info (device vda6): using free-space-tree Jul 6 23:44:00.015592 kernel: BTRFS info (device vda6): last unmount of filesystem f2591801-6ba1-4aa7-8261-bdb292e2060d Jul 6 23:44:00.017285 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jul 6 23:44:00.019712 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jul 6 23:44:00.090888 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. 
Jul 6 23:44:00.095940 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jul 6 23:44:00.144264 systemd-networkd[800]: lo: Link UP Jul 6 23:44:00.144277 systemd-networkd[800]: lo: Gained carrier Jul 6 23:44:00.145033 systemd-networkd[800]: Enumeration completed Jul 6 23:44:00.145198 systemd[1]: Started systemd-networkd.service - Network Configuration. Jul 6 23:44:00.147315 systemd-networkd[800]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 6 23:44:00.147319 systemd-networkd[800]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 6 23:44:00.147882 systemd[1]: Reached target network.target - Network. Jul 6 23:44:00.148190 systemd-networkd[800]: eth0: Link UP Jul 6 23:44:00.148193 systemd-networkd[800]: eth0: Gained carrier Jul 6 23:44:00.148202 systemd-networkd[800]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Jul 6 23:44:00.176669 systemd-networkd[800]: eth0: DHCPv4 address 10.0.0.128/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jul 6 23:44:00.177291 ignition[702]: Ignition 2.21.0 Jul 6 23:44:00.177299 ignition[702]: Stage: fetch-offline Jul 6 23:44:00.177339 ignition[702]: no configs at "/usr/lib/ignition/base.d" Jul 6 23:44:00.177351 ignition[702]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 6 23:44:00.177554 ignition[702]: parsed url from cmdline: "" Jul 6 23:44:00.177558 ignition[702]: no config URL provided Jul 6 23:44:00.177562 ignition[702]: reading system config file "/usr/lib/ignition/user.ign" Jul 6 23:44:00.177588 ignition[702]: no config at "/usr/lib/ignition/user.ign" Jul 6 23:44:00.177611 ignition[702]: op(1): [started] loading QEMU firmware config module Jul 6 23:44:00.177615 ignition[702]: op(1): executing: "modprobe" "qemu_fw_cfg" Jul 6 23:44:00.183058 ignition[702]: op(1): [finished] loading QEMU firmware config module Jul 6 23:44:00.223999 ignition[702]: parsing config with SHA512: 80556ccd55e29b0df0c9f4c6719ebb8761f8fec01407aa0107b378d6512c37b9ef950ce43a14878c8313b436e2655c8bdf617b6f645924bdc2de182e7343dcd5 Jul 6 23:44:00.228395 unknown[702]: fetched base config from "system" Jul 6 23:44:00.228407 unknown[702]: fetched user config from "qemu" Jul 6 23:44:00.228788 ignition[702]: fetch-offline: fetch-offline passed Jul 6 23:44:00.228844 ignition[702]: Ignition finished successfully Jul 6 23:44:00.232298 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jul 6 23:44:00.234014 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Jul 6 23:44:00.236905 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... 
Jul 6 23:44:00.277292 ignition[814]: Ignition 2.21.0 Jul 6 23:44:00.277310 ignition[814]: Stage: kargs Jul 6 23:44:00.277452 ignition[814]: no configs at "/usr/lib/ignition/base.d" Jul 6 23:44:00.277461 ignition[814]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 6 23:44:00.278318 ignition[814]: kargs: kargs passed Jul 6 23:44:00.278373 ignition[814]: Ignition finished successfully Jul 6 23:44:00.283508 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jul 6 23:44:00.285762 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jul 6 23:44:00.317653 ignition[821]: Ignition 2.21.0 Jul 6 23:44:00.317669 ignition[821]: Stage: disks Jul 6 23:44:00.317858 ignition[821]: no configs at "/usr/lib/ignition/base.d" Jul 6 23:44:00.317867 ignition[821]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 6 23:44:00.320391 ignition[821]: disks: disks passed Jul 6 23:44:00.322110 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jul 6 23:44:00.320454 ignition[821]: Ignition finished successfully Jul 6 23:44:00.323408 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jul 6 23:44:00.325229 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jul 6 23:44:00.327068 systemd[1]: Reached target local-fs.target - Local File Systems. Jul 6 23:44:00.328964 systemd[1]: Reached target sysinit.target - System Initialization. Jul 6 23:44:00.331152 systemd[1]: Reached target basic.target - Basic System. Jul 6 23:44:00.333937 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jul 6 23:44:00.360284 systemd-fsck[832]: ROOT: clean, 15/553520 files, 52789/553472 blocks Jul 6 23:44:00.385744 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jul 6 23:44:00.388144 systemd[1]: Mounting sysroot.mount - /sysroot... 
Jul 6 23:44:00.462666 kernel: EXT4-fs (vda9): mounted filesystem 8d88df29-f94d-4ab8-8fb6-af875603e6d4 r/w with ordered data mode. Quota mode: none. Jul 6 23:44:00.462701 systemd[1]: Mounted sysroot.mount - /sysroot. Jul 6 23:44:00.463953 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jul 6 23:44:00.466508 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jul 6 23:44:00.468207 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jul 6 23:44:00.469203 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Jul 6 23:44:00.469243 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jul 6 23:44:00.469266 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jul 6 23:44:00.480413 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jul 6 23:44:00.482643 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jul 6 23:44:00.488682 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 (254:6) scanned by mount (840) Jul 6 23:44:00.488715 kernel: BTRFS info (device vda6): first mount of filesystem f2591801-6ba1-4aa7-8261-bdb292e2060d Jul 6 23:44:00.488725 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Jul 6 23:44:00.488735 kernel: BTRFS info (device vda6): using free-space-tree Jul 6 23:44:00.493761 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jul 6 23:44:00.561854 initrd-setup-root[864]: cut: /sysroot/etc/passwd: No such file or directory Jul 6 23:44:00.565343 initrd-setup-root[871]: cut: /sysroot/etc/group: No such file or directory Jul 6 23:44:00.568452 initrd-setup-root[878]: cut: /sysroot/etc/shadow: No such file or directory Jul 6 23:44:00.571563 initrd-setup-root[885]: cut: /sysroot/etc/gshadow: No such file or directory Jul 6 23:44:00.660624 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jul 6 23:44:00.664698 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jul 6 23:44:00.666650 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jul 6 23:44:00.681629 kernel: BTRFS info (device vda6): last unmount of filesystem f2591801-6ba1-4aa7-8261-bdb292e2060d Jul 6 23:44:00.708506 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jul 6 23:44:00.717293 ignition[953]: INFO : Ignition 2.21.0 Jul 6 23:44:00.717293 ignition[953]: INFO : Stage: mount Jul 6 23:44:00.719052 ignition[953]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 6 23:44:00.719052 ignition[953]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 6 23:44:00.721890 ignition[953]: INFO : mount: mount passed Jul 6 23:44:00.721890 ignition[953]: INFO : Ignition finished successfully Jul 6 23:44:00.721624 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jul 6 23:44:00.723994 systemd[1]: Starting ignition-files.service - Ignition (files)... Jul 6 23:44:00.958054 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jul 6 23:44:00.959585 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... 
Jul 6 23:44:00.991587 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 (254:6) scanned by mount (967) Jul 6 23:44:00.994335 kernel: BTRFS info (device vda6): first mount of filesystem f2591801-6ba1-4aa7-8261-bdb292e2060d Jul 6 23:44:00.994351 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Jul 6 23:44:00.994361 kernel: BTRFS info (device vda6): using free-space-tree Jul 6 23:44:00.997752 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jul 6 23:44:01.035066 ignition[984]: INFO : Ignition 2.21.0 Jul 6 23:44:01.035066 ignition[984]: INFO : Stage: files Jul 6 23:44:01.036855 ignition[984]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 6 23:44:01.036855 ignition[984]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 6 23:44:01.036855 ignition[984]: DEBUG : files: compiled without relabeling support, skipping Jul 6 23:44:01.040153 ignition[984]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jul 6 23:44:01.040153 ignition[984]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jul 6 23:44:01.043089 ignition[984]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jul 6 23:44:01.043089 ignition[984]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jul 6 23:44:01.043089 ignition[984]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jul 6 23:44:01.042410 unknown[984]: wrote ssh authorized keys file for user: core Jul 6 23:44:01.048382 ignition[984]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Jul 6 23:44:01.048382 ignition[984]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1 Jul 6 23:44:01.089874 ignition[984]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Jul 6 23:44:01.231411 ignition[984]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Jul 6 23:44:01.231411 ignition[984]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Jul 6 23:44:01.235179 ignition[984]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1 Jul 6 23:44:01.298771 systemd-networkd[800]: eth0: Gained IPv6LL Jul 6 23:44:01.554231 ignition[984]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Jul 6 23:44:01.637299 ignition[984]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Jul 6 23:44:01.637299 ignition[984]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Jul 6 23:44:01.641410 ignition[984]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Jul 6 23:44:01.641410 ignition[984]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Jul 6 23:44:01.641410 ignition[984]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Jul 6 23:44:01.641410 ignition[984]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jul 6 23:44:01.641410 ignition[984]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jul 6 23:44:01.641410 ignition[984]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jul 6 23:44:01.641410 ignition[984]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jul 6 23:44:01.641410 ignition[984]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Jul 6 23:44:01.641410 ignition[984]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jul 6 23:44:01.641410 ignition[984]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw" Jul 6 23:44:01.641410 ignition[984]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw" Jul 6 23:44:01.641410 ignition[984]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw" Jul 6 23:44:01.641410 ignition[984]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-arm64.raw: attempt #1 Jul 6 23:44:02.027898 ignition[984]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Jul 6 23:44:02.693794 ignition[984]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw" Jul 6 23:44:02.693794 ignition[984]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Jul 6 23:44:02.697917 ignition[984]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jul 6 23:44:02.697917 ignition[984]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jul 6 23:44:02.697917 ignition[984]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Jul 6 23:44:02.697917 ignition[984]: INFO : files: op(e): [started] processing unit "coreos-metadata.service" Jul 6 23:44:02.697917 ignition[984]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jul 6 23:44:02.697917 ignition[984]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jul 6 23:44:02.697917 ignition[984]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service" Jul 6 23:44:02.697917 ignition[984]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service" Jul 6 23:44:02.737159 ignition[984]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service" Jul 6 23:44:02.741038 ignition[984]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service" Jul 6 23:44:02.742585 ignition[984]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service" Jul 6 23:44:02.742585 ignition[984]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service" Jul 6 23:44:02.742585 ignition[984]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service" Jul 6 23:44:02.742585 ignition[984]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json" Jul 6 23:44:02.742585 ignition[984]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json" Jul 6 23:44:02.742585 ignition[984]: INFO : files: files passed Jul 6 23:44:02.742585 ignition[984]: INFO : Ignition finished successfully Jul 6 23:44:02.743058 systemd[1]: Finished ignition-files.service - Ignition (files). Jul 6 23:44:02.747094 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jul 6 23:44:02.750780 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jul 6 23:44:02.769856 systemd[1]: ignition-quench.service: Deactivated successfully. Jul 6 23:44:02.771124 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jul 6 23:44:02.774322 initrd-setup-root-after-ignition[1013]: grep: /sysroot/oem/oem-release: No such file or directory Jul 6 23:44:02.776750 initrd-setup-root-after-ignition[1015]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jul 6 23:44:02.776750 initrd-setup-root-after-ignition[1015]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jul 6 23:44:02.780073 initrd-setup-root-after-ignition[1019]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jul 6 23:44:02.783696 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jul 6 23:44:02.787974 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jul 6 23:44:02.790648 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jul 6 23:44:02.855869 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jul 6 23:44:02.856003 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jul 6 23:44:02.858265 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jul 6 23:44:02.860337 systemd[1]: Reached target initrd.target - Initrd Default Target. Jul 6 23:44:02.862392 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jul 6 23:44:02.863317 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jul 6 23:44:02.898440 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jul 6 23:44:02.901552 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... 
Jul 6 23:44:02.932621 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jul 6 23:44:02.934265 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 6 23:44:02.936678 systemd[1]: Stopped target timers.target - Timer Units. Jul 6 23:44:02.938589 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jul 6 23:44:02.938725 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jul 6 23:44:02.941666 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jul 6 23:44:02.943927 systemd[1]: Stopped target basic.target - Basic System. Jul 6 23:44:02.945491 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jul 6 23:44:02.947792 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jul 6 23:44:02.949771 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jul 6 23:44:02.951992 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Jul 6 23:44:02.954183 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jul 6 23:44:02.956149 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jul 6 23:44:02.958337 systemd[1]: Stopped target sysinit.target - System Initialization. Jul 6 23:44:02.960539 systemd[1]: Stopped target local-fs.target - Local File Systems. Jul 6 23:44:02.962651 systemd[1]: Stopped target swap.target - Swaps. Jul 6 23:44:02.964308 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jul 6 23:44:02.964445 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jul 6 23:44:02.967731 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jul 6 23:44:02.970828 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jul 6 23:44:02.972890 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. 
Jul 6 23:44:02.973675 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jul 6 23:44:02.975178 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jul 6 23:44:02.975325 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jul 6 23:44:02.978196 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jul 6 23:44:02.978330 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jul 6 23:44:02.980398 systemd[1]: Stopped target paths.target - Path Units. Jul 6 23:44:02.982066 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jul 6 23:44:02.988648 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 6 23:44:02.989927 systemd[1]: Stopped target slices.target - Slice Units. Jul 6 23:44:02.992214 systemd[1]: Stopped target sockets.target - Socket Units. Jul 6 23:44:02.993941 systemd[1]: iscsid.socket: Deactivated successfully. Jul 6 23:44:02.994041 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jul 6 23:44:02.995675 systemd[1]: iscsiuio.socket: Deactivated successfully. Jul 6 23:44:02.995780 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jul 6 23:44:02.997791 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jul 6 23:44:02.997926 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jul 6 23:44:02.999849 systemd[1]: ignition-files.service: Deactivated successfully. Jul 6 23:44:02.999955 systemd[1]: Stopped ignition-files.service - Ignition (files). Jul 6 23:44:03.002668 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jul 6 23:44:03.005461 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jul 6 23:44:03.006613 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. 
Jul 6 23:44:03.006771 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jul 6 23:44:03.008784 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jul 6 23:44:03.008891 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jul 6 23:44:03.015216 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jul 6 23:44:03.016794 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jul 6 23:44:03.027209 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jul 6 23:44:03.038870 ignition[1039]: INFO : Ignition 2.21.0 Jul 6 23:44:03.038870 ignition[1039]: INFO : Stage: umount Jul 6 23:44:03.040707 ignition[1039]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 6 23:44:03.040707 ignition[1039]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 6 23:44:03.043594 ignition[1039]: INFO : umount: umount passed Jul 6 23:44:03.043594 ignition[1039]: INFO : Ignition finished successfully Jul 6 23:44:03.046609 systemd[1]: ignition-mount.service: Deactivated successfully. Jul 6 23:44:03.046729 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jul 6 23:44:03.048774 systemd[1]: Stopped target network.target - Network. Jul 6 23:44:03.052245 systemd[1]: ignition-disks.service: Deactivated successfully. Jul 6 23:44:03.052319 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jul 6 23:44:03.054250 systemd[1]: ignition-kargs.service: Deactivated successfully. Jul 6 23:44:03.054306 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jul 6 23:44:03.056489 systemd[1]: ignition-setup.service: Deactivated successfully. Jul 6 23:44:03.056626 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jul 6 23:44:03.058309 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jul 6 23:44:03.058365 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. 
Jul 6 23:44:03.060471 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jul 6 23:44:03.062362 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jul 6 23:44:03.067815 systemd[1]: systemd-resolved.service: Deactivated successfully. Jul 6 23:44:03.067926 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jul 6 23:44:03.071742 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Jul 6 23:44:03.072010 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jul 6 23:44:03.072056 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jul 6 23:44:03.076393 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Jul 6 23:44:03.076790 systemd[1]: systemd-networkd.service: Deactivated successfully. Jul 6 23:44:03.076939 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jul 6 23:44:03.081533 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Jul 6 23:44:03.082082 systemd[1]: Stopped target network-pre.target - Preparation for Network. Jul 6 23:44:03.083928 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jul 6 23:44:03.083974 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jul 6 23:44:03.087385 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jul 6 23:44:03.088340 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jul 6 23:44:03.088422 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jul 6 23:44:03.090630 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jul 6 23:44:03.090703 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jul 6 23:44:03.094723 systemd[1]: systemd-modules-load.service: Deactivated successfully. 
Jul 6 23:44:03.094773 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Jul 6 23:44:03.096209 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 6 23:44:03.100912 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Jul 6 23:44:03.109060 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jul 6 23:44:03.109174 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Jul 6 23:44:03.111338 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jul 6 23:44:03.111387 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Jul 6 23:44:03.121786 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jul 6 23:44:03.122747 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 6 23:44:03.124246 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jul 6 23:44:03.124289 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Jul 6 23:44:03.128321 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jul 6 23:44:03.128358 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 6 23:44:03.130186 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jul 6 23:44:03.130240 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Jul 6 23:44:03.133159 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jul 6 23:44:03.133227 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Jul 6 23:44:03.136463 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jul 6 23:44:03.136529 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 6 23:44:03.140455 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Jul 6 23:44:03.141643 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Jul 6 23:44:03.141713 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Jul 6 23:44:03.145060 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jul 6 23:44:03.145114 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 6 23:44:03.148249 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Jul 6 23:44:03.148293 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jul 6 23:44:03.151712 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jul 6 23:44:03.151761 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 6 23:44:03.154010 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jul 6 23:44:03.154058 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jul 6 23:44:03.158295 systemd[1]: network-cleanup.service: Deactivated successfully.
Jul 6 23:44:03.158397 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Jul 6 23:44:03.159809 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jul 6 23:44:03.159895 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Jul 6 23:44:03.162645 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Jul 6 23:44:03.164538 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Jul 6 23:44:03.189979 systemd[1]: Switching root.
Jul 6 23:44:03.212992 systemd-journald[243]: Journal stopped
Jul 6 23:44:04.125254 systemd-journald[243]: Received SIGTERM from PID 1 (systemd).
Jul 6 23:44:04.125312 kernel: SELinux: policy capability network_peer_controls=1
Jul 6 23:44:04.125324 kernel: SELinux: policy capability open_perms=1
Jul 6 23:44:04.125334 kernel: SELinux: policy capability extended_socket_class=1
Jul 6 23:44:04.125343 kernel: SELinux: policy capability always_check_network=0
Jul 6 23:44:04.125352 kernel: SELinux: policy capability cgroup_seclabel=1
Jul 6 23:44:04.125363 kernel: SELinux: policy capability nnp_nosuid_transition=1
Jul 6 23:44:04.125372 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Jul 6 23:44:04.125387 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Jul 6 23:44:04.125397 kernel: SELinux: policy capability userspace_initial_context=0
Jul 6 23:44:04.125407 kernel: audit: type=1403 audit(1751845443.433:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Jul 6 23:44:04.125419 systemd[1]: Successfully loaded SELinux policy in 48.350ms.
Jul 6 23:44:04.125435 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 9.537ms.
Jul 6 23:44:04.125446 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Jul 6 23:44:04.125457 systemd[1]: Detected virtualization kvm.
Jul 6 23:44:04.125467 systemd[1]: Detected architecture arm64.
Jul 6 23:44:04.125478 systemd[1]: Detected first boot.
Jul 6 23:44:04.125493 systemd[1]: Initializing machine ID from VM UUID.
Jul 6 23:44:04.125503 zram_generator::config[1086]: No configuration found.
Jul 6 23:44:04.125513 kernel: NET: Registered PF_VSOCK protocol family
Jul 6 23:44:04.125523 systemd[1]: Populated /etc with preset unit settings.
Jul 6 23:44:04.125534 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Jul 6 23:44:04.125544 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Jul 6 23:44:04.125554 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Jul 6 23:44:04.125576 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Jul 6 23:44:04.125589 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Jul 6 23:44:04.125601 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Jul 6 23:44:04.125616 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Jul 6 23:44:04.125626 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Jul 6 23:44:04.125636 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Jul 6 23:44:04.125646 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Jul 6 23:44:04.125656 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Jul 6 23:44:04.125667 systemd[1]: Created slice user.slice - User and Session Slice.
Jul 6 23:44:04.125678 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 6 23:44:04.125690 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 6 23:44:04.125701 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Jul 6 23:44:04.125711 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Jul 6 23:44:04.125721 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Jul 6 23:44:04.125731 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jul 6 23:44:04.125741 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0...
Jul 6 23:44:04.125751 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 6 23:44:04.125762 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jul 6 23:44:04.125773 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Jul 6 23:44:04.125783 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Jul 6 23:44:04.125793 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Jul 6 23:44:04.125803 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Jul 6 23:44:04.125813 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 6 23:44:04.125823 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jul 6 23:44:04.125834 systemd[1]: Reached target slices.target - Slice Units.
Jul 6 23:44:04.125844 systemd[1]: Reached target swap.target - Swaps.
Jul 6 23:44:04.125855 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Jul 6 23:44:04.125867 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Jul 6 23:44:04.125877 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Jul 6 23:44:04.125887 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jul 6 23:44:04.125897 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jul 6 23:44:04.125908 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 6 23:44:04.125917 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Jul 6 23:44:04.125927 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Jul 6 23:44:04.125937 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Jul 6 23:44:04.125947 systemd[1]: Mounting media.mount - External Media Directory...
Jul 6 23:44:04.125958 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Jul 6 23:44:04.125968 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Jul 6 23:44:04.125978 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Jul 6 23:44:04.125988 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jul 6 23:44:04.125998 systemd[1]: Reached target machines.target - Containers.
Jul 6 23:44:04.126009 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Jul 6 23:44:04.126019 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 6 23:44:04.126029 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jul 6 23:44:04.126040 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Jul 6 23:44:04.126050 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jul 6 23:44:04.126060 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jul 6 23:44:04.126070 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jul 6 23:44:04.126080 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Jul 6 23:44:04.126089 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jul 6 23:44:04.126104 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Jul 6 23:44:04.126116 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Jul 6 23:44:04.126126 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Jul 6 23:44:04.126138 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Jul 6 23:44:04.126148 systemd[1]: Stopped systemd-fsck-usr.service.
Jul 6 23:44:04.126159 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jul 6 23:44:04.126168 kernel: fuse: init (API version 7.41)
Jul 6 23:44:04.126178 kernel: loop: module loaded
Jul 6 23:44:04.126187 systemd[1]: Starting systemd-journald.service - Journal Service...
Jul 6 23:44:04.126198 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jul 6 23:44:04.126208 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jul 6 23:44:04.126218 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Jul 6 23:44:04.126230 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Jul 6 23:44:04.126240 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jul 6 23:44:04.126250 systemd[1]: verity-setup.service: Deactivated successfully.
Jul 6 23:44:04.126259 systemd[1]: Stopped verity-setup.service.
Jul 6 23:44:04.126269 kernel: ACPI: bus type drm_connector registered
Jul 6 23:44:04.126280 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Jul 6 23:44:04.126290 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Jul 6 23:44:04.126300 systemd[1]: Mounted media.mount - External Media Directory.
Jul 6 23:44:04.126309 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Jul 6 23:44:04.126319 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Jul 6 23:44:04.126329 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Jul 6 23:44:04.126340 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 6 23:44:04.126351 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Jul 6 23:44:04.126361 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jul 6 23:44:04.126391 systemd-journald[1154]: Collecting audit messages is disabled.
Jul 6 23:44:04.126415 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Jul 6 23:44:04.126425 systemd-journald[1154]: Journal started
Jul 6 23:44:04.126449 systemd-journald[1154]: Runtime Journal (/run/log/journal/cd4b57041da14abe82fd695838f0062e) is 6M, max 48.5M, 42.4M free.
Jul 6 23:44:03.863376 systemd[1]: Queued start job for default target multi-user.target.
Jul 6 23:44:03.884848 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Jul 6 23:44:03.885275 systemd[1]: systemd-journald.service: Deactivated successfully.
Jul 6 23:44:04.127591 systemd[1]: Started systemd-journald.service - Journal Service.
Jul 6 23:44:04.129341 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 6 23:44:04.129510 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jul 6 23:44:04.131009 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jul 6 23:44:04.131191 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jul 6 23:44:04.132457 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 6 23:44:04.132645 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jul 6 23:44:04.134135 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jul 6 23:44:04.134282 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Jul 6 23:44:04.136889 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 6 23:44:04.137053 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jul 6 23:44:04.138494 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jul 6 23:44:04.140005 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jul 6 23:44:04.141500 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Jul 6 23:44:04.144010 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Jul 6 23:44:04.156897 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jul 6 23:44:04.159515 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Jul 6 23:44:04.161668 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Jul 6 23:44:04.162821 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Jul 6 23:44:04.162853 systemd[1]: Reached target local-fs.target - Local File Systems.
Jul 6 23:44:04.164778 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Jul 6 23:44:04.174702 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Jul 6 23:44:04.175795 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 6 23:44:04.177178 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Jul 6 23:44:04.179136 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Jul 6 23:44:04.180412 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jul 6 23:44:04.183702 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Jul 6 23:44:04.184947 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jul 6 23:44:04.185838 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jul 6 23:44:04.187759 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Jul 6 23:44:04.213731 systemd-journald[1154]: Time spent on flushing to /var/log/journal/cd4b57041da14abe82fd695838f0062e is 21.369ms for 884 entries.
Jul 6 23:44:04.213731 systemd-journald[1154]: System Journal (/var/log/journal/cd4b57041da14abe82fd695838f0062e) is 8M, max 195.6M, 187.6M free.
Jul 6 23:44:04.239095 systemd-journald[1154]: Received client request to flush runtime journal.
Jul 6 23:44:04.224863 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jul 6 23:44:04.229421 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 6 23:44:04.234750 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Jul 6 23:44:04.236297 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Jul 6 23:44:04.241676 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Jul 6 23:44:04.243633 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Jul 6 23:44:04.247067 kernel: loop0: detected capacity change from 0 to 138376
Jul 6 23:44:04.248277 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Jul 6 23:44:04.253398 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Jul 6 23:44:04.256511 systemd-tmpfiles[1204]: ACLs are not supported, ignoring.
Jul 6 23:44:04.256828 systemd-tmpfiles[1204]: ACLs are not supported, ignoring.
Jul 6 23:44:04.262141 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jul 6 23:44:04.263776 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jul 6 23:44:04.272409 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Jul 6 23:44:04.275973 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Jul 6 23:44:04.289482 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Jul 6 23:44:04.299595 kernel: loop1: detected capacity change from 0 to 203944
Jul 6 23:44:04.310370 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Jul 6 23:44:04.313679 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jul 6 23:44:04.326601 kernel: loop2: detected capacity change from 0 to 107312
Jul 6 23:44:04.341891 systemd-tmpfiles[1225]: ACLs are not supported, ignoring.
Jul 6 23:44:04.341910 systemd-tmpfiles[1225]: ACLs are not supported, ignoring.
Jul 6 23:44:04.346358 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 6 23:44:04.359586 kernel: loop3: detected capacity change from 0 to 138376
Jul 6 23:44:04.370622 kernel: loop4: detected capacity change from 0 to 203944
Jul 6 23:44:04.379617 kernel: loop5: detected capacity change from 0 to 107312
Jul 6 23:44:04.384363 (sd-merge)[1229]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Jul 6 23:44:04.384759 (sd-merge)[1229]: Merged extensions into '/usr'.
Jul 6 23:44:04.389617 systemd[1]: Reload requested from client PID 1203 ('systemd-sysext') (unit systemd-sysext.service)...
Jul 6 23:44:04.389633 systemd[1]: Reloading...
Jul 6 23:44:04.433614 zram_generator::config[1255]: No configuration found.
Jul 6 23:44:04.529623 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 6 23:44:04.530765 ldconfig[1198]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Jul 6 23:44:04.592704 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Jul 6 23:44:04.592947 systemd[1]: Reloading finished in 202 ms.
Jul 6 23:44:04.628392 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Jul 6 23:44:04.629917 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Jul 6 23:44:04.645817 systemd[1]: Starting ensure-sysext.service...
Jul 6 23:44:04.647681 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jul 6 23:44:04.661228 systemd[1]: Reload requested from client PID 1289 ('systemctl') (unit ensure-sysext.service)...
Jul 6 23:44:04.661246 systemd[1]: Reloading...
Jul 6 23:44:04.665807 systemd-tmpfiles[1290]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
Jul 6 23:44:04.666184 systemd-tmpfiles[1290]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
Jul 6 23:44:04.666506 systemd-tmpfiles[1290]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Jul 6 23:44:04.666813 systemd-tmpfiles[1290]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Jul 6 23:44:04.667513 systemd-tmpfiles[1290]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Jul 6 23:44:04.667878 systemd-tmpfiles[1290]: ACLs are not supported, ignoring.
Jul 6 23:44:04.667991 systemd-tmpfiles[1290]: ACLs are not supported, ignoring.
Jul 6 23:44:04.670705 systemd-tmpfiles[1290]: Detected autofs mount point /boot during canonicalization of boot.
Jul 6 23:44:04.670816 systemd-tmpfiles[1290]: Skipping /boot
Jul 6 23:44:04.679931 systemd-tmpfiles[1290]: Detected autofs mount point /boot during canonicalization of boot.
Jul 6 23:44:04.680062 systemd-tmpfiles[1290]: Skipping /boot
Jul 6 23:44:04.709612 zram_generator::config[1317]: No configuration found.
Jul 6 23:44:04.774832 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 6 23:44:04.837452 systemd[1]: Reloading finished in 175 ms.
Jul 6 23:44:04.861177 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Jul 6 23:44:04.866894 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jul 6 23:44:04.878708 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Jul 6 23:44:04.881251 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Jul 6 23:44:04.883660 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Jul 6 23:44:04.887554 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jul 6 23:44:04.891722 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 6 23:44:04.895781 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Jul 6 23:44:04.905984 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 6 23:44:04.909531 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jul 6 23:44:04.912552 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jul 6 23:44:04.920826 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jul 6 23:44:04.922061 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 6 23:44:04.922252 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jul 6 23:44:04.924324 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Jul 6 23:44:04.926501 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Jul 6 23:44:04.929164 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 6 23:44:04.929372 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jul 6 23:44:04.931550 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 6 23:44:04.933285 systemd-udevd[1358]: Using default interface naming scheme 'v255'.
Jul 6 23:44:04.933533 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jul 6 23:44:04.937063 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 6 23:44:04.937271 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jul 6 23:44:04.945476 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 6 23:44:04.946872 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jul 6 23:44:04.949157 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jul 6 23:44:04.951363 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jul 6 23:44:04.954821 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 6 23:44:04.955015 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jul 6 23:44:04.956404 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Jul 6 23:44:04.967612 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Jul 6 23:44:04.974027 systemd[1]: Finished ensure-sysext.service.
Jul 6 23:44:04.977003 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 6 23:44:04.977423 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jul 6 23:44:04.978113 augenrules[1392]: No rules
Jul 6 23:44:04.979518 systemd[1]: audit-rules.service: Deactivated successfully.
Jul 6 23:44:04.979996 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Jul 6 23:44:04.981555 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 6 23:44:04.983398 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 6 23:44:04.983595 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jul 6 23:44:04.987059 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 6 23:44:04.987639 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jul 6 23:44:04.990846 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Jul 6 23:44:04.992393 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Jul 6 23:44:05.003525 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 6 23:44:05.007686 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jul 6 23:44:05.010125 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 6 23:44:05.010176 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jul 6 23:44:05.019861 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jul 6 23:44:05.021717 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jul 6 23:44:05.021793 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jul 6 23:44:05.024708 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Jul 6 23:44:05.030043 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Jul 6 23:44:05.033026 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jul 6 23:44:05.033715 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jul 6 23:44:05.057323 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped.
Jul 6 23:44:05.057560 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jul 6 23:44:05.101190 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jul 6 23:44:05.105719 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Jul 6 23:44:05.140888 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Jul 6 23:44:05.168562 systemd-networkd[1434]: lo: Link UP
Jul 6 23:44:05.168586 systemd-networkd[1434]: lo: Gained carrier
Jul 6 23:44:05.168673 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Jul 6 23:44:05.169340 systemd-networkd[1434]: Enumeration completed
Jul 6 23:44:05.169738 systemd-networkd[1434]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 6 23:44:05.169742 systemd-networkd[1434]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jul 6 23:44:05.169997 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jul 6 23:44:05.170147 systemd-networkd[1434]: eth0: Link UP
Jul 6 23:44:05.170257 systemd-networkd[1434]: eth0: Gained carrier
Jul 6 23:44:05.170271 systemd-networkd[1434]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 6 23:44:05.171880 systemd[1]: Reached target time-set.target - System Time Set.
Jul 6 23:44:05.179199 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Jul 6 23:44:05.182748 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Jul 6 23:44:05.185148 systemd-resolved[1356]: Positive Trust Anchors:
Jul 6 23:44:05.185166 systemd-resolved[1356]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jul 6 23:44:05.185198 systemd-resolved[1356]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jul 6 23:44:05.186648 systemd-networkd[1434]: eth0: DHCPv4 address 10.0.0.128/16, gateway 10.0.0.1 acquired from 10.0.0.1
Jul 6 23:44:05.187177 systemd-timesyncd[1436]: Network configuration changed, trying to establish connection.
Jul 6 23:44:05.188069 systemd-timesyncd[1436]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Jul 6 23:44:05.188129 systemd-timesyncd[1436]: Initial clock synchronization to Sun 2025-07-06 23:44:05.447404 UTC.
Jul 6 23:44:05.198712 systemd-resolved[1356]: Defaulting to hostname 'linux'.
Jul 6 23:44:05.200209 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jul 6 23:44:05.201955 systemd[1]: Reached target network.target - Network.
Jul 6 23:44:05.203474 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jul 6 23:44:05.205712 systemd[1]: Reached target sysinit.target - System Initialization.
Jul 6 23:44:05.207273 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Jul 6 23:44:05.208639 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Jul 6 23:44:05.209907 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Jul 6 23:44:05.210943 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Jul 6 23:44:05.212069 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Jul 6 23:44:05.213218 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Jul 6 23:44:05.213249 systemd[1]: Reached target paths.target - Path Units.
Jul 6 23:44:05.214092 systemd[1]: Reached target timers.target - Timer Units.
Jul 6 23:44:05.217113 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Jul 6 23:44:05.219818 systemd[1]: Starting docker.socket - Docker Socket for the API...
Jul 6 23:44:05.222904 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
Jul 6 23:44:05.224295 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
Jul 6 23:44:05.225558 systemd[1]: Reached target ssh-access.target - SSH Access Available.
Jul 6 23:44:05.231479 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Jul 6 23:44:05.233032 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
Jul 6 23:44:05.235093 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
Jul 6 23:44:05.236652 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Jul 6 23:44:05.245509 systemd[1]: Reached target sockets.target - Socket Units.
Jul 6 23:44:05.246559 systemd[1]: Reached target basic.target - Basic System.
Jul 6 23:44:05.247554 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Jul 6 23:44:05.247609 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Jul 6 23:44:05.248794 systemd[1]: Starting containerd.service - containerd container runtime...
Jul 6 23:44:05.250783 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Jul 6 23:44:05.252604 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Jul 6 23:44:05.254534 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Jul 6 23:44:05.256771 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Jul 6 23:44:05.257763 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Jul 6 23:44:05.258688 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Jul 6 23:44:05.262672 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Jul 6 23:44:05.264482 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Jul 6 23:44:05.266805 jq[1475]: false
Jul 6 23:44:05.267203 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Jul 6 23:44:05.271252 systemd[1]: Starting systemd-logind.service - User Login Management...
Jul 6 23:44:05.274182 extend-filesystems[1476]: Found /dev/vda6
Jul 6 23:44:05.274799 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 6 23:44:05.280420 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Jul 6 23:44:05.281250 extend-filesystems[1476]: Found /dev/vda9
Jul 6 23:44:05.282775 extend-filesystems[1476]: Checking size of /dev/vda9
Jul 6 23:44:05.285857 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Jul 6 23:44:05.286494 systemd[1]: Starting update-engine.service - Update Engine...
Jul 6 23:44:05.290418 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Jul 6 23:44:05.293883 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Jul 6 23:44:05.296747 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Jul 6 23:44:05.296934 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Jul 6 23:44:05.297181 systemd[1]: motdgen.service: Deactivated successfully.
Jul 6 23:44:05.297336 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Jul 6 23:44:05.298419 extend-filesystems[1476]: Resized partition /dev/vda9
Jul 6 23:44:05.299776 jq[1499]: true
Jul 6 23:44:05.300857 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Jul 6 23:44:05.301046 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Jul 6 23:44:05.309329 extend-filesystems[1504]: resize2fs 1.47.2 (1-Jan-2025)
Jul 6 23:44:05.319927 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
Jul 6 23:44:05.325136 (ntainerd)[1511]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Jul 6 23:44:05.327581 jq[1506]: true
Jul 6 23:44:05.367774 tar[1505]: linux-arm64/helm
Jul 6 23:44:05.383389 dbus-daemon[1473]: [system] SELinux support is enabled
Jul 6 23:44:05.383579 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Jul 6 23:44:05.387123 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Jul 6 23:44:05.387164 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Jul 6 23:44:05.388585 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Jul 6 23:44:05.388607 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Jul 6 23:44:05.394757 update_engine[1498]: I20250706 23:44:05.392894 1498 main.cc:92] Flatcar Update Engine starting
Jul 6 23:44:05.396390 systemd-logind[1485]: Watching system buttons on /dev/input/event0 (Power Button)
Jul 6 23:44:05.414863 kernel: EXT4-fs (vda9): resized filesystem to 1864699
Jul 6 23:44:05.414909 extend-filesystems[1504]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Jul 6 23:44:05.414909 extend-filesystems[1504]: old_desc_blocks = 1, new_desc_blocks = 1
Jul 6 23:44:05.414909 extend-filesystems[1504]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
Jul 6 23:44:05.418712 update_engine[1498]: I20250706 23:44:05.407025 1498 update_check_scheduler.cc:74] Next update check in 6m31s
Jul 6 23:44:05.399336 systemd-logind[1485]: New seat seat0.
Jul 6 23:44:05.418830 extend-filesystems[1476]: Resized filesystem in /dev/vda9
Jul 6 23:44:05.403711 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 6 23:44:05.407931 systemd[1]: Started update-engine.service - Update Engine.
Jul 6 23:44:05.410445 systemd[1]: Started systemd-logind.service - User Login Management.
Jul 6 23:44:05.420363 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Jul 6 23:44:05.422158 systemd[1]: extend-filesystems.service: Deactivated successfully.
Jul 6 23:44:05.423619 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Jul 6 23:44:05.430687 bash[1539]: Updated "/home/core/.ssh/authorized_keys"
Jul 6 23:44:05.432402 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Jul 6 23:44:05.436955 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Jul 6 23:44:05.490390 locksmithd[1538]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Jul 6 23:44:05.570262 containerd[1511]: time="2025-07-06T23:44:05Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8
Jul 6 23:44:05.572593 containerd[1511]: time="2025-07-06T23:44:05.572130840Z" level=info msg="starting containerd" revision=06b99ca80cdbfbc6cc8bd567021738c9af2b36ce version=v2.0.4
Jul 6 23:44:05.582820 containerd[1511]: time="2025-07-06T23:44:05.582769320Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="10µs"
Jul 6 23:44:05.582820 containerd[1511]: time="2025-07-06T23:44:05.582808240Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1
Jul 6 23:44:05.582820 containerd[1511]: time="2025-07-06T23:44:05.582826920Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1
Jul 6 23:44:05.583015 containerd[1511]: time="2025-07-06T23:44:05.582988320Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1
Jul 6 23:44:05.583015 containerd[1511]: time="2025-07-06T23:44:05.583011160Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1
Jul 6 23:44:05.583077 containerd[1511]: time="2025-07-06T23:44:05.583034840Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Jul 6 23:44:05.583101 containerd[1511]: time="2025-07-06T23:44:05.583082360Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Jul 6 23:44:05.583121 containerd[1511]: time="2025-07-06T23:44:05.583092960Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Jul 6 23:44:05.583368 containerd[1511]: time="2025-07-06T23:44:05.583335640Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Jul 6 23:44:05.583368 containerd[1511]: time="2025-07-06T23:44:05.583359800Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Jul 6 23:44:05.583413 containerd[1511]: time="2025-07-06T23:44:05.583370680Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Jul 6 23:44:05.583413 containerd[1511]: time="2025-07-06T23:44:05.583380440Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1
Jul 6 23:44:05.583467 containerd[1511]: time="2025-07-06T23:44:05.583452440Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1
Jul 6 23:44:05.583691 containerd[1511]: time="2025-07-06T23:44:05.583663600Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Jul 6 23:44:05.583719 containerd[1511]: time="2025-07-06T23:44:05.583700800Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Jul 6 23:44:05.583719 containerd[1511]: time="2025-07-06T23:44:05.583711840Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1
Jul 6 23:44:05.583761 containerd[1511]: time="2025-07-06T23:44:05.583742920Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1
Jul 6 23:44:05.583963 containerd[1511]: time="2025-07-06T23:44:05.583948200Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1
Jul 6 23:44:05.584022 containerd[1511]: time="2025-07-06T23:44:05.584007480Z" level=info msg="metadata content store policy set" policy=shared
Jul 6 23:44:05.589764 containerd[1511]: time="2025-07-06T23:44:05.589721440Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1
Jul 6 23:44:05.589764 containerd[1511]: time="2025-07-06T23:44:05.589776080Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1
Jul 6 23:44:05.589872 containerd[1511]: time="2025-07-06T23:44:05.589789760Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1
Jul 6 23:44:05.589872 containerd[1511]: time="2025-07-06T23:44:05.589801080Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1
Jul 6 23:44:05.589872 containerd[1511]: time="2025-07-06T23:44:05.589812640Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1
Jul 6 23:44:05.589872 containerd[1511]: time="2025-07-06T23:44:05.589827160Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1
Jul 6 23:44:05.589872 containerd[1511]: time="2025-07-06T23:44:05.589846640Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1
Jul 6 23:44:05.589872 containerd[1511]: time="2025-07-06T23:44:05.589857960Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1
Jul 6 23:44:05.589872 containerd[1511]: time="2025-07-06T23:44:05.589869120Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1
Jul 6 23:44:05.589982 containerd[1511]: time="2025-07-06T23:44:05.589880080Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1
Jul 6 23:44:05.589982 containerd[1511]: time="2025-07-06T23:44:05.589889120Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1
Jul 6 23:44:05.589982 containerd[1511]: time="2025-07-06T23:44:05.589900840Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2
Jul 6 23:44:05.590030 containerd[1511]: time="2025-07-06T23:44:05.590021120Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1
Jul 6 23:44:05.590048 containerd[1511]: time="2025-07-06T23:44:05.590041400Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1
Jul 6 23:44:05.590064 containerd[1511]: time="2025-07-06T23:44:05.590055520Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1
Jul 6 23:44:05.590081 containerd[1511]: time="2025-07-06T23:44:05.590066880Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1
Jul 6 23:44:05.590081 containerd[1511]: time="2025-07-06T23:44:05.590077240Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1
Jul 6 23:44:05.590123 containerd[1511]: time="2025-07-06T23:44:05.590087200Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1
Jul 6 23:44:05.590123 containerd[1511]: time="2025-07-06T23:44:05.590106240Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1
Jul 6 23:44:05.590123 containerd[1511]: time="2025-07-06T23:44:05.590122360Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1
Jul 6 23:44:05.590180 containerd[1511]: time="2025-07-06T23:44:05.590133440Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1
Jul 6 23:44:05.590180 containerd[1511]: time="2025-07-06T23:44:05.590144000Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1
Jul 6 23:44:05.590180 containerd[1511]: time="2025-07-06T23:44:05.590153560Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1
Jul 6 23:44:05.590349 containerd[1511]: time="2025-07-06T23:44:05.590331560Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\""
Jul 6 23:44:05.590390 containerd[1511]: time="2025-07-06T23:44:05.590355760Z" level=info msg="Start snapshots syncer"
Jul 6 23:44:05.590390 containerd[1511]: time="2025-07-06T23:44:05.590382080Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1
Jul 6 23:44:05.591885 containerd[1511]: time="2025-07-06T23:44:05.591690640Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}"
Jul 6 23:44:05.592118 containerd[1511]: time="2025-07-06T23:44:05.592042600Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1
Jul 6 23:44:05.592274 containerd[1511]: time="2025-07-06T23:44:05.592255440Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1
Jul 6 23:44:05.592466 containerd[1511]: time="2025-07-06T23:44:05.592444240Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1
Jul 6 23:44:05.592538 containerd[1511]: time="2025-07-06T23:44:05.592524800Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1
Jul 6 23:44:05.592606 containerd[1511]: time="2025-07-06T23:44:05.592593040Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1
Jul 6 23:44:05.592672 containerd[1511]: time="2025-07-06T23:44:05.592659720Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1
Jul 6 23:44:05.592724 containerd[1511]: time="2025-07-06T23:44:05.592713000Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1
Jul 6 23:44:05.592774 containerd[1511]: time="2025-07-06T23:44:05.592761960Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1
Jul 6 23:44:05.592825 containerd[1511]: time="2025-07-06T23:44:05.592813000Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1
Jul 6 23:44:05.592893 containerd[1511]: time="2025-07-06T23:44:05.592879800Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1
Jul 6 23:44:05.592946 containerd[1511]: time="2025-07-06T23:44:05.592933480Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
Jul 6 23:44:05.592999 containerd[1511]: time="2025-07-06T23:44:05.592986320Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
Jul 6 23:44:05.593143 containerd[1511]: time="2025-07-06T23:44:05.593092080Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Jul 6 23:44:05.593237 containerd[1511]: time="2025-07-06T23:44:05.593126200Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Jul 6 23:44:05.593286 containerd[1511]: time="2025-07-06T23:44:05.593271880Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Jul 6 23:44:05.593336 containerd[1511]: time="2025-07-06T23:44:05.593323760Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Jul 6 23:44:05.593378 containerd[1511]: time="2025-07-06T23:44:05.593366960Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
Jul 6 23:44:05.593425 containerd[1511]: time="2025-07-06T23:44:05.593413720Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
Jul 6 23:44:05.593485 containerd[1511]: time="2025-07-06T23:44:05.593472800Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1
Jul 6 23:44:05.593618 containerd[1511]: time="2025-07-06T23:44:05.593605760Z" level=info msg="runtime interface created"
Jul 6 23:44:05.593659 containerd[1511]: time="2025-07-06T23:44:05.593649400Z" level=info msg="created NRI interface"
Jul 6 23:44:05.593703 containerd[1511]: time="2025-07-06T23:44:05.593691840Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
Jul 6 23:44:05.593753 containerd[1511]: time="2025-07-06T23:44:05.593741960Z" level=info msg="Connect containerd service"
Jul 6 23:44:05.593823 containerd[1511]: time="2025-07-06T23:44:05.593811040Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Jul 6 23:44:05.595153 containerd[1511]: time="2025-07-06T23:44:05.594791680Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jul 6 23:44:05.703864 containerd[1511]: time="2025-07-06T23:44:05.703696560Z" level=info msg="Start subscribing containerd event"
Jul 6 23:44:05.703864 containerd[1511]: time="2025-07-06T23:44:05.703759920Z" level=info msg="Start recovering state"
Jul 6 23:44:05.703864 containerd[1511]: time="2025-07-06T23:44:05.703837400Z" level=info msg="Start event monitor"
Jul 6 23:44:05.703864 containerd[1511]: time="2025-07-06T23:44:05.703850000Z" level=info msg="Start cni network conf syncer for default"
Jul 6 23:44:05.703864 containerd[1511]: time="2025-07-06T23:44:05.703856600Z" level=info msg="Start streaming server"
Jul 6 23:44:05.703864 containerd[1511]: time="2025-07-06T23:44:05.703864720Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
Jul 6 23:44:05.704021 containerd[1511]: time="2025-07-06T23:44:05.703873760Z" level=info msg="runtime interface starting up..."
Jul 6 23:44:05.704021 containerd[1511]: time="2025-07-06T23:44:05.703880160Z" level=info msg="starting plugins..."
Jul 6 23:44:05.704021 containerd[1511]: time="2025-07-06T23:44:05.703892560Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
Jul 6 23:44:05.704278 containerd[1511]: time="2025-07-06T23:44:05.704254640Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Jul 6 23:44:05.704316 containerd[1511]: time="2025-07-06T23:44:05.704300040Z" level=info msg=serving... address=/run/containerd/containerd.sock
Jul 6 23:44:05.705609 containerd[1511]: time="2025-07-06T23:44:05.704343960Z" level=info msg="containerd successfully booted in 0.134430s"
Jul 6 23:44:05.704440 systemd[1]: Started containerd.service - containerd container runtime.
Jul 6 23:44:05.755127 tar[1505]: linux-arm64/LICENSE
Jul 6 23:44:05.755127 tar[1505]: linux-arm64/README.md
Jul 6 23:44:05.769859 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Jul 6 23:44:06.274139 sshd_keygen[1495]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Jul 6 23:44:06.294348 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Jul 6 23:44:06.297987 systemd[1]: Starting issuegen.service - Generate /run/issue...
Jul 6 23:44:06.329526 systemd[1]: issuegen.service: Deactivated successfully.
Jul 6 23:44:06.329848 systemd[1]: Finished issuegen.service - Generate /run/issue.
Jul 6 23:44:06.332748 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Jul 6 23:44:06.367860 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Jul 6 23:44:06.371197 systemd[1]: Started getty@tty1.service - Getty on tty1.
Jul 6 23:44:06.373847 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0.
Jul 6 23:44:06.375214 systemd[1]: Reached target getty.target - Login Prompts.
Jul 6 23:44:06.930978 systemd-networkd[1434]: eth0: Gained IPv6LL
Jul 6 23:44:06.934195 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Jul 6 23:44:06.936302 systemd[1]: Reached target network-online.target - Network is Online.
Jul 6 23:44:06.941054 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent...
Jul 6 23:44:06.943950 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 6 23:44:06.955188 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Jul 6 23:44:06.973775 systemd[1]: coreos-metadata.service: Deactivated successfully.
Jul 6 23:44:06.975418 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent.
Jul 6 23:44:06.977277 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Jul 6 23:44:06.983990 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Jul 6 23:44:07.560797 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 6 23:44:07.562556 systemd[1]: Reached target multi-user.target - Multi-User System.
Jul 6 23:44:07.563934 systemd[1]: Startup finished in 2.200s (kernel) + 5.799s (initrd) + 4.184s (userspace) = 12.184s.
Jul 6 23:44:07.565181 (kubelet)[1609]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jul 6 23:44:08.101581 kubelet[1609]: E0706 23:44:08.101513 1609 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jul 6 23:44:08.103841 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jul 6 23:44:08.103980 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jul 6 23:44:08.104360 systemd[1]: kubelet.service: Consumed 907ms CPU time, 257.2M memory peak.
Jul 6 23:44:10.903093 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Jul 6 23:44:10.904281 systemd[1]: Started sshd@0-10.0.0.128:22-10.0.0.1:57634.service - OpenSSH per-connection server daemon (10.0.0.1:57634).
Jul 6 23:44:11.000736 sshd[1622]: Accepted publickey for core from 10.0.0.1 port 57634 ssh2: RSA SHA256:CSJlI8/o3cgAW3JnP3N/e8VY57OeBgyk25K3mGio6wo
Jul 6 23:44:11.002542 sshd-session[1622]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 6 23:44:11.012047 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Jul 6 23:44:11.012973 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Jul 6 23:44:11.018129 systemd-logind[1485]: New session 1 of user core.
Jul 6 23:44:11.035366 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Jul 6 23:44:11.038108 systemd[1]: Starting user@500.service - User Manager for UID 500...
Jul 6 23:44:11.063783 (systemd)[1626]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Jul 6 23:44:11.066116 systemd-logind[1485]: New session c1 of user core.
Jul 6 23:44:11.188344 systemd[1626]: Queued start job for default target default.target.
Jul 6 23:44:11.205694 systemd[1626]: Created slice app.slice - User Application Slice.
Jul 6 23:44:11.205724 systemd[1626]: Reached target paths.target - Paths.
Jul 6 23:44:11.205764 systemd[1626]: Reached target timers.target - Timers.
Jul 6 23:44:11.207031 systemd[1626]: Starting dbus.socket - D-Bus User Message Bus Socket...
Jul 6 23:44:11.216615 systemd[1626]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Jul 6 23:44:11.216681 systemd[1626]: Reached target sockets.target - Sockets.
Jul 6 23:44:11.216722 systemd[1626]: Reached target basic.target - Basic System.
Jul 6 23:44:11.216755 systemd[1626]: Reached target default.target - Main User Target.
Jul 6 23:44:11.216790 systemd[1626]: Startup finished in 144ms.
Jul 6 23:44:11.217113 systemd[1]: Started user@500.service - User Manager for UID 500.
Jul 6 23:44:11.219037 systemd[1]: Started session-1.scope - Session 1 of User core.
Jul 6 23:44:11.286329 systemd[1]: Started sshd@1-10.0.0.128:22-10.0.0.1:57644.service - OpenSSH per-connection server daemon (10.0.0.1:57644).
Jul 6 23:44:11.339940 sshd[1637]: Accepted publickey for core from 10.0.0.1 port 57644 ssh2: RSA SHA256:CSJlI8/o3cgAW3JnP3N/e8VY57OeBgyk25K3mGio6wo
Jul 6 23:44:11.341342 sshd-session[1637]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 6 23:44:11.346020 systemd-logind[1485]: New session 2 of user core.
Jul 6 23:44:11.356806 systemd[1]: Started session-2.scope - Session 2 of User core.
Jul 6 23:44:11.409672 sshd[1639]: Connection closed by 10.0.0.1 port 57644
Jul 6 23:44:11.410161 sshd-session[1637]: pam_unix(sshd:session): session closed for user core
Jul 6 23:44:11.427711 systemd[1]: sshd@1-10.0.0.128:22-10.0.0.1:57644.service: Deactivated successfully.
Jul 6 23:44:11.429975 systemd[1]: session-2.scope: Deactivated successfully.
Jul 6 23:44:11.430811 systemd-logind[1485]: Session 2 logged out. Waiting for processes to exit.
Jul 6 23:44:11.433804 systemd-logind[1485]: Removed session 2.
Jul 6 23:44:11.434499 systemd[1]: Started sshd@2-10.0.0.128:22-10.0.0.1:57646.service - OpenSSH per-connection server daemon (10.0.0.1:57646).
Jul 6 23:44:11.499697 sshd[1645]: Accepted publickey for core from 10.0.0.1 port 57646 ssh2: RSA SHA256:CSJlI8/o3cgAW3JnP3N/e8VY57OeBgyk25K3mGio6wo
Jul 6 23:44:11.501123 sshd-session[1645]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 6 23:44:11.506402 systemd-logind[1485]: New session 3 of user core.
Jul 6 23:44:11.521810 systemd[1]: Started session-3.scope - Session 3 of User core.
Jul 6 23:44:11.570378 sshd[1647]: Connection closed by 10.0.0.1 port 57646
Jul 6 23:44:11.570902 sshd-session[1645]: pam_unix(sshd:session): session closed for user core
Jul 6 23:44:11.581770 systemd[1]: sshd@2-10.0.0.128:22-10.0.0.1:57646.service: Deactivated successfully.
Jul 6 23:44:11.586796 systemd[1]: session-3.scope: Deactivated successfully.
Jul 6 23:44:11.588560 systemd-logind[1485]: Session 3 logged out. Waiting for processes to exit.
Jul 6 23:44:11.590337 systemd[1]: Started sshd@3-10.0.0.128:22-10.0.0.1:57662.service - OpenSSH per-connection server daemon (10.0.0.1:57662).
Jul 6 23:44:11.591825 systemd-logind[1485]: Removed session 3.
Jul 6 23:44:11.669647 sshd[1653]: Accepted publickey for core from 10.0.0.1 port 57662 ssh2: RSA SHA256:CSJlI8/o3cgAW3JnP3N/e8VY57OeBgyk25K3mGio6wo
Jul 6 23:44:11.670955 sshd-session[1653]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 6 23:44:11.675670 systemd-logind[1485]: New session 4 of user core.
Jul 6 23:44:11.694770 systemd[1]: Started session-4.scope - Session 4 of User core.
Jul 6 23:44:11.749333 sshd[1655]: Connection closed by 10.0.0.1 port 57662
Jul 6 23:44:11.749758 sshd-session[1653]: pam_unix(sshd:session): session closed for user core
Jul 6 23:44:11.760754 systemd[1]: sshd@3-10.0.0.128:22-10.0.0.1:57662.service: Deactivated successfully.
Jul 6 23:44:11.762336 systemd[1]: session-4.scope: Deactivated successfully.
Jul 6 23:44:11.763134 systemd-logind[1485]: Session 4 logged out. Waiting for processes to exit.
Jul 6 23:44:11.765416 systemd[1]: Started sshd@4-10.0.0.128:22-10.0.0.1:57666.service - OpenSSH per-connection server daemon (10.0.0.1:57666).
Jul 6 23:44:11.766456 systemd-logind[1485]: Removed session 4.
Jul 6 23:44:11.827218 sshd[1661]: Accepted publickey for core from 10.0.0.1 port 57666 ssh2: RSA SHA256:CSJlI8/o3cgAW3JnP3N/e8VY57OeBgyk25K3mGio6wo
Jul 6 23:44:11.828903 sshd-session[1661]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 6 23:44:11.833803 systemd-logind[1485]: New session 5 of user core.
Jul 6 23:44:11.847813 systemd[1]: Started session-5.scope - Session 5 of User core.
Jul 6 23:44:11.905935 sudo[1664]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Jul 6 23:44:11.906211 sudo[1664]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jul 6 23:44:11.931319 sudo[1664]: pam_unix(sudo:session): session closed for user root
Jul 6 23:44:11.934833 sshd[1663]: Connection closed by 10.0.0.1 port 57666
Jul 6 23:44:11.935260 sshd-session[1661]: pam_unix(sshd:session): session closed for user core
Jul 6 23:44:11.945712 systemd[1]: sshd@4-10.0.0.128:22-10.0.0.1:57666.service: Deactivated successfully.
Jul 6 23:44:11.947310 systemd[1]: session-5.scope: Deactivated successfully.
Jul 6 23:44:11.948013 systemd-logind[1485]: Session 5 logged out. Waiting for processes to exit.
Jul 6 23:44:11.950512 systemd[1]: Started sshd@5-10.0.0.128:22-10.0.0.1:57676.service - OpenSSH per-connection server daemon (10.0.0.1:57676).
Jul 6 23:44:11.951431 systemd-logind[1485]: Removed session 5.
Jul 6 23:44:12.010974 sshd[1670]: Accepted publickey for core from 10.0.0.1 port 57676 ssh2: RSA SHA256:CSJlI8/o3cgAW3JnP3N/e8VY57OeBgyk25K3mGio6wo
Jul 6 23:44:12.012267 sshd-session[1670]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 6 23:44:12.016340 systemd-logind[1485]: New session 6 of user core.
Jul 6 23:44:12.028778 systemd[1]: Started session-6.scope - Session 6 of User core.
Jul 6 23:44:12.080165 sudo[1674]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Jul 6 23:44:12.080422 sudo[1674]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jul 6 23:44:12.138205 sudo[1674]: pam_unix(sudo:session): session closed for user root
Jul 6 23:44:12.143403 sudo[1673]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
Jul 6 23:44:12.143699 sudo[1673]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jul 6 23:44:12.152898 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Jul 6 23:44:12.188444 augenrules[1696]: No rules
Jul 6 23:44:12.189750 systemd[1]: audit-rules.service: Deactivated successfully.
Jul 6 23:44:12.189989 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Jul 6 23:44:12.190969 sudo[1673]: pam_unix(sudo:session): session closed for user root
Jul 6 23:44:12.192634 sshd[1672]: Connection closed by 10.0.0.1 port 57676
Jul 6 23:44:12.193056 sshd-session[1670]: pam_unix(sshd:session): session closed for user core
Jul 6 23:44:12.206316 systemd[1]: sshd@5-10.0.0.128:22-10.0.0.1:57676.service: Deactivated successfully.
Jul 6 23:44:12.209940 systemd[1]: session-6.scope: Deactivated successfully.
Jul 6 23:44:12.210900 systemd-logind[1485]: Session 6 logged out. Waiting for processes to exit.
Jul 6 23:44:12.214187 systemd[1]: Started sshd@6-10.0.0.128:22-10.0.0.1:57684.service - OpenSSH per-connection server daemon (10.0.0.1:57684).
Jul 6 23:44:12.214763 systemd-logind[1485]: Removed session 6.
Jul 6 23:44:12.272430 sshd[1705]: Accepted publickey for core from 10.0.0.1 port 57684 ssh2: RSA SHA256:CSJlI8/o3cgAW3JnP3N/e8VY57OeBgyk25K3mGio6wo
Jul 6 23:44:12.273848 sshd-session[1705]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 6 23:44:12.279858 systemd-logind[1485]: New session 7 of user core.
Jul 6 23:44:12.289798 systemd[1]: Started session-7.scope - Session 7 of User core.
Jul 6 23:44:12.340518 sudo[1708]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Jul 6 23:44:12.340796 sudo[1708]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jul 6 23:44:12.729791 systemd[1]: Starting docker.service - Docker Application Container Engine...
Jul 6 23:44:12.747997 (dockerd)[1730]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Jul 6 23:44:13.047775 dockerd[1730]: time="2025-07-06T23:44:13.047185068Z" level=info msg="Starting up"
Jul 6 23:44:13.050828 dockerd[1730]: time="2025-07-06T23:44:13.050772455Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider"
Jul 6 23:44:13.166067 systemd[1]: var-lib-docker-metacopy\x2dcheck1260056576-merged.mount: Deactivated successfully.
Jul 6 23:44:13.180102 dockerd[1730]: time="2025-07-06T23:44:13.179901392Z" level=info msg="Loading containers: start."
Jul 6 23:44:13.189610 kernel: Initializing XFRM netlink socket
Jul 6 23:44:13.463516 systemd-networkd[1434]: docker0: Link UP
Jul 6 23:44:13.469269 dockerd[1730]: time="2025-07-06T23:44:13.469206198Z" level=info msg="Loading containers: done."
Jul 6 23:44:13.485475 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1155076768-merged.mount: Deactivated successfully.
Jul 6 23:44:13.493798 dockerd[1730]: time="2025-07-06T23:44:13.493376223Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Jul 6 23:44:13.493798 dockerd[1730]: time="2025-07-06T23:44:13.493475228Z" level=info msg="Docker daemon" commit=bbd0a17ccc67e48d4a69393287b7fcc4f0578683 containerd-snapshotter=false storage-driver=overlay2 version=28.0.1
Jul 6 23:44:13.493798 dockerd[1730]: time="2025-07-06T23:44:13.493627340Z" level=info msg="Initializing buildkit"
Jul 6 23:44:13.536407 dockerd[1730]: time="2025-07-06T23:44:13.536362718Z" level=info msg="Completed buildkit initialization"
Jul 6 23:44:13.543166 dockerd[1730]: time="2025-07-06T23:44:13.543119070Z" level=info msg="Daemon has completed initialization"
Jul 6 23:44:13.543817 dockerd[1730]: time="2025-07-06T23:44:13.543418353Z" level=info msg="API listen on /run/docker.sock"
Jul 6 23:44:13.543495 systemd[1]: Started docker.service - Docker Application Container Engine.
Jul 6 23:44:14.201415 containerd[1511]: time="2025-07-06T23:44:14.201359078Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.10\""
Jul 6 23:44:14.935061 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2489118687.mount: Deactivated successfully.
Jul 6 23:44:15.721497 containerd[1511]: time="2025-07-06T23:44:15.721428017Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 6 23:44:15.721937 containerd[1511]: time="2025-07-06T23:44:15.721898762Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.10: active requests=0, bytes read=25651795"
Jul 6 23:44:15.722934 containerd[1511]: time="2025-07-06T23:44:15.722903866Z" level=info msg="ImageCreate event name:\"sha256:8907c2d36348551c1038e24ef688f6830681069380376707e55518007a20a86c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 6 23:44:15.725609 containerd[1511]: time="2025-07-06T23:44:15.725540241Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:083d7d64af31cd090f870eb49fb815e6bb42c175fc602ee9dae2f28f082bd4dc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 6 23:44:15.726758 containerd[1511]: time="2025-07-06T23:44:15.726709127Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.10\" with image id \"sha256:8907c2d36348551c1038e24ef688f6830681069380376707e55518007a20a86c\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.10\", repo digest \"registry.k8s.io/kube-apiserver@sha256:083d7d64af31cd090f870eb49fb815e6bb42c175fc602ee9dae2f28f082bd4dc\", size \"25648593\" in 1.525305856s"
Jul 6 23:44:15.726758 containerd[1511]: time="2025-07-06T23:44:15.726756868Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.10\" returns image reference \"sha256:8907c2d36348551c1038e24ef688f6830681069380376707e55518007a20a86c\""
Jul 6 23:44:15.731157 containerd[1511]: time="2025-07-06T23:44:15.731111674Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.10\""
Jul 6 23:44:16.728940 containerd[1511]: time="2025-07-06T23:44:16.728873480Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 6 23:44:16.731343 containerd[1511]: time="2025-07-06T23:44:16.731304160Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.10: active requests=0, bytes read=22459679"
Jul 6 23:44:16.732270 containerd[1511]: time="2025-07-06T23:44:16.732242940Z" level=info msg="ImageCreate event name:\"sha256:0f640d6889416d515a0ac4de1c26f4d80134c47641ff464abc831560a951175f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 6 23:44:16.734709 containerd[1511]: time="2025-07-06T23:44:16.734677090Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:3c67387d023c6114879f1e817669fd641797d30f117230682faf3930ecaaf0fe\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 6 23:44:16.736151 containerd[1511]: time="2025-07-06T23:44:16.736108033Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.10\" with image id \"sha256:0f640d6889416d515a0ac4de1c26f4d80134c47641ff464abc831560a951175f\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.10\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:3c67387d023c6114879f1e817669fd641797d30f117230682faf3930ecaaf0fe\", size \"23995467\" in 1.004955818s"
Jul 6 23:44:16.736189 containerd[1511]: time="2025-07-06T23:44:16.736149907Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.10\" returns image reference \"sha256:0f640d6889416d515a0ac4de1c26f4d80134c47641ff464abc831560a951175f\""
Jul 6 23:44:16.736672 containerd[1511]: time="2025-07-06T23:44:16.736643886Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.10\""
Jul 6 23:44:17.674584 containerd[1511]: time="2025-07-06T23:44:17.674523811Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 6 23:44:17.675060 containerd[1511]: time="2025-07-06T23:44:17.675032943Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.10: active requests=0, bytes read=17125068"
Jul 6 23:44:17.675913 containerd[1511]: time="2025-07-06T23:44:17.675884653Z" level=info msg="ImageCreate event name:\"sha256:23d79b83d912e2633bcb4f9f7b8b46024893e11d492a4249d8f1f8c9a26b7b2c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 6 23:44:17.678189 containerd[1511]: time="2025-07-06T23:44:17.678149473Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:284dc2a5cf6afc9b76e39ad4b79c680c23d289488517643b28784a06d0141272\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 6 23:44:17.679640 containerd[1511]: time="2025-07-06T23:44:17.679603083Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.10\" with image id \"sha256:23d79b83d912e2633bcb4f9f7b8b46024893e11d492a4249d8f1f8c9a26b7b2c\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.10\", repo digest \"registry.k8s.io/kube-scheduler@sha256:284dc2a5cf6afc9b76e39ad4b79c680c23d289488517643b28784a06d0141272\", size \"18660874\" in 942.926247ms"
Jul 6 23:44:17.679690 containerd[1511]: time="2025-07-06T23:44:17.679640480Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.10\" returns image reference \"sha256:23d79b83d912e2633bcb4f9f7b8b46024893e11d492a4249d8f1f8c9a26b7b2c\""
Jul 6 23:44:17.680139 containerd[1511]: time="2025-07-06T23:44:17.680114029Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.10\""
Jul 6 23:44:18.326888 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Jul 6 23:44:18.328385 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 6 23:44:18.488083 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 6 23:44:18.492316 (kubelet)[2017]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jul 6 23:44:18.542347 kubelet[2017]: E0706 23:44:18.542283 2017 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jul 6 23:44:18.545760 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jul 6 23:44:18.545903 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jul 6 23:44:18.546332 systemd[1]: kubelet.service: Consumed 155ms CPU time, 106.3M memory peak.
Jul 6 23:44:18.612893 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1689345643.mount: Deactivated successfully.
Jul 6 23:44:18.898047 containerd[1511]: time="2025-07-06T23:44:18.897930090Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 6 23:44:18.898625 containerd[1511]: time="2025-07-06T23:44:18.898592629Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.10: active requests=0, bytes read=26915959"
Jul 6 23:44:18.899587 containerd[1511]: time="2025-07-06T23:44:18.899520530Z" level=info msg="ImageCreate event name:\"sha256:dde5ff0da443b455e81aefc7bf6a216fdd659d1cbe13b8e8ac8129c3ecd27f89\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 6 23:44:18.903625 containerd[1511]: time="2025-07-06T23:44:18.903577979Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:bcbb293812bdf587b28ea98369a8c347ca84884160046296761acdf12b27029d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 6 23:44:18.904663 containerd[1511]: time="2025-07-06T23:44:18.904630206Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.10\" with image id \"sha256:dde5ff0da443b455e81aefc7bf6a216fdd659d1cbe13b8e8ac8129c3ecd27f89\", repo tag \"registry.k8s.io/kube-proxy:v1.31.10\", repo digest \"registry.k8s.io/kube-proxy@sha256:bcbb293812bdf587b28ea98369a8c347ca84884160046296761acdf12b27029d\", size \"26914976\" in 1.224485035s"
Jul 6 23:44:18.904733 containerd[1511]: time="2025-07-06T23:44:18.904665677Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.10\" returns image reference \"sha256:dde5ff0da443b455e81aefc7bf6a216fdd659d1cbe13b8e8ac8129c3ecd27f89\""
Jul 6 23:44:18.905215 containerd[1511]: time="2025-07-06T23:44:18.905186697Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\""
Jul 6 23:44:19.384627 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4032309815.mount: Deactivated successfully.
Jul 6 23:44:20.218587 containerd[1511]: time="2025-07-06T23:44:20.218512817Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 6 23:44:20.221265 containerd[1511]: time="2025-07-06T23:44:20.221210158Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=16951624"
Jul 6 23:44:20.223164 containerd[1511]: time="2025-07-06T23:44:20.223105348Z" level=info msg="ImageCreate event name:\"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 6 23:44:20.227434 containerd[1511]: time="2025-07-06T23:44:20.227385484Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 6 23:44:20.228463 containerd[1511]: time="2025-07-06T23:44:20.228423610Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"16948420\" in 1.32309251s"
Jul 6 23:44:20.228519 containerd[1511]: time="2025-07-06T23:44:20.228462885Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\""
Jul 6 23:44:20.229003 containerd[1511]: time="2025-07-06T23:44:20.228939858Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Jul 6 23:44:20.774562 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount258253336.mount: Deactivated successfully.
Jul 6 23:44:20.779608 containerd[1511]: time="2025-07-06T23:44:20.779540627Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jul 6 23:44:20.780891 containerd[1511]: time="2025-07-06T23:44:20.780832937Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268705"
Jul 6 23:44:20.781826 containerd[1511]: time="2025-07-06T23:44:20.781786884Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jul 6 23:44:20.785332 containerd[1511]: time="2025-07-06T23:44:20.785296707Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jul 6 23:44:20.785765 containerd[1511]: time="2025-07-06T23:44:20.785735008Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 556.616421ms"
Jul 6 23:44:20.785765 containerd[1511]: time="2025-07-06T23:44:20.785762103Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\""
Jul 6 23:44:20.786343 containerd[1511]: time="2025-07-06T23:44:20.786292864Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\""
Jul 6 23:44:21.311160 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2211767200.mount: Deactivated successfully.
Jul 6 23:44:22.729591 containerd[1511]: time="2025-07-06T23:44:22.729445222Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 6 23:44:22.730979 containerd[1511]: time="2025-07-06T23:44:22.730939120Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=66406467"
Jul 6 23:44:22.733594 containerd[1511]: time="2025-07-06T23:44:22.733460175Z" level=info msg="ImageCreate event name:\"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 6 23:44:22.737196 containerd[1511]: time="2025-07-06T23:44:22.737142098Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 6 23:44:22.738499 containerd[1511]: time="2025-07-06T23:44:22.738453459Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"66535646\" in 1.952128566s"
Jul 6 23:44:22.738499 containerd[1511]: time="2025-07-06T23:44:22.738495780Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\""
Jul 6 23:44:27.791044 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 6 23:44:27.791263 systemd[1]: kubelet.service: Consumed 155ms CPU time, 106.3M memory peak.
Jul 6 23:44:27.793724 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 6 23:44:27.816658 systemd[1]: Reload requested from client PID 2169 ('systemctl') (unit session-7.scope)...
Jul 6 23:44:27.816675 systemd[1]: Reloading...
Jul 6 23:44:27.883603 zram_generator::config[2208]: No configuration found.
Jul 6 23:44:27.964827 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 6 23:44:28.057970 systemd[1]: Reloading finished in 240 ms.
Jul 6 23:44:28.109235 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Jul 6 23:44:28.109325 systemd[1]: kubelet.service: Failed with result 'signal'.
Jul 6 23:44:28.109665 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 6 23:44:28.109729 systemd[1]: kubelet.service: Consumed 91ms CPU time, 95M memory peak.
Jul 6 23:44:28.111692 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 6 23:44:28.241214 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 6 23:44:28.269986 (kubelet)[2256]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Jul 6 23:44:28.311704 kubelet[2256]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jul 6 23:44:28.311704 kubelet[2256]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Jul 6 23:44:28.311704 kubelet[2256]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jul 6 23:44:28.311704 kubelet[2256]: I0706 23:44:28.311644 2256 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jul 6 23:44:29.285067 kubelet[2256]: I0706 23:44:29.285006 2256 server.go:491] "Kubelet version" kubeletVersion="v1.31.8"
Jul 6 23:44:29.285067 kubelet[2256]: I0706 23:44:29.285043 2256 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jul 6 23:44:29.285305 kubelet[2256]: I0706 23:44:29.285277 2256 server.go:934] "Client rotation is on, will bootstrap in background"
Jul 6 23:44:29.323476 kubelet[2256]: E0706 23:44:29.323408 2256 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.128:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.128:6443: connect: connection refused" logger="UnhandledError"
Jul 6 23:44:29.323952 kubelet[2256]: I0706 23:44:29.323909 2256 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jul 6 23:44:29.334025 kubelet[2256]: I0706 23:44:29.333995 2256 server.go:1431] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Jul 6 23:44:29.339615 kubelet[2256]: I0706 23:44:29.337654 2256 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Jul 6 23:44:29.339615 kubelet[2256]: I0706 23:44:29.338439 2256 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Jul 6 23:44:29.339615 kubelet[2256]: I0706 23:44:29.338632 2256 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jul 6 23:44:29.339615 kubelet[2256]: I0706 23:44:29.338663 2256 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Jul 6 23:44:29.339795 kubelet[2256]: I0706 23:44:29.338918 2256 topology_manager.go:138] "Creating topology manager with none policy"
Jul 6 23:44:29.339795 kubelet[2256]: I0706 23:44:29.338927 2256 container_manager_linux.go:300] "Creating device plugin manager"
Jul 6 23:44:29.339795 kubelet[2256]: I0706 23:44:29.339176 2256 state_mem.go:36] "Initialized new in-memory state store"
Jul 6 23:44:29.341171 kubelet[2256]: I0706 23:44:29.341133 2256 kubelet.go:408] "Attempting to sync node with API server"
Jul 6 23:44:29.341171 kubelet[2256]: I0706 23:44:29.341172 2256 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests"
Jul 6 23:44:29.341246 kubelet[2256]: I0706 23:44:29.341196 2256 kubelet.go:314] "Adding apiserver pod source"
Jul 6 23:44:29.341310 kubelet[2256]: I0706 23:44:29.341287 2256 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jul 6 23:44:29.345770 kubelet[2256]: W0706 23:44:29.345599 2256 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.128:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.128:6443: connect: connection refused
Jul 6 23:44:29.345770 kubelet[2256]: E0706 23:44:29.345660 2256 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.128:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.128:6443: connect: connection refused" logger="UnhandledError"
Jul 6 23:44:29.346665 kubelet[2256]: W0706 23:44:29.346622 2256 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.128:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.128:6443: connect: connection refused
Jul 6 23:44:29.346730 kubelet[2256]: E0706 23:44:29.346675 2256 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.128:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.128:6443: connect: connection refused" logger="UnhandledError"
Jul 6 23:44:29.347617 kubelet[2256]: I0706 23:44:29.347594 2256 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1"
Jul 6 23:44:29.348344 kubelet[2256]: I0706 23:44:29.348319 2256 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Jul 6 23:44:29.348493 kubelet[2256]: W0706 23:44:29.348483 2256 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Jul 6 23:44:29.349474 kubelet[2256]: I0706 23:44:29.349454 2256 server.go:1274] "Started kubelet"
Jul 6 23:44:29.350607 kubelet[2256]: I0706 23:44:29.350498 2256 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jul 6 23:44:29.350823 kubelet[2256]: I0706 23:44:29.350791 2256 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Jul 6 23:44:29.350823 kubelet[2256]: I0706 23:44:29.350814 2256 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jul 6 23:44:29.351496 kubelet[2256]: I0706 23:44:29.351412 2256 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jul 6 23:44:29.352454 kubelet[2256]: I0706 23:44:29.352378 2256 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Jul 6 23:44:29.352454 kubelet[2256]: I0706 23:44:29.352439 2256 server.go:449] "Adding debug handlers to kubelet server"
Jul 6 23:44:29.355872 kubelet[2256]: E0706 23:44:29.355809 2256 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Jul 6 23:44:29.355872 kubelet[2256]: I0706 23:44:29.355845 2256 volume_manager.go:289] "Starting Kubelet Volume Manager"
Jul 6 23:44:29.356753 kubelet[2256]: I0706 23:44:29.356075 2256 desired_state_of_world_populator.go:147] "Desired state populator starts to run"
Jul 6 23:44:29.356753 kubelet[2256]: I0706 23:44:29.356135 2256 reconciler.go:26] "Reconciler: start to sync state"
Jul 6 23:44:29.356753 kubelet[2256]: W0706 23:44:29.356640 2256 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.128:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.128:6443: connect: connection refused
Jul 6 23:44:29.356753 kubelet[2256]: E0706 23:44:29.356687 2256 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.128:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.128:6443: connect: connection refused" logger="UnhandledError"
Jul 6 23:44:29.356953 kubelet[2256]: E0706 23:44:29.355412 2256 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.128:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.128:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.184fce2c35f33846 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-07-06 23:44:29.34942727 +0000 UTC m=+1.076019411,LastTimestamp:2025-07-06 23:44:29.34942727 +0000 UTC m=+1.076019411,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Jul 6 23:44:29.357175 kubelet[2256]: E0706 23:44:29.357155 2256 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Jul 6 23:44:29.357276 kubelet[2256]: E0706 23:44:29.357238 2256 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.128:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.128:6443: connect: connection refused" interval="200ms"
Jul 6 23:44:29.357463 kubelet[2256]: I0706 23:44:29.357443 2256 factory.go:221] Registration of the systemd container factory successfully
Jul 6 23:44:29.357549 kubelet[2256]: I0706 23:44:29.357529 2256 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Jul 6 23:44:29.359302 kubelet[2256]: I0706 23:44:29.359277 2256 factory.go:221] Registration of the containerd container factory successfully
Jul 6 23:44:29.372976 kubelet[2256]: I0706 23:44:29.372927 2256 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Jul 6 23:44:29.373148 kubelet[2256]: I0706 23:44:29.373115 2256 cpu_manager.go:214] "Starting CPU manager" policy="none"
Jul 6 23:44:29.373405 kubelet[2256]: I0706 23:44:29.373336 2256 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Jul 6 23:44:29.373405 kubelet[2256]: I0706 23:44:29.373378 2256 state_mem.go:36] "Initialized new in-memory state store"
Jul 6 23:44:29.374682 kubelet[2256]: I0706 23:44:29.374623 2256 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Jul 6 23:44:29.374682 kubelet[2256]: I0706 23:44:29.374661 2256 status_manager.go:217] "Starting to sync pod status with apiserver"
Jul 6 23:44:29.374780 kubelet[2256]: I0706 23:44:29.374713 2256 kubelet.go:2321] "Starting kubelet main sync loop"
Jul 6 23:44:29.374780 kubelet[2256]: E0706 23:44:29.374755 2256 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Jul 6 23:44:29.375238 kubelet[2256]: W0706 23:44:29.375177 2256 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.128:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.128:6443: connect: connection refused
Jul 6 23:44:29.375238 kubelet[2256]: E0706 23:44:29.375211 2256 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.128:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.128:6443: connect: connection refused" logger="UnhandledError"
Jul 6 23:44:29.456399 kubelet[2256]: E0706 23:44:29.456365 2256 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Jul 6 23:44:29.475671 kubelet[2256]: E0706 23:44:29.475624 2256 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Jul 6 23:44:29.496216 kubelet[2256]: I0706 23:44:29.496192 2256 policy_none.go:49] "None policy: Start"
Jul 6 23:44:29.497013 kubelet[2256]: I0706 23:44:29.496979 2256 memory_manager.go:170] "Starting memorymanager" policy="None"
Jul 6 23:44:29.497013 kubelet[2256]: I0706 23:44:29.497015 2256 state_mem.go:35] "Initializing new in-memory state store"
Jul 6 23:44:29.504313 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Jul 6 23:44:29.518081 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Jul 6 23:44:29.521498 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Jul 6 23:44:29.539742 kubelet[2256]: I0706 23:44:29.539640 2256 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Jul 6 23:44:29.539919 kubelet[2256]: I0706 23:44:29.539894 2256 eviction_manager.go:189] "Eviction manager: starting control loop"
Jul 6 23:44:29.539971 kubelet[2256]: I0706 23:44:29.539913 2256 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Jul 6 23:44:29.540185 kubelet[2256]: I0706 23:44:29.540166 2256 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Jul 6 23:44:29.543415 kubelet[2256]: E0706 23:44:29.542935 2256 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Jul 6 23:44:29.558325 kubelet[2256]: E0706 23:44:29.558278 2256 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.128:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.128:6443: connect: connection refused" interval="400ms"
Jul 6 23:44:29.641772 kubelet[2256]: I0706 23:44:29.641697 2256 kubelet_node_status.go:72] "Attempting to register node" node="localhost"
Jul 6 23:44:29.642259 kubelet[2256]: E0706 23:44:29.642233 2256 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.128:6443/api/v1/nodes\": dial tcp 10.0.0.128:6443: connect: connection refused" node="localhost"
Jul 6 23:44:29.685954 systemd[1]: Created slice kubepods-burstable-pod667381d6571378b9d274640723eca452.slice - libcontainer container kubepods-burstable-pod667381d6571378b9d274640723eca452.slice.
Jul 6 23:44:29.711349 systemd[1]: Created slice kubepods-burstable-pod3f04709fe51ae4ab5abd58e8da771b74.slice - libcontainer container kubepods-burstable-pod3f04709fe51ae4ab5abd58e8da771b74.slice.
Jul 6 23:44:29.715946 systemd[1]: Created slice kubepods-burstable-podb35b56493416c25588cb530e37ffc065.slice - libcontainer container kubepods-burstable-podb35b56493416c25588cb530e37ffc065.slice.
Jul 6 23:44:29.757816 kubelet[2256]: I0706 23:44:29.757768 2256 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b35b56493416c25588cb530e37ffc065-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"b35b56493416c25588cb530e37ffc065\") " pod="kube-system/kube-scheduler-localhost"
Jul 6 23:44:29.757816 kubelet[2256]: I0706 23:44:29.757804 2256 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/667381d6571378b9d274640723eca452-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"667381d6571378b9d274640723eca452\") " pod="kube-system/kube-apiserver-localhost"
Jul 6 23:44:29.757967 kubelet[2256]: I0706 23:44:29.757841 2256 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/667381d6571378b9d274640723eca452-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"667381d6571378b9d274640723eca452\") " pod="kube-system/kube-apiserver-localhost"
Jul 6 23:44:29.757967 kubelet[2256]: I0706 23:44:29.757858 2256 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/667381d6571378b9d274640723eca452-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"667381d6571378b9d274640723eca452\") " pod="kube-system/kube-apiserver-localhost"
Jul 6 23:44:29.757967 kubelet[2256]: I0706 23:44:29.757875 2256 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost"
Jul 6 23:44:29.757967 kubelet[2256]: I0706 23:44:29.757894 2256 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost"
Jul 6 23:44:29.757967 kubelet[2256]: I0706 23:44:29.757909 2256 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost"
Jul 6 23:44:29.758069 kubelet[2256]: I0706 23:44:29.757934 2256 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost"
Jul 6 23:44:29.758069 kubelet[2256]: I0706 23:44:29.757954 2256 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost"
Jul 6 23:44:29.843812 kubelet[2256]: I0706 23:44:29.843709 2256 kubelet_node_status.go:72] "Attempting to register node" node="localhost"
Jul 6 23:44:29.844105 kubelet[2256]: E0706 23:44:29.844067 2256 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.128:6443/api/v1/nodes\": dial tcp 10.0.0.128:6443: connect: connection refused" node="localhost"
Jul 6 23:44:29.959727 kubelet[2256]: E0706 23:44:29.959670 2256 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.128:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.128:6443: connect: connection refused" interval="800ms"
Jul 6 23:44:30.008097 kubelet[2256]: E0706 23:44:30.008057 2256 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 6 23:44:30.008783 containerd[1511]: time="2025-07-06T23:44:30.008734674Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:667381d6571378b9d274640723eca452,Namespace:kube-system,Attempt:0,}"
Jul 6 23:44:30.014421 kubelet[2256]: E0706 23:44:30.014359 2256 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 6 23:44:30.014919 containerd[1511]: time="2025-07-06T23:44:30.014866810Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:3f04709fe51ae4ab5abd58e8da771b74,Namespace:kube-system,Attempt:0,}"
Jul 6 23:44:30.019168 kubelet[2256]: E0706 23:44:30.019099 2256 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 6 23:44:30.019860 containerd[1511]: time="2025-07-06T23:44:30.019685100Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:b35b56493416c25588cb530e37ffc065,Namespace:kube-system,Attempt:0,}"
Jul 6 23:44:30.036859 containerd[1511]: time="2025-07-06T23:44:30.036748117Z" level=info msg="connecting to shim 84abc631b02c08c56f76e2341a7cb071e4e0f082915e911c02e2cacca7ad6a49" address="unix:///run/containerd/s/bcc252a4b804e066f2454ad30e30acbe02c0f2fecddce34457f6f2d8e69f0825" namespace=k8s.io protocol=ttrpc version=3
Jul 6 23:44:30.051895 containerd[1511]: time="2025-07-06T23:44:30.051850959Z" level=info msg="connecting to shim 52bb40efca27c4189e90697163a88be6aacdf272465e8c91e876750900479393" address="unix:///run/containerd/s/333a31bdf5101d792f8f68b606f8833dd46e94fab6ce2c853d88eb9aab2566f9" namespace=k8s.io protocol=ttrpc version=3
Jul 6 23:44:30.064873 containerd[1511]: time="2025-07-06T23:44:30.064813949Z" level=info msg="connecting to shim b58dad3e32429a3ee7c0d39d0d69d2bf01078052580c5941f9ebf109003c0072" address="unix:///run/containerd/s/ca37ffbaf3baa0826b1464078e303ad30068b7c05707e4d46f3bafed18411224" namespace=k8s.io protocol=ttrpc version=3
Jul 6 23:44:30.076783 systemd[1]: Started cri-containerd-52bb40efca27c4189e90697163a88be6aacdf272465e8c91e876750900479393.scope - libcontainer container 52bb40efca27c4189e90697163a88be6aacdf272465e8c91e876750900479393.
Jul 6 23:44:30.078232 systemd[1]: Started cri-containerd-84abc631b02c08c56f76e2341a7cb071e4e0f082915e911c02e2cacca7ad6a49.scope - libcontainer container 84abc631b02c08c56f76e2341a7cb071e4e0f082915e911c02e2cacca7ad6a49.
Jul 6 23:44:30.096776 systemd[1]: Started cri-containerd-b58dad3e32429a3ee7c0d39d0d69d2bf01078052580c5941f9ebf109003c0072.scope - libcontainer container b58dad3e32429a3ee7c0d39d0d69d2bf01078052580c5941f9ebf109003c0072.
Jul 6 23:44:30.133233 containerd[1511]: time="2025-07-06T23:44:30.133141716Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:3f04709fe51ae4ab5abd58e8da771b74,Namespace:kube-system,Attempt:0,} returns sandbox id \"52bb40efca27c4189e90697163a88be6aacdf272465e8c91e876750900479393\""
Jul 6 23:44:30.134628 kubelet[2256]: E0706 23:44:30.134592 2256 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 6 23:44:30.136538 containerd[1511]: time="2025-07-06T23:44:30.136497846Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:667381d6571378b9d274640723eca452,Namespace:kube-system,Attempt:0,} returns sandbox id \"84abc631b02c08c56f76e2341a7cb071e4e0f082915e911c02e2cacca7ad6a49\""
Jul 6 23:44:30.137290 kubelet[2256]: E0706 23:44:30.137161 2256 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 6 23:44:30.137981 containerd[1511]: time="2025-07-06T23:44:30.137946589Z" level=info msg="CreateContainer within sandbox \"52bb40efca27c4189e90697163a88be6aacdf272465e8c91e876750900479393\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Jul 6 23:44:30.139207 containerd[1511]: time="2025-07-06T23:44:30.139171078Z" level=info msg="CreateContainer within sandbox \"84abc631b02c08c56f76e2341a7cb071e4e0f082915e911c02e2cacca7ad6a49\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Jul 6 23:44:30.147117 containerd[1511]: time="2025-07-06T23:44:30.147053513Z" level=info msg="Container 801b25dba4e532f88aa8e4eabf477bcc3410737ccac3a94fe7b57de75342d558: CDI devices from CRI Config.CDIDevices: []"
Jul 6 23:44:30.154015 containerd[1511]: time="2025-07-06T23:44:30.153968959Z" level=info msg="Container 4f301476dc2791dff907210f388faf05b5bedb0bdb4dbf86cb29f47f18d19841: CDI devices from CRI Config.CDIDevices: []"
Jul 6 23:44:30.157020 containerd[1511]: time="2025-07-06T23:44:30.156977551Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:b35b56493416c25588cb530e37ffc065,Namespace:kube-system,Attempt:0,} returns sandbox id \"b58dad3e32429a3ee7c0d39d0d69d2bf01078052580c5941f9ebf109003c0072\""
Jul 6 23:44:30.158181 kubelet[2256]: E0706 23:44:30.158144 2256 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 6 23:44:30.159712 containerd[1511]: time="2025-07-06T23:44:30.159683546Z" level=info msg="CreateContainer within sandbox \"b58dad3e32429a3ee7c0d39d0d69d2bf01078052580c5941f9ebf109003c0072\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Jul 6 23:44:30.166998 containerd[1511]: time="2025-07-06T23:44:30.166952977Z" level=info msg="CreateContainer within sandbox \"52bb40efca27c4189e90697163a88be6aacdf272465e8c91e876750900479393\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"801b25dba4e532f88aa8e4eabf477bcc3410737ccac3a94fe7b57de75342d558\""
Jul 6 23:44:30.167710 containerd[1511]: time="2025-07-06T23:44:30.167676447Z" level=info msg="StartContainer for \"801b25dba4e532f88aa8e4eabf477bcc3410737ccac3a94fe7b57de75342d558\""
Jul 6 23:44:30.169019 containerd[1511]: time="2025-07-06T23:44:30.168982043Z" level=info msg="connecting to shim 801b25dba4e532f88aa8e4eabf477bcc3410737ccac3a94fe7b57de75342d558" address="unix:///run/containerd/s/333a31bdf5101d792f8f68b606f8833dd46e94fab6ce2c853d88eb9aab2566f9" protocol=ttrpc version=3
Jul 6 23:44:30.174009 containerd[1511]: time="2025-07-06T23:44:30.173968794Z" level=info msg="CreateContainer within sandbox \"84abc631b02c08c56f76e2341a7cb071e4e0f082915e911c02e2cacca7ad6a49\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"4f301476dc2791dff907210f388faf05b5bedb0bdb4dbf86cb29f47f18d19841\""
Jul 6 23:44:30.174680 containerd[1511]: time="2025-07-06T23:44:30.174655456Z" level=info msg="StartContainer for \"4f301476dc2791dff907210f388faf05b5bedb0bdb4dbf86cb29f47f18d19841\""
Jul 6 23:44:30.176126 containerd[1511]: time="2025-07-06T23:44:30.176077164Z" level=info msg="connecting to shim 4f301476dc2791dff907210f388faf05b5bedb0bdb4dbf86cb29f47f18d19841" address="unix:///run/containerd/s/bcc252a4b804e066f2454ad30e30acbe02c0f2fecddce34457f6f2d8e69f0825" protocol=ttrpc version=3
Jul 6 23:44:30.178088 containerd[1511]: time="2025-07-06T23:44:30.178056805Z" level=info msg="Container a1aa5049a6139e82385070bcf70a24b7b9a40a7c817653a4feff08551bf810fc: CDI devices from CRI Config.CDIDevices: []"
Jul 6 23:44:30.186376 containerd[1511]: time="2025-07-06T23:44:30.186332878Z" level=info msg="CreateContainer within sandbox \"b58dad3e32429a3ee7c0d39d0d69d2bf01078052580c5941f9ebf109003c0072\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"a1aa5049a6139e82385070bcf70a24b7b9a40a7c817653a4feff08551bf810fc\""
Jul 6 23:44:30.186867 kubelet[2256]: W0706 23:44:30.186800 2256 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.128:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.128:6443: connect: connection refused
Jul 6 23:44:30.186985 kubelet[2256]: E0706 23:44:30.186879 2256 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.128:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.128:6443: connect: connection refused" logger="UnhandledError"
Jul 6 23:44:30.187673 containerd[1511]: time="2025-07-06T23:44:30.187390187Z" level=info msg="StartContainer for \"a1aa5049a6139e82385070bcf70a24b7b9a40a7c817653a4feff08551bf810fc\""
Jul 6 23:44:30.188777 systemd[1]: Started cri-containerd-801b25dba4e532f88aa8e4eabf477bcc3410737ccac3a94fe7b57de75342d558.scope - libcontainer container 801b25dba4e532f88aa8e4eabf477bcc3410737ccac3a94fe7b57de75342d558.
Jul 6 23:44:30.189633 containerd[1511]: time="2025-07-06T23:44:30.189603174Z" level=info msg="connecting to shim a1aa5049a6139e82385070bcf70a24b7b9a40a7c817653a4feff08551bf810fc" address="unix:///run/containerd/s/ca37ffbaf3baa0826b1464078e303ad30068b7c05707e4d46f3bafed18411224" protocol=ttrpc version=3
Jul 6 23:44:30.192760 systemd[1]: Started cri-containerd-4f301476dc2791dff907210f388faf05b5bedb0bdb4dbf86cb29f47f18d19841.scope - libcontainer container 4f301476dc2791dff907210f388faf05b5bedb0bdb4dbf86cb29f47f18d19841.
Jul 6 23:44:30.220826 systemd[1]: Started cri-containerd-a1aa5049a6139e82385070bcf70a24b7b9a40a7c817653a4feff08551bf810fc.scope - libcontainer container a1aa5049a6139e82385070bcf70a24b7b9a40a7c817653a4feff08551bf810fc.
Jul 6 23:44:30.247770 kubelet[2256]: I0706 23:44:30.247152 2256 kubelet_node_status.go:72] "Attempting to register node" node="localhost"
Jul 6 23:44:30.248460 kubelet[2256]: E0706 23:44:30.248169 2256 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.128:6443/api/v1/nodes\": dial tcp 10.0.0.128:6443: connect: connection refused" node="localhost"
Jul 6 23:44:30.258039 containerd[1511]: time="2025-07-06T23:44:30.258001074Z" level=info msg="StartContainer for \"801b25dba4e532f88aa8e4eabf477bcc3410737ccac3a94fe7b57de75342d558\" returns successfully"
Jul 6 23:44:30.284845 containerd[1511]: time="2025-07-06T23:44:30.284423827Z" level=info msg="StartContainer for \"4f301476dc2791dff907210f388faf05b5bedb0bdb4dbf86cb29f47f18d19841\" returns successfully"
Jul 6 23:44:30.289207 kubelet[2256]: W0706 23:44:30.289149 2256 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.128:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.128:6443: connect: connection refused
Jul 6 23:44:30.289440 kubelet[2256]: E0706 23:44:30.289395 2256 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.128:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.128:6443: connect: connection refused" logger="UnhandledError"
Jul 6 23:44:30.322010 containerd[1511]: time="2025-07-06T23:44:30.321951490Z" level=info msg="StartContainer for \"a1aa5049a6139e82385070bcf70a24b7b9a40a7c817653a4feff08551bf810fc\" returns successfully"
Jul 6 23:44:30.384893 kubelet[2256]: E0706 23:44:30.384782 2256 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 6 23:44:30.392159 kubelet[2256]: E0706 23:44:30.392135 2256 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 6 23:44:30.394894 kubelet[2256]: E0706 23:44:30.394830 2256 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 6 23:44:30.443595 kubelet[2256]: W0706 23:44:30.443174 2256 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.128:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.128:6443: connect: connection refused
Jul 6 23:44:30.443595 kubelet[2256]: E0706 23:44:30.443248 2256 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.128:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.128:6443: connect: connection refused" logger="UnhandledError"
Jul 6 23:44:31.051706 kubelet[2256]: I0706 23:44:31.051667 2256 kubelet_node_status.go:72] "Attempting to register node" node="localhost"
Jul 6 23:44:31.399694 kubelet[2256]: E0706 23:44:31.399595 2256 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 6 23:44:31.471828 kubelet[2256]: E0706 23:44:31.471798 2256 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 6 23:44:31.726286 kubelet[2256]: E0706 23:44:31.726179 2256 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost"
Jul 6 23:44:31.833013 kubelet[2256]: I0706 23:44:31.832969 2256 kubelet_node_status.go:75] "Successfully registered node" node="localhost"
Jul 6 23:44:32.343906 kubelet[2256]: I0706 23:44:32.343861 2256 apiserver.go:52] "Watching apiserver"
Jul 6 23:44:32.356651 kubelet[2256]: I0706 23:44:32.356614 2256 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world"
Jul 6 23:44:33.723741 kubelet[2256]: E0706 23:44:33.723691 2256 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 6 23:44:33.802362 systemd[1]: Reload requested from client PID 2535 ('systemctl') (unit session-7.scope)...
Jul 6 23:44:33.802378 systemd[1]: Reloading...
Jul 6 23:44:33.869598 zram_generator::config[2578]: No configuration found.
Jul 6 23:44:33.940500 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 6 23:44:34.037533 systemd[1]: Reloading finished in 234 ms.
Jul 6 23:44:34.069733 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 6 23:44:34.080698 systemd[1]: kubelet.service: Deactivated successfully.
Jul 6 23:44:34.080952 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 6 23:44:34.081023 systemd[1]: kubelet.service: Consumed 1.487s CPU time, 127.3M memory peak.
Jul 6 23:44:34.082832 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 6 23:44:34.230093 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 6 23:44:34.234625 (kubelet)[2620]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Jul 6 23:44:34.274066 kubelet[2620]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jul 6 23:44:34.274066 kubelet[2620]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Jul 6 23:44:34.274066 kubelet[2620]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jul 6 23:44:34.274066 kubelet[2620]: I0706 23:44:34.273530 2620 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jul 6 23:44:34.280401 kubelet[2620]: I0706 23:44:34.279979 2620 server.go:491] "Kubelet version" kubeletVersion="v1.31.8"
Jul 6 23:44:34.280401 kubelet[2620]: I0706 23:44:34.280012 2620 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jul 6 23:44:34.280401 kubelet[2620]: I0706 23:44:34.280238 2620 server.go:934] "Client rotation is on, will bootstrap in background"
Jul 6 23:44:34.281558 kubelet[2620]: I0706 23:44:34.281528 2620 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Jul 6 23:44:34.283579 kubelet[2620]: I0706 23:44:34.283521 2620 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jul 6 23:44:34.287903 kubelet[2620]: I0706 23:44:34.287808 2620 server.go:1431] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Jul 6 23:44:34.291403 kubelet[2620]: I0706 23:44:34.291323 2620 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Jul 6 23:44:34.291949 kubelet[2620]: I0706 23:44:34.291925 2620 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Jul 6 23:44:34.292154 kubelet[2620]: I0706 23:44:34.292109 2620 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jul 6 23:44:34.292539 kubelet[2620]: I0706 23:44:34.292148 2620 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Jul 6 23:44:34.292539 kubelet[2620]: I0706 23:44:34.292491 2620 topology_manager.go:138] "Creating topology manager with none policy"
Jul 6 23:44:34.292539 kubelet[2620]: I0706 23:44:34.292502 2620 container_manager_linux.go:300] "Creating device plugin manager"
Jul 6 23:44:34.292937 kubelet[2620]: I0706 23:44:34.292556 2620 state_mem.go:36] "Initialized new in-memory state store"
Jul 6 23:44:34.292937 kubelet[2620]: I0706 23:44:34.292697 2620 kubelet.go:408] "Attempting to sync node with API server"
Jul 6 23:44:34.292937 kubelet[2620]: I0706 23:44:34.292711 2620 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests"
Jul 6 23:44:34.292937 kubelet[2620]: I0706 23:44:34.292749 2620 kubelet.go:314] "Adding apiserver pod source"
Jul 6 23:44:34.292937 kubelet[2620]: I0706 23:44:34.292767 2620 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jul 6 23:44:34.295966 kubelet[2620]: I0706 23:44:34.295944 2620 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1"
Jul 6 23:44:34.297571 kubelet[2620]: I0706 23:44:34.296605 2620 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Jul 6 23:44:34.297571 kubelet[2620]: I0706 23:44:34.297181 2620 server.go:1274] "Started kubelet"
Jul 6 23:44:34.299596 kubelet[2620]: I0706 23:44:34.298059 2620 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Jul 6 23:44:34.299596 kubelet[2620]: I0706 23:44:34.299003 2620 server.go:449] "Adding debug handlers to kubelet server"
Jul 6 23:44:34.300889 kubelet[2620]: I0706 23:44:34.300047 2620 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jul 6 23:44:34.300889 kubelet[2620]: I0706 23:44:34.300354 2620 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jul 6 23:44:34.303397 kubelet[2620]: I0706 23:44:34.302832 2620 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jul 6 23:44:34.305621 kubelet[2620]: E0706 23:44:34.305595 2620 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Jul 6 23:44:34.311412 kubelet[2620]: I0706 23:44:34.310113 2620 volume_manager.go:289] "Starting Kubelet Volume Manager"
Jul 6 23:44:34.311412 kubelet[2620]: I0706 23:44:34.303419 2620 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Jul 6 23:44:34.311412 kubelet[2620]: I0706 23:44:34.310312 2620 desired_state_of_world_populator.go:147] "Desired state populator starts to run"
Jul 6 23:44:34.311412 kubelet[2620]: I0706 23:44:34.310436 2620 reconciler.go:26] "Reconciler: start to sync state"
Jul 6 23:44:34.314838 kubelet[2620]: E0706 23:44:34.313127 2620 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Jul 6 23:44:34.315303 kubelet[2620]: I0706 23:44:34.315277 2620 factory.go:221] Registration of the systemd container factory successfully
Jul 6 23:44:34.315404 kubelet[2620]: I0706 23:44:34.315382 2620 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Jul 6 23:44:34.317032 kubelet[2620]: I0706 23:44:34.316996 2620 factory.go:221] Registration of the containerd container factory successfully
Jul 6 23:44:34.318920 kubelet[2620]: I0706 23:44:34.318887 2620 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Jul 6 23:44:34.321008 kubelet[2620]: I0706 23:44:34.320982 2620 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Jul 6 23:44:34.321086 kubelet[2620]: I0706 23:44:34.321077 2620 status_manager.go:217] "Starting to sync pod status with apiserver"
Jul 6 23:44:34.321188 kubelet[2620]: I0706 23:44:34.321177 2620 kubelet.go:2321] "Starting kubelet main sync loop"
Jul 6 23:44:34.321290 kubelet[2620]: E0706 23:44:34.321272 2620 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Jul 6 23:44:34.350327 kubelet[2620]: I0706 23:44:34.350295 2620 cpu_manager.go:214] "Starting CPU manager" policy="none"
Jul 6 23:44:34.350327 kubelet[2620]: I0706 23:44:34.350316 2620 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Jul 6 23:44:34.350327 kubelet[2620]: I0706 23:44:34.350337 2620 state_mem.go:36] "Initialized new in-memory state store"
Jul 6 23:44:34.350510 kubelet[2620]: I0706 23:44:34.350489 2620 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Jul 6 23:44:34.350542 kubelet[2620]: I0706 23:44:34.350507 2620 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Jul 6 23:44:34.350542 kubelet[2620]: I0706 23:44:34.350526 2620 policy_none.go:49] "None policy: Start"
Jul 6 23:44:34.351280 kubelet[2620]: I0706 23:44:34.351263 2620 memory_manager.go:170] "Starting memorymanager" policy="None"
Jul 6 23:44:34.351324 kubelet[2620]: I0706 23:44:34.351288 2620 state_mem.go:35] "Initializing new in-memory state store"
Jul 6 23:44:34.351442 kubelet[2620]: I0706 23:44:34.351430 2620 state_mem.go:75] "Updated machine memory state"
Jul 6 23:44:34.355484 kubelet[2620]: I0706 23:44:34.355460 2620
manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 6 23:44:34.356017 kubelet[2620]: I0706 23:44:34.355994 2620 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 6 23:44:34.356682 kubelet[2620]: I0706 23:44:34.356012 2620 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 6 23:44:34.357906 kubelet[2620]: I0706 23:44:34.357408 2620 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 6 23:44:34.430217 kubelet[2620]: E0706 23:44:34.430184 2620 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Jul 6 23:44:34.460899 kubelet[2620]: I0706 23:44:34.460856 2620 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jul 6 23:44:34.466729 kubelet[2620]: I0706 23:44:34.466668 2620 kubelet_node_status.go:111] "Node was previously registered" node="localhost" Jul 6 23:44:34.466729 kubelet[2620]: I0706 23:44:34.466748 2620 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Jul 6 23:44:34.511127 kubelet[2620]: I0706 23:44:34.511082 2620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 6 23:44:34.511127 kubelet[2620]: I0706 23:44:34.511133 2620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " 
pod="kube-system/kube-controller-manager-localhost" Jul 6 23:44:34.511280 kubelet[2620]: I0706 23:44:34.511157 2620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b35b56493416c25588cb530e37ffc065-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"b35b56493416c25588cb530e37ffc065\") " pod="kube-system/kube-scheduler-localhost" Jul 6 23:44:34.511280 kubelet[2620]: I0706 23:44:34.511173 2620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/667381d6571378b9d274640723eca452-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"667381d6571378b9d274640723eca452\") " pod="kube-system/kube-apiserver-localhost" Jul 6 23:44:34.511280 kubelet[2620]: I0706 23:44:34.511198 2620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/667381d6571378b9d274640723eca452-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"667381d6571378b9d274640723eca452\") " pod="kube-system/kube-apiserver-localhost" Jul 6 23:44:34.511280 kubelet[2620]: I0706 23:44:34.511214 2620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 6 23:44:34.511280 kubelet[2620]: I0706 23:44:34.511228 2620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " 
pod="kube-system/kube-controller-manager-localhost" Jul 6 23:44:34.511395 kubelet[2620]: I0706 23:44:34.511242 2620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/667381d6571378b9d274640723eca452-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"667381d6571378b9d274640723eca452\") " pod="kube-system/kube-apiserver-localhost" Jul 6 23:44:34.511395 kubelet[2620]: I0706 23:44:34.511257 2620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 6 23:44:34.729729 kubelet[2620]: E0706 23:44:34.729619 2620 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 6 23:44:34.730932 kubelet[2620]: E0706 23:44:34.730848 2620 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 6 23:44:34.730932 kubelet[2620]: E0706 23:44:34.730863 2620 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 6 23:44:34.808144 sudo[2655]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Jul 6 23:44:34.808404 sudo[2655]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Jul 6 23:44:35.254757 sudo[2655]: pam_unix(sudo:session): session closed for user root Jul 6 23:44:35.293834 kubelet[2620]: I0706 23:44:35.293765 2620 apiserver.go:52] "Watching apiserver" Jul 6 23:44:35.311342 kubelet[2620]: 
I0706 23:44:35.311299 2620 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Jul 6 23:44:35.334639 kubelet[2620]: E0706 23:44:35.334356 2620 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 6 23:44:35.336210 kubelet[2620]: E0706 23:44:35.336050 2620 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 6 23:44:35.342363 kubelet[2620]: E0706 23:44:35.342225 2620 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Jul 6 23:44:35.342666 kubelet[2620]: E0706 23:44:35.342406 2620 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 6 23:44:35.408589 kubelet[2620]: I0706 23:44:35.408506 2620 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.408487509 podStartE2EDuration="1.408487509s" podCreationTimestamp="2025-07-06 23:44:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-06 23:44:35.401660625 +0000 UTC m=+1.163411015" watchObservedRunningTime="2025-07-06 23:44:35.408487509 +0000 UTC m=+1.170237899" Jul 6 23:44:35.408968 kubelet[2620]: I0706 23:44:35.408915 2620 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.408868165 podStartE2EDuration="1.408868165s" podCreationTimestamp="2025-07-06 23:44:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 
+0000 UTC" observedRunningTime="2025-07-06 23:44:35.378730124 +0000 UTC m=+1.140480514" watchObservedRunningTime="2025-07-06 23:44:35.408868165 +0000 UTC m=+1.170618515" Jul 6 23:44:35.424636 kubelet[2620]: I0706 23:44:35.424562 2620 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=2.424544055 podStartE2EDuration="2.424544055s" podCreationTimestamp="2025-07-06 23:44:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-06 23:44:35.415935691 +0000 UTC m=+1.177686041" watchObservedRunningTime="2025-07-06 23:44:35.424544055 +0000 UTC m=+1.186294445" Jul 6 23:44:36.336409 kubelet[2620]: E0706 23:44:36.336324 2620 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 6 23:44:37.165432 sudo[1708]: pam_unix(sudo:session): session closed for user root Jul 6 23:44:37.167886 sshd[1707]: Connection closed by 10.0.0.1 port 57684 Jul 6 23:44:37.168546 sshd-session[1705]: pam_unix(sshd:session): session closed for user core Jul 6 23:44:37.173053 systemd-logind[1485]: Session 7 logged out. Waiting for processes to exit. Jul 6 23:44:37.173173 systemd[1]: sshd@6-10.0.0.128:22-10.0.0.1:57684.service: Deactivated successfully. Jul 6 23:44:37.176211 systemd[1]: session-7.scope: Deactivated successfully. Jul 6 23:44:37.176493 systemd[1]: session-7.scope: Consumed 7.692s CPU time, 271.7M memory peak. Jul 6 23:44:37.180229 systemd-logind[1485]: Removed session 7. 
Jul 6 23:44:37.652462 kubelet[2620]: E0706 23:44:37.652370 2620 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 6 23:44:40.218080 kubelet[2620]: I0706 23:44:40.218050 2620 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Jul 6 23:44:40.218514 containerd[1511]: time="2025-07-06T23:44:40.218482816Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Jul 6 23:44:40.219090 kubelet[2620]: I0706 23:44:40.218671 2620 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Jul 6 23:44:40.710442 kubelet[2620]: E0706 23:44:40.710410 2620 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 6 23:44:41.111316 systemd[1]: Created slice kubepods-besteffort-pod2fe6c89b_bc09_4ec5_a370_dbf19770fed1.slice - libcontainer container kubepods-besteffort-pod2fe6c89b_bc09_4ec5_a370_dbf19770fed1.slice.
Jul 6 23:44:41.139720 systemd[1]: Created slice kubepods-burstable-pod3704ddd3_4fa6_40db_9488_8da98e53077c.slice - libcontainer container kubepods-burstable-pod3704ddd3_4fa6_40db_9488_8da98e53077c.slice.
Jul 6 23:44:41.153866 kubelet[2620]: I0706 23:44:41.153821 2620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2fe6c89b-bc09-4ec5-a370-dbf19770fed1-lib-modules\") pod \"kube-proxy-tb8gm\" (UID: \"2fe6c89b-bc09-4ec5-a370-dbf19770fed1\") " pod="kube-system/kube-proxy-tb8gm"
Jul 6 23:44:41.153866 kubelet[2620]: I0706 23:44:41.153865 2620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/3704ddd3-4fa6-40db-9488-8da98e53077c-etc-cni-netd\") pod \"cilium-mmk9r\" (UID: \"3704ddd3-4fa6-40db-9488-8da98e53077c\") " pod="kube-system/cilium-mmk9r"
Jul 6 23:44:41.154032 kubelet[2620]: I0706 23:44:41.153885 2620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/3704ddd3-4fa6-40db-9488-8da98e53077c-host-proc-sys-kernel\") pod \"cilium-mmk9r\" (UID: \"3704ddd3-4fa6-40db-9488-8da98e53077c\") " pod="kube-system/cilium-mmk9r"
Jul 6 23:44:41.154032 kubelet[2620]: I0706 23:44:41.153904 2620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/3704ddd3-4fa6-40db-9488-8da98e53077c-bpf-maps\") pod \"cilium-mmk9r\" (UID: \"3704ddd3-4fa6-40db-9488-8da98e53077c\") " pod="kube-system/cilium-mmk9r"
Jul 6 23:44:41.154032 kubelet[2620]: I0706 23:44:41.153920 2620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/3704ddd3-4fa6-40db-9488-8da98e53077c-cilium-cgroup\") pod \"cilium-mmk9r\" (UID: \"3704ddd3-4fa6-40db-9488-8da98e53077c\") " pod="kube-system/cilium-mmk9r"
Jul 6 23:44:41.154032 kubelet[2620]: I0706 23:44:41.153934 2620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/3704ddd3-4fa6-40db-9488-8da98e53077c-clustermesh-secrets\") pod \"cilium-mmk9r\" (UID: \"3704ddd3-4fa6-40db-9488-8da98e53077c\") " pod="kube-system/cilium-mmk9r"
Jul 6 23:44:41.154032 kubelet[2620]: I0706 23:44:41.153948 2620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/2fe6c89b-bc09-4ec5-a370-dbf19770fed1-kube-proxy\") pod \"kube-proxy-tb8gm\" (UID: \"2fe6c89b-bc09-4ec5-a370-dbf19770fed1\") " pod="kube-system/kube-proxy-tb8gm"
Jul 6 23:44:41.154126 kubelet[2620]: I0706 23:44:41.153962 2620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9kpt8\" (UniqueName: \"kubernetes.io/projected/2fe6c89b-bc09-4ec5-a370-dbf19770fed1-kube-api-access-9kpt8\") pod \"kube-proxy-tb8gm\" (UID: \"2fe6c89b-bc09-4ec5-a370-dbf19770fed1\") " pod="kube-system/kube-proxy-tb8gm"
Jul 6 23:44:41.154126 kubelet[2620]: I0706 23:44:41.153980 2620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/3704ddd3-4fa6-40db-9488-8da98e53077c-hubble-tls\") pod \"cilium-mmk9r\" (UID: \"3704ddd3-4fa6-40db-9488-8da98e53077c\") " pod="kube-system/cilium-mmk9r"
Jul 6 23:44:41.154126 kubelet[2620]: I0706 23:44:41.153996 2620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5zth4\" (UniqueName: \"kubernetes.io/projected/3704ddd3-4fa6-40db-9488-8da98e53077c-kube-api-access-5zth4\") pod \"cilium-mmk9r\" (UID: \"3704ddd3-4fa6-40db-9488-8da98e53077c\") " pod="kube-system/cilium-mmk9r"
Jul 6 23:44:41.154126 kubelet[2620]: I0706 23:44:41.154045 2620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2fe6c89b-bc09-4ec5-a370-dbf19770fed1-xtables-lock\") pod \"kube-proxy-tb8gm\" (UID: \"2fe6c89b-bc09-4ec5-a370-dbf19770fed1\") " pod="kube-system/kube-proxy-tb8gm"
Jul 6 23:44:41.154126 kubelet[2620]: I0706 23:44:41.154078 2620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/3704ddd3-4fa6-40db-9488-8da98e53077c-cni-path\") pod \"cilium-mmk9r\" (UID: \"3704ddd3-4fa6-40db-9488-8da98e53077c\") " pod="kube-system/cilium-mmk9r"
Jul 6 23:44:41.154217 kubelet[2620]: I0706 23:44:41.154100 2620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3704ddd3-4fa6-40db-9488-8da98e53077c-xtables-lock\") pod \"cilium-mmk9r\" (UID: \"3704ddd3-4fa6-40db-9488-8da98e53077c\") " pod="kube-system/cilium-mmk9r"
Jul 6 23:44:41.154217 kubelet[2620]: I0706 23:44:41.154116 2620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/3704ddd3-4fa6-40db-9488-8da98e53077c-host-proc-sys-net\") pod \"cilium-mmk9r\" (UID: \"3704ddd3-4fa6-40db-9488-8da98e53077c\") " pod="kube-system/cilium-mmk9r"
Jul 6 23:44:41.154217 kubelet[2620]: I0706 23:44:41.154143 2620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/3704ddd3-4fa6-40db-9488-8da98e53077c-cilium-run\") pod \"cilium-mmk9r\" (UID: \"3704ddd3-4fa6-40db-9488-8da98e53077c\") " pod="kube-system/cilium-mmk9r"
Jul 6 23:44:41.154217 kubelet[2620]: I0706 23:44:41.154176 2620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/3704ddd3-4fa6-40db-9488-8da98e53077c-hostproc\") pod \"cilium-mmk9r\" (UID: \"3704ddd3-4fa6-40db-9488-8da98e53077c\") " pod="kube-system/cilium-mmk9r"
Jul 6 23:44:41.154217 kubelet[2620]: I0706 23:44:41.154190 2620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3704ddd3-4fa6-40db-9488-8da98e53077c-lib-modules\") pod \"cilium-mmk9r\" (UID: \"3704ddd3-4fa6-40db-9488-8da98e53077c\") " pod="kube-system/cilium-mmk9r"
Jul 6 23:44:41.154217 kubelet[2620]: I0706 23:44:41.154205 2620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/3704ddd3-4fa6-40db-9488-8da98e53077c-cilium-config-path\") pod \"cilium-mmk9r\" (UID: \"3704ddd3-4fa6-40db-9488-8da98e53077c\") " pod="kube-system/cilium-mmk9r"
Jul 6 23:44:41.246402 systemd[1]: Created slice kubepods-besteffort-pod174a76bf_d4fa_4d7d_b8f8_15c25a927459.slice - libcontainer container kubepods-besteffort-pod174a76bf_d4fa_4d7d_b8f8_15c25a927459.slice.
Jul 6 23:44:41.255123 kubelet[2620]: I0706 23:44:41.254712 2620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/174a76bf-d4fa-4d7d-b8f8-15c25a927459-cilium-config-path\") pod \"cilium-operator-5d85765b45-km7pm\" (UID: \"174a76bf-d4fa-4d7d-b8f8-15c25a927459\") " pod="kube-system/cilium-operator-5d85765b45-km7pm"
Jul 6 23:44:41.255123 kubelet[2620]: I0706 23:44:41.254761 2620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xpkmv\" (UniqueName: \"kubernetes.io/projected/174a76bf-d4fa-4d7d-b8f8-15c25a927459-kube-api-access-xpkmv\") pod \"cilium-operator-5d85765b45-km7pm\" (UID: \"174a76bf-d4fa-4d7d-b8f8-15c25a927459\") " pod="kube-system/cilium-operator-5d85765b45-km7pm"
Jul 6 23:44:41.344062 kubelet[2620]: E0706 23:44:41.343999 2620 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 6 23:44:41.435496 kubelet[2620]: E0706 23:44:41.435248 2620 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 6 23:44:41.435910 containerd[1511]: time="2025-07-06T23:44:41.435856321Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-tb8gm,Uid:2fe6c89b-bc09-4ec5-a370-dbf19770fed1,Namespace:kube-system,Attempt:0,}"
Jul 6 23:44:41.445385 kubelet[2620]: E0706 23:44:41.445337 2620 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 6 23:44:41.446212 containerd[1511]: time="2025-07-06T23:44:41.446158209Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-mmk9r,Uid:3704ddd3-4fa6-40db-9488-8da98e53077c,Namespace:kube-system,Attempt:0,}"
Jul 6 23:44:41.451375 containerd[1511]: time="2025-07-06T23:44:41.451254306Z" level=info msg="connecting to shim f1bf14fcd2e4159d72c2554e3465ad8fdf35a3dfab8e69ed21721ea2af171c62" address="unix:///run/containerd/s/a6e5d7dca10fea1e3dc273520c57a251177884d13b7fdd8a9899ece92554a936" namespace=k8s.io protocol=ttrpc version=3
Jul 6 23:44:41.466227 containerd[1511]: time="2025-07-06T23:44:41.466179255Z" level=info msg="connecting to shim 4538a34f5408c526df6d714df0d1216b8d9a7dec9f81186e792cd73d62f98ac9" address="unix:///run/containerd/s/9f86a92e65e5a0d46c940224758c2bf8b845f9f492d5505fbdc2b5fa290efbba" namespace=k8s.io protocol=ttrpc version=3
Jul 6 23:44:41.480782 systemd[1]: Started cri-containerd-f1bf14fcd2e4159d72c2554e3465ad8fdf35a3dfab8e69ed21721ea2af171c62.scope - libcontainer container f1bf14fcd2e4159d72c2554e3465ad8fdf35a3dfab8e69ed21721ea2af171c62.
Jul 6 23:44:41.488003 systemd[1]: Started cri-containerd-4538a34f5408c526df6d714df0d1216b8d9a7dec9f81186e792cd73d62f98ac9.scope - libcontainer container 4538a34f5408c526df6d714df0d1216b8d9a7dec9f81186e792cd73d62f98ac9.
Jul 6 23:44:41.510189 containerd[1511]: time="2025-07-06T23:44:41.510145940Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-tb8gm,Uid:2fe6c89b-bc09-4ec5-a370-dbf19770fed1,Namespace:kube-system,Attempt:0,} returns sandbox id \"f1bf14fcd2e4159d72c2554e3465ad8fdf35a3dfab8e69ed21721ea2af171c62\""
Jul 6 23:44:41.511036 kubelet[2620]: E0706 23:44:41.511009 2620 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 6 23:44:41.516993 containerd[1511]: time="2025-07-06T23:44:41.516944685Z" level=info msg="CreateContainer within sandbox \"f1bf14fcd2e4159d72c2554e3465ad8fdf35a3dfab8e69ed21721ea2af171c62\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Jul 6 23:44:41.521912 containerd[1511]: time="2025-07-06T23:44:41.521867015Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-mmk9r,Uid:3704ddd3-4fa6-40db-9488-8da98e53077c,Namespace:kube-system,Attempt:0,} returns sandbox id \"4538a34f5408c526df6d714df0d1216b8d9a7dec9f81186e792cd73d62f98ac9\""
Jul 6 23:44:41.522544 kubelet[2620]: E0706 23:44:41.522487 2620 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 6 23:44:41.523930 containerd[1511]: time="2025-07-06T23:44:41.523592754Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\""
Jul 6 23:44:41.532869 containerd[1511]: time="2025-07-06T23:44:41.532823869Z" level=info msg="Container baaca4ad192e7009199daaeed5f19ced4c3196f162cde64c7721120433bb58b8: CDI devices from CRI Config.CDIDevices: []"
Jul 6 23:44:41.539869 containerd[1511]: time="2025-07-06T23:44:41.539821592Z" level=info msg="CreateContainer within sandbox \"f1bf14fcd2e4159d72c2554e3465ad8fdf35a3dfab8e69ed21721ea2af171c62\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"baaca4ad192e7009199daaeed5f19ced4c3196f162cde64c7721120433bb58b8\""
Jul 6 23:44:41.540890 containerd[1511]: time="2025-07-06T23:44:41.540785992Z" level=info msg="StartContainer for \"baaca4ad192e7009199daaeed5f19ced4c3196f162cde64c7721120433bb58b8\""
Jul 6 23:44:41.543069 containerd[1511]: time="2025-07-06T23:44:41.543031150Z" level=info msg="connecting to shim baaca4ad192e7009199daaeed5f19ced4c3196f162cde64c7721120433bb58b8" address="unix:///run/containerd/s/a6e5d7dca10fea1e3dc273520c57a251177884d13b7fdd8a9899ece92554a936" protocol=ttrpc version=3
Jul 6 23:44:41.551799 kubelet[2620]: E0706 23:44:41.551698 2620 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 6 23:44:41.553300 containerd[1511]: time="2025-07-06T23:44:41.553249516Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-km7pm,Uid:174a76bf-d4fa-4d7d-b8f8-15c25a927459,Namespace:kube-system,Attempt:0,}"
Jul 6 23:44:41.572885 systemd[1]: Started cri-containerd-baaca4ad192e7009199daaeed5f19ced4c3196f162cde64c7721120433bb58b8.scope - libcontainer container baaca4ad192e7009199daaeed5f19ced4c3196f162cde64c7721120433bb58b8.
Jul 6 23:44:41.574353 containerd[1511]: time="2025-07-06T23:44:41.574217394Z" level=info msg="connecting to shim 72dac2e406a7715e89329b0ca6f1cb73e79d55a19be50ac1834ca82c2cf31793" address="unix:///run/containerd/s/2a8ccc18618eb6d297e6b90fb9dc3dffb57e066a24484da1f61ff85c1a046c52" namespace=k8s.io protocol=ttrpc version=3
Jul 6 23:44:41.600764 systemd[1]: Started cri-containerd-72dac2e406a7715e89329b0ca6f1cb73e79d55a19be50ac1834ca82c2cf31793.scope - libcontainer container 72dac2e406a7715e89329b0ca6f1cb73e79d55a19be50ac1834ca82c2cf31793.
Jul 6 23:44:41.626127 containerd[1511]: time="2025-07-06T23:44:41.626089254Z" level=info msg="StartContainer for \"baaca4ad192e7009199daaeed5f19ced4c3196f162cde64c7721120433bb58b8\" returns successfully"
Jul 6 23:44:41.645542 containerd[1511]: time="2025-07-06T23:44:41.645497876Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-km7pm,Uid:174a76bf-d4fa-4d7d-b8f8-15c25a927459,Namespace:kube-system,Attempt:0,} returns sandbox id \"72dac2e406a7715e89329b0ca6f1cb73e79d55a19be50ac1834ca82c2cf31793\""
Jul 6 23:44:41.646386 kubelet[2620]: E0706 23:44:41.646359 2620 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 6 23:44:42.350798 kubelet[2620]: E0706 23:44:42.350734 2620 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 6 23:44:45.055423 kubelet[2620]: E0706 23:44:45.055207 2620 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 6 23:44:45.062625 kubelet[2620]: I0706 23:44:45.062279 2620 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-tb8gm" podStartSLOduration=4.062261705 podStartE2EDuration="4.062261705s" podCreationTimestamp="2025-07-06 23:44:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-06 23:44:42.363294523 +0000 UTC m=+8.125044913" watchObservedRunningTime="2025-07-06 23:44:45.062261705 +0000 UTC m=+10.824012095"
Jul 6 23:44:47.669413 kubelet[2620]: E0706 23:44:47.668832 2620 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 6 23:44:50.386311 update_engine[1498]: I20250706 23:44:50.386219 1498 update_attempter.cc:509] Updating boot flags...
Jul 6 23:44:53.193385 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2729117760.mount: Deactivated successfully.
Jul 6 23:44:54.666859 containerd[1511]: time="2025-07-06T23:44:54.666800874Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 6 23:44:54.667896 containerd[1511]: time="2025-07-06T23:44:54.667626402Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157646710"
Jul 6 23:44:54.668657 containerd[1511]: time="2025-07-06T23:44:54.668587845Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 6 23:44:54.670149 containerd[1511]: time="2025-07-06T23:44:54.670111269Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 13.146476617s"
Jul 6 23:44:54.670339 containerd[1511]: time="2025-07-06T23:44:54.670243863Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\""
Jul 6 23:44:54.676093 containerd[1511]: time="2025-07-06T23:44:54.676050328Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\""
Jul 6 23:44:54.685589 containerd[1511]: time="2025-07-06T23:44:54.685533282Z" level=info msg="CreateContainer within sandbox \"4538a34f5408c526df6d714df0d1216b8d9a7dec9f81186e792cd73d62f98ac9\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Jul 6 23:44:54.693320 containerd[1511]: time="2025-07-06T23:44:54.693275916Z" level=info msg="Container fd4c9ad713c42a1dfdaa9f9e5f6bef2a8ecc42250b7962b97c3a8ec017809044: CDI devices from CRI Config.CDIDevices: []"
Jul 6 23:44:54.696937 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2724592322.mount: Deactivated successfully.
Jul 6 23:44:54.701392 containerd[1511]: time="2025-07-06T23:44:54.701336110Z" level=info msg="CreateContainer within sandbox \"4538a34f5408c526df6d714df0d1216b8d9a7dec9f81186e792cd73d62f98ac9\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"fd4c9ad713c42a1dfdaa9f9e5f6bef2a8ecc42250b7962b97c3a8ec017809044\""
Jul 6 23:44:54.701853 containerd[1511]: time="2025-07-06T23:44:54.701817792Z" level=info msg="StartContainer for \"fd4c9ad713c42a1dfdaa9f9e5f6bef2a8ecc42250b7962b97c3a8ec017809044\""
Jul 6 23:44:54.702724 containerd[1511]: time="2025-07-06T23:44:54.702686251Z" level=info msg="connecting to shim fd4c9ad713c42a1dfdaa9f9e5f6bef2a8ecc42250b7962b97c3a8ec017809044" address="unix:///run/containerd/s/9f86a92e65e5a0d46c940224758c2bf8b845f9f492d5505fbdc2b5fa290efbba" protocol=ttrpc version=3
Jul 6 23:44:54.748777 systemd[1]: Started cri-containerd-fd4c9ad713c42a1dfdaa9f9e5f6bef2a8ecc42250b7962b97c3a8ec017809044.scope - libcontainer container fd4c9ad713c42a1dfdaa9f9e5f6bef2a8ecc42250b7962b97c3a8ec017809044.
Jul 6 23:44:54.785909 containerd[1511]: time="2025-07-06T23:44:54.785852763Z" level=info msg="StartContainer for \"fd4c9ad713c42a1dfdaa9f9e5f6bef2a8ecc42250b7962b97c3a8ec017809044\" returns successfully"
Jul 6 23:44:54.833324 systemd[1]: cri-containerd-fd4c9ad713c42a1dfdaa9f9e5f6bef2a8ecc42250b7962b97c3a8ec017809044.scope: Deactivated successfully.
Jul 6 23:44:54.873758 containerd[1511]: time="2025-07-06T23:44:54.873692734Z" level=info msg="received exit event container_id:\"fd4c9ad713c42a1dfdaa9f9e5f6bef2a8ecc42250b7962b97c3a8ec017809044\" id:\"fd4c9ad713c42a1dfdaa9f9e5f6bef2a8ecc42250b7962b97c3a8ec017809044\" pid:3054 exited_at:{seconds:1751845494 nanos:856409812}"
Jul 6 23:44:54.875131 containerd[1511]: time="2025-07-06T23:44:54.875082045Z" level=info msg="TaskExit event in podsandbox handler container_id:\"fd4c9ad713c42a1dfdaa9f9e5f6bef2a8ecc42250b7962b97c3a8ec017809044\" id:\"fd4c9ad713c42a1dfdaa9f9e5f6bef2a8ecc42250b7962b97c3a8ec017809044\" pid:3054 exited_at:{seconds:1751845494 nanos:856409812}"
Jul 6 23:44:54.925619 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-fd4c9ad713c42a1dfdaa9f9e5f6bef2a8ecc42250b7962b97c3a8ec017809044-rootfs.mount: Deactivated successfully.
Jul 6 23:44:55.388061 kubelet[2620]: E0706 23:44:55.388016 2620 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 6 23:44:55.391692 containerd[1511]: time="2025-07-06T23:44:55.391651688Z" level=info msg="CreateContainer within sandbox \"4538a34f5408c526df6d714df0d1216b8d9a7dec9f81186e792cd73d62f98ac9\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Jul 6 23:44:55.399263 containerd[1511]: time="2025-07-06T23:44:55.399212148Z" level=info msg="Container cd19445dbcb57171e284ef347468dac615978377f8e1789eaa5bfc830560820d: CDI devices from CRI Config.CDIDevices: []"
Jul 6 23:44:55.408426 containerd[1511]: time="2025-07-06T23:44:55.408363471Z" level=info msg="CreateContainer within sandbox \"4538a34f5408c526df6d714df0d1216b8d9a7dec9f81186e792cd73d62f98ac9\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"cd19445dbcb57171e284ef347468dac615978377f8e1789eaa5bfc830560820d\""
Jul 6 23:44:55.409046 containerd[1511]: time="2025-07-06T23:44:55.408994943Z" level=info msg="StartContainer for \"cd19445dbcb57171e284ef347468dac615978377f8e1789eaa5bfc830560820d\""
Jul 6 23:44:55.410130 containerd[1511]: time="2025-07-06T23:44:55.410102530Z" level=info msg="connecting to shim cd19445dbcb57171e284ef347468dac615978377f8e1789eaa5bfc830560820d" address="unix:///run/containerd/s/9f86a92e65e5a0d46c940224758c2bf8b845f9f492d5505fbdc2b5fa290efbba" protocol=ttrpc version=3
Jul 6 23:44:55.431785 systemd[1]: Started cri-containerd-cd19445dbcb57171e284ef347468dac615978377f8e1789eaa5bfc830560820d.scope - libcontainer container cd19445dbcb57171e284ef347468dac615978377f8e1789eaa5bfc830560820d.
Jul 6 23:44:55.493765 containerd[1511]: time="2025-07-06T23:44:55.493722100Z" level=info msg="StartContainer for \"cd19445dbcb57171e284ef347468dac615978377f8e1789eaa5bfc830560820d\" returns successfully"
Jul 6 23:44:55.524740 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jul 6 23:44:55.524965 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jul 6 23:44:55.525327 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables...
Jul 6 23:44:55.526852 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jul 6 23:44:55.528674 systemd[1]: cri-containerd-cd19445dbcb57171e284ef347468dac615978377f8e1789eaa5bfc830560820d.scope: Deactivated successfully.
Jul 6 23:44:55.529424 containerd[1511]: time="2025-07-06T23:44:55.529389766Z" level=info msg="TaskExit event in podsandbox handler container_id:\"cd19445dbcb57171e284ef347468dac615978377f8e1789eaa5bfc830560820d\" id:\"cd19445dbcb57171e284ef347468dac615978377f8e1789eaa5bfc830560820d\" pid:3100 exited_at:{seconds:1751845495 nanos:528482628}"
Jul 6 23:44:55.544438 containerd[1511]: time="2025-07-06T23:44:55.544376094Z" level=info msg="received exit event container_id:\"cd19445dbcb57171e284ef347468dac615978377f8e1789eaa5bfc830560820d\" id:\"cd19445dbcb57171e284ef347468dac615978377f8e1789eaa5bfc830560820d\" pid:3100 exited_at:{seconds:1751845495 nanos:528482628}"
Jul 6 23:44:55.564585 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jul 6 23:44:55.770826 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1128717914.mount: Deactivated successfully.
Jul 6 23:44:56.386710 kubelet[2620]: E0706 23:44:56.386652 2620 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 6 23:44:56.389126 containerd[1511]: time="2025-07-06T23:44:56.389078155Z" level=info msg="CreateContainer within sandbox \"4538a34f5408c526df6d714df0d1216b8d9a7dec9f81186e792cd73d62f98ac9\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Jul 6 23:44:56.438991 containerd[1511]: time="2025-07-06T23:44:56.438944254Z" level=info msg="Container d674cd8ebce2d6907c11153eb536f9c9fdd8c288b2f1284033243b2ae2530b71: CDI devices from CRI Config.CDIDevices: []"
Jul 6 23:44:56.443461 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1672153736.mount: Deactivated successfully.
Jul 6 23:44:56.451620 containerd[1511]: time="2025-07-06T23:44:56.451550711Z" level=info msg="CreateContainer within sandbox \"4538a34f5408c526df6d714df0d1216b8d9a7dec9f81186e792cd73d62f98ac9\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"d674cd8ebce2d6907c11153eb536f9c9fdd8c288b2f1284033243b2ae2530b71\""
Jul 6 23:44:56.452116 containerd[1511]: time="2025-07-06T23:44:56.452091555Z" level=info msg="StartContainer for \"d674cd8ebce2d6907c11153eb536f9c9fdd8c288b2f1284033243b2ae2530b71\""
Jul 6 23:44:56.453828 containerd[1511]: time="2025-07-06T23:44:56.453785464Z" level=info msg="connecting to shim d674cd8ebce2d6907c11153eb536f9c9fdd8c288b2f1284033243b2ae2530b71" address="unix:///run/containerd/s/9f86a92e65e5a0d46c940224758c2bf8b845f9f492d5505fbdc2b5fa290efbba" protocol=ttrpc version=3
Jul 6 23:44:56.478780 systemd[1]: Started cri-containerd-d674cd8ebce2d6907c11153eb536f9c9fdd8c288b2f1284033243b2ae2530b71.scope - libcontainer container d674cd8ebce2d6907c11153eb536f9c9fdd8c288b2f1284033243b2ae2530b71.
Jul 6 23:44:56.514410 containerd[1511]: time="2025-07-06T23:44:56.514366225Z" level=info msg="StartContainer for \"d674cd8ebce2d6907c11153eb536f9c9fdd8c288b2f1284033243b2ae2530b71\" returns successfully"
Jul 6 23:44:56.526824 systemd[1]: cri-containerd-d674cd8ebce2d6907c11153eb536f9c9fdd8c288b2f1284033243b2ae2530b71.scope: Deactivated successfully.
Jul 6 23:44:56.528006 containerd[1511]: time="2025-07-06T23:44:56.527960149Z" level=info msg="received exit event container_id:\"d674cd8ebce2d6907c11153eb536f9c9fdd8c288b2f1284033243b2ae2530b71\" id:\"d674cd8ebce2d6907c11153eb536f9c9fdd8c288b2f1284033243b2ae2530b71\" pid:3151 exited_at:{seconds:1751845496 nanos:527767745}"
Jul 6 23:44:56.529173 containerd[1511]: time="2025-07-06T23:44:56.529133658Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d674cd8ebce2d6907c11153eb536f9c9fdd8c288b2f1284033243b2ae2530b71\" id:\"d674cd8ebce2d6907c11153eb536f9c9fdd8c288b2f1284033243b2ae2530b71\" pid:3151 exited_at:{seconds:1751845496 nanos:527767745}"
Jul 6 23:44:57.394524 kubelet[2620]: E0706 23:44:57.393586 2620 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 6 23:44:57.400961 containerd[1511]: time="2025-07-06T23:44:57.400900635Z" level=info msg="CreateContainer within sandbox \"4538a34f5408c526df6d714df0d1216b8d9a7dec9f81186e792cd73d62f98ac9\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Jul 6 23:44:57.470627 containerd[1511]: time="2025-07-06T23:44:57.470539963Z" level=info msg="Container 4fb7fbd6d78079b58d6fa6ea79e69e7ed5867fc052fa48f6db5408c8b6101f3c: CDI devices from CRI Config.CDIDevices: []"
Jul 6 23:44:57.471113 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3011768733.mount: Deactivated successfully.
Jul 6 23:44:57.477904 containerd[1511]: time="2025-07-06T23:44:57.477852968Z" level=info msg="CreateContainer within sandbox \"4538a34f5408c526df6d714df0d1216b8d9a7dec9f81186e792cd73d62f98ac9\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"4fb7fbd6d78079b58d6fa6ea79e69e7ed5867fc052fa48f6db5408c8b6101f3c\""
Jul 6 23:44:57.478728 containerd[1511]: time="2025-07-06T23:44:57.478500830Z" level=info msg="StartContainer for \"4fb7fbd6d78079b58d6fa6ea79e69e7ed5867fc052fa48f6db5408c8b6101f3c\""
Jul 6 23:44:57.479827 containerd[1511]: time="2025-07-06T23:44:57.479784552Z" level=info msg="connecting to shim 4fb7fbd6d78079b58d6fa6ea79e69e7ed5867fc052fa48f6db5408c8b6101f3c" address="unix:///run/containerd/s/9f86a92e65e5a0d46c940224758c2bf8b845f9f492d5505fbdc2b5fa290efbba" protocol=ttrpc version=3
Jul 6 23:44:57.503787 systemd[1]: Started cri-containerd-4fb7fbd6d78079b58d6fa6ea79e69e7ed5867fc052fa48f6db5408c8b6101f3c.scope - libcontainer container 4fb7fbd6d78079b58d6fa6ea79e69e7ed5867fc052fa48f6db5408c8b6101f3c.
Jul 6 23:44:57.529962 systemd[1]: cri-containerd-4fb7fbd6d78079b58d6fa6ea79e69e7ed5867fc052fa48f6db5408c8b6101f3c.scope: Deactivated successfully.
Jul 6 23:44:57.530556 containerd[1511]: time="2025-07-06T23:44:57.530510328Z" level=info msg="TaskExit event in podsandbox handler container_id:\"4fb7fbd6d78079b58d6fa6ea79e69e7ed5867fc052fa48f6db5408c8b6101f3c\" id:\"4fb7fbd6d78079b58d6fa6ea79e69e7ed5867fc052fa48f6db5408c8b6101f3c\" pid:3190 exited_at:{seconds:1751845497 nanos:530070831}"
Jul 6 23:44:57.531323 containerd[1511]: time="2025-07-06T23:44:57.531168913Z" level=info msg="received exit event container_id:\"4fb7fbd6d78079b58d6fa6ea79e69e7ed5867fc052fa48f6db5408c8b6101f3c\" id:\"4fb7fbd6d78079b58d6fa6ea79e69e7ed5867fc052fa48f6db5408c8b6101f3c\" pid:3190 exited_at:{seconds:1751845497 nanos:530070831}"
Jul 6 23:44:57.538226 containerd[1511]: time="2025-07-06T23:44:57.538182292Z" level=info msg="StartContainer for \"4fb7fbd6d78079b58d6fa6ea79e69e7ed5867fc052fa48f6db5408c8b6101f3c\" returns successfully"
Jul 6 23:44:57.694663 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4fb7fbd6d78079b58d6fa6ea79e69e7ed5867fc052fa48f6db5408c8b6101f3c-rootfs.mount: Deactivated successfully.
Jul 6 23:44:58.399264 kubelet[2620]: E0706 23:44:58.399204 2620 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 6 23:44:58.406655 containerd[1511]: time="2025-07-06T23:44:58.406185031Z" level=info msg="CreateContainer within sandbox \"4538a34f5408c526df6d714df0d1216b8d9a7dec9f81186e792cd73d62f98ac9\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Jul 6 23:44:58.442601 containerd[1511]: time="2025-07-06T23:44:58.442358865Z" level=info msg="Container 5a62ba561c105b97d9ae16f2a819618bc7cc10394d820e84ee8e30d81e8e439d: CDI devices from CRI Config.CDIDevices: []"
Jul 6 23:44:58.460319 containerd[1511]: time="2025-07-06T23:44:58.460262583Z" level=info msg="CreateContainer within sandbox \"4538a34f5408c526df6d714df0d1216b8d9a7dec9f81186e792cd73d62f98ac9\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"5a62ba561c105b97d9ae16f2a819618bc7cc10394d820e84ee8e30d81e8e439d\""
Jul 6 23:44:58.461848 containerd[1511]: time="2025-07-06T23:44:58.461786063Z" level=info msg="StartContainer for \"5a62ba561c105b97d9ae16f2a819618bc7cc10394d820e84ee8e30d81e8e439d\""
Jul 6 23:44:58.463207 containerd[1511]: time="2025-07-06T23:44:58.463170633Z" level=info msg="connecting to shim 5a62ba561c105b97d9ae16f2a819618bc7cc10394d820e84ee8e30d81e8e439d" address="unix:///run/containerd/s/9f86a92e65e5a0d46c940224758c2bf8b845f9f492d5505fbdc2b5fa290efbba" protocol=ttrpc version=3
Jul 6 23:44:58.483783 systemd[1]: Started cri-containerd-5a62ba561c105b97d9ae16f2a819618bc7cc10394d820e84ee8e30d81e8e439d.scope - libcontainer container 5a62ba561c105b97d9ae16f2a819618bc7cc10394d820e84ee8e30d81e8e439d.
Jul 6 23:44:58.547118 containerd[1511]: time="2025-07-06T23:44:58.544645896Z" level=info msg="StartContainer for \"5a62ba561c105b97d9ae16f2a819618bc7cc10394d820e84ee8e30d81e8e439d\" returns successfully"
Jul 6 23:44:58.711495 containerd[1511]: time="2025-07-06T23:44:58.711274312Z" level=info msg="TaskExit event in podsandbox handler container_id:\"5a62ba561c105b97d9ae16f2a819618bc7cc10394d820e84ee8e30d81e8e439d\" id:\"6917f0582540cd6e323d8f89e62817632f06473b0a6a6a2378a68862c7d163b8\" pid:3264 exited_at:{seconds:1751845498 nanos:710945523}"
Jul 6 23:44:58.743032 kubelet[2620]: I0706 23:44:58.742965 2620 kubelet_node_status.go:488] "Fast updating node status as it just became ready"
Jul 6 23:44:58.824108 systemd[1]: Created slice kubepods-burstable-pod358e0c2d_be2b_46ca_b865_068140596478.slice - libcontainer container kubepods-burstable-pod358e0c2d_be2b_46ca_b865_068140596478.slice.
Jul 6 23:44:58.839815 systemd[1]: Created slice kubepods-burstable-podb37978f6_f312_436a_b2cf_ac4f367809e5.slice - libcontainer container kubepods-burstable-podb37978f6_f312_436a_b2cf_ac4f367809e5.slice.
Jul 6 23:44:58.976250 containerd[1511]: time="2025-07-06T23:44:58.975975515Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 6 23:44:58.977253 containerd[1511]: time="2025-07-06T23:44:58.977209174Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17135306"
Jul 6 23:44:58.978197 containerd[1511]: time="2025-07-06T23:44:58.978156853Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 6 23:44:58.979676 containerd[1511]: time="2025-07-06T23:44:58.979621320Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 4.303354018s"
Jul 6 23:44:58.979676 containerd[1511]: time="2025-07-06T23:44:58.979665970Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\""
Jul 6 23:44:58.980506 kubelet[2620]: I0706 23:44:58.980464 2620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/358e0c2d-be2b-46ca-b865-068140596478-config-volume\") pod \"coredns-7c65d6cfc9-7q899\" (UID: \"358e0c2d-be2b-46ca-b865-068140596478\") " pod="kube-system/coredns-7c65d6cfc9-7q899"
Jul 6 23:44:58.980585 kubelet[2620]: I0706 23:44:58.980505 2620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9cn7z\" (UniqueName: \"kubernetes.io/projected/b37978f6-f312-436a-b2cf-ac4f367809e5-kube-api-access-9cn7z\") pod \"coredns-7c65d6cfc9-fpwkn\" (UID: \"b37978f6-f312-436a-b2cf-ac4f367809e5\") " pod="kube-system/coredns-7c65d6cfc9-fpwkn"
Jul 6 23:44:58.980585 kubelet[2620]: I0706 23:44:58.980526 2620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v7m7p\" (UniqueName: \"kubernetes.io/projected/358e0c2d-be2b-46ca-b865-068140596478-kube-api-access-v7m7p\") pod \"coredns-7c65d6cfc9-7q899\" (UID: \"358e0c2d-be2b-46ca-b865-068140596478\") " pod="kube-system/coredns-7c65d6cfc9-7q899"
Jul 6 23:44:58.980585 kubelet[2620]: I0706 23:44:58.980547 2620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b37978f6-f312-436a-b2cf-ac4f367809e5-config-volume\") pod \"coredns-7c65d6cfc9-fpwkn\" (UID: \"b37978f6-f312-436a-b2cf-ac4f367809e5\") " pod="kube-system/coredns-7c65d6cfc9-fpwkn"
Jul 6 23:44:58.982479 containerd[1511]: time="2025-07-06T23:44:58.982442993Z" level=info msg="CreateContainer within sandbox \"72dac2e406a7715e89329b0ca6f1cb73e79d55a19be50ac1834ca82c2cf31793\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
Jul 6 23:44:58.993936 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1536568594.mount: Deactivated successfully.
Jul 6 23:44:59.003399 containerd[1511]: time="2025-07-06T23:44:59.003341573Z" level=info msg="Container 661c6d66e62f8406e3fdd6cbf6a97049042626f35fe566eb5fd71a65961d9158: CDI devices from CRI Config.CDIDevices: []"
Jul 6 23:44:59.023118 containerd[1511]: time="2025-07-06T23:44:59.023073017Z" level=info msg="CreateContainer within sandbox \"72dac2e406a7715e89329b0ca6f1cb73e79d55a19be50ac1834ca82c2cf31793\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"661c6d66e62f8406e3fdd6cbf6a97049042626f35fe566eb5fd71a65961d9158\""
Jul 6 23:44:59.024651 containerd[1511]: time="2025-07-06T23:44:59.024610405Z" level=info msg="StartContainer for \"661c6d66e62f8406e3fdd6cbf6a97049042626f35fe566eb5fd71a65961d9158\""
Jul 6 23:44:59.025576 containerd[1511]: time="2025-07-06T23:44:59.025537112Z" level=info msg="connecting to shim 661c6d66e62f8406e3fdd6cbf6a97049042626f35fe566eb5fd71a65961d9158" address="unix:///run/containerd/s/2a8ccc18618eb6d297e6b90fb9dc3dffb57e066a24484da1f61ff85c1a046c52" protocol=ttrpc version=3
Jul 6 23:44:59.055827 systemd[1]: Started cri-containerd-661c6d66e62f8406e3fdd6cbf6a97049042626f35fe566eb5fd71a65961d9158.scope - libcontainer container 661c6d66e62f8406e3fdd6cbf6a97049042626f35fe566eb5fd71a65961d9158.
Jul 6 23:44:59.111277 containerd[1511]: time="2025-07-06T23:44:59.107877413Z" level=info msg="StartContainer for \"661c6d66e62f8406e3fdd6cbf6a97049042626f35fe566eb5fd71a65961d9158\" returns successfully"
Jul 6 23:44:59.130930 kubelet[2620]: E0706 23:44:59.130876 2620 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 6 23:44:59.133067 containerd[1511]: time="2025-07-06T23:44:59.131731045Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-7q899,Uid:358e0c2d-be2b-46ca-b865-068140596478,Namespace:kube-system,Attempt:0,}"
Jul 6 23:44:59.144780 kubelet[2620]: E0706 23:44:59.144652 2620 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 6 23:44:59.153046 containerd[1511]: time="2025-07-06T23:44:59.146722377Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-fpwkn,Uid:b37978f6-f312-436a-b2cf-ac4f367809e5,Namespace:kube-system,Attempt:0,}"
Jul 6 23:44:59.403482 kubelet[2620]: E0706 23:44:59.403440 2620 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 6 23:44:59.411398 kubelet[2620]: E0706 23:44:59.411364 2620 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 6 23:44:59.435325 kubelet[2620]: I0706 23:44:59.435190 2620 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-5d85765b45-km7pm" podStartSLOduration=1.101576595 podStartE2EDuration="18.435170403s" podCreationTimestamp="2025-07-06 23:44:41 +0000 UTC" firstStartedPulling="2025-07-06 23:44:41.647324265 +0000 UTC m=+7.409074615" lastFinishedPulling="2025-07-06 23:44:58.980918033 +0000 UTC m=+24.742668423" observedRunningTime="2025-07-06 23:44:59.434974844 +0000 UTC m=+25.196725194" watchObservedRunningTime="2025-07-06 23:44:59.435170403 +0000 UTC m=+25.196920793"
Jul 6 23:45:00.413317 kubelet[2620]: E0706 23:45:00.413220 2620 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 6 23:45:00.413717 kubelet[2620]: E0706 23:45:00.413376 2620 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 6 23:45:01.415512 kubelet[2620]: E0706 23:45:01.415466 2620 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 6 23:45:03.201528 systemd-networkd[1434]: cilium_host: Link UP
Jul 6 23:45:03.202138 systemd-networkd[1434]: cilium_net: Link UP
Jul 6 23:45:03.202816 systemd-networkd[1434]: cilium_net: Gained carrier
Jul 6 23:45:03.203291 systemd-networkd[1434]: cilium_host: Gained carrier
Jul 6 23:45:03.320771 systemd-networkd[1434]: cilium_vxlan: Link UP
Jul 6 23:45:03.320779 systemd-networkd[1434]: cilium_vxlan: Gained carrier
Jul 6 23:45:03.450798 systemd-networkd[1434]: cilium_net: Gained IPv6LL
Jul 6 23:45:03.687621 kernel: NET: Registered PF_ALG protocol family
Jul 6 23:45:03.762732 systemd-networkd[1434]: cilium_host: Gained IPv6LL
Jul 6 23:45:04.363965 systemd-networkd[1434]: lxc_health: Link UP
Jul 6 23:45:04.364725 systemd-networkd[1434]: lxc_health: Gained carrier
Jul 6 23:45:04.774596 kernel: eth0: renamed from tmp03e4a
Jul 6 23:45:04.790957 systemd-networkd[1434]: lxc1f1d418a725a: Link UP
Jul 6 23:45:04.794319 systemd-networkd[1434]: lxc0ad20881dbcb: Link UP
Jul 6 23:45:04.794551 systemd-networkd[1434]: cilium_vxlan: Gained IPv6LL
Jul 6 23:45:04.796667 kernel: eth0: renamed from tmp308f4
Jul 6 23:45:04.795076 systemd-networkd[1434]: lxc1f1d418a725a: Gained carrier
Jul 6 23:45:04.799698 systemd-networkd[1434]: lxc0ad20881dbcb: Gained carrier
Jul 6 23:45:05.457593 kubelet[2620]: E0706 23:45:05.449376 2620 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 6 23:45:05.482155 kubelet[2620]: I0706 23:45:05.482076 2620 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-mmk9r" podStartSLOduration=11.331875325 podStartE2EDuration="24.482058476s" podCreationTimestamp="2025-07-06 23:44:41 +0000 UTC" firstStartedPulling="2025-07-06 23:44:41.523139969 +0000 UTC m=+7.284890359" lastFinishedPulling="2025-07-06 23:44:54.67332312 +0000 UTC m=+20.435073510" observedRunningTime="2025-07-06 23:44:59.492530206 +0000 UTC m=+25.254280596" watchObservedRunningTime="2025-07-06 23:45:05.482058476 +0000 UTC m=+31.243808866"
Jul 6 23:45:05.549973 systemd[1]: Started sshd@7-10.0.0.128:22-10.0.0.1:54568.service - OpenSSH per-connection server daemon (10.0.0.1:54568).
Jul 6 23:45:05.626936 sshd[3776]: Accepted publickey for core from 10.0.0.1 port 54568 ssh2: RSA SHA256:CSJlI8/o3cgAW3JnP3N/e8VY57OeBgyk25K3mGio6wo
Jul 6 23:45:05.630754 sshd-session[3776]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 6 23:45:05.638400 systemd-logind[1485]: New session 8 of user core.
Jul 6 23:45:05.646787 systemd[1]: Started session-8.scope - Session 8 of User core.
Jul 6 23:45:05.798019 sshd[3778]: Connection closed by 10.0.0.1 port 54568
Jul 6 23:45:05.798609 sshd-session[3776]: pam_unix(sshd:session): session closed for user core
Jul 6 23:45:05.802337 systemd[1]: sshd@7-10.0.0.128:22-10.0.0.1:54568.service: Deactivated successfully.
Jul 6 23:45:05.804778 systemd[1]: session-8.scope: Deactivated successfully.
Jul 6 23:45:05.805925 systemd-logind[1485]: Session 8 logged out. Waiting for processes to exit.
Jul 6 23:45:05.807277 systemd-logind[1485]: Removed session 8.
Jul 6 23:45:05.810774 systemd-networkd[1434]: lxc_health: Gained IPv6LL
Jul 6 23:45:05.874737 systemd-networkd[1434]: lxc0ad20881dbcb: Gained IPv6LL
Jul 6 23:45:06.194843 systemd-networkd[1434]: lxc1f1d418a725a: Gained IPv6LL
Jul 6 23:45:06.423056 kubelet[2620]: E0706 23:45:06.423007 2620 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 6 23:45:08.726509 containerd[1511]: time="2025-07-06T23:45:08.726461211Z" level=info msg="connecting to shim 03e4a3e41d51354230ac290e137dd6674d2f09eb6dd96763bf297ee2a96517c6" address="unix:///run/containerd/s/1aaff70b79a4831321fbd17d2b31245e88e2893f7332e5ff3cc9b553f6ac8a6e" namespace=k8s.io protocol=ttrpc version=3
Jul 6 23:45:08.726929 containerd[1511]: time="2025-07-06T23:45:08.726702365Z" level=info msg="connecting to shim 308f4840ba48a6251b7698e7abc02ac2884ec76c741285f9551f5da8405c0e34" address="unix:///run/containerd/s/86d2f854ed8e7af6c79ac5adcda3d20d5494ff71de3ebfd306da9c30175aeee5" namespace=k8s.io protocol=ttrpc version=3
Jul 6 23:45:08.753808 systemd[1]: Started cri-containerd-03e4a3e41d51354230ac290e137dd6674d2f09eb6dd96763bf297ee2a96517c6.scope - libcontainer container 03e4a3e41d51354230ac290e137dd6674d2f09eb6dd96763bf297ee2a96517c6.
Jul 6 23:45:08.757072 systemd[1]: Started cri-containerd-308f4840ba48a6251b7698e7abc02ac2884ec76c741285f9551f5da8405c0e34.scope - libcontainer container 308f4840ba48a6251b7698e7abc02ac2884ec76c741285f9551f5da8405c0e34.
Jul 6 23:45:08.771429 systemd-resolved[1356]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Jul 6 23:45:08.772864 systemd-resolved[1356]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Jul 6 23:45:08.794556 containerd[1511]: time="2025-07-06T23:45:08.794496064Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-7q899,Uid:358e0c2d-be2b-46ca-b865-068140596478,Namespace:kube-system,Attempt:0,} returns sandbox id \"03e4a3e41d51354230ac290e137dd6674d2f09eb6dd96763bf297ee2a96517c6\""
Jul 6 23:45:08.797560 containerd[1511]: time="2025-07-06T23:45:08.797530573Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-fpwkn,Uid:b37978f6-f312-436a-b2cf-ac4f367809e5,Namespace:kube-system,Attempt:0,} returns sandbox id \"308f4840ba48a6251b7698e7abc02ac2884ec76c741285f9551f5da8405c0e34\""
Jul 6 23:45:08.799002 kubelet[2620]: E0706 23:45:08.798980 2620 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 6 23:45:08.819969 kubelet[2620]: E0706 23:45:08.819757 2620 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 6 23:45:08.824079 containerd[1511]: time="2025-07-06T23:45:08.824030917Z" level=info msg="CreateContainer within sandbox \"308f4840ba48a6251b7698e7abc02ac2884ec76c741285f9551f5da8405c0e34\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Jul 6 23:45:08.834860 containerd[1511]: time="2025-07-06T23:45:08.834818881Z" level=info msg="Container 555d12a58b1aaca777cafc447025df7504f454d93a548e84dba4a6c090fa3856: CDI devices from CRI Config.CDIDevices: []"
Jul 6 23:45:08.839485 containerd[1511]: time="2025-07-06T23:45:08.839424732Z" level=info msg="CreateContainer within sandbox \"03e4a3e41d51354230ac290e137dd6674d2f09eb6dd96763bf297ee2a96517c6\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Jul 6 23:45:08.843548 containerd[1511]: time="2025-07-06T23:45:08.843495667Z" level=info msg="CreateContainer within sandbox \"308f4840ba48a6251b7698e7abc02ac2884ec76c741285f9551f5da8405c0e34\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"555d12a58b1aaca777cafc447025df7504f454d93a548e84dba4a6c090fa3856\""
Jul 6 23:45:08.844267 containerd[1511]: time="2025-07-06T23:45:08.844169402Z" level=info msg="StartContainer for \"555d12a58b1aaca777cafc447025df7504f454d93a548e84dba4a6c090fa3856\""
Jul 6 23:45:08.845107 containerd[1511]: time="2025-07-06T23:45:08.845076010Z" level=info msg="connecting to shim 555d12a58b1aaca777cafc447025df7504f454d93a548e84dba4a6c090fa3856" address="unix:///run/containerd/s/86d2f854ed8e7af6c79ac5adcda3d20d5494ff71de3ebfd306da9c30175aeee5" protocol=ttrpc version=3
Jul 6 23:45:08.850478 containerd[1511]: time="2025-07-06T23:45:08.850441568Z" level=info msg="Container 9d54e8c4d5bdaeae00b96fcff19df431849ef3efc42ca73159b846848d0a725d: CDI devices from CRI Config.CDIDevices: []"
Jul 6 23:45:08.855627 containerd[1511]: time="2025-07-06T23:45:08.855585975Z" level=info msg="CreateContainer within sandbox \"03e4a3e41d51354230ac290e137dd6674d2f09eb6dd96763bf297ee2a96517c6\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"9d54e8c4d5bdaeae00b96fcff19df431849ef3efc42ca73159b846848d0a725d\""
Jul 6 23:45:08.857715 containerd[1511]: time="2025-07-06T23:45:08.857683632Z" level=info msg="StartContainer for \"9d54e8c4d5bdaeae00b96fcff19df431849ef3efc42ca73159b846848d0a725d\""
Jul 6 23:45:08.859426 containerd[1511]: time="2025-07-06T23:45:08.859364029Z" level=info msg="connecting to shim 9d54e8c4d5bdaeae00b96fcff19df431849ef3efc42ca73159b846848d0a725d" address="unix:///run/containerd/s/1aaff70b79a4831321fbd17d2b31245e88e2893f7332e5ff3cc9b553f6ac8a6e" protocol=ttrpc version=3
Jul 6 23:45:08.866742 systemd[1]: Started cri-containerd-555d12a58b1aaca777cafc447025df7504f454d93a548e84dba4a6c090fa3856.scope - libcontainer container 555d12a58b1aaca777cafc447025df7504f454d93a548e84dba4a6c090fa3856.
Jul 6 23:45:08.883772 systemd[1]: Started cri-containerd-9d54e8c4d5bdaeae00b96fcff19df431849ef3efc42ca73159b846848d0a725d.scope - libcontainer container 9d54e8c4d5bdaeae00b96fcff19df431849ef3efc42ca73159b846848d0a725d.
Jul 6 23:45:08.933711 containerd[1511]: time="2025-07-06T23:45:08.933663486Z" level=info msg="StartContainer for \"555d12a58b1aaca777cafc447025df7504f454d93a548e84dba4a6c090fa3856\" returns successfully"
Jul 6 23:45:08.934194 containerd[1511]: time="2025-07-06T23:45:08.934070744Z" level=info msg="StartContainer for \"9d54e8c4d5bdaeae00b96fcff19df431849ef3efc42ca73159b846848d0a725d\" returns successfully"
Jul 6 23:45:09.435774 kubelet[2620]: E0706 23:45:09.435724 2620 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 6 23:45:09.436137 kubelet[2620]: E0706 23:45:09.436101 2620 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 6 23:45:09.453399 kubelet[2620]: I0706 23:45:09.453324 2620 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-7q899" podStartSLOduration=28.453310052 podStartE2EDuration="28.453310052s" podCreationTimestamp="2025-07-06 23:44:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-06 23:45:09.453128227 +0000 UTC m=+35.214878617" watchObservedRunningTime="2025-07-06 23:45:09.453310052 +0000 UTC m=+35.215060442"
Jul 6 23:45:09.465687 kubelet[2620]: I0706 23:45:09.465618 2620 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-fpwkn" podStartSLOduration=28.46559949 podStartE2EDuration="28.46559949s" podCreationTimestamp="2025-07-06 23:44:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-06 23:45:09.464364241 +0000 UTC m=+35.226114591" watchObservedRunningTime="2025-07-06 23:45:09.46559949 +0000 UTC m=+35.227349880"
Jul 6 23:45:10.436550 kubelet[2620]: E0706 23:45:10.436515 2620 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 6 23:45:10.822070 systemd[1]: Started sshd@8-10.0.0.128:22-10.0.0.1:54584.service - OpenSSH per-connection server daemon (10.0.0.1:54584).
Jul 6 23:45:10.882287 sshd[3977]: Accepted publickey for core from 10.0.0.1 port 54584 ssh2: RSA SHA256:CSJlI8/o3cgAW3JnP3N/e8VY57OeBgyk25K3mGio6wo
Jul 6 23:45:10.883615 sshd-session[3977]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 6 23:45:10.887523 systemd-logind[1485]: New session 9 of user core.
Jul 6 23:45:10.895751 systemd[1]: Started session-9.scope - Session 9 of User core.
Jul 6 23:45:11.010903 sshd[3979]: Connection closed by 10.0.0.1 port 54584
Jul 6 23:45:11.011679 sshd-session[3977]: pam_unix(sshd:session): session closed for user core
Jul 6 23:45:11.015756 systemd-logind[1485]: Session 9 logged out. Waiting for processes to exit.
Jul 6 23:45:11.015932 systemd[1]: sshd@8-10.0.0.128:22-10.0.0.1:54584.service: Deactivated successfully.
Jul 6 23:45:11.017676 systemd[1]: session-9.scope: Deactivated successfully.
Jul 6 23:45:11.018992 systemd-logind[1485]: Removed session 9.
Jul 6 23:45:16.026980 systemd[1]: Started sshd@9-10.0.0.128:22-10.0.0.1:45080.service - OpenSSH per-connection server daemon (10.0.0.1:45080).
Jul 6 23:45:16.085819 sshd[3998]: Accepted publickey for core from 10.0.0.1 port 45080 ssh2: RSA SHA256:CSJlI8/o3cgAW3JnP3N/e8VY57OeBgyk25K3mGio6wo Jul 6 23:45:16.087653 sshd-session[3998]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:45:16.092661 systemd-logind[1485]: New session 10 of user core. Jul 6 23:45:16.107968 systemd[1]: Started session-10.scope - Session 10 of User core. Jul 6 23:45:16.240618 sshd[4000]: Connection closed by 10.0.0.1 port 45080 Jul 6 23:45:16.241155 sshd-session[3998]: pam_unix(sshd:session): session closed for user core Jul 6 23:45:16.248204 systemd-logind[1485]: Session 10 logged out. Waiting for processes to exit. Jul 6 23:45:16.249035 systemd[1]: sshd@9-10.0.0.128:22-10.0.0.1:45080.service: Deactivated successfully. Jul 6 23:45:16.252357 systemd[1]: session-10.scope: Deactivated successfully. Jul 6 23:45:16.255104 systemd-logind[1485]: Removed session 10. Jul 6 23:45:19.146329 kubelet[2620]: E0706 23:45:19.146246 2620 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 6 23:45:19.453183 kubelet[2620]: E0706 23:45:19.452961 2620 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 6 23:45:21.255903 systemd[1]: Started sshd@10-10.0.0.128:22-10.0.0.1:45090.service - OpenSSH per-connection server daemon (10.0.0.1:45090). Jul 6 23:45:21.317226 sshd[4019]: Accepted publickey for core from 10.0.0.1 port 45090 ssh2: RSA SHA256:CSJlI8/o3cgAW3JnP3N/e8VY57OeBgyk25K3mGio6wo Jul 6 23:45:21.318546 sshd-session[4019]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:45:21.323506 systemd-logind[1485]: New session 11 of user core. Jul 6 23:45:21.333820 systemd[1]: Started session-11.scope - Session 11 of User core. 
Jul 6 23:45:21.479136 sshd[4021]: Connection closed by 10.0.0.1 port 45090 Jul 6 23:45:21.480081 sshd-session[4019]: pam_unix(sshd:session): session closed for user core Jul 6 23:45:21.492213 systemd[1]: sshd@10-10.0.0.128:22-10.0.0.1:45090.service: Deactivated successfully. Jul 6 23:45:21.495182 systemd[1]: session-11.scope: Deactivated successfully. Jul 6 23:45:21.496809 systemd-logind[1485]: Session 11 logged out. Waiting for processes to exit. Jul 6 23:45:21.500194 systemd[1]: Started sshd@11-10.0.0.128:22-10.0.0.1:45104.service - OpenSSH per-connection server daemon (10.0.0.1:45104). Jul 6 23:45:21.502867 systemd-logind[1485]: Removed session 11. Jul 6 23:45:21.560758 sshd[4035]: Accepted publickey for core from 10.0.0.1 port 45104 ssh2: RSA SHA256:CSJlI8/o3cgAW3JnP3N/e8VY57OeBgyk25K3mGio6wo Jul 6 23:45:21.562120 sshd-session[4035]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:45:21.570386 systemd-logind[1485]: New session 12 of user core. Jul 6 23:45:21.576801 systemd[1]: Started session-12.scope - Session 12 of User core. Jul 6 23:45:21.771674 sshd[4037]: Connection closed by 10.0.0.1 port 45104 Jul 6 23:45:21.773846 sshd-session[4035]: pam_unix(sshd:session): session closed for user core Jul 6 23:45:21.788369 systemd[1]: sshd@11-10.0.0.128:22-10.0.0.1:45104.service: Deactivated successfully. Jul 6 23:45:21.793513 systemd[1]: session-12.scope: Deactivated successfully. Jul 6 23:45:21.798249 systemd-logind[1485]: Session 12 logged out. Waiting for processes to exit. Jul 6 23:45:21.802152 systemd[1]: Started sshd@12-10.0.0.128:22-10.0.0.1:45112.service - OpenSSH per-connection server daemon (10.0.0.1:45112). Jul 6 23:45:21.804283 systemd-logind[1485]: Removed session 12. 
Jul 6 23:45:21.872199 sshd[4049]: Accepted publickey for core from 10.0.0.1 port 45112 ssh2: RSA SHA256:CSJlI8/o3cgAW3JnP3N/e8VY57OeBgyk25K3mGio6wo Jul 6 23:45:21.873533 sshd-session[4049]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:45:21.877692 systemd-logind[1485]: New session 13 of user core. Jul 6 23:45:21.887768 systemd[1]: Started session-13.scope - Session 13 of User core. Jul 6 23:45:22.005157 sshd[4051]: Connection closed by 10.0.0.1 port 45112 Jul 6 23:45:22.005520 sshd-session[4049]: pam_unix(sshd:session): session closed for user core Jul 6 23:45:22.009195 systemd[1]: sshd@12-10.0.0.128:22-10.0.0.1:45112.service: Deactivated successfully. Jul 6 23:45:22.010996 systemd[1]: session-13.scope: Deactivated successfully. Jul 6 23:45:22.013244 systemd-logind[1485]: Session 13 logged out. Waiting for processes to exit. Jul 6 23:45:22.015057 systemd-logind[1485]: Removed session 13. Jul 6 23:45:27.022897 systemd[1]: Started sshd@13-10.0.0.128:22-10.0.0.1:41784.service - OpenSSH per-connection server daemon (10.0.0.1:41784). Jul 6 23:45:27.095773 sshd[4065]: Accepted publickey for core from 10.0.0.1 port 41784 ssh2: RSA SHA256:CSJlI8/o3cgAW3JnP3N/e8VY57OeBgyk25K3mGio6wo Jul 6 23:45:27.099345 sshd-session[4065]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:45:27.105115 systemd-logind[1485]: New session 14 of user core. Jul 6 23:45:27.119808 systemd[1]: Started session-14.scope - Session 14 of User core. Jul 6 23:45:27.254205 sshd[4067]: Connection closed by 10.0.0.1 port 41784 Jul 6 23:45:27.256672 sshd-session[4065]: pam_unix(sshd:session): session closed for user core Jul 6 23:45:27.261364 systemd[1]: sshd@13-10.0.0.128:22-10.0.0.1:41784.service: Deactivated successfully. Jul 6 23:45:27.263414 systemd[1]: session-14.scope: Deactivated successfully. Jul 6 23:45:27.265227 systemd-logind[1485]: Session 14 logged out. Waiting for processes to exit. 
Jul 6 23:45:27.268033 systemd-logind[1485]: Removed session 14. Jul 6 23:45:32.268695 systemd[1]: Started sshd@14-10.0.0.128:22-10.0.0.1:41790.service - OpenSSH per-connection server daemon (10.0.0.1:41790). Jul 6 23:45:32.329279 sshd[4081]: Accepted publickey for core from 10.0.0.1 port 41790 ssh2: RSA SHA256:CSJlI8/o3cgAW3JnP3N/e8VY57OeBgyk25K3mGio6wo Jul 6 23:45:32.330291 sshd-session[4081]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:45:32.334959 systemd-logind[1485]: New session 15 of user core. Jul 6 23:45:32.346794 systemd[1]: Started session-15.scope - Session 15 of User core. Jul 6 23:45:32.472555 sshd[4083]: Connection closed by 10.0.0.1 port 41790 Jul 6 23:45:32.473330 sshd-session[4081]: pam_unix(sshd:session): session closed for user core Jul 6 23:45:32.486238 systemd[1]: sshd@14-10.0.0.128:22-10.0.0.1:41790.service: Deactivated successfully. Jul 6 23:45:32.491763 systemd[1]: session-15.scope: Deactivated successfully. Jul 6 23:45:32.493823 systemd-logind[1485]: Session 15 logged out. Waiting for processes to exit. Jul 6 23:45:32.497413 systemd[1]: Started sshd@15-10.0.0.128:22-10.0.0.1:52330.service - OpenSSH per-connection server daemon (10.0.0.1:52330). Jul 6 23:45:32.499082 systemd-logind[1485]: Removed session 15. Jul 6 23:45:32.553892 sshd[4097]: Accepted publickey for core from 10.0.0.1 port 52330 ssh2: RSA SHA256:CSJlI8/o3cgAW3JnP3N/e8VY57OeBgyk25K3mGio6wo Jul 6 23:45:32.555687 sshd-session[4097]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:45:32.560170 systemd-logind[1485]: New session 16 of user core. Jul 6 23:45:32.573852 systemd[1]: Started session-16.scope - Session 16 of User core. Jul 6 23:45:32.834228 sshd[4099]: Connection closed by 10.0.0.1 port 52330 Jul 6 23:45:32.837393 sshd-session[4097]: pam_unix(sshd:session): session closed for user core Jul 6 23:45:32.853700 systemd[1]: sshd@15-10.0.0.128:22-10.0.0.1:52330.service: Deactivated successfully. 
Jul 6 23:45:32.856043 systemd[1]: session-16.scope: Deactivated successfully. Jul 6 23:45:32.858319 systemd-logind[1485]: Session 16 logged out. Waiting for processes to exit. Jul 6 23:45:32.863221 systemd[1]: Started sshd@16-10.0.0.128:22-10.0.0.1:52340.service - OpenSSH per-connection server daemon (10.0.0.1:52340). Jul 6 23:45:32.864700 systemd-logind[1485]: Removed session 16. Jul 6 23:45:32.926821 sshd[4110]: Accepted publickey for core from 10.0.0.1 port 52340 ssh2: RSA SHA256:CSJlI8/o3cgAW3JnP3N/e8VY57OeBgyk25K3mGio6wo Jul 6 23:45:32.928369 sshd-session[4110]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:45:32.933768 systemd-logind[1485]: New session 17 of user core. Jul 6 23:45:32.942776 systemd[1]: Started session-17.scope - Session 17 of User core. Jul 6 23:45:34.359110 sshd[4112]: Connection closed by 10.0.0.1 port 52340 Jul 6 23:45:34.361768 sshd-session[4110]: pam_unix(sshd:session): session closed for user core Jul 6 23:45:34.378690 systemd[1]: sshd@16-10.0.0.128:22-10.0.0.1:52340.service: Deactivated successfully. Jul 6 23:45:34.383009 systemd[1]: session-17.scope: Deactivated successfully. Jul 6 23:45:34.388856 systemd-logind[1485]: Session 17 logged out. Waiting for processes to exit. Jul 6 23:45:34.394433 systemd[1]: Started sshd@17-10.0.0.128:22-10.0.0.1:52354.service - OpenSSH per-connection server daemon (10.0.0.1:52354). Jul 6 23:45:34.395218 systemd-logind[1485]: Removed session 17. Jul 6 23:45:34.457095 sshd[4137]: Accepted publickey for core from 10.0.0.1 port 52354 ssh2: RSA SHA256:CSJlI8/o3cgAW3JnP3N/e8VY57OeBgyk25K3mGio6wo Jul 6 23:45:34.458639 sshd-session[4137]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:45:34.464263 systemd-logind[1485]: New session 18 of user core. Jul 6 23:45:34.481789 systemd[1]: Started session-18.scope - Session 18 of User core. 
Jul 6 23:45:34.727652 sshd[4139]: Connection closed by 10.0.0.1 port 52354 Jul 6 23:45:34.728935 sshd-session[4137]: pam_unix(sshd:session): session closed for user core Jul 6 23:45:34.738371 systemd[1]: sshd@17-10.0.0.128:22-10.0.0.1:52354.service: Deactivated successfully. Jul 6 23:45:34.746620 systemd[1]: session-18.scope: Deactivated successfully. Jul 6 23:45:34.748488 systemd-logind[1485]: Session 18 logged out. Waiting for processes to exit. Jul 6 23:45:34.757503 systemd[1]: Started sshd@18-10.0.0.128:22-10.0.0.1:52364.service - OpenSSH per-connection server daemon (10.0.0.1:52364). Jul 6 23:45:34.758302 systemd-logind[1485]: Removed session 18. Jul 6 23:45:34.812437 sshd[4151]: Accepted publickey for core from 10.0.0.1 port 52364 ssh2: RSA SHA256:CSJlI8/o3cgAW3JnP3N/e8VY57OeBgyk25K3mGio6wo Jul 6 23:45:34.813920 sshd-session[4151]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:45:34.819563 systemd-logind[1485]: New session 19 of user core. Jul 6 23:45:34.823747 systemd[1]: Started session-19.scope - Session 19 of User core. Jul 6 23:45:34.943267 sshd[4153]: Connection closed by 10.0.0.1 port 52364 Jul 6 23:45:34.943927 sshd-session[4151]: pam_unix(sshd:session): session closed for user core Jul 6 23:45:34.949398 systemd[1]: sshd@18-10.0.0.128:22-10.0.0.1:52364.service: Deactivated successfully. Jul 6 23:45:34.951489 systemd[1]: session-19.scope: Deactivated successfully. Jul 6 23:45:34.952341 systemd-logind[1485]: Session 19 logged out. Waiting for processes to exit. Jul 6 23:45:34.954200 systemd-logind[1485]: Removed session 19. Jul 6 23:45:39.962835 systemd[1]: Started sshd@19-10.0.0.128:22-10.0.0.1:52378.service - OpenSSH per-connection server daemon (10.0.0.1:52378). 
Jul 6 23:45:40.018129 sshd[4170]: Accepted publickey for core from 10.0.0.1 port 52378 ssh2: RSA SHA256:CSJlI8/o3cgAW3JnP3N/e8VY57OeBgyk25K3mGio6wo Jul 6 23:45:40.019445 sshd-session[4170]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:45:40.023976 systemd-logind[1485]: New session 20 of user core. Jul 6 23:45:40.034673 systemd[1]: Started session-20.scope - Session 20 of User core. Jul 6 23:45:40.162692 sshd[4172]: Connection closed by 10.0.0.1 port 52378 Jul 6 23:45:40.163236 sshd-session[4170]: pam_unix(sshd:session): session closed for user core Jul 6 23:45:40.168283 systemd[1]: sshd@19-10.0.0.128:22-10.0.0.1:52378.service: Deactivated successfully. Jul 6 23:45:40.170089 systemd[1]: session-20.scope: Deactivated successfully. Jul 6 23:45:40.173692 systemd-logind[1485]: Session 20 logged out. Waiting for processes to exit. Jul 6 23:45:40.175055 systemd-logind[1485]: Removed session 20. Jul 6 23:45:43.322179 kubelet[2620]: E0706 23:45:43.322098 2620 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 6 23:45:45.178878 systemd[1]: Started sshd@20-10.0.0.128:22-10.0.0.1:41328.service - OpenSSH per-connection server daemon (10.0.0.1:41328). Jul 6 23:45:45.223910 sshd[4187]: Accepted publickey for core from 10.0.0.1 port 41328 ssh2: RSA SHA256:CSJlI8/o3cgAW3JnP3N/e8VY57OeBgyk25K3mGio6wo Jul 6 23:45:45.225303 sshd-session[4187]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:45:45.229503 systemd-logind[1485]: New session 21 of user core. Jul 6 23:45:45.236747 systemd[1]: Started session-21.scope - Session 21 of User core. 
Jul 6 23:45:45.345110 sshd[4189]: Connection closed by 10.0.0.1 port 41328 Jul 6 23:45:45.344439 sshd-session[4187]: pam_unix(sshd:session): session closed for user core Jul 6 23:45:45.348770 systemd[1]: sshd@20-10.0.0.128:22-10.0.0.1:41328.service: Deactivated successfully. Jul 6 23:45:45.350555 systemd[1]: session-21.scope: Deactivated successfully. Jul 6 23:45:45.351421 systemd-logind[1485]: Session 21 logged out. Waiting for processes to exit. Jul 6 23:45:45.352782 systemd-logind[1485]: Removed session 21. Jul 6 23:45:50.363399 systemd[1]: Started sshd@21-10.0.0.128:22-10.0.0.1:41336.service - OpenSSH per-connection server daemon (10.0.0.1:41336). Jul 6 23:45:50.414585 sshd[4202]: Accepted publickey for core from 10.0.0.1 port 41336 ssh2: RSA SHA256:CSJlI8/o3cgAW3JnP3N/e8VY57OeBgyk25K3mGio6wo Jul 6 23:45:50.416004 sshd-session[4202]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:45:50.424523 systemd-logind[1485]: New session 22 of user core. Jul 6 23:45:50.435866 systemd[1]: Started session-22.scope - Session 22 of User core. Jul 6 23:45:50.551093 sshd[4204]: Connection closed by 10.0.0.1 port 41336 Jul 6 23:45:50.551838 sshd-session[4202]: pam_unix(sshd:session): session closed for user core Jul 6 23:45:50.573075 systemd[1]: sshd@21-10.0.0.128:22-10.0.0.1:41336.service: Deactivated successfully. Jul 6 23:45:50.576536 systemd[1]: session-22.scope: Deactivated successfully. Jul 6 23:45:50.580975 systemd-logind[1485]: Session 22 logged out. Waiting for processes to exit. Jul 6 23:45:50.583527 systemd[1]: Started sshd@22-10.0.0.128:22-10.0.0.1:41340.service - OpenSSH per-connection server daemon (10.0.0.1:41340). Jul 6 23:45:50.586437 systemd-logind[1485]: Removed session 22. 
Jul 6 23:45:50.649196 sshd[4217]: Accepted publickey for core from 10.0.0.1 port 41340 ssh2: RSA SHA256:CSJlI8/o3cgAW3JnP3N/e8VY57OeBgyk25K3mGio6wo Jul 6 23:45:50.650615 sshd-session[4217]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:45:50.654789 systemd-logind[1485]: New session 23 of user core. Jul 6 23:45:50.669751 systemd[1]: Started session-23.scope - Session 23 of User core. Jul 6 23:45:53.261153 containerd[1511]: time="2025-07-06T23:45:53.261101403Z" level=info msg="StopContainer for \"661c6d66e62f8406e3fdd6cbf6a97049042626f35fe566eb5fd71a65961d9158\" with timeout 30 (s)" Jul 6 23:45:53.262242 containerd[1511]: time="2025-07-06T23:45:53.261679338Z" level=info msg="Stop container \"661c6d66e62f8406e3fdd6cbf6a97049042626f35fe566eb5fd71a65961d9158\" with signal terminated" Jul 6 23:45:53.273516 systemd[1]: cri-containerd-661c6d66e62f8406e3fdd6cbf6a97049042626f35fe566eb5fd71a65961d9158.scope: Deactivated successfully. Jul 6 23:45:53.277311 containerd[1511]: time="2025-07-06T23:45:53.277211571Z" level=info msg="TaskExit event in podsandbox handler container_id:\"661c6d66e62f8406e3fdd6cbf6a97049042626f35fe566eb5fd71a65961d9158\" id:\"661c6d66e62f8406e3fdd6cbf6a97049042626f35fe566eb5fd71a65961d9158\" pid:3309 exited_at:{seconds:1751845553 nanos:276731113}" Jul 6 23:45:53.277311 containerd[1511]: time="2025-07-06T23:45:53.277062218Z" level=info msg="received exit event container_id:\"661c6d66e62f8406e3fdd6cbf6a97049042626f35fe566eb5fd71a65961d9158\" id:\"661c6d66e62f8406e3fdd6cbf6a97049042626f35fe566eb5fd71a65961d9158\" pid:3309 exited_at:{seconds:1751845553 nanos:276731113}" Jul 6 23:45:53.297398 containerd[1511]: time="2025-07-06T23:45:53.297343282Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 6 23:45:53.298311 
systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-661c6d66e62f8406e3fdd6cbf6a97049042626f35fe566eb5fd71a65961d9158-rootfs.mount: Deactivated successfully. Jul 6 23:45:53.302711 containerd[1511]: time="2025-07-06T23:45:53.302610369Z" level=info msg="TaskExit event in podsandbox handler container_id:\"5a62ba561c105b97d9ae16f2a819618bc7cc10394d820e84ee8e30d81e8e439d\" id:\"d6679103f158ceb5697b8f8f076a176a13b5137727fb4f0119793ebbad8e4d4d\" pid:4253 exited_at:{seconds:1751845553 nanos:302120071}" Jul 6 23:45:53.304855 containerd[1511]: time="2025-07-06T23:45:53.304823191Z" level=info msg="StopContainer for \"5a62ba561c105b97d9ae16f2a819618bc7cc10394d820e84ee8e30d81e8e439d\" with timeout 2 (s)" Jul 6 23:45:53.305170 containerd[1511]: time="2025-07-06T23:45:53.305146977Z" level=info msg="Stop container \"5a62ba561c105b97d9ae16f2a819618bc7cc10394d820e84ee8e30d81e8e439d\" with signal terminated" Jul 6 23:45:53.313016 systemd-networkd[1434]: lxc_health: Link DOWN Jul 6 23:45:53.313025 systemd-networkd[1434]: lxc_health: Lost carrier Jul 6 23:45:53.319119 containerd[1511]: time="2025-07-06T23:45:53.319084121Z" level=info msg="StopContainer for \"661c6d66e62f8406e3fdd6cbf6a97049042626f35fe566eb5fd71a65961d9158\" returns successfully" Jul 6 23:45:53.322396 containerd[1511]: time="2025-07-06T23:45:53.322207023Z" level=info msg="StopPodSandbox for \"72dac2e406a7715e89329b0ca6f1cb73e79d55a19be50ac1834ca82c2cf31793\"" Jul 6 23:45:53.324468 kubelet[2620]: E0706 23:45:53.322986 2620 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 6 23:45:53.324468 kubelet[2620]: E0706 23:45:53.323020 2620 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 6 23:45:53.331423 containerd[1511]: time="2025-07-06T23:45:53.331367539Z" level=info 
msg="Container to stop \"661c6d66e62f8406e3fdd6cbf6a97049042626f35fe566eb5fd71a65961d9158\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 6 23:45:53.333381 systemd[1]: cri-containerd-5a62ba561c105b97d9ae16f2a819618bc7cc10394d820e84ee8e30d81e8e439d.scope: Deactivated successfully. Jul 6 23:45:53.333783 systemd[1]: cri-containerd-5a62ba561c105b97d9ae16f2a819618bc7cc10394d820e84ee8e30d81e8e439d.scope: Consumed 7.236s CPU time, 121.1M memory peak, 164K read from disk, 12.9M written to disk. Jul 6 23:45:53.335681 containerd[1511]: time="2025-07-06T23:45:53.335649509Z" level=info msg="TaskExit event in podsandbox handler container_id:\"5a62ba561c105b97d9ae16f2a819618bc7cc10394d820e84ee8e30d81e8e439d\" id:\"5a62ba561c105b97d9ae16f2a819618bc7cc10394d820e84ee8e30d81e8e439d\" pid:3231 exited_at:{seconds:1751845553 nanos:335310404}" Jul 6 23:45:53.335799 containerd[1511]: time="2025-07-06T23:45:53.335650349Z" level=info msg="received exit event container_id:\"5a62ba561c105b97d9ae16f2a819618bc7cc10394d820e84ee8e30d81e8e439d\" id:\"5a62ba561c105b97d9ae16f2a819618bc7cc10394d820e84ee8e30d81e8e439d\" pid:3231 exited_at:{seconds:1751845553 nanos:335310404}" Jul 6 23:45:53.349925 systemd[1]: cri-containerd-72dac2e406a7715e89329b0ca6f1cb73e79d55a19be50ac1834ca82c2cf31793.scope: Deactivated successfully. Jul 6 23:45:53.352282 containerd[1511]: time="2025-07-06T23:45:53.352127461Z" level=info msg="TaskExit event in podsandbox handler container_id:\"72dac2e406a7715e89329b0ca6f1cb73e79d55a19be50ac1834ca82c2cf31793\" id:\"72dac2e406a7715e89329b0ca6f1cb73e79d55a19be50ac1834ca82c2cf31793\" pid:2842 exit_status:137 exited_at:{seconds:1751845553 nanos:351827955}" Jul 6 23:45:53.360992 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5a62ba561c105b97d9ae16f2a819618bc7cc10394d820e84ee8e30d81e8e439d-rootfs.mount: Deactivated successfully. 
Jul 6 23:45:53.372138 containerd[1511]: time="2025-07-06T23:45:53.372097219Z" level=info msg="StopContainer for \"5a62ba561c105b97d9ae16f2a819618bc7cc10394d820e84ee8e30d81e8e439d\" returns successfully" Jul 6 23:45:53.373109 containerd[1511]: time="2025-07-06T23:45:53.373063536Z" level=info msg="StopPodSandbox for \"4538a34f5408c526df6d714df0d1216b8d9a7dec9f81186e792cd73d62f98ac9\"" Jul 6 23:45:53.373470 containerd[1511]: time="2025-07-06T23:45:53.373293166Z" level=info msg="Container to stop \"fd4c9ad713c42a1dfdaa9f9e5f6bef2a8ecc42250b7962b97c3a8ec017809044\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 6 23:45:53.373470 containerd[1511]: time="2025-07-06T23:45:53.373401641Z" level=info msg="Container to stop \"d674cd8ebce2d6907c11153eb536f9c9fdd8c288b2f1284033243b2ae2530b71\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 6 23:45:53.373470 containerd[1511]: time="2025-07-06T23:45:53.373411681Z" level=info msg="Container to stop \"5a62ba561c105b97d9ae16f2a819618bc7cc10394d820e84ee8e30d81e8e439d\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 6 23:45:53.373470 containerd[1511]: time="2025-07-06T23:45:53.373423080Z" level=info msg="Container to stop \"cd19445dbcb57171e284ef347468dac615978377f8e1789eaa5bfc830560820d\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 6 23:45:53.373470 containerd[1511]: time="2025-07-06T23:45:53.373444039Z" level=info msg="Container to stop \"4fb7fbd6d78079b58d6fa6ea79e69e7ed5867fc052fa48f6db5408c8b6101f3c\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 6 23:45:53.379998 systemd[1]: cri-containerd-4538a34f5408c526df6d714df0d1216b8d9a7dec9f81186e792cd73d62f98ac9.scope: Deactivated successfully. Jul 6 23:45:53.392611 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-72dac2e406a7715e89329b0ca6f1cb73e79d55a19be50ac1834ca82c2cf31793-rootfs.mount: Deactivated successfully. 
Jul 6 23:45:53.395824 containerd[1511]: time="2025-07-06T23:45:53.395734815Z" level=info msg="shim disconnected" id=72dac2e406a7715e89329b0ca6f1cb73e79d55a19be50ac1834ca82c2cf31793 namespace=k8s.io Jul 6 23:45:53.397663 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-72dac2e406a7715e89329b0ca6f1cb73e79d55a19be50ac1834ca82c2cf31793-shm.mount: Deactivated successfully. Jul 6 23:45:53.410136 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4538a34f5408c526df6d714df0d1216b8d9a7dec9f81186e792cd73d62f98ac9-rootfs.mount: Deactivated successfully. Jul 6 23:45:53.413373 containerd[1511]: time="2025-07-06T23:45:53.395774653Z" level=warning msg="cleaning up after shim disconnected" id=72dac2e406a7715e89329b0ca6f1cb73e79d55a19be50ac1834ca82c2cf31793 namespace=k8s.io Jul 6 23:45:53.413653 containerd[1511]: time="2025-07-06T23:45:53.413475871Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 6 23:45:53.413653 containerd[1511]: time="2025-07-06T23:45:53.396152476Z" level=info msg="TaskExit event in podsandbox handler container_id:\"4538a34f5408c526df6d714df0d1216b8d9a7dec9f81186e792cd73d62f98ac9\" id:\"4538a34f5408c526df6d714df0d1216b8d9a7dec9f81186e792cd73d62f98ac9\" pid:2772 exit_status:137 exited_at:{seconds:1751845553 nanos:380474529}" Jul 6 23:45:53.413714 containerd[1511]: time="2025-07-06T23:45:53.398738042Z" level=info msg="TearDown network for sandbox \"72dac2e406a7715e89329b0ca6f1cb73e79d55a19be50ac1834ca82c2cf31793\" successfully" Jul 6 23:45:53.413714 containerd[1511]: time="2025-07-06T23:45:53.413696741Z" level=info msg="StopPodSandbox for \"72dac2e406a7715e89329b0ca6f1cb73e79d55a19be50ac1834ca82c2cf31793\" returns successfully" Jul 6 23:45:53.413863 containerd[1511]: time="2025-07-06T23:45:53.399608283Z" level=info msg="received exit event sandbox_id:\"72dac2e406a7715e89329b0ca6f1cb73e79d55a19be50ac1834ca82c2cf31793\" exit_status:137 exited_at:{seconds:1751845553 nanos:351827955}" Jul 6 23:45:53.414626 containerd[1511]: 
time="2025-07-06T23:45:53.414594821Z" level=info msg="received exit event sandbox_id:\"4538a34f5408c526df6d714df0d1216b8d9a7dec9f81186e792cd73d62f98ac9\" exit_status:137 exited_at:{seconds:1751845553 nanos:380474529}" Jul 6 23:45:53.414683 containerd[1511]: time="2025-07-06T23:45:53.414658018Z" level=info msg="shim disconnected" id=4538a34f5408c526df6d714df0d1216b8d9a7dec9f81186e792cd73d62f98ac9 namespace=k8s.io Jul 6 23:45:53.414710 containerd[1511]: time="2025-07-06T23:45:53.414682017Z" level=warning msg="cleaning up after shim disconnected" id=4538a34f5408c526df6d714df0d1216b8d9a7dec9f81186e792cd73d62f98ac9 namespace=k8s.io Jul 6 23:45:53.414731 containerd[1511]: time="2025-07-06T23:45:53.414708576Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 6 23:45:53.418080 containerd[1511]: time="2025-07-06T23:45:53.417852517Z" level=info msg="TearDown network for sandbox \"4538a34f5408c526df6d714df0d1216b8d9a7dec9f81186e792cd73d62f98ac9\" successfully" Jul 6 23:45:53.418080 containerd[1511]: time="2025-07-06T23:45:53.417914195Z" level=info msg="StopPodSandbox for \"4538a34f5408c526df6d714df0d1216b8d9a7dec9f81186e792cd73d62f98ac9\" returns successfully" Jul 6 23:45:53.535931 kubelet[2620]: I0706 23:45:53.534153 2620 scope.go:117] "RemoveContainer" containerID="661c6d66e62f8406e3fdd6cbf6a97049042626f35fe566eb5fd71a65961d9158" Jul 6 23:45:53.545248 containerd[1511]: time="2025-07-06T23:45:53.545006019Z" level=info msg="RemoveContainer for \"661c6d66e62f8406e3fdd6cbf6a97049042626f35fe566eb5fd71a65961d9158\"" Jul 6 23:45:53.567908 containerd[1511]: time="2025-07-06T23:45:53.567844850Z" level=info msg="RemoveContainer for \"661c6d66e62f8406e3fdd6cbf6a97049042626f35fe566eb5fd71a65961d9158\" returns successfully" Jul 6 23:45:53.568384 kubelet[2620]: I0706 23:45:53.568361 2620 scope.go:117] "RemoveContainer" containerID="661c6d66e62f8406e3fdd6cbf6a97049042626f35fe566eb5fd71a65961d9158" Jul 6 23:45:53.568820 containerd[1511]: time="2025-07-06T23:45:53.568780649Z" 
level=error msg="ContainerStatus for \"661c6d66e62f8406e3fdd6cbf6a97049042626f35fe566eb5fd71a65961d9158\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"661c6d66e62f8406e3fdd6cbf6a97049042626f35fe566eb5fd71a65961d9158\": not found"
Jul 6 23:45:53.573848 kubelet[2620]: E0706 23:45:53.573812 2620 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"661c6d66e62f8406e3fdd6cbf6a97049042626f35fe566eb5fd71a65961d9158\": not found" containerID="661c6d66e62f8406e3fdd6cbf6a97049042626f35fe566eb5fd71a65961d9158"
Jul 6 23:45:53.573940 kubelet[2620]: I0706 23:45:53.573859 2620 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"661c6d66e62f8406e3fdd6cbf6a97049042626f35fe566eb5fd71a65961d9158"} err="failed to get container status \"661c6d66e62f8406e3fdd6cbf6a97049042626f35fe566eb5fd71a65961d9158\": rpc error: code = NotFound desc = an error occurred when try to find container \"661c6d66e62f8406e3fdd6cbf6a97049042626f35fe566eb5fd71a65961d9158\": not found"
Jul 6 23:45:53.573977 kubelet[2620]: I0706 23:45:53.573941 2620 scope.go:117] "RemoveContainer" containerID="5a62ba561c105b97d9ae16f2a819618bc7cc10394d820e84ee8e30d81e8e439d"
Jul 6 23:45:53.578625 containerd[1511]: time="2025-07-06T23:45:53.578176873Z" level=info msg="RemoveContainer for \"5a62ba561c105b97d9ae16f2a819618bc7cc10394d820e84ee8e30d81e8e439d\""
Jul 6 23:45:53.588073 containerd[1511]: time="2025-07-06T23:45:53.588030038Z" level=info msg="RemoveContainer for \"5a62ba561c105b97d9ae16f2a819618bc7cc10394d820e84ee8e30d81e8e439d\" returns successfully"
Jul 6 23:45:53.588466 kubelet[2620]: I0706 23:45:53.588443 2620 scope.go:117] "RemoveContainer" containerID="4fb7fbd6d78079b58d6fa6ea79e69e7ed5867fc052fa48f6db5408c8b6101f3c"
Jul 6 23:45:53.590393 containerd[1511]: time="2025-07-06T23:45:53.590364375Z" level=info msg="RemoveContainer for \"4fb7fbd6d78079b58d6fa6ea79e69e7ed5867fc052fa48f6db5408c8b6101f3c\""
Jul 6 23:45:53.594211 containerd[1511]: time="2025-07-06T23:45:53.594102370Z" level=info msg="RemoveContainer for \"4fb7fbd6d78079b58d6fa6ea79e69e7ed5867fc052fa48f6db5408c8b6101f3c\" returns successfully"
Jul 6 23:45:53.594423 kubelet[2620]: I0706 23:45:53.594380 2620 scope.go:117] "RemoveContainer" containerID="d674cd8ebce2d6907c11153eb536f9c9fdd8c288b2f1284033243b2ae2530b71"
Jul 6 23:45:53.597043 containerd[1511]: time="2025-07-06T23:45:53.597009721Z" level=info msg="RemoveContainer for \"d674cd8ebce2d6907c11153eb536f9c9fdd8c288b2f1284033243b2ae2530b71\""
Jul 6 23:45:53.601914 containerd[1511]: time="2025-07-06T23:45:53.601864747Z" level=info msg="RemoveContainer for \"d674cd8ebce2d6907c11153eb536f9c9fdd8c288b2f1284033243b2ae2530b71\" returns successfully"
Jul 6 23:45:53.602228 kubelet[2620]: I0706 23:45:53.602202 2620 scope.go:117] "RemoveContainer" containerID="cd19445dbcb57171e284ef347468dac615978377f8e1789eaa5bfc830560820d"
Jul 6 23:45:53.603877 containerd[1511]: time="2025-07-06T23:45:53.603830660Z" level=info msg="RemoveContainer for \"cd19445dbcb57171e284ef347468dac615978377f8e1789eaa5bfc830560820d\""
Jul 6 23:45:53.607230 containerd[1511]: time="2025-07-06T23:45:53.607123754Z" level=info msg="RemoveContainer for \"cd19445dbcb57171e284ef347468dac615978377f8e1789eaa5bfc830560820d\" returns successfully"
Jul 6 23:45:53.607479 kubelet[2620]: I0706 23:45:53.607447 2620 scope.go:117] "RemoveContainer" containerID="fd4c9ad713c42a1dfdaa9f9e5f6bef2a8ecc42250b7962b97c3a8ec017809044"
Jul 6 23:45:53.609284 containerd[1511]: time="2025-07-06T23:45:53.609247341Z" level=info msg="RemoveContainer for \"fd4c9ad713c42a1dfdaa9f9e5f6bef2a8ecc42250b7962b97c3a8ec017809044\""
Jul 6 23:45:53.612196 containerd[1511]: time="2025-07-06T23:45:53.612167412Z" level=info msg="RemoveContainer for \"fd4c9ad713c42a1dfdaa9f9e5f6bef2a8ecc42250b7962b97c3a8ec017809044\" returns successfully"
Jul 6 23:45:53.612440 kubelet[2620]: I0706 23:45:53.612415 2620 scope.go:117] "RemoveContainer" containerID="5a62ba561c105b97d9ae16f2a819618bc7cc10394d820e84ee8e30d81e8e439d"
Jul 6 23:45:53.612840 containerd[1511]: time="2025-07-06T23:45:53.612811143Z" level=error msg="ContainerStatus for \"5a62ba561c105b97d9ae16f2a819618bc7cc10394d820e84ee8e30d81e8e439d\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"5a62ba561c105b97d9ae16f2a819618bc7cc10394d820e84ee8e30d81e8e439d\": not found"
Jul 6 23:45:53.613027 kubelet[2620]: E0706 23:45:53.612944 2620 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"5a62ba561c105b97d9ae16f2a819618bc7cc10394d820e84ee8e30d81e8e439d\": not found" containerID="5a62ba561c105b97d9ae16f2a819618bc7cc10394d820e84ee8e30d81e8e439d"
Jul 6 23:45:53.613175 kubelet[2620]: I0706 23:45:53.613086 2620 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"5a62ba561c105b97d9ae16f2a819618bc7cc10394d820e84ee8e30d81e8e439d"} err="failed to get container status \"5a62ba561c105b97d9ae16f2a819618bc7cc10394d820e84ee8e30d81e8e439d\": rpc error: code = NotFound desc = an error occurred when try to find container \"5a62ba561c105b97d9ae16f2a819618bc7cc10394d820e84ee8e30d81e8e439d\": not found"
Jul 6 23:45:53.613175 kubelet[2620]: I0706 23:45:53.613125 2620 scope.go:117] "RemoveContainer" containerID="4fb7fbd6d78079b58d6fa6ea79e69e7ed5867fc052fa48f6db5408c8b6101f3c"
Jul 6 23:45:53.613578 containerd[1511]: time="2025-07-06T23:45:53.613533591Z" level=error msg="ContainerStatus for \"4fb7fbd6d78079b58d6fa6ea79e69e7ed5867fc052fa48f6db5408c8b6101f3c\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"4fb7fbd6d78079b58d6fa6ea79e69e7ed5867fc052fa48f6db5408c8b6101f3c\": not found"
Jul 6 23:45:53.613908 kubelet[2620]: E0706 23:45:53.613810 2620 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"4fb7fbd6d78079b58d6fa6ea79e69e7ed5867fc052fa48f6db5408c8b6101f3c\": not found" containerID="4fb7fbd6d78079b58d6fa6ea79e69e7ed5867fc052fa48f6db5408c8b6101f3c"
Jul 6 23:45:53.613908 kubelet[2620]: I0706 23:45:53.613852 2620 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"4fb7fbd6d78079b58d6fa6ea79e69e7ed5867fc052fa48f6db5408c8b6101f3c"} err="failed to get container status \"4fb7fbd6d78079b58d6fa6ea79e69e7ed5867fc052fa48f6db5408c8b6101f3c\": rpc error: code = NotFound desc = an error occurred when try to find container \"4fb7fbd6d78079b58d6fa6ea79e69e7ed5867fc052fa48f6db5408c8b6101f3c\": not found"
Jul 6 23:45:53.613908 kubelet[2620]: I0706 23:45:53.613874 2620 scope.go:117] "RemoveContainer" containerID="d674cd8ebce2d6907c11153eb536f9c9fdd8c288b2f1284033243b2ae2530b71"
Jul 6 23:45:53.614273 containerd[1511]: time="2025-07-06T23:45:53.614236840Z" level=error msg="ContainerStatus for \"d674cd8ebce2d6907c11153eb536f9c9fdd8c288b2f1284033243b2ae2530b71\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"d674cd8ebce2d6907c11153eb536f9c9fdd8c288b2f1284033243b2ae2530b71\": not found"
Jul 6 23:45:53.614396 kubelet[2620]: E0706 23:45:53.614369 2620 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"d674cd8ebce2d6907c11153eb536f9c9fdd8c288b2f1284033243b2ae2530b71\": not found" containerID="d674cd8ebce2d6907c11153eb536f9c9fdd8c288b2f1284033243b2ae2530b71"
Jul 6 23:45:53.614429 kubelet[2620]: I0706 23:45:53.614401 2620 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"d674cd8ebce2d6907c11153eb536f9c9fdd8c288b2f1284033243b2ae2530b71"} err="failed to get container status \"d674cd8ebce2d6907c11153eb536f9c9fdd8c288b2f1284033243b2ae2530b71\": rpc error: code = NotFound desc = an error occurred when try to find container \"d674cd8ebce2d6907c11153eb536f9c9fdd8c288b2f1284033243b2ae2530b71\": not found"
Jul 6 23:45:53.614429 kubelet[2620]: I0706 23:45:53.614419 2620 scope.go:117] "RemoveContainer" containerID="cd19445dbcb57171e284ef347468dac615978377f8e1789eaa5bfc830560820d"
Jul 6 23:45:53.614637 containerd[1511]: time="2025-07-06T23:45:53.614606864Z" level=error msg="ContainerStatus for \"cd19445dbcb57171e284ef347468dac615978377f8e1789eaa5bfc830560820d\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"cd19445dbcb57171e284ef347468dac615978377f8e1789eaa5bfc830560820d\": not found"
Jul 6 23:45:53.614772 kubelet[2620]: E0706 23:45:53.614752 2620 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"cd19445dbcb57171e284ef347468dac615978377f8e1789eaa5bfc830560820d\": not found" containerID="cd19445dbcb57171e284ef347468dac615978377f8e1789eaa5bfc830560820d"
Jul 6 23:45:53.614816 kubelet[2620]: I0706 23:45:53.614780 2620 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"cd19445dbcb57171e284ef347468dac615978377f8e1789eaa5bfc830560820d"} err="failed to get container status \"cd19445dbcb57171e284ef347468dac615978377f8e1789eaa5bfc830560820d\": rpc error: code = NotFound desc = an error occurred when try to find container \"cd19445dbcb57171e284ef347468dac615978377f8e1789eaa5bfc830560820d\": not found"
Jul 6 23:45:53.614816 kubelet[2620]: I0706 23:45:53.614797 2620 scope.go:117] "RemoveContainer" containerID="fd4c9ad713c42a1dfdaa9f9e5f6bef2a8ecc42250b7962b97c3a8ec017809044"
Jul 6 23:45:53.616903 containerd[1511]: time="2025-07-06T23:45:53.616853005Z" level=error msg="ContainerStatus for \"fd4c9ad713c42a1dfdaa9f9e5f6bef2a8ecc42250b7962b97c3a8ec017809044\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"fd4c9ad713c42a1dfdaa9f9e5f6bef2a8ecc42250b7962b97c3a8ec017809044\": not found"
Jul 6 23:45:53.617109 kubelet[2620]: E0706 23:45:53.617033 2620 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"fd4c9ad713c42a1dfdaa9f9e5f6bef2a8ecc42250b7962b97c3a8ec017809044\": not found" containerID="fd4c9ad713c42a1dfdaa9f9e5f6bef2a8ecc42250b7962b97c3a8ec017809044"
Jul 6 23:45:53.617109 kubelet[2620]: I0706 23:45:53.617056 2620 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"fd4c9ad713c42a1dfdaa9f9e5f6bef2a8ecc42250b7962b97c3a8ec017809044"} err="failed to get container status \"fd4c9ad713c42a1dfdaa9f9e5f6bef2a8ecc42250b7962b97c3a8ec017809044\": rpc error: code = NotFound desc = an error occurred when try to find container \"fd4c9ad713c42a1dfdaa9f9e5f6bef2a8ecc42250b7962b97c3a8ec017809044\": not found"
Jul 6 23:45:53.622381 kubelet[2620]: I0706 23:45:53.622349 2620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/3704ddd3-4fa6-40db-9488-8da98e53077c-host-proc-sys-net\") pod \"3704ddd3-4fa6-40db-9488-8da98e53077c\" (UID: \"3704ddd3-4fa6-40db-9488-8da98e53077c\") "
Jul 6 23:45:53.622522 kubelet[2620]: I0706 23:45:53.622388 2620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3704ddd3-4fa6-40db-9488-8da98e53077c-lib-modules\") pod \"3704ddd3-4fa6-40db-9488-8da98e53077c\" (UID: \"3704ddd3-4fa6-40db-9488-8da98e53077c\") "
Jul 6 23:45:53.622522 kubelet[2620]: I0706 23:45:53.622412 2620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/174a76bf-d4fa-4d7d-b8f8-15c25a927459-cilium-config-path\") pod \"174a76bf-d4fa-4d7d-b8f8-15c25a927459\" (UID: \"174a76bf-d4fa-4d7d-b8f8-15c25a927459\") "
Jul 6 23:45:53.622522 kubelet[2620]: I0706 23:45:53.622429 2620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/3704ddd3-4fa6-40db-9488-8da98e53077c-cilium-cgroup\") pod \"3704ddd3-4fa6-40db-9488-8da98e53077c\" (UID: \"3704ddd3-4fa6-40db-9488-8da98e53077c\") "
Jul 6 23:45:53.622522 kubelet[2620]: I0706 23:45:53.622462 2620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/3704ddd3-4fa6-40db-9488-8da98e53077c-clustermesh-secrets\") pod \"3704ddd3-4fa6-40db-9488-8da98e53077c\" (UID: \"3704ddd3-4fa6-40db-9488-8da98e53077c\") "
Jul 6 23:45:53.622522 kubelet[2620]: I0706 23:45:53.622480 2620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xpkmv\" (UniqueName: \"kubernetes.io/projected/174a76bf-d4fa-4d7d-b8f8-15c25a927459-kube-api-access-xpkmv\") pod \"174a76bf-d4fa-4d7d-b8f8-15c25a927459\" (UID: \"174a76bf-d4fa-4d7d-b8f8-15c25a927459\") "
Jul 6 23:45:53.622522 kubelet[2620]: I0706 23:45:53.622523 2620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3704ddd3-4fa6-40db-9488-8da98e53077c-xtables-lock\") pod \"3704ddd3-4fa6-40db-9488-8da98e53077c\" (UID: \"3704ddd3-4fa6-40db-9488-8da98e53077c\") "
Jul 6 23:45:53.623507 kubelet[2620]: I0706 23:45:53.622538 2620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/3704ddd3-4fa6-40db-9488-8da98e53077c-cni-path\") pod \"3704ddd3-4fa6-40db-9488-8da98e53077c\" (UID: \"3704ddd3-4fa6-40db-9488-8da98e53077c\") "
Jul 6 23:45:53.623507 kubelet[2620]: I0706 23:45:53.622555 2620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/3704ddd3-4fa6-40db-9488-8da98e53077c-hostproc\") pod \"3704ddd3-4fa6-40db-9488-8da98e53077c\" (UID: \"3704ddd3-4fa6-40db-9488-8da98e53077c\") "
Jul 6 23:45:53.623507 kubelet[2620]: I0706 23:45:53.622588 2620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5zth4\" (UniqueName: \"kubernetes.io/projected/3704ddd3-4fa6-40db-9488-8da98e53077c-kube-api-access-5zth4\") pod \"3704ddd3-4fa6-40db-9488-8da98e53077c\" (UID: \"3704ddd3-4fa6-40db-9488-8da98e53077c\") "
Jul 6 23:45:53.623507 kubelet[2620]: I0706 23:45:53.622603 2620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/3704ddd3-4fa6-40db-9488-8da98e53077c-cilium-run\") pod \"3704ddd3-4fa6-40db-9488-8da98e53077c\" (UID: \"3704ddd3-4fa6-40db-9488-8da98e53077c\") "
Jul 6 23:45:53.623507 kubelet[2620]: I0706 23:45:53.622619 2620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/3704ddd3-4fa6-40db-9488-8da98e53077c-host-proc-sys-kernel\") pod \"3704ddd3-4fa6-40db-9488-8da98e53077c\" (UID: \"3704ddd3-4fa6-40db-9488-8da98e53077c\") "
Jul 6 23:45:53.623507 kubelet[2620]: I0706 23:45:53.622635 2620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/3704ddd3-4fa6-40db-9488-8da98e53077c-cilium-config-path\") pod \"3704ddd3-4fa6-40db-9488-8da98e53077c\" (UID: \"3704ddd3-4fa6-40db-9488-8da98e53077c\") "
Jul 6 23:45:53.623665 kubelet[2620]: I0706 23:45:53.622652 2620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/3704ddd3-4fa6-40db-9488-8da98e53077c-hubble-tls\") pod \"3704ddd3-4fa6-40db-9488-8da98e53077c\" (UID: \"3704ddd3-4fa6-40db-9488-8da98e53077c\") "
Jul 6 23:45:53.623665 kubelet[2620]: I0706 23:45:53.622667 2620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/3704ddd3-4fa6-40db-9488-8da98e53077c-etc-cni-netd\") pod \"3704ddd3-4fa6-40db-9488-8da98e53077c\" (UID: \"3704ddd3-4fa6-40db-9488-8da98e53077c\") "
Jul 6 23:45:53.623665 kubelet[2620]: I0706 23:45:53.622681 2620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/3704ddd3-4fa6-40db-9488-8da98e53077c-bpf-maps\") pod \"3704ddd3-4fa6-40db-9488-8da98e53077c\" (UID: \"3704ddd3-4fa6-40db-9488-8da98e53077c\") "
Jul 6 23:45:53.632683 kubelet[2620]: I0706 23:45:53.632593 2620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3704ddd3-4fa6-40db-9488-8da98e53077c-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "3704ddd3-4fa6-40db-9488-8da98e53077c" (UID: "3704ddd3-4fa6-40db-9488-8da98e53077c"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jul 6 23:45:53.632902 kubelet[2620]: I0706 23:45:53.632682 2620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3704ddd3-4fa6-40db-9488-8da98e53077c-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "3704ddd3-4fa6-40db-9488-8da98e53077c" (UID: "3704ddd3-4fa6-40db-9488-8da98e53077c"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jul 6 23:45:53.633260 kubelet[2620]: I0706 23:45:53.633212 2620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3704ddd3-4fa6-40db-9488-8da98e53077c-hostproc" (OuterVolumeSpecName: "hostproc") pod "3704ddd3-4fa6-40db-9488-8da98e53077c" (UID: "3704ddd3-4fa6-40db-9488-8da98e53077c"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jul 6 23:45:53.633748 kubelet[2620]: I0706 23:45:53.633662 2620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3704ddd3-4fa6-40db-9488-8da98e53077c-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "3704ddd3-4fa6-40db-9488-8da98e53077c" (UID: "3704ddd3-4fa6-40db-9488-8da98e53077c"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jul 6 23:45:53.634656 kubelet[2620]: I0706 23:45:53.634619 2620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3704ddd3-4fa6-40db-9488-8da98e53077c-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "3704ddd3-4fa6-40db-9488-8da98e53077c" (UID: "3704ddd3-4fa6-40db-9488-8da98e53077c"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jul 6 23:45:53.634717 kubelet[2620]: I0706 23:45:53.634686 2620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3704ddd3-4fa6-40db-9488-8da98e53077c-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "3704ddd3-4fa6-40db-9488-8da98e53077c" (UID: "3704ddd3-4fa6-40db-9488-8da98e53077c"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jul 6 23:45:53.634717 kubelet[2620]: I0706 23:45:53.634703 2620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3704ddd3-4fa6-40db-9488-8da98e53077c-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "3704ddd3-4fa6-40db-9488-8da98e53077c" (UID: "3704ddd3-4fa6-40db-9488-8da98e53077c"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jul 6 23:45:53.635596 kubelet[2620]: I0706 23:45:53.635475 2620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/174a76bf-d4fa-4d7d-b8f8-15c25a927459-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "174a76bf-d4fa-4d7d-b8f8-15c25a927459" (UID: "174a76bf-d4fa-4d7d-b8f8-15c25a927459"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jul 6 23:45:53.635596 kubelet[2620]: I0706 23:45:53.635529 2620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3704ddd3-4fa6-40db-9488-8da98e53077c-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "3704ddd3-4fa6-40db-9488-8da98e53077c" (UID: "3704ddd3-4fa6-40db-9488-8da98e53077c"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jul 6 23:45:53.635596 kubelet[2620]: I0706 23:45:53.635546 2620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3704ddd3-4fa6-40db-9488-8da98e53077c-cni-path" (OuterVolumeSpecName: "cni-path") pod "3704ddd3-4fa6-40db-9488-8da98e53077c" (UID: "3704ddd3-4fa6-40db-9488-8da98e53077c"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jul 6 23:45:53.635723 kubelet[2620]: I0706 23:45:53.635562 2620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3704ddd3-4fa6-40db-9488-8da98e53077c-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "3704ddd3-4fa6-40db-9488-8da98e53077c" (UID: "3704ddd3-4fa6-40db-9488-8da98e53077c"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jul 6 23:45:53.637158 kubelet[2620]: I0706 23:45:53.637110 2620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3704ddd3-4fa6-40db-9488-8da98e53077c-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "3704ddd3-4fa6-40db-9488-8da98e53077c" (UID: "3704ddd3-4fa6-40db-9488-8da98e53077c"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jul 6 23:45:53.637247 kubelet[2620]: I0706 23:45:53.637183 2620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3704ddd3-4fa6-40db-9488-8da98e53077c-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "3704ddd3-4fa6-40db-9488-8da98e53077c" (UID: "3704ddd3-4fa6-40db-9488-8da98e53077c"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jul 6 23:45:53.637717 kubelet[2620]: I0706 23:45:53.637579 2620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3704ddd3-4fa6-40db-9488-8da98e53077c-kube-api-access-5zth4" (OuterVolumeSpecName: "kube-api-access-5zth4") pod "3704ddd3-4fa6-40db-9488-8da98e53077c" (UID: "3704ddd3-4fa6-40db-9488-8da98e53077c"). InnerVolumeSpecName "kube-api-access-5zth4". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jul 6 23:45:53.638098 kubelet[2620]: I0706 23:45:53.638052 2620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/174a76bf-d4fa-4d7d-b8f8-15c25a927459-kube-api-access-xpkmv" (OuterVolumeSpecName: "kube-api-access-xpkmv") pod "174a76bf-d4fa-4d7d-b8f8-15c25a927459" (UID: "174a76bf-d4fa-4d7d-b8f8-15c25a927459"). InnerVolumeSpecName "kube-api-access-xpkmv". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jul 6 23:45:53.638397 kubelet[2620]: I0706 23:45:53.638367 2620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3704ddd3-4fa6-40db-9488-8da98e53077c-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "3704ddd3-4fa6-40db-9488-8da98e53077c" (UID: "3704ddd3-4fa6-40db-9488-8da98e53077c"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jul 6 23:45:53.723621 kubelet[2620]: I0706 23:45:53.723555 2620 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/3704ddd3-4fa6-40db-9488-8da98e53077c-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\""
Jul 6 23:45:53.723621 kubelet[2620]: I0706 23:45:53.723611 2620 reconciler_common.go:293] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/3704ddd3-4fa6-40db-9488-8da98e53077c-cilium-config-path\") on node \"localhost\" DevicePath \"\""
Jul 6 23:45:53.723621 kubelet[2620]: I0706 23:45:53.723622 2620 reconciler_common.go:293] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/3704ddd3-4fa6-40db-9488-8da98e53077c-hubble-tls\") on node \"localhost\" DevicePath \"\""
Jul 6 23:45:53.723621 kubelet[2620]: I0706 23:45:53.723638 2620 reconciler_common.go:293] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/3704ddd3-4fa6-40db-9488-8da98e53077c-etc-cni-netd\") on node \"localhost\" DevicePath \"\""
Jul 6 23:45:53.723833 kubelet[2620]: I0706 23:45:53.723647 2620 reconciler_common.go:293] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/3704ddd3-4fa6-40db-9488-8da98e53077c-bpf-maps\") on node \"localhost\" DevicePath \"\""
Jul 6 23:45:53.723833 kubelet[2620]: I0706 23:45:53.723655 2620 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/3704ddd3-4fa6-40db-9488-8da98e53077c-host-proc-sys-net\") on node \"localhost\" DevicePath \"\""
Jul 6 23:45:53.723833 kubelet[2620]: I0706 23:45:53.723663 2620 reconciler_common.go:293] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3704ddd3-4fa6-40db-9488-8da98e53077c-lib-modules\") on node \"localhost\" DevicePath \"\""
Jul 6 23:45:53.723833 kubelet[2620]: I0706 23:45:53.723670 2620 reconciler_common.go:293] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/174a76bf-d4fa-4d7d-b8f8-15c25a927459-cilium-config-path\") on node \"localhost\" DevicePath \"\""
Jul 6 23:45:53.723833 kubelet[2620]: I0706 23:45:53.723678 2620 reconciler_common.go:293] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/3704ddd3-4fa6-40db-9488-8da98e53077c-cilium-cgroup\") on node \"localhost\" DevicePath \"\""
Jul 6 23:45:53.723833 kubelet[2620]: I0706 23:45:53.723685 2620 reconciler_common.go:293] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/3704ddd3-4fa6-40db-9488-8da98e53077c-clustermesh-secrets\") on node \"localhost\" DevicePath \"\""
Jul 6 23:45:53.723833 kubelet[2620]: I0706 23:45:53.723692 2620 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xpkmv\" (UniqueName: \"kubernetes.io/projected/174a76bf-d4fa-4d7d-b8f8-15c25a927459-kube-api-access-xpkmv\") on node \"localhost\" DevicePath \"\""
Jul 6 23:45:53.723833 kubelet[2620]: I0706 23:45:53.723701 2620 reconciler_common.go:293] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3704ddd3-4fa6-40db-9488-8da98e53077c-xtables-lock\") on node \"localhost\" DevicePath \"\""
Jul 6 23:45:53.724008 kubelet[2620]: I0706 23:45:53.723708 2620 reconciler_common.go:293] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/3704ddd3-4fa6-40db-9488-8da98e53077c-cni-path\") on node \"localhost\" DevicePath \"\""
Jul 6 23:45:53.724008 kubelet[2620]: I0706 23:45:53.723716 2620 reconciler_common.go:293] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/3704ddd3-4fa6-40db-9488-8da98e53077c-hostproc\") on node \"localhost\" DevicePath \"\""
Jul 6 23:45:53.724008 kubelet[2620]: I0706 23:45:53.723724 2620 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5zth4\" (UniqueName: \"kubernetes.io/projected/3704ddd3-4fa6-40db-9488-8da98e53077c-kube-api-access-5zth4\") on node \"localhost\" DevicePath \"\""
Jul 6 23:45:53.724008 kubelet[2620]: I0706 23:45:53.723731 2620 reconciler_common.go:293] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/3704ddd3-4fa6-40db-9488-8da98e53077c-cilium-run\") on node \"localhost\" DevicePath \"\""
Jul 6 23:45:53.841988 systemd[1]: Removed slice kubepods-besteffort-pod174a76bf_d4fa_4d7d_b8f8_15c25a927459.slice - libcontainer container kubepods-besteffort-pod174a76bf_d4fa_4d7d_b8f8_15c25a927459.slice.
Jul 6 23:45:53.849177 systemd[1]: Removed slice kubepods-burstable-pod3704ddd3_4fa6_40db_9488_8da98e53077c.slice - libcontainer container kubepods-burstable-pod3704ddd3_4fa6_40db_9488_8da98e53077c.slice.
Jul 6 23:45:53.849272 systemd[1]: kubepods-burstable-pod3704ddd3_4fa6_40db_9488_8da98e53077c.slice: Consumed 7.388s CPU time, 121.4M memory peak, 176K read from disk, 12.9M written to disk.
Jul 6 23:45:54.298512 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-4538a34f5408c526df6d714df0d1216b8d9a7dec9f81186e792cd73d62f98ac9-shm.mount: Deactivated successfully.
Jul 6 23:45:54.298628 systemd[1]: var-lib-kubelet-pods-174a76bf\x2dd4fa\x2d4d7d\x2db8f8\x2d15c25a927459-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dxpkmv.mount: Deactivated successfully.
Jul 6 23:45:54.298682 systemd[1]: var-lib-kubelet-pods-3704ddd3\x2d4fa6\x2d40db\x2d9488\x2d8da98e53077c-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d5zth4.mount: Deactivated successfully.
Jul 6 23:45:54.298734 systemd[1]: var-lib-kubelet-pods-3704ddd3\x2d4fa6\x2d40db\x2d9488\x2d8da98e53077c-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
Jul 6 23:45:54.298797 systemd[1]: var-lib-kubelet-pods-3704ddd3\x2d4fa6\x2d40db\x2d9488\x2d8da98e53077c-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
Jul 6 23:45:54.324341 kubelet[2620]: I0706 23:45:54.324285 2620 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="174a76bf-d4fa-4d7d-b8f8-15c25a927459" path="/var/lib/kubelet/pods/174a76bf-d4fa-4d7d-b8f8-15c25a927459/volumes"
Jul 6 23:45:54.325025 kubelet[2620]: I0706 23:45:54.325002 2620 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3704ddd3-4fa6-40db-9488-8da98e53077c" path="/var/lib/kubelet/pods/3704ddd3-4fa6-40db-9488-8da98e53077c/volumes"
Jul 6 23:45:54.384107 kubelet[2620]: E0706 23:45:54.384059 2620 kubelet.go:2902] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Jul 6 23:45:55.221712 sshd[4219]: Connection closed by 10.0.0.1 port 41340
Jul 6 23:45:55.222284 sshd-session[4217]: pam_unix(sshd:session): session closed for user core
Jul 6 23:45:55.238699 systemd[1]: sshd@22-10.0.0.128:22-10.0.0.1:41340.service: Deactivated successfully.
Jul 6 23:45:55.241176 systemd[1]: session-23.scope: Deactivated successfully.
Jul 6 23:45:55.241478 systemd[1]: session-23.scope: Consumed 1.871s CPU time, 26.4M memory peak.
Jul 6 23:45:55.244340 systemd-logind[1485]: Session 23 logged out. Waiting for processes to exit.
Jul 6 23:45:55.246637 systemd[1]: Started sshd@23-10.0.0.128:22-10.0.0.1:43978.service - OpenSSH per-connection server daemon (10.0.0.1:43978).
Jul 6 23:45:55.248381 systemd-logind[1485]: Removed session 23.
Jul 6 23:45:55.308457 sshd[4371]: Accepted publickey for core from 10.0.0.1 port 43978 ssh2: RSA SHA256:CSJlI8/o3cgAW3JnP3N/e8VY57OeBgyk25K3mGio6wo
Jul 6 23:45:55.310112 sshd-session[4371]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 6 23:45:55.314514 systemd-logind[1485]: New session 24 of user core.
Jul 6 23:45:55.323175 kubelet[2620]: E0706 23:45:55.322719 2620 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 6 23:45:55.323799 systemd[1]: Started session-24.scope - Session 24 of User core.
Jul 6 23:45:56.087033 kubelet[2620]: I0706 23:45:56.086986 2620 setters.go:600] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-07-06T23:45:56Z","lastTransitionTime":"2025-07-06T23:45:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Jul 6 23:45:56.695265 sshd[4373]: Connection closed by 10.0.0.1 port 43978
Jul 6 23:45:56.694644 sshd-session[4371]: pam_unix(sshd:session): session closed for user core
Jul 6 23:45:56.711715 systemd[1]: sshd@23-10.0.0.128:22-10.0.0.1:43978.service: Deactivated successfully.
Jul 6 23:45:56.714877 systemd[1]: session-24.scope: Deactivated successfully.
Jul 6 23:45:56.715150 systemd[1]: session-24.scope: Consumed 1.206s CPU time, 24.4M memory peak.
Jul 6 23:45:56.716218 systemd-logind[1485]: Session 24 logged out. Waiting for processes to exit.
Jul 6 23:45:56.720854 systemd[1]: Started sshd@24-10.0.0.128:22-10.0.0.1:43982.service - OpenSSH per-connection server daemon (10.0.0.1:43982).
Jul 6 23:45:56.724543 kubelet[2620]: E0706 23:45:56.724433 2620 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="3704ddd3-4fa6-40db-9488-8da98e53077c" containerName="mount-cgroup"
Jul 6 23:45:56.724543 kubelet[2620]: E0706 23:45:56.724466 2620 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="3704ddd3-4fa6-40db-9488-8da98e53077c" containerName="apply-sysctl-overwrites"
Jul 6 23:45:56.724543 kubelet[2620]: E0706 23:45:56.724474 2620 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="3704ddd3-4fa6-40db-9488-8da98e53077c" containerName="clean-cilium-state"
Jul 6 23:45:56.724543 kubelet[2620]: E0706 23:45:56.724481 2620 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="3704ddd3-4fa6-40db-9488-8da98e53077c" containerName="cilium-agent"
Jul 6 23:45:56.724543 kubelet[2620]: E0706 23:45:56.724489 2620 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="3704ddd3-4fa6-40db-9488-8da98e53077c" containerName="mount-bpf-fs"
Jul 6 23:45:56.724543 kubelet[2620]: E0706 23:45:56.724495 2620 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="174a76bf-d4fa-4d7d-b8f8-15c25a927459" containerName="cilium-operator"
Jul 6 23:45:56.724543 kubelet[2620]: I0706 23:45:56.724521 2620 memory_manager.go:354] "RemoveStaleState removing state" podUID="3704ddd3-4fa6-40db-9488-8da98e53077c" containerName="cilium-agent"
Jul 6 23:45:56.724543 kubelet[2620]: I0706 23:45:56.724526 2620 memory_manager.go:354] "RemoveStaleState removing state" podUID="174a76bf-d4fa-4d7d-b8f8-15c25a927459" containerName="cilium-operator"
Jul 6 23:45:56.725753 systemd-logind[1485]: Removed session 24.
Jul 6 23:45:56.748927 systemd[1]: Created slice kubepods-burstable-podd435d8de_ebbc_4b96_a088_a1f4cbd494d8.slice - libcontainer container kubepods-burstable-podd435d8de_ebbc_4b96_a088_a1f4cbd494d8.slice.
Jul 6 23:45:56.785128 sshd[4385]: Accepted publickey for core from 10.0.0.1 port 43982 ssh2: RSA SHA256:CSJlI8/o3cgAW3JnP3N/e8VY57OeBgyk25K3mGio6wo
Jul 6 23:45:56.786442 sshd-session[4385]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 6 23:45:56.790483 systemd-logind[1485]: New session 25 of user core.
Jul 6 23:45:56.796755 systemd[1]: Started session-25.scope - Session 25 of User core.
Jul 6 23:45:56.841033 kubelet[2620]: I0706 23:45:56.840980 2620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/d435d8de-ebbc-4b96-a088-a1f4cbd494d8-cilium-run\") pod \"cilium-79smx\" (UID: \"d435d8de-ebbc-4b96-a088-a1f4cbd494d8\") " pod="kube-system/cilium-79smx"
Jul 6 23:45:56.841033 kubelet[2620]: I0706 23:45:56.841027 2620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d435d8de-ebbc-4b96-a088-a1f4cbd494d8-lib-modules\") pod \"cilium-79smx\" (UID: \"d435d8de-ebbc-4b96-a088-a1f4cbd494d8\") " pod="kube-system/cilium-79smx"
Jul 6 23:45:56.841033 kubelet[2620]: I0706 23:45:56.841045 2620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/d435d8de-ebbc-4b96-a088-a1f4cbd494d8-cilium-ipsec-secrets\") pod \"cilium-79smx\" (UID: \"d435d8de-ebbc-4b96-a088-a1f4cbd494d8\") " pod="kube-system/cilium-79smx"
Jul 6 23:45:56.841206 kubelet[2620]: I0706 23:45:56.841063 2620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/d435d8de-ebbc-4b96-a088-a1f4cbd494d8-bpf-maps\") pod \"cilium-79smx\" (UID: \"d435d8de-ebbc-4b96-a088-a1f4cbd494d8\") " pod="kube-system/cilium-79smx"
Jul 6 23:45:56.841206 kubelet[2620]: I0706 23:45:56.841079 2620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d435d8de-ebbc-4b96-a088-a1f4cbd494d8-cilium-config-path\") pod \"cilium-79smx\" (UID: \"d435d8de-ebbc-4b96-a088-a1f4cbd494d8\") " pod="kube-system/cilium-79smx"
Jul 6 23:45:56.841206 kubelet[2620]: I0706 23:45:56.841100 2620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/d435d8de-ebbc-4b96-a088-a1f4cbd494d8-clustermesh-secrets\") pod \"cilium-79smx\" (UID: \"d435d8de-ebbc-4b96-a088-a1f4cbd494d8\") " pod="kube-system/cilium-79smx"
Jul 6 23:45:56.841206 kubelet[2620]: I0706 23:45:56.841117 2620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/d435d8de-ebbc-4b96-a088-a1f4cbd494d8-cilium-cgroup\") pod \"cilium-79smx\" (UID: \"d435d8de-ebbc-4b96-a088-a1f4cbd494d8\") " pod="kube-system/cilium-79smx"
Jul 6 23:45:56.841206 kubelet[2620]: I0706 23:45:56.841133 2620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/d435d8de-ebbc-4b96-a088-a1f4cbd494d8-cni-path\") pod \"cilium-79smx\" (UID: \"d435d8de-ebbc-4b96-a088-a1f4cbd494d8\") " pod="kube-system/cilium-79smx"
Jul 6 23:45:56.841206 kubelet[2620]: I0706 23:45:56.841149 2620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/d435d8de-ebbc-4b96-a088-a1f4cbd494d8-hubble-tls\") pod \"cilium-79smx\" (UID: \"d435d8de-ebbc-4b96-a088-a1f4cbd494d8\") " pod="kube-system/cilium-79smx"
Jul 6 23:45:56.841347 kubelet[2620]: I0706 23:45:56.841165 2620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gxhr6\" (UniqueName: \"kubernetes.io/projected/d435d8de-ebbc-4b96-a088-a1f4cbd494d8-kube-api-access-gxhr6\") pod \"cilium-79smx\" (UID: \"d435d8de-ebbc-4b96-a088-a1f4cbd494d8\") " pod="kube-system/cilium-79smx"
Jul 6 23:45:56.841347 kubelet[2620]: I0706 23:45:56.841185 2620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/d435d8de-ebbc-4b96-a088-a1f4cbd494d8-hostproc\") pod \"cilium-79smx\" (UID: \"d435d8de-ebbc-4b96-a088-a1f4cbd494d8\") " pod="kube-system/cilium-79smx"
Jul 6 23:45:56.841347 kubelet[2620]: I0706 23:45:56.841201 2620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d435d8de-ebbc-4b96-a088-a1f4cbd494d8-xtables-lock\") pod \"cilium-79smx\" (UID: \"d435d8de-ebbc-4b96-a088-a1f4cbd494d8\") " pod="kube-system/cilium-79smx"
Jul 6 23:45:56.841347 kubelet[2620]: I0706 23:45:56.841219 2620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/d435d8de-ebbc-4b96-a088-a1f4cbd494d8-host-proc-sys-kernel\") pod \"cilium-79smx\" (UID: \"d435d8de-ebbc-4b96-a088-a1f4cbd494d8\") " pod="kube-system/cilium-79smx"
Jul 6 23:45:56.841347 kubelet[2620]: I0706 23:45:56.841235 2620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/d435d8de-ebbc-4b96-a088-a1f4cbd494d8-host-proc-sys-net\") pod \"cilium-79smx\" (UID: \"d435d8de-ebbc-4b96-a088-a1f4cbd494d8\") " pod="kube-system/cilium-79smx"
Jul 6 23:45:56.841347 kubelet[2620]: I0706 23:45:56.841254 2620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d435d8de-ebbc-4b96-a088-a1f4cbd494d8-etc-cni-netd\") pod \"cilium-79smx\" (UID: \"d435d8de-ebbc-4b96-a088-a1f4cbd494d8\") " pod="kube-system/cilium-79smx"
Jul 6 23:45:56.846377 sshd[4387]: Connection closed by 10.0.0.1 port 43982
Jul 6 23:45:56.846803 sshd-session[4385]: pam_unix(sshd:session): session closed for user core
Jul 6 23:45:56.860055 systemd[1]: sshd@24-10.0.0.128:22-10.0.0.1:43982.service: Deactivated successfully.
Jul 6 23:45:56.861866 systemd[1]: session-25.scope: Deactivated successfully.
Jul 6 23:45:56.864117 systemd-logind[1485]: Session 25 logged out. Waiting for processes to exit.
Jul 6 23:45:56.867336 systemd[1]: Started sshd@25-10.0.0.128:22-10.0.0.1:43988.service - OpenSSH per-connection server daemon (10.0.0.1:43988).
Jul 6 23:45:56.870647 systemd-logind[1485]: Removed session 25.
Jul 6 23:45:56.923611 sshd[4394]: Accepted publickey for core from 10.0.0.1 port 43988 ssh2: RSA SHA256:CSJlI8/o3cgAW3JnP3N/e8VY57OeBgyk25K3mGio6wo
Jul 6 23:45:56.925031 sshd-session[4394]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 6 23:45:56.930372 systemd-logind[1485]: New session 26 of user core.
Jul 6 23:45:56.939785 systemd[1]: Started session-26.scope - Session 26 of User core.
Jul 6 23:45:57.055061 kubelet[2620]: E0706 23:45:57.054642 2620 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 6 23:45:57.056775 containerd[1511]: time="2025-07-06T23:45:57.056404050Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-79smx,Uid:d435d8de-ebbc-4b96-a088-a1f4cbd494d8,Namespace:kube-system,Attempt:0,}"
Jul 6 23:45:57.079597 containerd[1511]: time="2025-07-06T23:45:57.079070781Z" level=info msg="connecting to shim 6dc493baaaa8700a2759baed6a41eb515b5af838c562107a0b91084516fc0195" address="unix:///run/containerd/s/8026261e5a09051cf436611efc3e9a03433d6404a30a196aea707a74c916f735" namespace=k8s.io protocol=ttrpc version=3
Jul 6 23:45:57.103805 systemd[1]: Started cri-containerd-6dc493baaaa8700a2759baed6a41eb515b5af838c562107a0b91084516fc0195.scope - libcontainer container 6dc493baaaa8700a2759baed6a41eb515b5af838c562107a0b91084516fc0195.
Jul 6 23:45:57.130847 containerd[1511]: time="2025-07-06T23:45:57.130798431Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-79smx,Uid:d435d8de-ebbc-4b96-a088-a1f4cbd494d8,Namespace:kube-system,Attempt:0,} returns sandbox id \"6dc493baaaa8700a2759baed6a41eb515b5af838c562107a0b91084516fc0195\""
Jul 6 23:45:57.131482 kubelet[2620]: E0706 23:45:57.131461 2620 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 6 23:45:57.154802 containerd[1511]: time="2025-07-06T23:45:57.154762678Z" level=info msg="CreateContainer within sandbox \"6dc493baaaa8700a2759baed6a41eb515b5af838c562107a0b91084516fc0195\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Jul 6 23:45:57.160496 containerd[1511]: time="2025-07-06T23:45:57.160443851Z" level=info msg="Container d32856d4d807591f0f71f2b4d5215a0936a69dbbe866813b1f65535efbd11a10: CDI devices from CRI Config.CDIDevices: []"
Jul 6 23:45:57.166123 containerd[1511]: time="2025-07-06T23:45:57.166070145Z" level=info msg="CreateContainer within sandbox \"6dc493baaaa8700a2759baed6a41eb515b5af838c562107a0b91084516fc0195\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"d32856d4d807591f0f71f2b4d5215a0936a69dbbe866813b1f65535efbd11a10\""
Jul 6 23:45:57.166658 containerd[1511]: time="2025-07-06T23:45:57.166633886Z" level=info msg="StartContainer for \"d32856d4d807591f0f71f2b4d5215a0936a69dbbe866813b1f65535efbd11a10\""
Jul 6 23:45:57.167772 containerd[1511]: time="2025-07-06T23:45:57.167722570Z" level=info msg="connecting to shim d32856d4d807591f0f71f2b4d5215a0936a69dbbe866813b1f65535efbd11a10" address="unix:///run/containerd/s/8026261e5a09051cf436611efc3e9a03433d6404a30a196aea707a74c916f735" protocol=ttrpc version=3
Jul 6 23:45:57.191783 systemd[1]: Started cri-containerd-d32856d4d807591f0f71f2b4d5215a0936a69dbbe866813b1f65535efbd11a10.scope - libcontainer container d32856d4d807591f0f71f2b4d5215a0936a69dbbe866813b1f65535efbd11a10.
Jul 6 23:45:57.218494 containerd[1511]: time="2025-07-06T23:45:57.218452173Z" level=info msg="StartContainer for \"d32856d4d807591f0f71f2b4d5215a0936a69dbbe866813b1f65535efbd11a10\" returns successfully"
Jul 6 23:45:57.237652 systemd[1]: cri-containerd-d32856d4d807591f0f71f2b4d5215a0936a69dbbe866813b1f65535efbd11a10.scope: Deactivated successfully.
Jul 6 23:45:57.238792 containerd[1511]: time="2025-07-06T23:45:57.238738142Z" level=info msg="received exit event container_id:\"d32856d4d807591f0f71f2b4d5215a0936a69dbbe866813b1f65535efbd11a10\" id:\"d32856d4d807591f0f71f2b4d5215a0936a69dbbe866813b1f65535efbd11a10\" pid:4466 exited_at:{seconds:1751845557 nanos:238284877}"
Jul 6 23:45:57.239003 containerd[1511]: time="2025-07-06T23:45:57.238838739Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d32856d4d807591f0f71f2b4d5215a0936a69dbbe866813b1f65535efbd11a10\" id:\"d32856d4d807591f0f71f2b4d5215a0936a69dbbe866813b1f65535efbd11a10\" pid:4466 exited_at:{seconds:1751845557 nanos:238284877}"
Jul 6 23:45:57.555467 kubelet[2620]: E0706 23:45:57.555410 2620 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 6 23:45:57.558630 containerd[1511]: time="2025-07-06T23:45:57.558213779Z" level=info msg="CreateContainer within sandbox \"6dc493baaaa8700a2759baed6a41eb515b5af838c562107a0b91084516fc0195\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Jul 6 23:45:57.565041 containerd[1511]: time="2025-07-06T23:45:57.564999435Z" level=info msg="Container c3fb7f12d62d409e8e82913ecc0dd4ff31a7c7750ccf85a2ab537c79df540dc2: CDI devices from CRI Config.CDIDevices: []"
Jul 6 23:45:57.575032 containerd[1511]: time="2025-07-06T23:45:57.574983585Z" level=info msg="CreateContainer within sandbox \"6dc493baaaa8700a2759baed6a41eb515b5af838c562107a0b91084516fc0195\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"c3fb7f12d62d409e8e82913ecc0dd4ff31a7c7750ccf85a2ab537c79df540dc2\""
Jul 6 23:45:57.577283 containerd[1511]: time="2025-07-06T23:45:57.576724527Z" level=info msg="StartContainer for \"c3fb7f12d62d409e8e82913ecc0dd4ff31a7c7750ccf85a2ab537c79df540dc2\""
Jul 6 23:45:57.580168 containerd[1511]: time="2025-07-06T23:45:57.580108536Z" level=info msg="connecting to shim c3fb7f12d62d409e8e82913ecc0dd4ff31a7c7750ccf85a2ab537c79df540dc2" address="unix:///run/containerd/s/8026261e5a09051cf436611efc3e9a03433d6404a30a196aea707a74c916f735" protocol=ttrpc version=3
Jul 6 23:45:57.601765 systemd[1]: Started cri-containerd-c3fb7f12d62d409e8e82913ecc0dd4ff31a7c7750ccf85a2ab537c79df540dc2.scope - libcontainer container c3fb7f12d62d409e8e82913ecc0dd4ff31a7c7750ccf85a2ab537c79df540dc2.
Jul 6 23:45:57.627025 containerd[1511]: time="2025-07-06T23:45:57.626962306Z" level=info msg="StartContainer for \"c3fb7f12d62d409e8e82913ecc0dd4ff31a7c7750ccf85a2ab537c79df540dc2\" returns successfully"
Jul 6 23:45:57.657183 systemd[1]: cri-containerd-c3fb7f12d62d409e8e82913ecc0dd4ff31a7c7750ccf85a2ab537c79df540dc2.scope: Deactivated successfully.
Jul 6 23:45:57.658252 containerd[1511]: time="2025-07-06T23:45:57.657854845Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c3fb7f12d62d409e8e82913ecc0dd4ff31a7c7750ccf85a2ab537c79df540dc2\" id:\"c3fb7f12d62d409e8e82913ecc0dd4ff31a7c7750ccf85a2ab537c79df540dc2\" pid:4510 exited_at:{seconds:1751845557 nanos:657320583}"
Jul 6 23:45:57.658704 containerd[1511]: time="2025-07-06T23:45:57.658603780Z" level=info msg="received exit event container_id:\"c3fb7f12d62d409e8e82913ecc0dd4ff31a7c7750ccf85a2ab537c79df540dc2\" id:\"c3fb7f12d62d409e8e82913ecc0dd4ff31a7c7750ccf85a2ab537c79df540dc2\" pid:4510 exited_at:{seconds:1751845557 nanos:657320583}"
Jul 6 23:45:58.559975 kubelet[2620]: E0706 23:45:58.559910 2620 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 6 23:45:58.563824 containerd[1511]: time="2025-07-06T23:45:58.563693199Z" level=info msg="CreateContainer within sandbox \"6dc493baaaa8700a2759baed6a41eb515b5af838c562107a0b91084516fc0195\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Jul 6 23:45:58.580722 containerd[1511]: time="2025-07-06T23:45:58.580549645Z" level=info msg="Container 9e8d8c08c49a599c44a9a781cb3ad2f0a3e1e948d2343ab126c1d1c2d23b3957: CDI devices from CRI Config.CDIDevices: []"
Jul 6 23:45:58.588954 containerd[1511]: time="2025-07-06T23:45:58.588900230Z" level=info msg="CreateContainer within sandbox \"6dc493baaaa8700a2759baed6a41eb515b5af838c562107a0b91084516fc0195\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"9e8d8c08c49a599c44a9a781cb3ad2f0a3e1e948d2343ab126c1d1c2d23b3957\""
Jul 6 23:45:58.589433 containerd[1511]: time="2025-07-06T23:45:58.589408454Z" level=info msg="StartContainer for \"9e8d8c08c49a599c44a9a781cb3ad2f0a3e1e948d2343ab126c1d1c2d23b3957\""
Jul 6 23:45:58.591135 containerd[1511]: time="2025-07-06T23:45:58.591103483Z" level=info msg="connecting to shim 9e8d8c08c49a599c44a9a781cb3ad2f0a3e1e948d2343ab126c1d1c2d23b3957" address="unix:///run/containerd/s/8026261e5a09051cf436611efc3e9a03433d6404a30a196aea707a74c916f735" protocol=ttrpc version=3
Jul 6 23:45:58.618812 systemd[1]: Started cri-containerd-9e8d8c08c49a599c44a9a781cb3ad2f0a3e1e948d2343ab126c1d1c2d23b3957.scope - libcontainer container 9e8d8c08c49a599c44a9a781cb3ad2f0a3e1e948d2343ab126c1d1c2d23b3957.
Jul 6 23:45:58.658218 systemd[1]: cri-containerd-9e8d8c08c49a599c44a9a781cb3ad2f0a3e1e948d2343ab126c1d1c2d23b3957.scope: Deactivated successfully.
Jul 6 23:45:58.660045 containerd[1511]: time="2025-07-06T23:45:58.660013021Z" level=info msg="StartContainer for \"9e8d8c08c49a599c44a9a781cb3ad2f0a3e1e948d2343ab126c1d1c2d23b3957\" returns successfully"
Jul 6 23:45:58.660200 containerd[1511]: time="2025-07-06T23:45:58.660176816Z" level=info msg="received exit event container_id:\"9e8d8c08c49a599c44a9a781cb3ad2f0a3e1e948d2343ab126c1d1c2d23b3957\" id:\"9e8d8c08c49a599c44a9a781cb3ad2f0a3e1e948d2343ab126c1d1c2d23b3957\" pid:4556 exited_at:{seconds:1751845558 nanos:659969143}"
Jul 6 23:45:58.660421 containerd[1511]: time="2025-07-06T23:45:58.660284413Z" level=info msg="TaskExit event in podsandbox handler container_id:\"9e8d8c08c49a599c44a9a781cb3ad2f0a3e1e948d2343ab126c1d1c2d23b3957\" id:\"9e8d8c08c49a599c44a9a781cb3ad2f0a3e1e948d2343ab126c1d1c2d23b3957\" pid:4556 exited_at:{seconds:1751845558 nanos:659969143}"
Jul 6 23:45:58.685819 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9e8d8c08c49a599c44a9a781cb3ad2f0a3e1e948d2343ab126c1d1c2d23b3957-rootfs.mount: Deactivated successfully.
Jul 6 23:45:59.385520 kubelet[2620]: E0706 23:45:59.385483 2620 kubelet.go:2902] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Jul 6 23:45:59.567264 kubelet[2620]: E0706 23:45:59.567189 2620 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 6 23:45:59.571457 containerd[1511]: time="2025-07-06T23:45:59.571406202Z" level=info msg="CreateContainer within sandbox \"6dc493baaaa8700a2759baed6a41eb515b5af838c562107a0b91084516fc0195\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Jul 6 23:45:59.590614 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2955422552.mount: Deactivated successfully.
Jul 6 23:45:59.592059 containerd[1511]: time="2025-07-06T23:45:59.591908388Z" level=info msg="Container 0c1befb94df756545ce1e53614d85ce57ffee95b775805817d9fb758812539d4: CDI devices from CRI Config.CDIDevices: []"
Jul 6 23:45:59.600986 containerd[1511]: time="2025-07-06T23:45:59.600931015Z" level=info msg="CreateContainer within sandbox \"6dc493baaaa8700a2759baed6a41eb515b5af838c562107a0b91084516fc0195\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"0c1befb94df756545ce1e53614d85ce57ffee95b775805817d9fb758812539d4\""
Jul 6 23:45:59.601594 containerd[1511]: time="2025-07-06T23:45:59.601458001Z" level=info msg="StartContainer for \"0c1befb94df756545ce1e53614d85ce57ffee95b775805817d9fb758812539d4\""
Jul 6 23:45:59.602721 containerd[1511]: time="2025-07-06T23:45:59.602682766Z" level=info msg="connecting to shim 0c1befb94df756545ce1e53614d85ce57ffee95b775805817d9fb758812539d4" address="unix:///run/containerd/s/8026261e5a09051cf436611efc3e9a03433d6404a30a196aea707a74c916f735" protocol=ttrpc version=3
Jul 6 23:45:59.626806 systemd[1]: Started cri-containerd-0c1befb94df756545ce1e53614d85ce57ffee95b775805817d9fb758812539d4.scope - libcontainer container 0c1befb94df756545ce1e53614d85ce57ffee95b775805817d9fb758812539d4.
Jul 6 23:45:59.652599 systemd[1]: cri-containerd-0c1befb94df756545ce1e53614d85ce57ffee95b775805817d9fb758812539d4.scope: Deactivated successfully.
Jul 6 23:45:59.655139 containerd[1511]: time="2025-07-06T23:45:59.655052340Z" level=info msg="TaskExit event in podsandbox handler container_id:\"0c1befb94df756545ce1e53614d85ce57ffee95b775805817d9fb758812539d4\" id:\"0c1befb94df756545ce1e53614d85ce57ffee95b775805817d9fb758812539d4\" pid:4598 exited_at:{seconds:1751845559 nanos:654556833}"
Jul 6 23:45:59.657064 containerd[1511]: time="2025-07-06T23:45:59.656806330Z" level=info msg="received exit event container_id:\"0c1befb94df756545ce1e53614d85ce57ffee95b775805817d9fb758812539d4\" id:\"0c1befb94df756545ce1e53614d85ce57ffee95b775805817d9fb758812539d4\" pid:4598 exited_at:{seconds:1751845559 nanos:654556833}"
Jul 6 23:45:59.665309 containerd[1511]: time="2025-07-06T23:45:59.665259134Z" level=info msg="StartContainer for \"0c1befb94df756545ce1e53614d85ce57ffee95b775805817d9fb758812539d4\" returns successfully"
Jul 6 23:46:00.572398 kubelet[2620]: E0706 23:46:00.572270 2620 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 6 23:46:00.575728 containerd[1511]: time="2025-07-06T23:46:00.575528457Z" level=info msg="CreateContainer within sandbox \"6dc493baaaa8700a2759baed6a41eb515b5af838c562107a0b91084516fc0195\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Jul 6 23:46:00.584885 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0c1befb94df756545ce1e53614d85ce57ffee95b775805817d9fb758812539d4-rootfs.mount: Deactivated successfully.
Jul 6 23:46:00.585435 containerd[1511]: time="2025-07-06T23:46:00.585380445Z" level=info msg="Container cb325b9fd323a2f2b71ee49568988f69d8f3f3a52a1a825358bdfce3f7cf7c55: CDI devices from CRI Config.CDIDevices: []"
Jul 6 23:46:00.594632 containerd[1511]: time="2025-07-06T23:46:00.594588010Z" level=info msg="CreateContainer within sandbox \"6dc493baaaa8700a2759baed6a41eb515b5af838c562107a0b91084516fc0195\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"cb325b9fd323a2f2b71ee49568988f69d8f3f3a52a1a825358bdfce3f7cf7c55\""
Jul 6 23:46:00.595255 containerd[1511]: time="2025-07-06T23:46:00.595220233Z" level=info msg="StartContainer for \"cb325b9fd323a2f2b71ee49568988f69d8f3f3a52a1a825358bdfce3f7cf7c55\""
Jul 6 23:46:00.596226 containerd[1511]: time="2025-07-06T23:46:00.596184009Z" level=info msg="connecting to shim cb325b9fd323a2f2b71ee49568988f69d8f3f3a52a1a825358bdfce3f7cf7c55" address="unix:///run/containerd/s/8026261e5a09051cf436611efc3e9a03433d6404a30a196aea707a74c916f735" protocol=ttrpc version=3
Jul 6 23:46:00.621852 systemd[1]: Started cri-containerd-cb325b9fd323a2f2b71ee49568988f69d8f3f3a52a1a825358bdfce3f7cf7c55.scope - libcontainer container cb325b9fd323a2f2b71ee49568988f69d8f3f3a52a1a825358bdfce3f7cf7c55.
Jul 6 23:46:00.652545 containerd[1511]: time="2025-07-06T23:46:00.652503727Z" level=info msg="StartContainer for \"cb325b9fd323a2f2b71ee49568988f69d8f3f3a52a1a825358bdfce3f7cf7c55\" returns successfully"
Jul 6 23:46:00.707443 containerd[1511]: time="2025-07-06T23:46:00.707378442Z" level=info msg="TaskExit event in podsandbox handler container_id:\"cb325b9fd323a2f2b71ee49568988f69d8f3f3a52a1a825358bdfce3f7cf7c55\" id:\"07875a7ccd38a68548af5ce1cd9b620ba4587492ab39fc629ebc0d37d5a83916\" pid:4664 exited_at:{seconds:1751845560 nanos:707106289}"
Jul 6 23:46:00.955598 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce))
Jul 6 23:46:01.579007 kubelet[2620]: E0706 23:46:01.578972 2620 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 6 23:46:03.055833 kubelet[2620]: E0706 23:46:03.055731 2620 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 6 23:46:03.362903 containerd[1511]: time="2025-07-06T23:46:03.362772763Z" level=info msg="TaskExit event in podsandbox handler container_id:\"cb325b9fd323a2f2b71ee49568988f69d8f3f3a52a1a825358bdfce3f7cf7c55\" id:\"94e45c8a43da137c76dd3e8d74239e6f549158031aeec9ae73dbc52edef794ff\" pid:4990 exit_status:1 exited_at:{seconds:1751845563 nanos:362371810}"
Jul 6 23:46:04.051870 systemd-networkd[1434]: lxc_health: Link UP
Jul 6 23:46:04.052702 systemd-networkd[1434]: lxc_health: Gained carrier
Jul 6 23:46:05.056789 kubelet[2620]: E0706 23:46:05.056686 2620 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 6 23:46:05.084965 kubelet[2620]: I0706 23:46:05.083808 2620 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-79smx" podStartSLOduration=9.083789205 podStartE2EDuration="9.083789205s" podCreationTimestamp="2025-07-06 23:45:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-06 23:46:01.792247835 +0000 UTC m=+87.553998225" watchObservedRunningTime="2025-07-06 23:46:05.083789205 +0000 UTC m=+90.845539555"
Jul 6 23:46:05.502548 containerd[1511]: time="2025-07-06T23:46:05.502343638Z" level=info msg="TaskExit event in podsandbox handler container_id:\"cb325b9fd323a2f2b71ee49568988f69d8f3f3a52a1a825358bdfce3f7cf7c55\" id:\"aafc6730184cd90ad38e808a6cc2f539bfbd10d9ec7d6dcfd8a039cef291d88a\" pid:5209 exited_at:{seconds:1751845565 nanos:501863965}"
Jul 6 23:46:05.585305 kubelet[2620]: E0706 23:46:05.585256 2620 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 6 23:46:05.650834 systemd-networkd[1434]: lxc_health: Gained IPv6LL
Jul 6 23:46:06.587211 kubelet[2620]: E0706 23:46:06.587173 2620 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 6 23:46:07.631333 containerd[1511]: time="2025-07-06T23:46:07.629827587Z" level=info msg="TaskExit event in podsandbox handler container_id:\"cb325b9fd323a2f2b71ee49568988f69d8f3f3a52a1a825358bdfce3f7cf7c55\" id:\"853961e33f61aff4d8b32ea97aba5cc9b9f68e8d6c10e4e37150d0e9b8d866c7\" pid:5236 exited_at:{seconds:1751845567 nanos:629352272}"
Jul 6 23:46:09.756040 containerd[1511]: time="2025-07-06T23:46:09.755992790Z" level=info msg="TaskExit event in podsandbox handler container_id:\"cb325b9fd323a2f2b71ee49568988f69d8f3f3a52a1a825358bdfce3f7cf7c55\" id:\"2523bf9cefc2cfb9ad31b2827e6ad77c644a46cbf1c15e9de814994389cfd819\" pid:5267 exited_at:{seconds:1751845569 nanos:755310235}"
Jul 6 23:46:09.770770 sshd[4396]: Connection closed by 10.0.0.1 port 43988
Jul 6 23:46:09.771738 sshd-session[4394]: pam_unix(sshd:session): session closed for user core
Jul 6 23:46:09.776244 systemd-logind[1485]: Session 26 logged out. Waiting for processes to exit.
Jul 6 23:46:09.776384 systemd[1]: sshd@25-10.0.0.128:22-10.0.0.1:43988.service: Deactivated successfully.
Jul 6 23:46:09.778917 systemd[1]: session-26.scope: Deactivated successfully.
Jul 6 23:46:09.780357 systemd-logind[1485]: Removed session 26.