Aug 12 23:51:53.134409 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Aug 12 23:51:53.134435 kernel: Linux version 6.6.100-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Tue Aug 12 22:21:53 -00 2025
Aug 12 23:51:53.134445 kernel: KASLR enabled
Aug 12 23:51:53.134451 kernel: efi: EFI v2.7 by EDK II
Aug 12 23:51:53.134457 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdba86018 ACPI 2.0=0xd9710018 RNG=0xd971e498 MEMRESERVE=0xd9b43d18
Aug 12 23:51:53.134462 kernel: random: crng init done
Aug 12 23:51:53.134470 kernel: ACPI: Early table checksum verification disabled
Aug 12 23:51:53.134476 kernel: ACPI: RSDP 0x00000000D9710018 000024 (v02 BOCHS )
Aug 12 23:51:53.134483 kernel: ACPI: XSDT 0x00000000D971FE98 000064 (v01 BOCHS BXPC 00000001 01000013)
Aug 12 23:51:53.134491 kernel: ACPI: FACP 0x00000000D971FA98 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Aug 12 23:51:53.134497 kernel: ACPI: DSDT 0x00000000D9717518 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Aug 12 23:51:53.134503 kernel: ACPI: APIC 0x00000000D971FC18 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Aug 12 23:51:53.134509 kernel: ACPI: PPTT 0x00000000D971D898 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
Aug 12 23:51:53.134516 kernel: ACPI: GTDT 0x00000000D971E818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Aug 12 23:51:53.134523 kernel: ACPI: MCFG 0x00000000D971E918 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Aug 12 23:51:53.134532 kernel: ACPI: SPCR 0x00000000D971FF98 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Aug 12 23:51:53.134538 kernel: ACPI: DBG2 0x00000000D971E418 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Aug 12 23:51:53.134545 kernel: ACPI: IORT 0x00000000D971E718 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Aug 12 23:51:53.134559 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
Aug 12 23:51:53.134568 kernel: NUMA: Failed to initialise from firmware
Aug 12 23:51:53.134576 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
Aug 12 23:51:53.134583 kernel: NUMA: NODE_DATA [mem 0xdc959800-0xdc95efff]
Aug 12 23:51:53.134589 kernel: Zone ranges:
Aug 12 23:51:53.134596 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff]
Aug 12 23:51:53.134602 kernel: DMA32 empty
Aug 12 23:51:53.134610 kernel: Normal empty
Aug 12 23:51:53.134617 kernel: Movable zone start for each node
Aug 12 23:51:53.134623 kernel: Early memory node ranges
Aug 12 23:51:53.134630 kernel: node 0: [mem 0x0000000040000000-0x00000000d976ffff]
Aug 12 23:51:53.134637 kernel: node 0: [mem 0x00000000d9770000-0x00000000d9b3ffff]
Aug 12 23:51:53.134643 kernel: node 0: [mem 0x00000000d9b40000-0x00000000dce1ffff]
Aug 12 23:51:53.134650 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff]
Aug 12 23:51:53.134656 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff]
Aug 12 23:51:53.134663 kernel: node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff]
Aug 12 23:51:53.134670 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff]
Aug 12 23:51:53.134676 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
Aug 12 23:51:53.134683 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
Aug 12 23:51:53.134691 kernel: psci: probing for conduit method from ACPI.
Aug 12 23:51:53.134697 kernel: psci: PSCIv1.1 detected in firmware.
Aug 12 23:51:53.134704 kernel: psci: Using standard PSCI v0.2 function IDs
Aug 12 23:51:53.134713 kernel: psci: Trusted OS migration not required
Aug 12 23:51:53.134720 kernel: psci: SMC Calling Convention v1.1
Aug 12 23:51:53.134727 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Aug 12 23:51:53.134736 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976
Aug 12 23:51:53.134744 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096
Aug 12 23:51:53.134751 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
Aug 12 23:51:53.134758 kernel: Detected PIPT I-cache on CPU0
Aug 12 23:51:53.134765 kernel: CPU features: detected: GIC system register CPU interface
Aug 12 23:51:53.134772 kernel: CPU features: detected: Hardware dirty bit management
Aug 12 23:51:53.134779 kernel: CPU features: detected: Spectre-v4
Aug 12 23:51:53.134786 kernel: CPU features: detected: Spectre-BHB
Aug 12 23:51:53.134899 kernel: CPU features: kernel page table isolation forced ON by KASLR
Aug 12 23:51:53.134907 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Aug 12 23:51:53.134917 kernel: CPU features: detected: ARM erratum 1418040
Aug 12 23:51:53.134924 kernel: CPU features: detected: SSBS not fully self-synchronizing
Aug 12 23:51:53.134931 kernel: alternatives: applying boot alternatives
Aug 12 23:51:53.134940 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=2f9df6e9e6c671c457040a64675390bbff42294b08c628cd2dc472ed8120146a
Aug 12 23:51:53.134947 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Aug 12 23:51:53.134955 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Aug 12 23:51:53.134962 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Aug 12 23:51:53.134969 kernel: Fallback order for Node 0: 0
Aug 12 23:51:53.134976 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024
Aug 12 23:51:53.134983 kernel: Policy zone: DMA
Aug 12 23:51:53.134990 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Aug 12 23:51:53.134999 kernel: software IO TLB: area num 4.
Aug 12 23:51:53.135006 kernel: software IO TLB: mapped [mem 0x00000000d2e00000-0x00000000d6e00000] (64MB)
Aug 12 23:51:53.135014 kernel: Memory: 2386408K/2572288K available (10304K kernel code, 2186K rwdata, 8108K rodata, 39424K init, 897K bss, 185880K reserved, 0K cma-reserved)
Aug 12 23:51:53.135021 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Aug 12 23:51:53.135029 kernel: rcu: Preemptible hierarchical RCU implementation.
Aug 12 23:51:53.135036 kernel: rcu: RCU event tracing is enabled.
Aug 12 23:51:53.135043 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Aug 12 23:51:53.135050 kernel: Trampoline variant of Tasks RCU enabled.
Aug 12 23:51:53.135057 kernel: Tracing variant of Tasks RCU enabled.
Aug 12 23:51:53.135065 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Aug 12 23:51:53.135072 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Aug 12 23:51:53.135080 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Aug 12 23:51:53.135088 kernel: GICv3: 256 SPIs implemented
Aug 12 23:51:53.135094 kernel: GICv3: 0 Extended SPIs implemented
Aug 12 23:51:53.135101 kernel: Root IRQ handler: gic_handle_irq
Aug 12 23:51:53.135108 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Aug 12 23:51:53.135115 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Aug 12 23:51:53.135123 kernel: ITS [mem 0x08080000-0x0809ffff]
Aug 12 23:51:53.135130 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400c0000 (indirect, esz 8, psz 64K, shr 1)
Aug 12 23:51:53.135137 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400d0000 (flat, esz 8, psz 64K, shr 1)
Aug 12 23:51:53.135144 kernel: GICv3: using LPI property table @0x00000000400f0000
Aug 12 23:51:53.135151 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040100000
Aug 12 23:51:53.135158 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Aug 12 23:51:53.135167 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Aug 12 23:51:53.135174 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Aug 12 23:51:53.135181 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Aug 12 23:51:53.135188 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Aug 12 23:51:53.135195 kernel: arm-pv: using stolen time PV
Aug 12 23:51:53.135202 kernel: Console: colour dummy device 80x25
Aug 12 23:51:53.135210 kernel: ACPI: Core revision 20230628
Aug 12 23:51:53.135217 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Aug 12 23:51:53.135224 kernel: pid_max: default: 32768 minimum: 301
Aug 12 23:51:53.135231 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Aug 12 23:51:53.135241 kernel: landlock: Up and running.
Aug 12 23:51:53.135248 kernel: SELinux: Initializing.
Aug 12 23:51:53.135255 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Aug 12 23:51:53.135262 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Aug 12 23:51:53.135270 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Aug 12 23:51:53.135278 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Aug 12 23:51:53.135286 kernel: rcu: Hierarchical SRCU implementation.
Aug 12 23:51:53.135295 kernel: rcu: Max phase no-delay instances is 400.
Aug 12 23:51:53.135304 kernel: Platform MSI: ITS@0x8080000 domain created
Aug 12 23:51:53.135314 kernel: PCI/MSI: ITS@0x8080000 domain created
Aug 12 23:51:53.135322 kernel: Remapping and enabling EFI services.
Aug 12 23:51:53.135329 kernel: smp: Bringing up secondary CPUs ...
Aug 12 23:51:53.135336 kernel: Detected PIPT I-cache on CPU1
Aug 12 23:51:53.135343 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Aug 12 23:51:53.135351 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040110000
Aug 12 23:51:53.135358 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Aug 12 23:51:53.135365 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Aug 12 23:51:53.135372 kernel: Detected PIPT I-cache on CPU2
Aug 12 23:51:53.135381 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
Aug 12 23:51:53.135392 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040120000
Aug 12 23:51:53.135400 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Aug 12 23:51:53.135414 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
Aug 12 23:51:53.135425 kernel: Detected PIPT I-cache on CPU3
Aug 12 23:51:53.135432 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
Aug 12 23:51:53.135440 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040130000
Aug 12 23:51:53.135448 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Aug 12 23:51:53.135455 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
Aug 12 23:51:53.135463 kernel: smp: Brought up 1 node, 4 CPUs
Aug 12 23:51:53.135472 kernel: SMP: Total of 4 processors activated.
Aug 12 23:51:53.135480 kernel: CPU features: detected: 32-bit EL0 Support
Aug 12 23:51:53.135488 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Aug 12 23:51:53.135496 kernel: CPU features: detected: Common not Private translations
Aug 12 23:51:53.135504 kernel: CPU features: detected: CRC32 instructions
Aug 12 23:51:53.135511 kernel: CPU features: detected: Enhanced Virtualization Traps
Aug 12 23:51:53.135519 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Aug 12 23:51:53.135527 kernel: CPU features: detected: LSE atomic instructions
Aug 12 23:51:53.135536 kernel: CPU features: detected: Privileged Access Never
Aug 12 23:51:53.135543 kernel: CPU features: detected: RAS Extension Support
Aug 12 23:51:53.135558 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Aug 12 23:51:53.135566 kernel: CPU: All CPU(s) started at EL1
Aug 12 23:51:53.135574 kernel: alternatives: applying system-wide alternatives
Aug 12 23:51:53.135581 kernel: devtmpfs: initialized
Aug 12 23:51:53.135589 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Aug 12 23:51:53.135597 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Aug 12 23:51:53.135605 kernel: pinctrl core: initialized pinctrl subsystem
Aug 12 23:51:53.135615 kernel: SMBIOS 3.0.0 present.
Aug 12 23:51:53.135622 kernel: DMI: QEMU KVM Virtual Machine, BIOS edk2-20230524-3.fc38 05/24/2023
Aug 12 23:51:53.135630 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Aug 12 23:51:53.135638 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Aug 12 23:51:53.135645 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Aug 12 23:51:53.135653 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Aug 12 23:51:53.135661 kernel: audit: initializing netlink subsys (disabled)
Aug 12 23:51:53.135669 kernel: audit: type=2000 audit(0.027:1): state=initialized audit_enabled=0 res=1
Aug 12 23:51:53.135678 kernel: thermal_sys: Registered thermal governor 'step_wise'
Aug 12 23:51:53.135686 kernel: cpuidle: using governor menu
Aug 12 23:51:53.135693 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Aug 12 23:51:53.135701 kernel: ASID allocator initialised with 32768 entries
Aug 12 23:51:53.135709 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Aug 12 23:51:53.135716 kernel: Serial: AMBA PL011 UART driver
Aug 12 23:51:53.135724 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Aug 12 23:51:53.135732 kernel: Modules: 0 pages in range for non-PLT usage
Aug 12 23:51:53.135740 kernel: Modules: 509008 pages in range for PLT usage
Aug 12 23:51:53.135748 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Aug 12 23:51:53.135757 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Aug 12 23:51:53.135765 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Aug 12 23:51:53.135772 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Aug 12 23:51:53.135780 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Aug 12 23:51:53.135794 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Aug 12 23:51:53.135802 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Aug 12 23:51:53.135810 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Aug 12 23:51:53.135817 kernel: ACPI: Added _OSI(Module Device)
Aug 12 23:51:53.135825 kernel: ACPI: Added _OSI(Processor Device)
Aug 12 23:51:53.135835 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Aug 12 23:51:53.135842 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Aug 12 23:51:53.135850 kernel: ACPI: Interpreter enabled
Aug 12 23:51:53.135857 kernel: ACPI: Using GIC for interrupt routing
Aug 12 23:51:53.135864 kernel: ACPI: MCFG table detected, 1 entries
Aug 12 23:51:53.135872 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Aug 12 23:51:53.135880 kernel: printk: console [ttyAMA0] enabled
Aug 12 23:51:53.135887 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Aug 12 23:51:53.136069 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Aug 12 23:51:53.136157 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Aug 12 23:51:53.136227 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Aug 12 23:51:53.136300 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Aug 12 23:51:53.136368 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Aug 12 23:51:53.136379 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Aug 12 23:51:53.136387 kernel: PCI host bridge to bus 0000:00
Aug 12 23:51:53.136468 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Aug 12 23:51:53.136538 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Aug 12 23:51:53.136612 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Aug 12 23:51:53.136677 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Aug 12 23:51:53.136762 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000
Aug 12 23:51:53.136878 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00
Aug 12 23:51:53.136953 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f]
Aug 12 23:51:53.137028 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff]
Aug 12 23:51:53.137099 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref]
Aug 12 23:51:53.137170 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref]
Aug 12 23:51:53.137247 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff]
Aug 12 23:51:53.137317 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f]
Aug 12 23:51:53.137381 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Aug 12 23:51:53.137442 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Aug 12 23:51:53.137505 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Aug 12 23:51:53.137515 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Aug 12 23:51:53.137523 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Aug 12 23:51:53.137531 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Aug 12 23:51:53.137538 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Aug 12 23:51:53.137546 kernel: iommu: Default domain type: Translated
Aug 12 23:51:53.137561 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Aug 12 23:51:53.137569 kernel: efivars: Registered efivars operations
Aug 12 23:51:53.137579 kernel: vgaarb: loaded
Aug 12 23:51:53.137587 kernel: clocksource: Switched to clocksource arch_sys_counter
Aug 12 23:51:53.137594 kernel: VFS: Disk quotas dquot_6.6.0
Aug 12 23:51:53.137602 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Aug 12 23:51:53.137609 kernel: pnp: PnP ACPI init
Aug 12 23:51:53.137689 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Aug 12 23:51:53.137701 kernel: pnp: PnP ACPI: found 1 devices
Aug 12 23:51:53.137709 kernel: NET: Registered PF_INET protocol family
Aug 12 23:51:53.137716 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Aug 12 23:51:53.137726 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Aug 12 23:51:53.137734 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Aug 12 23:51:53.137742 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Aug 12 23:51:53.137750 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Aug 12 23:51:53.137757 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Aug 12 23:51:53.137765 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Aug 12 23:51:53.137773 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Aug 12 23:51:53.137781 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Aug 12 23:51:53.137802 kernel: PCI: CLS 0 bytes, default 64
Aug 12 23:51:53.137825 kernel: kvm [1]: HYP mode not available
Aug 12 23:51:53.137833 kernel: Initialise system trusted keyrings
Aug 12 23:51:53.137841 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Aug 12 23:51:53.137849 kernel: Key type asymmetric registered
Aug 12 23:51:53.137857 kernel: Asymmetric key parser 'x509' registered
Aug 12 23:51:53.137865 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Aug 12 23:51:53.137872 kernel: io scheduler mq-deadline registered
Aug 12 23:51:53.137882 kernel: io scheduler kyber registered
Aug 12 23:51:53.137890 kernel: io scheduler bfq registered
Aug 12 23:51:53.137901 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Aug 12 23:51:53.137908 kernel: ACPI: button: Power Button [PWRB]
Aug 12 23:51:53.137917 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Aug 12 23:51:53.138008 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
Aug 12 23:51:53.138020 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Aug 12 23:51:53.138028 kernel: thunder_xcv, ver 1.0
Aug 12 23:51:53.138036 kernel: thunder_bgx, ver 1.0
Aug 12 23:51:53.138062 kernel: nicpf, ver 1.0
Aug 12 23:51:53.138070 kernel: nicvf, ver 1.0
Aug 12 23:51:53.138163 kernel: rtc-efi rtc-efi.0: registered as rtc0
Aug 12 23:51:53.138231 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-08-12T23:51:52 UTC (1755042712)
Aug 12 23:51:53.138242 kernel: hid: raw HID events driver (C) Jiri Kosina
Aug 12 23:51:53.138250 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available
Aug 12 23:51:53.138258 kernel: watchdog: Delayed init of the lockup detector failed: -19
Aug 12 23:51:53.138265 kernel: watchdog: Hard watchdog permanently disabled
Aug 12 23:51:53.138273 kernel: NET: Registered PF_INET6 protocol family
Aug 12 23:51:53.138285 kernel: Segment Routing with IPv6
Aug 12 23:51:53.138295 kernel: In-situ OAM (IOAM) with IPv6
Aug 12 23:51:53.138303 kernel: NET: Registered PF_PACKET protocol family
Aug 12 23:51:53.138310 kernel: Key type dns_resolver registered
Aug 12 23:51:53.138318 kernel: registered taskstats version 1
Aug 12 23:51:53.138325 kernel: Loading compiled-in X.509 certificates
Aug 12 23:51:53.138333 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.100-flatcar: 7263800c6d21650660e2b030c1023dce09b1e8b6'
Aug 12 23:51:53.138344 kernel: Key type .fscrypt registered
Aug 12 23:51:53.138351 kernel: Key type fscrypt-provisioning registered
Aug 12 23:51:53.138359 kernel: ima: No TPM chip found, activating TPM-bypass!
Aug 12 23:51:53.138370 kernel: ima: Allocated hash algorithm: sha1
Aug 12 23:51:53.138378 kernel: ima: No architecture policies found
Aug 12 23:51:53.138389 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Aug 12 23:51:53.138397 kernel: clk: Disabling unused clocks
Aug 12 23:51:53.138404 kernel: Freeing unused kernel memory: 39424K
Aug 12 23:51:53.138412 kernel: Run /init as init process
Aug 12 23:51:53.138419 kernel: with arguments:
Aug 12 23:51:53.138427 kernel: /init
Aug 12 23:51:53.138434 kernel: with environment:
Aug 12 23:51:53.138445 kernel: HOME=/
Aug 12 23:51:53.138452 kernel: TERM=linux
Aug 12 23:51:53.138460 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Aug 12 23:51:53.138470 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Aug 12 23:51:53.138480 systemd[1]: Detected virtualization kvm.
Aug 12 23:51:53.138489 systemd[1]: Detected architecture arm64.
Aug 12 23:51:53.138497 systemd[1]: Running in initrd.
Aug 12 23:51:53.138511 systemd[1]: No hostname configured, using default hostname.
Aug 12 23:51:53.138524 systemd[1]: Hostname set to .
Aug 12 23:51:53.138532 systemd[1]: Initializing machine ID from VM UUID.
Aug 12 23:51:53.138540 systemd[1]: Queued start job for default target initrd.target.
Aug 12 23:51:53.138549 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Aug 12 23:51:53.138566 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Aug 12 23:51:53.138575 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Aug 12 23:51:53.138583 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Aug 12 23:51:53.138594 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Aug 12 23:51:53.138603 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Aug 12 23:51:53.138612 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Aug 12 23:51:53.138621 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Aug 12 23:51:53.138629 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Aug 12 23:51:53.138637 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Aug 12 23:51:53.138646 systemd[1]: Reached target paths.target - Path Units.
Aug 12 23:51:53.138656 systemd[1]: Reached target slices.target - Slice Units.
Aug 12 23:51:53.138664 systemd[1]: Reached target swap.target - Swaps.
Aug 12 23:51:53.138672 systemd[1]: Reached target timers.target - Timer Units.
Aug 12 23:51:53.138680 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Aug 12 23:51:53.138689 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Aug 12 23:51:53.138697 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Aug 12 23:51:53.138705 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Aug 12 23:51:53.138713 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Aug 12 23:51:53.138721 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Aug 12 23:51:53.138731 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Aug 12 23:51:53.138739 systemd[1]: Reached target sockets.target - Socket Units.
Aug 12 23:51:53.138747 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Aug 12 23:51:53.138756 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Aug 12 23:51:53.138764 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Aug 12 23:51:53.138772 systemd[1]: Starting systemd-fsck-usr.service...
Aug 12 23:51:53.138780 systemd[1]: Starting systemd-journald.service - Journal Service...
Aug 12 23:51:53.138802 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Aug 12 23:51:53.138813 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Aug 12 23:51:53.138822 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Aug 12 23:51:53.138830 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Aug 12 23:51:53.138838 systemd[1]: Finished systemd-fsck-usr.service.
Aug 12 23:51:53.138847 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Aug 12 23:51:53.138857 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Aug 12 23:51:53.138865 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Aug 12 23:51:53.138901 systemd-journald[234]: Collecting audit messages is disabled.
Aug 12 23:51:53.138923 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Aug 12 23:51:53.138932 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Aug 12 23:51:53.138941 systemd-journald[234]: Journal started
Aug 12 23:51:53.138961 systemd-journald[234]: Runtime Journal (/run/log/journal/ca3a34747a5247eeaafee50eb20ff46e) is 5.9M, max 47.3M, 41.4M free.
Aug 12 23:51:53.123416 systemd-modules-load[238]: Inserted module 'overlay'
Aug 12 23:51:53.142325 systemd-modules-load[238]: Inserted module 'br_netfilter'
Aug 12 23:51:53.143226 kernel: Bridge firewalling registered
Aug 12 23:51:53.145220 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Aug 12 23:51:53.146827 systemd[1]: Started systemd-journald.service - Journal Service.
Aug 12 23:51:53.148043 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Aug 12 23:51:53.149081 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Aug 12 23:51:53.154779 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Aug 12 23:51:53.156743 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Aug 12 23:51:53.169938 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Aug 12 23:51:53.173928 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Aug 12 23:51:53.175207 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Aug 12 23:51:53.190015 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Aug 12 23:51:53.192310 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Aug 12 23:51:53.203758 dracut-cmdline[276]: dracut-dracut-053
Aug 12 23:51:53.206925 dracut-cmdline[276]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=2f9df6e9e6c671c457040a64675390bbff42294b08c628cd2dc472ed8120146a
Aug 12 23:51:53.226493 systemd-resolved[278]: Positive Trust Anchors:
Aug 12 23:51:53.226512 systemd-resolved[278]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Aug 12 23:51:53.226545 systemd-resolved[278]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Aug 12 23:51:53.233383 systemd-resolved[278]: Defaulting to hostname 'linux'.
Aug 12 23:51:53.234651 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Aug 12 23:51:53.235660 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Aug 12 23:51:53.314836 kernel: SCSI subsystem initialized
Aug 12 23:51:53.321861 kernel: Loading iSCSI transport class v2.0-870.
Aug 12 23:51:53.332267 kernel: iscsi: registered transport (tcp)
Aug 12 23:51:53.350823 kernel: iscsi: registered transport (qla4xxx)
Aug 12 23:51:53.350892 kernel: QLogic iSCSI HBA Driver
Aug 12 23:51:53.409549 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Aug 12 23:51:53.419031 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Aug 12 23:51:53.437051 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Aug 12 23:51:53.438719 kernel: device-mapper: uevent: version 1.0.3
Aug 12 23:51:53.438756 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Aug 12 23:51:53.494822 kernel: raid6: neonx8 gen() 15751 MB/s
Aug 12 23:51:53.511811 kernel: raid6: neonx4 gen() 15641 MB/s
Aug 12 23:51:53.528814 kernel: raid6: neonx2 gen() 13001 MB/s
Aug 12 23:51:53.545809 kernel: raid6: neonx1 gen() 10473 MB/s
Aug 12 23:51:53.562808 kernel: raid6: int64x8 gen() 6855 MB/s
Aug 12 23:51:53.579817 kernel: raid6: int64x4 gen() 6874 MB/s
Aug 12 23:51:53.596809 kernel: raid6: int64x2 gen() 5893 MB/s
Aug 12 23:51:53.613808 kernel: raid6: int64x1 gen() 5046 MB/s
Aug 12 23:51:53.613822 kernel: raid6: using algorithm neonx8 gen() 15751 MB/s
Aug 12 23:51:53.630824 kernel: raid6: .... xor() 11915 MB/s, rmw enabled
Aug 12 23:51:53.630838 kernel: raid6: using neon recovery algorithm
Aug 12 23:51:53.640044 kernel: xor: measuring software checksum speed
Aug 12 23:51:53.640071 kernel: 8regs : 19299 MB/sec
Aug 12 23:51:53.641146 kernel: 32regs : 19674 MB/sec
Aug 12 23:51:53.641160 kernel: arm64_neon : 27052 MB/sec
Aug 12 23:51:53.641170 kernel: xor: using function: arm64_neon (27052 MB/sec)
Aug 12 23:51:53.702822 kernel: Btrfs loaded, zoned=no, fsverity=no
Aug 12 23:51:53.719401 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Aug 12 23:51:53.733296 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Aug 12 23:51:53.763705 systemd-udevd[462]: Using default interface naming scheme 'v255'.
Aug 12 23:51:53.767109 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Aug 12 23:51:53.775079 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Aug 12 23:51:53.794155 dracut-pre-trigger[471]: rd.md=0: removing MD RAID activation
Aug 12 23:51:53.828804 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Aug 12 23:51:53.839026 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Aug 12 23:51:53.888627 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Aug 12 23:51:53.910115 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Aug 12 23:51:53.926078 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Aug 12 23:51:53.928494 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Aug 12 23:51:53.929584 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Aug 12 23:51:53.931265 systemd[1]: Reached target remote-fs.target - Remote File Systems. Aug 12 23:51:53.940046 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Aug 12 23:51:53.956824 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues Aug 12 23:51:53.959879 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Aug 12 23:51:53.961887 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Aug 12 23:51:53.966267 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Aug 12 23:51:53.966299 kernel: GPT:9289727 != 19775487 Aug 12 23:51:53.966310 kernel: GPT:Alternate GPT header not at the end of the disk. Aug 12 23:51:53.966319 kernel: GPT:9289727 != 19775487 Aug 12 23:51:53.967524 kernel: GPT: Use GNU Parted to correct GPT errors. Aug 12 23:51:53.967582 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Aug 12 23:51:53.967872 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Aug 12 23:51:53.967995 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Aug 12 23:51:53.970714 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Aug 12 23:51:53.971666 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. 
Aug 12 23:51:53.971811 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Aug 12 23:51:53.973824 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Aug 12 23:51:53.980052 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Aug 12 23:51:53.989813 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by (udev-worker) (508) Aug 12 23:51:53.991838 kernel: BTRFS: device fsid 03408483-5051-409a-aab4-4e6d5027e982 devid 1 transid 41 /dev/vda3 scanned by (udev-worker) (515) Aug 12 23:51:53.992827 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Aug 12 23:51:53.994997 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Aug 12 23:51:54.002226 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Aug 12 23:51:54.013156 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Aug 12 23:51:54.016916 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Aug 12 23:51:54.017864 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Aug 12 23:51:54.038000 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Aug 12 23:51:54.039713 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Aug 12 23:51:54.044527 disk-uuid[552]: Primary Header is updated. Aug 12 23:51:54.044527 disk-uuid[552]: Secondary Entries is updated. Aug 12 23:51:54.044527 disk-uuid[552]: Secondary Header is updated. Aug 12 23:51:54.049824 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Aug 12 23:51:54.064380 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Aug 12 23:51:55.066844 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Aug 12 23:51:55.067327 disk-uuid[553]: The operation has completed successfully. Aug 12 23:51:55.099628 systemd[1]: disk-uuid.service: Deactivated successfully. Aug 12 23:51:55.099848 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Aug 12 23:51:55.125026 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Aug 12 23:51:55.129773 sh[574]: Success Aug 12 23:51:55.149820 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Aug 12 23:51:55.203459 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Aug 12 23:51:55.205051 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Aug 12 23:51:55.205777 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Aug 12 23:51:55.223482 kernel: BTRFS info (device dm-0): first mount of filesystem 03408483-5051-409a-aab4-4e6d5027e982 Aug 12 23:51:55.223537 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Aug 12 23:51:55.223565 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Aug 12 23:51:55.224285 kernel: BTRFS info (device dm-0): disabling log replay at mount time Aug 12 23:51:55.225802 kernel: BTRFS info (device dm-0): using free space tree Aug 12 23:51:55.229333 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Aug 12 23:51:55.230532 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Aug 12 23:51:55.243014 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Aug 12 23:51:55.244913 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... 
Aug 12 23:51:55.253106 kernel: BTRFS info (device vda6): first mount of filesystem dbce4b09-c4b8-4cc9-bd11-416717f60c7d Aug 12 23:51:55.253165 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Aug 12 23:51:55.253176 kernel: BTRFS info (device vda6): using free space tree Aug 12 23:51:55.256841 kernel: BTRFS info (device vda6): auto enabling async discard Aug 12 23:51:55.266497 systemd[1]: mnt-oem.mount: Deactivated successfully. Aug 12 23:51:55.267568 kernel: BTRFS info (device vda6): last unmount of filesystem dbce4b09-c4b8-4cc9-bd11-416717f60c7d Aug 12 23:51:55.281740 systemd[1]: Finished ignition-setup.service - Ignition (setup). Aug 12 23:51:55.287056 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Aug 12 23:51:55.349193 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Aug 12 23:51:55.360027 systemd[1]: Starting systemd-networkd.service - Network Configuration... Aug 12 23:51:55.395279 systemd-networkd[764]: lo: Link UP Aug 12 23:51:55.395291 systemd-networkd[764]: lo: Gained carrier Aug 12 23:51:55.396052 systemd-networkd[764]: Enumeration completed Aug 12 23:51:55.396201 systemd[1]: Started systemd-networkd.service - Network Configuration. Aug 12 23:51:55.396631 systemd-networkd[764]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Aug 12 23:51:55.396634 systemd-networkd[764]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Aug 12 23:51:55.397786 systemd-networkd[764]: eth0: Link UP Aug 12 23:51:55.397798 systemd-networkd[764]: eth0: Gained carrier Aug 12 23:51:55.397807 systemd-networkd[764]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Aug 12 23:51:55.400680 systemd[1]: Reached target network.target - Network. 
Aug 12 23:51:55.418860 systemd-networkd[764]: eth0: DHCPv4 address 10.0.0.6/16, gateway 10.0.0.1 acquired from 10.0.0.1 Aug 12 23:51:55.420846 ignition[680]: Ignition 2.19.0 Aug 12 23:51:55.420857 ignition[680]: Stage: fetch-offline Aug 12 23:51:55.420894 ignition[680]: no configs at "/usr/lib/ignition/base.d" Aug 12 23:51:55.420902 ignition[680]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Aug 12 23:51:55.421112 ignition[680]: parsed url from cmdline: "" Aug 12 23:51:55.421115 ignition[680]: no config URL provided Aug 12 23:51:55.421119 ignition[680]: reading system config file "/usr/lib/ignition/user.ign" Aug 12 23:51:55.421126 ignition[680]: no config at "/usr/lib/ignition/user.ign" Aug 12 23:51:55.421148 ignition[680]: op(1): [started] loading QEMU firmware config module Aug 12 23:51:55.421152 ignition[680]: op(1): executing: "modprobe" "qemu_fw_cfg" Aug 12 23:51:55.440541 ignition[680]: op(1): [finished] loading QEMU firmware config module Aug 12 23:51:55.479597 ignition[680]: parsing config with SHA512: 8012929f42a55a007ae1f98bbbd61c93c18ae32ff0674a06f0a2d5a958b14599794b27ed340859e84a9f904932a69a3649587f6d1ce3960418523dea2530946d Aug 12 23:51:55.486671 unknown[680]: fetched base config from "system" Aug 12 23:51:55.486683 unknown[680]: fetched user config from "qemu" Aug 12 23:51:55.487719 ignition[680]: fetch-offline: fetch-offline passed Aug 12 23:51:55.488003 ignition[680]: Ignition finished successfully Aug 12 23:51:55.489470 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Aug 12 23:51:55.491278 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Aug 12 23:51:55.503051 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... 
Aug 12 23:51:55.514572 ignition[773]: Ignition 2.19.0 Aug 12 23:51:55.514583 ignition[773]: Stage: kargs Aug 12 23:51:55.514771 ignition[773]: no configs at "/usr/lib/ignition/base.d" Aug 12 23:51:55.514782 ignition[773]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Aug 12 23:51:55.515696 ignition[773]: kargs: kargs passed Aug 12 23:51:55.515749 ignition[773]: Ignition finished successfully Aug 12 23:51:55.520764 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Aug 12 23:51:55.530033 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Aug 12 23:51:55.542566 ignition[780]: Ignition 2.19.0 Aug 12 23:51:55.542578 ignition[780]: Stage: disks Aug 12 23:51:55.542757 ignition[780]: no configs at "/usr/lib/ignition/base.d" Aug 12 23:51:55.542767 ignition[780]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Aug 12 23:51:55.543772 ignition[780]: disks: disks passed Aug 12 23:51:55.545822 systemd[1]: Finished ignition-disks.service - Ignition (disks). Aug 12 23:51:55.543867 ignition[780]: Ignition finished successfully Aug 12 23:51:55.548869 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Aug 12 23:51:55.549923 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Aug 12 23:51:55.553558 systemd[1]: Reached target local-fs.target - Local File Systems. Aug 12 23:51:55.555067 systemd[1]: Reached target sysinit.target - System Initialization. Aug 12 23:51:55.556737 systemd[1]: Reached target basic.target - Basic System. Aug 12 23:51:55.565996 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Aug 12 23:51:55.579511 systemd-fsck[790]: ROOT: clean, 14/553520 files, 52654/553472 blocks Aug 12 23:51:55.584401 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Aug 12 23:51:55.594963 systemd[1]: Mounting sysroot.mount - /sysroot... 
Aug 12 23:51:55.645855 kernel: EXT4-fs (vda9): mounted filesystem 128aec8b-f05d-48ed-8996-c9e8b21a7810 r/w with ordered data mode. Quota mode: none. Aug 12 23:51:55.646161 systemd[1]: Mounted sysroot.mount - /sysroot. Aug 12 23:51:55.647278 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Aug 12 23:51:55.659930 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Aug 12 23:51:55.662007 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Aug 12 23:51:55.662925 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Aug 12 23:51:55.662975 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Aug 12 23:51:55.663000 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Aug 12 23:51:55.669036 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Aug 12 23:51:55.671398 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Aug 12 23:51:55.675881 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by mount (798) Aug 12 23:51:55.675911 kernel: BTRFS info (device vda6): first mount of filesystem dbce4b09-c4b8-4cc9-bd11-416717f60c7d Aug 12 23:51:55.677348 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Aug 12 23:51:55.677373 kernel: BTRFS info (device vda6): using free space tree Aug 12 23:51:55.680801 kernel: BTRFS info (device vda6): auto enabling async discard Aug 12 23:51:55.682084 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Aug 12 23:51:55.742763 initrd-setup-root[822]: cut: /sysroot/etc/passwd: No such file or directory Aug 12 23:51:55.746855 initrd-setup-root[829]: cut: /sysroot/etc/group: No such file or directory Aug 12 23:51:55.754413 initrd-setup-root[836]: cut: /sysroot/etc/shadow: No such file or directory Aug 12 23:51:55.759490 initrd-setup-root[843]: cut: /sysroot/etc/gshadow: No such file or directory Aug 12 23:51:55.898251 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Aug 12 23:51:55.910990 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Aug 12 23:51:55.912563 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Aug 12 23:51:55.917926 kernel: BTRFS info (device vda6): last unmount of filesystem dbce4b09-c4b8-4cc9-bd11-416717f60c7d Aug 12 23:51:55.948812 ignition[912]: INFO : Ignition 2.19.0 Aug 12 23:51:55.948812 ignition[912]: INFO : Stage: mount Aug 12 23:51:55.948812 ignition[912]: INFO : no configs at "/usr/lib/ignition/base.d" Aug 12 23:51:55.948812 ignition[912]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Aug 12 23:51:55.948402 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Aug 12 23:51:55.955520 ignition[912]: INFO : mount: mount passed Aug 12 23:51:55.955520 ignition[912]: INFO : Ignition finished successfully Aug 12 23:51:55.952777 systemd[1]: Finished ignition-mount.service - Ignition (mount). Aug 12 23:51:55.965968 systemd[1]: Starting ignition-files.service - Ignition (files)... Aug 12 23:51:56.223032 systemd[1]: sysroot-oem.mount: Deactivated successfully. Aug 12 23:51:56.239031 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... 
Aug 12 23:51:56.246814 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by mount (925) Aug 12 23:51:56.249342 kernel: BTRFS info (device vda6): first mount of filesystem dbce4b09-c4b8-4cc9-bd11-416717f60c7d Aug 12 23:51:56.249392 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Aug 12 23:51:56.249403 kernel: BTRFS info (device vda6): using free space tree Aug 12 23:51:56.253820 kernel: BTRFS info (device vda6): auto enabling async discard Aug 12 23:51:56.255282 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Aug 12 23:51:56.275849 ignition[942]: INFO : Ignition 2.19.0 Aug 12 23:51:56.275849 ignition[942]: INFO : Stage: files Aug 12 23:51:56.277166 ignition[942]: INFO : no configs at "/usr/lib/ignition/base.d" Aug 12 23:51:56.277166 ignition[942]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Aug 12 23:51:56.277166 ignition[942]: DEBUG : files: compiled without relabeling support, skipping Aug 12 23:51:56.279577 ignition[942]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Aug 12 23:51:56.279577 ignition[942]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Aug 12 23:51:56.283232 ignition[942]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Aug 12 23:51:56.284475 ignition[942]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Aug 12 23:51:56.284475 ignition[942]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Aug 12 23:51:56.283850 unknown[942]: wrote ssh authorized keys file for user: core Aug 12 23:51:56.287576 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Aug 12 23:51:56.287576 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1 Aug 12 23:51:56.517845 
ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Aug 12 23:51:56.814901 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Aug 12 23:51:56.814901 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Aug 12 23:51:56.814901 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1 Aug 12 23:51:56.878962 systemd-networkd[764]: eth0: Gained IPv6LL Aug 12 23:51:57.036067 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Aug 12 23:51:57.127824 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Aug 12 23:51:57.127824 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Aug 12 23:51:57.131226 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Aug 12 23:51:57.131226 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Aug 12 23:51:57.131226 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Aug 12 23:51:57.131226 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Aug 12 23:51:57.131226 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Aug 12 23:51:57.131226 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file 
"/sysroot/home/core/nfs-pvc.yaml" Aug 12 23:51:57.131226 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Aug 12 23:51:57.131226 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Aug 12 23:51:57.131226 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Aug 12 23:51:57.131226 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw" Aug 12 23:51:57.131226 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw" Aug 12 23:51:57.131226 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw" Aug 12 23:51:57.131226 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-arm64.raw: attempt #1 Aug 12 23:51:57.414382 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Aug 12 23:51:57.755842 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw" Aug 12 23:51:57.755842 ignition[942]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Aug 12 23:51:57.758799 ignition[942]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Aug 12 23:51:57.758799 ignition[942]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at 
"/sysroot/etc/systemd/system/prepare-helm.service" Aug 12 23:51:57.758799 ignition[942]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" Aug 12 23:51:57.758799 ignition[942]: INFO : files: op(e): [started] processing unit "coreos-metadata.service" Aug 12 23:51:57.758799 ignition[942]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Aug 12 23:51:57.758799 ignition[942]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Aug 12 23:51:57.758799 ignition[942]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service" Aug 12 23:51:57.758799 ignition[942]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service" Aug 12 23:51:57.790831 ignition[942]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service" Aug 12 23:51:57.795380 ignition[942]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service" Aug 12 23:51:57.798077 ignition[942]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service" Aug 12 23:51:57.798077 ignition[942]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service" Aug 12 23:51:57.798077 ignition[942]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service" Aug 12 23:51:57.798077 ignition[942]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json" Aug 12 23:51:57.798077 ignition[942]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json" Aug 12 23:51:57.798077 ignition[942]: INFO : files: files passed Aug 12 23:51:57.798077 ignition[942]: INFO : Ignition finished successfully Aug 12 23:51:57.798723 systemd[1]: Finished 
ignition-files.service - Ignition (files). Aug 12 23:51:57.808980 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Aug 12 23:51:57.811236 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Aug 12 23:51:57.812814 systemd[1]: ignition-quench.service: Deactivated successfully. Aug 12 23:51:57.812906 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Aug 12 23:51:57.819823 initrd-setup-root-after-ignition[971]: grep: /sysroot/oem/oem-release: No such file or directory Aug 12 23:51:57.823569 initrd-setup-root-after-ignition[973]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Aug 12 23:51:57.823569 initrd-setup-root-after-ignition[973]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Aug 12 23:51:57.826649 initrd-setup-root-after-ignition[977]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Aug 12 23:51:57.827983 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Aug 12 23:51:57.829437 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Aug 12 23:51:57.834967 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Aug 12 23:51:57.867647 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Aug 12 23:51:57.867771 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Aug 12 23:51:57.869728 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Aug 12 23:51:57.871177 systemd[1]: Reached target initrd.target - Initrd Default Target. Aug 12 23:51:57.872675 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Aug 12 23:51:57.873598 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... 
Aug 12 23:51:57.892700 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Aug 12 23:51:57.899033 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Aug 12 23:51:57.907907 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Aug 12 23:51:57.908999 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Aug 12 23:51:57.910705 systemd[1]: Stopped target timers.target - Timer Units. Aug 12 23:51:57.912212 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Aug 12 23:51:57.912344 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Aug 12 23:51:57.914501 systemd[1]: Stopped target initrd.target - Initrd Default Target. Aug 12 23:51:57.916486 systemd[1]: Stopped target basic.target - Basic System. Aug 12 23:51:57.918007 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Aug 12 23:51:57.919548 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Aug 12 23:51:57.921327 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Aug 12 23:51:57.922947 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Aug 12 23:51:57.924483 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Aug 12 23:51:57.925904 systemd[1]: Stopped target sysinit.target - System Initialization. Aug 12 23:51:57.927594 systemd[1]: Stopped target local-fs.target - Local File Systems. Aug 12 23:51:57.928890 systemd[1]: Stopped target swap.target - Swaps. Aug 12 23:51:57.930082 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Aug 12 23:51:57.930220 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Aug 12 23:51:57.932161 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Aug 12 23:51:57.933812 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). 
Aug 12 23:51:57.935373 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Aug 12 23:51:57.938847 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Aug 12 23:51:57.940943 systemd[1]: dracut-initqueue.service: Deactivated successfully. Aug 12 23:51:57.941086 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Aug 12 23:51:57.943362 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Aug 12 23:51:57.943489 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Aug 12 23:51:57.945288 systemd[1]: Stopped target paths.target - Path Units. Aug 12 23:51:57.946632 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Aug 12 23:51:57.951859 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Aug 12 23:51:57.952904 systemd[1]: Stopped target slices.target - Slice Units. Aug 12 23:51:57.954642 systemd[1]: Stopped target sockets.target - Socket Units. Aug 12 23:51:57.955992 systemd[1]: iscsid.socket: Deactivated successfully. Aug 12 23:51:57.956090 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Aug 12 23:51:57.957325 systemd[1]: iscsiuio.socket: Deactivated successfully. Aug 12 23:51:57.957409 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Aug 12 23:51:57.958766 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Aug 12 23:51:57.958908 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Aug 12 23:51:57.960374 systemd[1]: ignition-files.service: Deactivated successfully. Aug 12 23:51:57.960486 systemd[1]: Stopped ignition-files.service - Ignition (files). Aug 12 23:51:57.970035 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Aug 12 23:51:57.970749 systemd[1]: kmod-static-nodes.service: Deactivated successfully. 
Aug 12 23:51:57.970898 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Aug 12 23:51:57.973326 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Aug 12 23:51:57.974649 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Aug 12 23:51:57.974772 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Aug 12 23:51:57.976466 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Aug 12 23:51:57.976587 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Aug 12 23:51:57.982553 systemd[1]: initrd-cleanup.service: Deactivated successfully. Aug 12 23:51:57.983838 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Aug 12 23:51:57.985931 ignition[998]: INFO : Ignition 2.19.0 Aug 12 23:51:57.985931 ignition[998]: INFO : Stage: umount Aug 12 23:51:57.985931 ignition[998]: INFO : no configs at "/usr/lib/ignition/base.d" Aug 12 23:51:57.985931 ignition[998]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Aug 12 23:51:57.985931 ignition[998]: INFO : umount: umount passed Aug 12 23:51:57.985931 ignition[998]: INFO : Ignition finished successfully Aug 12 23:51:57.988545 systemd[1]: ignition-mount.service: Deactivated successfully. Aug 12 23:51:57.989823 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Aug 12 23:51:57.992641 systemd[1]: sysroot-boot.mount: Deactivated successfully. Aug 12 23:51:57.993053 systemd[1]: Stopped target network.target - Network. Aug 12 23:51:57.994335 systemd[1]: ignition-disks.service: Deactivated successfully. Aug 12 23:51:57.994394 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Aug 12 23:51:57.995840 systemd[1]: ignition-kargs.service: Deactivated successfully. Aug 12 23:51:57.995881 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Aug 12 23:51:57.997277 systemd[1]: ignition-setup.service: Deactivated successfully. 
Aug 12 23:51:57.997317 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Aug 12 23:51:57.998568 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Aug 12 23:51:57.998610 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Aug 12 23:51:58.000258 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Aug 12 23:51:58.001585 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Aug 12 23:51:58.010845 systemd-networkd[764]: eth0: DHCPv6 lease lost Aug 12 23:51:58.012418 systemd[1]: systemd-networkd.service: Deactivated successfully. Aug 12 23:51:58.012565 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Aug 12 23:51:58.014148 systemd[1]: systemd-networkd.socket: Deactivated successfully. Aug 12 23:51:58.014176 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Aug 12 23:51:58.023930 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Aug 12 23:51:58.024804 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Aug 12 23:51:58.024876 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Aug 12 23:51:58.026878 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Aug 12 23:51:58.029356 systemd[1]: systemd-resolved.service: Deactivated successfully. Aug 12 23:51:58.029456 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Aug 12 23:51:58.034072 systemd[1]: systemd-sysctl.service: Deactivated successfully. Aug 12 23:51:58.034161 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Aug 12 23:51:58.035360 systemd[1]: systemd-modules-load.service: Deactivated successfully. Aug 12 23:51:58.035408 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Aug 12 23:51:58.036974 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. 
Aug 12 23:51:58.037076 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Aug 12 23:51:58.038727 systemd[1]: systemd-udevd.service: Deactivated successfully.
Aug 12 23:51:58.038870 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Aug 12 23:51:58.041195 systemd[1]: network-cleanup.service: Deactivated successfully.
Aug 12 23:51:58.041286 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Aug 12 23:51:58.043528 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Aug 12 23:51:58.043597 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Aug 12 23:51:58.044526 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Aug 12 23:51:58.044565 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Aug 12 23:51:58.045973 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Aug 12 23:51:58.046024 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Aug 12 23:51:58.048315 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Aug 12 23:51:58.048367 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Aug 12 23:51:58.050631 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Aug 12 23:51:58.050682 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Aug 12 23:51:58.068055 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Aug 12 23:51:58.069051 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Aug 12 23:51:58.069123 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Aug 12 23:51:58.070735 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Aug 12 23:51:58.070821 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Aug 12 23:51:58.075364 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Aug 12 23:51:58.075632 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Aug 12 23:51:58.082971 systemd[1]: sysroot-boot.service: Deactivated successfully.
Aug 12 23:51:58.083866 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Aug 12 23:51:58.086387 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Aug 12 23:51:58.087416 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Aug 12 23:51:58.087492 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Aug 12 23:51:58.107031 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Aug 12 23:51:58.113814 systemd[1]: Switching root.
Aug 12 23:51:58.143530 systemd-journald[234]: Journal stopped
Aug 12 23:51:59.062819 systemd-journald[234]: Received SIGTERM from PID 1 (systemd).
Aug 12 23:51:59.063010 kernel: SELinux: policy capability network_peer_controls=1
Aug 12 23:51:59.063038 kernel: SELinux: policy capability open_perms=1
Aug 12 23:51:59.063049 kernel: SELinux: policy capability extended_socket_class=1
Aug 12 23:51:59.063063 kernel: SELinux: policy capability always_check_network=0
Aug 12 23:51:59.063111 kernel: SELinux: policy capability cgroup_seclabel=1
Aug 12 23:51:59.063124 kernel: SELinux: policy capability nnp_nosuid_transition=1
Aug 12 23:51:59.063133 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Aug 12 23:51:59.063143 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Aug 12 23:51:59.063152 kernel: audit: type=1403 audit(1755042718.368:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Aug 12 23:51:59.063198 systemd[1]: Successfully loaded SELinux policy in 33.384ms.
Aug 12 23:51:59.063222 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 11.473ms.
Aug 12 23:51:59.063234 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Aug 12 23:51:59.063279 systemd[1]: Detected virtualization kvm.
Aug 12 23:51:59.063294 systemd[1]: Detected architecture arm64.
Aug 12 23:51:59.063305 systemd[1]: Detected first boot.
Aug 12 23:51:59.063316 systemd[1]: Initializing machine ID from VM UUID.
Aug 12 23:51:59.063326 zram_generator::config[1043]: No configuration found.
Aug 12 23:51:59.063371 systemd[1]: Populated /etc with preset unit settings.
Aug 12 23:51:59.063393 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Aug 12 23:51:59.063404 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Aug 12 23:51:59.063415 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Aug 12 23:51:59.063464 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Aug 12 23:51:59.063481 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Aug 12 23:51:59.063493 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Aug 12 23:51:59.063505 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Aug 12 23:51:59.063558 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Aug 12 23:51:59.063576 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Aug 12 23:51:59.063588 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Aug 12 23:51:59.063638 systemd[1]: Created slice user.slice - User and Session Slice.
Aug 12 23:51:59.063654 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Aug 12 23:51:59.063666 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Aug 12 23:51:59.063677 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Aug 12 23:51:59.063688 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Aug 12 23:51:59.063737 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Aug 12 23:51:59.063749 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Aug 12 23:51:59.063764 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0...
Aug 12 23:51:59.063775 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Aug 12 23:51:59.063786 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Aug 12 23:51:59.063848 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Aug 12 23:51:59.063860 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Aug 12 23:51:59.063872 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Aug 12 23:51:59.063883 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Aug 12 23:51:59.063901 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Aug 12 23:51:59.063914 systemd[1]: Reached target slices.target - Slice Units.
Aug 12 23:51:59.063926 systemd[1]: Reached target swap.target - Swaps.
Aug 12 23:51:59.063936 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Aug 12 23:51:59.063950 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Aug 12 23:51:59.063961 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Aug 12 23:51:59.063971 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Aug 12 23:51:59.063982 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Aug 12 23:51:59.063993 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Aug 12 23:51:59.064003 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Aug 12 23:51:59.064016 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Aug 12 23:51:59.064028 systemd[1]: Mounting media.mount - External Media Directory...
Aug 12 23:51:59.064039 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Aug 12 23:51:59.064050 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Aug 12 23:51:59.064061 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Aug 12 23:51:59.064072 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Aug 12 23:51:59.064084 systemd[1]: Reached target machines.target - Containers.
Aug 12 23:51:59.064095 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Aug 12 23:51:59.064106 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Aug 12 23:51:59.064119 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Aug 12 23:51:59.064130 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Aug 12 23:51:59.064142 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Aug 12 23:51:59.064153 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Aug 12 23:51:59.064164 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Aug 12 23:51:59.064174 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Aug 12 23:51:59.064186 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Aug 12 23:51:59.064197 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Aug 12 23:51:59.064211 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Aug 12 23:51:59.064221 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Aug 12 23:51:59.064233 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Aug 12 23:51:59.064243 systemd[1]: Stopped systemd-fsck-usr.service.
Aug 12 23:51:59.064254 kernel: fuse: init (API version 7.39)
Aug 12 23:51:59.064266 systemd[1]: Starting systemd-journald.service - Journal Service...
Aug 12 23:51:59.064279 kernel: loop: module loaded
Aug 12 23:51:59.064290 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Aug 12 23:51:59.064301 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Aug 12 23:51:59.064315 kernel: ACPI: bus type drm_connector registered
Aug 12 23:51:59.064327 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Aug 12 23:51:59.064340 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Aug 12 23:51:59.064351 systemd[1]: verity-setup.service: Deactivated successfully.
Aug 12 23:51:59.064363 systemd[1]: Stopped verity-setup.service.
Aug 12 23:51:59.064418 systemd-journald[1107]: Collecting audit messages is disabled.
Aug 12 23:51:59.064442 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Aug 12 23:51:59.064455 systemd-journald[1107]: Journal started
Aug 12 23:51:59.064479 systemd-journald[1107]: Runtime Journal (/run/log/journal/ca3a34747a5247eeaafee50eb20ff46e) is 5.9M, max 47.3M, 41.4M free.
Aug 12 23:51:58.864159 systemd[1]: Queued start job for default target multi-user.target.
Aug 12 23:51:58.879739 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Aug 12 23:51:58.880135 systemd[1]: systemd-journald.service: Deactivated successfully.
Aug 12 23:51:59.066493 systemd[1]: Started systemd-journald.service - Journal Service.
Aug 12 23:51:59.067257 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Aug 12 23:51:59.068236 systemd[1]: Mounted media.mount - External Media Directory.
Aug 12 23:51:59.069127 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Aug 12 23:51:59.070132 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Aug 12 23:51:59.071136 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Aug 12 23:51:59.072110 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Aug 12 23:51:59.073311 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Aug 12 23:51:59.074524 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Aug 12 23:51:59.074678 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Aug 12 23:51:59.076075 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Aug 12 23:51:59.076268 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Aug 12 23:51:59.077386 systemd[1]: modprobe@drm.service: Deactivated successfully.
Aug 12 23:51:59.077543 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Aug 12 23:51:59.078602 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Aug 12 23:51:59.078740 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Aug 12 23:51:59.080175 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Aug 12 23:51:59.080305 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Aug 12 23:51:59.081430 systemd[1]: modprobe@loop.service: Deactivated successfully.
Aug 12 23:51:59.081583 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Aug 12 23:51:59.082674 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Aug 12 23:51:59.083981 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Aug 12 23:51:59.085309 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Aug 12 23:51:59.098588 systemd[1]: Reached target network-pre.target - Preparation for Network.
Aug 12 23:51:59.106934 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Aug 12 23:51:59.109023 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Aug 12 23:51:59.109990 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Aug 12 23:51:59.110039 systemd[1]: Reached target local-fs.target - Local File Systems.
Aug 12 23:51:59.111871 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Aug 12 23:51:59.114054 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Aug 12 23:51:59.116627 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Aug 12 23:51:59.117730 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Aug 12 23:51:59.119302 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Aug 12 23:51:59.121380 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Aug 12 23:51:59.122517 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Aug 12 23:51:59.126009 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Aug 12 23:51:59.127024 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Aug 12 23:51:59.130120 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Aug 12 23:51:59.134207 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Aug 12 23:51:59.139906 systemd-journald[1107]: Time spent on flushing to /var/log/journal/ca3a34747a5247eeaafee50eb20ff46e is 42.888ms for 856 entries.
Aug 12 23:51:59.139906 systemd-journald[1107]: System Journal (/var/log/journal/ca3a34747a5247eeaafee50eb20ff46e) is 8.0M, max 195.6M, 187.6M free.
Aug 12 23:51:59.193200 systemd-journald[1107]: Received client request to flush runtime journal.
Aug 12 23:51:59.193237 kernel: loop0: detected capacity change from 0 to 203944
Aug 12 23:51:59.193251 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Aug 12 23:51:59.139033 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Aug 12 23:51:59.143211 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Aug 12 23:51:59.144827 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Aug 12 23:51:59.146069 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Aug 12 23:51:59.147439 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Aug 12 23:51:59.148976 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Aug 12 23:51:59.155831 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Aug 12 23:51:59.166430 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Aug 12 23:51:59.180483 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Aug 12 23:51:59.192366 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Aug 12 23:51:59.198072 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Aug 12 23:51:59.209164 udevadm[1166]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Aug 12 23:51:59.218142 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Aug 12 23:51:59.219230 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Aug 12 23:51:59.222884 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Aug 12 23:51:59.233124 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Aug 12 23:51:59.236820 kernel: loop1: detected capacity change from 0 to 114432
Aug 12 23:51:59.265760 systemd-tmpfiles[1174]: ACLs are not supported, ignoring.
Aug 12 23:51:59.265778 systemd-tmpfiles[1174]: ACLs are not supported, ignoring.
Aug 12 23:51:59.269885 kernel: loop2: detected capacity change from 0 to 114328
Aug 12 23:51:59.270459 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Aug 12 23:51:59.308837 kernel: loop3: detected capacity change from 0 to 203944
Aug 12 23:51:59.319820 kernel: loop4: detected capacity change from 0 to 114432
Aug 12 23:51:59.328816 kernel: loop5: detected capacity change from 0 to 114328
Aug 12 23:51:59.334673 (sd-merge)[1180]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Aug 12 23:51:59.336397 (sd-merge)[1180]: Merged extensions into '/usr'.
Aug 12 23:51:59.340864 systemd[1]: Reloading requested from client PID 1154 ('systemd-sysext') (unit systemd-sysext.service)...
Aug 12 23:51:59.340884 systemd[1]: Reloading...
Aug 12 23:51:59.394969 zram_generator::config[1206]: No configuration found.
Aug 12 23:51:59.481845 ldconfig[1149]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Aug 12 23:51:59.513859 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Aug 12 23:51:59.552503 systemd[1]: Reloading finished in 211 ms.
Aug 12 23:51:59.581085 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Aug 12 23:51:59.583941 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Aug 12 23:51:59.601120 systemd[1]: Starting ensure-sysext.service...
Aug 12 23:51:59.603699 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Aug 12 23:51:59.627574 systemd[1]: Reloading requested from client PID 1240 ('systemctl') (unit ensure-sysext.service)...
Aug 12 23:51:59.627599 systemd[1]: Reloading...
Aug 12 23:51:59.673753 systemd-tmpfiles[1241]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Aug 12 23:51:59.674058 systemd-tmpfiles[1241]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Aug 12 23:51:59.674745 systemd-tmpfiles[1241]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Aug 12 23:51:59.675100 systemd-tmpfiles[1241]: ACLs are not supported, ignoring.
Aug 12 23:51:59.675166 systemd-tmpfiles[1241]: ACLs are not supported, ignoring.
Aug 12 23:51:59.678353 systemd-tmpfiles[1241]: Detected autofs mount point /boot during canonicalization of boot.
Aug 12 23:51:59.678367 systemd-tmpfiles[1241]: Skipping /boot
Aug 12 23:51:59.685901 zram_generator::config[1265]: No configuration found.
Aug 12 23:51:59.696206 systemd-tmpfiles[1241]: Detected autofs mount point /boot during canonicalization of boot.
Aug 12 23:51:59.696222 systemd-tmpfiles[1241]: Skipping /boot
Aug 12 23:51:59.793273 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Aug 12 23:51:59.833314 systemd[1]: Reloading finished in 205 ms.
Aug 12 23:51:59.852819 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Aug 12 23:51:59.865322 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Aug 12 23:51:59.879538 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Aug 12 23:51:59.882344 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Aug 12 23:51:59.884679 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Aug 12 23:51:59.889230 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Aug 12 23:51:59.907181 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Aug 12 23:51:59.915701 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Aug 12 23:51:59.917922 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Aug 12 23:51:59.923252 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Aug 12 23:51:59.927083 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Aug 12 23:51:59.935232 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Aug 12 23:51:59.941153 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Aug 12 23:51:59.942129 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Aug 12 23:51:59.943664 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Aug 12 23:51:59.947128 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Aug 12 23:51:59.949130 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Aug 12 23:51:59.949321 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Aug 12 23:51:59.950759 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Aug 12 23:51:59.952867 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Aug 12 23:51:59.959605 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Aug 12 23:51:59.962372 systemd-udevd[1315]: Using default interface naming scheme 'v255'.
Aug 12 23:51:59.966326 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Aug 12 23:51:59.968738 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Aug 12 23:51:59.971259 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Aug 12 23:51:59.973796 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Aug 12 23:51:59.979209 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Aug 12 23:51:59.983133 systemd[1]: modprobe@loop.service: Deactivated successfully.
Aug 12 23:51:59.983297 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Aug 12 23:51:59.987276 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Aug 12 23:51:59.987438 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Aug 12 23:51:59.993856 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Aug 12 23:52:00.013457 systemd[1]: Finished ensure-sysext.service.
Aug 12 23:52:00.014663 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Aug 12 23:52:00.017841 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Aug 12 23:52:00.035766 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Aug 12 23:52:00.035935 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Aug 12 23:52:00.043259 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Aug 12 23:52:00.057060 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Aug 12 23:52:00.059919 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Aug 12 23:52:00.061819 augenrules[1368]: No rules
Aug 12 23:52:00.069062 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Aug 12 23:52:00.069998 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Aug 12 23:52:00.071819 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Aug 12 23:52:00.075873 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Aug 12 23:52:00.077896 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Aug 12 23:52:00.078624 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Aug 12 23:52:00.082002 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Aug 12 23:52:00.082908 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Aug 12 23:52:00.085764 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped.
Aug 12 23:52:00.097328 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (1349)
Aug 12 23:52:00.104204 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Aug 12 23:52:00.106558 systemd[1]: modprobe@drm.service: Deactivated successfully.
Aug 12 23:52:00.106781 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Aug 12 23:52:00.111586 systemd[1]: modprobe@loop.service: Deactivated successfully.
Aug 12 23:52:00.111785 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Aug 12 23:52:00.115361 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Aug 12 23:52:00.133484 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Aug 12 23:52:00.144043 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Aug 12 23:52:00.184750 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Aug 12 23:52:00.186300 systemd[1]: Reached target time-set.target - System Time Set.
Aug 12 23:52:00.204386 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Aug 12 23:52:00.205514 systemd-networkd[1379]: lo: Link UP
Aug 12 23:52:00.205825 systemd-networkd[1379]: lo: Gained carrier
Aug 12 23:52:00.206677 systemd-networkd[1379]: Enumeration completed
Aug 12 23:52:00.207027 systemd[1]: Started systemd-networkd.service - Network Configuration.
Aug 12 23:52:00.207950 systemd-networkd[1379]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Aug 12 23:52:00.208010 systemd-networkd[1379]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Aug 12 23:52:00.210349 systemd-networkd[1379]: eth0: Link UP
Aug 12 23:52:00.210476 systemd-networkd[1379]: eth0: Gained carrier
Aug 12 23:52:00.210575 systemd-networkd[1379]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Aug 12 23:52:00.228480 systemd-resolved[1308]: Positive Trust Anchors:
Aug 12 23:52:00.229849 systemd-networkd[1379]: eth0: DHCPv4 address 10.0.0.6/16, gateway 10.0.0.1 acquired from 10.0.0.1
Aug 12 23:52:00.230068 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Aug 12 23:52:00.230494 systemd-timesyncd[1380]: Network configuration changed, trying to establish connection.
Aug 12 23:52:00.231642 systemd-resolved[1308]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Aug 12 23:52:00.231681 systemd-resolved[1308]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Aug 12 23:52:00.232577 systemd-timesyncd[1380]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Aug 12 23:52:00.232743 systemd-timesyncd[1380]: Initial clock synchronization to Tue 2025-08-12 23:52:00.223667 UTC.
Aug 12 23:52:00.235166 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Aug 12 23:52:00.239664 systemd-resolved[1308]: Defaulting to hostname 'linux'.
Aug 12 23:52:00.241772 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Aug 12 23:52:00.243949 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Aug 12 23:52:00.245836 systemd[1]: Reached target network.target - Network.
Aug 12 23:52:00.246599 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Aug 12 23:52:00.262069 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Aug 12 23:52:00.277138 lvm[1398]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Aug 12 23:52:00.289007 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Aug 12 23:52:00.315477 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Aug 12 23:52:00.316837 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Aug 12 23:52:00.317766 systemd[1]: Reached target sysinit.target - System Initialization.
Aug 12 23:52:00.318739 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Aug 12 23:52:00.319833 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Aug 12 23:52:00.321009 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Aug 12 23:52:00.322010 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Aug 12 23:52:00.322962 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Aug 12 23:52:00.323983 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Aug 12 23:52:00.324019 systemd[1]: Reached target paths.target - Path Units.
Aug 12 23:52:00.324723 systemd[1]: Reached target timers.target - Timer Units.
Aug 12 23:52:00.326862 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Aug 12 23:52:00.329324 systemd[1]: Starting docker.socket - Docker Socket for the API... Aug 12 23:52:00.338780 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Aug 12 23:52:00.341153 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Aug 12 23:52:00.342702 systemd[1]: Listening on docker.socket - Docker Socket for the API. Aug 12 23:52:00.343786 systemd[1]: Reached target sockets.target - Socket Units. Aug 12 23:52:00.344611 systemd[1]: Reached target basic.target - Basic System. Aug 12 23:52:00.345426 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Aug 12 23:52:00.345461 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Aug 12 23:52:00.346557 systemd[1]: Starting containerd.service - containerd container runtime... Aug 12 23:52:00.348606 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Aug 12 23:52:00.350919 lvm[1406]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Aug 12 23:52:00.352978 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Aug 12 23:52:00.357127 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Aug 12 23:52:00.358407 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Aug 12 23:52:00.363934 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Aug 12 23:52:00.370343 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Aug 12 23:52:00.373062 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Aug 12 23:52:00.377104 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... 
Aug 12 23:52:00.380073 jq[1409]: false Aug 12 23:52:00.381152 systemd[1]: Starting systemd-logind.service - User Login Management... Aug 12 23:52:00.393547 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Aug 12 23:52:00.394628 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Aug 12 23:52:00.401279 systemd[1]: Starting update-engine.service - Update Engine... Aug 12 23:52:00.404907 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Aug 12 23:52:00.406750 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Aug 12 23:52:00.409127 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Aug 12 23:52:00.409324 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Aug 12 23:52:00.410968 extend-filesystems[1410]: Found loop3 Aug 12 23:52:00.410968 extend-filesystems[1410]: Found loop4 Aug 12 23:52:00.412366 extend-filesystems[1410]: Found loop5 Aug 12 23:52:00.412366 extend-filesystems[1410]: Found vda Aug 12 23:52:00.412366 extend-filesystems[1410]: Found vda1 Aug 12 23:52:00.412366 extend-filesystems[1410]: Found vda2 Aug 12 23:52:00.412366 extend-filesystems[1410]: Found vda3 Aug 12 23:52:00.412366 extend-filesystems[1410]: Found usr Aug 12 23:52:00.412366 extend-filesystems[1410]: Found vda4 Aug 12 23:52:00.412366 extend-filesystems[1410]: Found vda6 Aug 12 23:52:00.412366 extend-filesystems[1410]: Found vda7 Aug 12 23:52:00.412366 extend-filesystems[1410]: Found vda9 Aug 12 23:52:00.412366 extend-filesystems[1410]: Checking size of /dev/vda9 Aug 12 23:52:00.411741 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. 
Aug 12 23:52:00.420001 dbus-daemon[1408]: [system] SELinux support is enabled Aug 12 23:52:00.412044 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Aug 12 23:52:00.445905 jq[1425]: true Aug 12 23:52:00.420326 systemd[1]: Started dbus.service - D-Bus System Message Bus. Aug 12 23:52:00.430267 systemd[1]: motdgen.service: Deactivated successfully. Aug 12 23:52:00.430731 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Aug 12 23:52:00.440776 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Aug 12 23:52:00.443002 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Aug 12 23:52:00.444139 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Aug 12 23:52:00.444173 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. 
Aug 12 23:52:00.450584 (ntainerd)[1436]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Aug 12 23:52:00.458495 extend-filesystems[1410]: Resized partition /dev/vda9 Aug 12 23:52:00.459939 jq[1435]: true Aug 12 23:52:00.465208 extend-filesystems[1444]: resize2fs 1.47.1 (20-May-2024) Aug 12 23:52:00.486922 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Aug 12 23:52:00.486986 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (1360) Aug 12 23:52:00.488104 tar[1428]: linux-arm64/helm Aug 12 23:52:00.518978 update_engine[1420]: I20250812 23:52:00.518486 1420 main.cc:92] Flatcar Update Engine starting Aug 12 23:52:00.525455 systemd-logind[1417]: Watching system buttons on /dev/input/event0 (Power Button) Aug 12 23:52:00.525976 systemd-logind[1417]: New seat seat0. Aug 12 23:52:00.527941 systemd[1]: Started systemd-logind.service - User Login Management. Aug 12 23:52:00.528062 update_engine[1420]: I20250812 23:52:00.527920 1420 update_check_scheduler.cc:74] Next update check in 5m18s Aug 12 23:52:00.529762 systemd[1]: Started update-engine.service - Update Engine. Aug 12 23:52:00.557195 systemd[1]: Started locksmithd.service - Cluster reboot manager. Aug 12 23:52:00.565875 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Aug 12 23:52:00.610580 sshd_keygen[1427]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Aug 12 23:52:00.611100 extend-filesystems[1444]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Aug 12 23:52:00.611100 extend-filesystems[1444]: old_desc_blocks = 1, new_desc_blocks = 1 Aug 12 23:52:00.611100 extend-filesystems[1444]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Aug 12 23:52:00.614742 extend-filesystems[1410]: Resized filesystem in /dev/vda9 Aug 12 23:52:00.612831 systemd[1]: extend-filesystems.service: Deactivated successfully. 
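Editorial note: the resize2fs records above describe an online grow of the root filesystem from 553472 to 1864699 blocks at 4 KiB per block ("(4k) blocks"). The implied sizes can be checked with the numbers from the log:

```python
BLOCK = 4096  # 4 KiB ext4 block size, as reported by resize2fs above

old_blocks, new_blocks = 553_472, 1_864_699  # block counts from the log
old_gib = old_blocks * BLOCK / 2**30
new_gib = new_blocks * BLOCK / 2**30

print(f"{old_gib:.2f} GiB -> {new_gib:.2f} GiB")  # 2.11 GiB -> 7.11 GiB
```

So the root partition grew from roughly 2.1 GiB (the shipped image size) to about 7.1 GiB (the full virtual disk), which matches the "on-line resizing required" message: ext4 can do this while mounted on `/`.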
Aug 12 23:52:00.613472 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Aug 12 23:52:00.625860 bash[1461]: Updated "/home/core/.ssh/authorized_keys" Aug 12 23:52:00.627775 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Aug 12 23:52:00.629698 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Aug 12 23:52:00.633349 locksmithd[1462]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Aug 12 23:52:00.638647 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Aug 12 23:52:00.653142 systemd[1]: Starting issuegen.service - Generate /run/issue... Aug 12 23:52:00.660647 systemd[1]: issuegen.service: Deactivated successfully. Aug 12 23:52:00.661062 systemd[1]: Finished issuegen.service - Generate /run/issue. Aug 12 23:52:00.666123 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Aug 12 23:52:00.695153 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Aug 12 23:52:00.721748 systemd[1]: Started getty@tty1.service - Getty on tty1. Aug 12 23:52:00.724955 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Aug 12 23:52:00.726609 systemd[1]: Reached target getty.target - Login Prompts. Aug 12 23:52:00.834110 containerd[1436]: time="2025-08-12T23:52:00.833912600Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Aug 12 23:52:00.861884 containerd[1436]: time="2025-08-12T23:52:00.861759120Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Aug 12 23:52:00.863604 containerd[1436]: time="2025-08-12T23:52:00.863542080Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." 
error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.100-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Aug 12 23:52:00.863604 containerd[1436]: time="2025-08-12T23:52:00.863598760Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Aug 12 23:52:00.863670 containerd[1436]: time="2025-08-12T23:52:00.863618320Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Aug 12 23:52:00.864736 containerd[1436]: time="2025-08-12T23:52:00.864693440Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Aug 12 23:52:00.864813 containerd[1436]: time="2025-08-12T23:52:00.864738160Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Aug 12 23:52:00.864838 containerd[1436]: time="2025-08-12T23:52:00.864820000Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Aug 12 23:52:00.864885 containerd[1436]: time="2025-08-12T23:52:00.864867560Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Aug 12 23:52:00.865097 containerd[1436]: time="2025-08-12T23:52:00.865073200Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Aug 12 23:52:00.865097 containerd[1436]: time="2025-08-12T23:52:00.865094560Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." 
type=io.containerd.snapshotter.v1 Aug 12 23:52:00.865152 containerd[1436]: time="2025-08-12T23:52:00.865110120Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Aug 12 23:52:00.865152 containerd[1436]: time="2025-08-12T23:52:00.865120200Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Aug 12 23:52:00.865211 containerd[1436]: time="2025-08-12T23:52:00.865192800Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Aug 12 23:52:00.865537 containerd[1436]: time="2025-08-12T23:52:00.865427680Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Aug 12 23:52:00.865686 containerd[1436]: time="2025-08-12T23:52:00.865662960Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Aug 12 23:52:00.865713 containerd[1436]: time="2025-08-12T23:52:00.865685760Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Aug 12 23:52:00.865801 containerd[1436]: time="2025-08-12T23:52:00.865773040Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Aug 12 23:52:00.865872 containerd[1436]: time="2025-08-12T23:52:00.865853920Z" level=info msg="metadata content store policy set" policy=shared Aug 12 23:52:00.884344 containerd[1436]: time="2025-08-12T23:52:00.884289960Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Aug 12 23:52:00.884453 containerd[1436]: time="2025-08-12T23:52:00.884375840Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." 
type=io.containerd.differ.v1 Aug 12 23:52:00.884453 containerd[1436]: time="2025-08-12T23:52:00.884408520Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Aug 12 23:52:00.884453 containerd[1436]: time="2025-08-12T23:52:00.884428240Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Aug 12 23:52:00.884453 containerd[1436]: time="2025-08-12T23:52:00.884449760Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Aug 12 23:52:00.884685 containerd[1436]: time="2025-08-12T23:52:00.884660760Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Aug 12 23:52:00.884988 containerd[1436]: time="2025-08-12T23:52:00.884963800Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Aug 12 23:52:00.885127 containerd[1436]: time="2025-08-12T23:52:00.885106800Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Aug 12 23:52:00.885153 containerd[1436]: time="2025-08-12T23:52:00.885130040Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Aug 12 23:52:00.885153 containerd[1436]: time="2025-08-12T23:52:00.885144600Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Aug 12 23:52:00.885190 containerd[1436]: time="2025-08-12T23:52:00.885159200Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Aug 12 23:52:00.885190 containerd[1436]: time="2025-08-12T23:52:00.885173640Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." 
type=io.containerd.service.v1 Aug 12 23:52:00.885190 containerd[1436]: time="2025-08-12T23:52:00.885186960Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Aug 12 23:52:00.885254 containerd[1436]: time="2025-08-12T23:52:00.885202760Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Aug 12 23:52:00.885254 containerd[1436]: time="2025-08-12T23:52:00.885221080Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Aug 12 23:52:00.885254 containerd[1436]: time="2025-08-12T23:52:00.885235040Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Aug 12 23:52:00.885254 containerd[1436]: time="2025-08-12T23:52:00.885248080Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Aug 12 23:52:00.885321 containerd[1436]: time="2025-08-12T23:52:00.885261320Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Aug 12 23:52:00.885321 containerd[1436]: time="2025-08-12T23:52:00.885283680Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Aug 12 23:52:00.885321 containerd[1436]: time="2025-08-12T23:52:00.885308360Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Aug 12 23:52:00.885375 containerd[1436]: time="2025-08-12T23:52:00.885322040Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Aug 12 23:52:00.885375 containerd[1436]: time="2025-08-12T23:52:00.885335440Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." 
type=io.containerd.grpc.v1 Aug 12 23:52:00.885375 containerd[1436]: time="2025-08-12T23:52:00.885348320Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Aug 12 23:52:00.885375 containerd[1436]: time="2025-08-12T23:52:00.885362720Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Aug 12 23:52:00.885450 containerd[1436]: time="2025-08-12T23:52:00.885375640Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Aug 12 23:52:00.885450 containerd[1436]: time="2025-08-12T23:52:00.885389320Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Aug 12 23:52:00.885450 containerd[1436]: time="2025-08-12T23:52:00.885402440Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Aug 12 23:52:00.885450 containerd[1436]: time="2025-08-12T23:52:00.885417560Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Aug 12 23:52:00.885450 containerd[1436]: time="2025-08-12T23:52:00.885431560Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Aug 12 23:52:00.885450 containerd[1436]: time="2025-08-12T23:52:00.885443320Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Aug 12 23:52:00.885564 containerd[1436]: time="2025-08-12T23:52:00.885456360Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Aug 12 23:52:00.885564 containerd[1436]: time="2025-08-12T23:52:00.885482480Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Aug 12 23:52:00.885564 containerd[1436]: time="2025-08-12T23:52:00.885515360Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." 
type=io.containerd.grpc.v1 Aug 12 23:52:00.885564 containerd[1436]: time="2025-08-12T23:52:00.885537800Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Aug 12 23:52:00.885564 containerd[1436]: time="2025-08-12T23:52:00.885555720Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Aug 12 23:52:00.886100 containerd[1436]: time="2025-08-12T23:52:00.886080400Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Aug 12 23:52:00.886132 containerd[1436]: time="2025-08-12T23:52:00.886106400Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Aug 12 23:52:00.886132 containerd[1436]: time="2025-08-12T23:52:00.886119120Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Aug 12 23:52:00.886170 containerd[1436]: time="2025-08-12T23:52:00.886132560Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Aug 12 23:52:00.886170 containerd[1436]: time="2025-08-12T23:52:00.886142960Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Aug 12 23:52:00.886170 containerd[1436]: time="2025-08-12T23:52:00.886167040Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Aug 12 23:52:00.886222 containerd[1436]: time="2025-08-12T23:52:00.886178120Z" level=info msg="NRI interface is disabled by configuration." Aug 12 23:52:00.886222 containerd[1436]: time="2025-08-12T23:52:00.886189440Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Aug 12 23:52:00.887722 containerd[1436]: time="2025-08-12T23:52:00.887482200Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false 
UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Aug 12 23:52:00.887861 containerd[1436]: time="2025-08-12T23:52:00.887730200Z" level=info msg="Connect containerd service" Aug 12 23:52:00.887861 containerd[1436]: time="2025-08-12T23:52:00.887768640Z" level=info msg="using legacy CRI server" Aug 12 23:52:00.887861 containerd[1436]: time="2025-08-12T23:52:00.887776320Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Aug 12 23:52:00.887941 containerd[1436]: time="2025-08-12T23:52:00.887916440Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Aug 12 23:52:00.889337 containerd[1436]: time="2025-08-12T23:52:00.889302480Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Aug 12 23:52:00.889989 containerd[1436]: time="2025-08-12T23:52:00.889956920Z" level=info msg="Start subscribing containerd event" Aug 12 23:52:00.890024 containerd[1436]: time="2025-08-12T23:52:00.890007600Z" level=info msg="Start recovering state" Aug 12 23:52:00.890091 containerd[1436]: time="2025-08-12T23:52:00.890076200Z" level=info msg="Start event monitor" Aug 12 23:52:00.890118 containerd[1436]: time="2025-08-12T23:52:00.890092840Z" level=info msg="Start 
snapshots syncer" Aug 12 23:52:00.890118 containerd[1436]: time="2025-08-12T23:52:00.890102240Z" level=info msg="Start cni network conf syncer for default" Aug 12 23:52:00.890118 containerd[1436]: time="2025-08-12T23:52:00.890111160Z" level=info msg="Start streaming server" Aug 12 23:52:00.891823 containerd[1436]: time="2025-08-12T23:52:00.891784760Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Aug 12 23:52:00.891881 containerd[1436]: time="2025-08-12T23:52:00.891851280Z" level=info msg=serving... address=/run/containerd/containerd.sock Aug 12 23:52:00.892355 containerd[1436]: time="2025-08-12T23:52:00.891906160Z" level=info msg="containerd successfully booted in 0.060094s" Aug 12 23:52:00.892121 systemd[1]: Started containerd.service - containerd container runtime. Aug 12 23:52:00.914514 tar[1428]: linux-arm64/LICENSE Aug 12 23:52:00.914514 tar[1428]: linux-arm64/README.md Aug 12 23:52:00.928567 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Aug 12 23:52:02.191032 systemd-networkd[1379]: eth0: Gained IPv6LL Aug 12 23:52:02.193785 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Aug 12 23:52:02.195675 systemd[1]: Reached target network-online.target - Network is Online. Aug 12 23:52:02.212125 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Aug 12 23:52:02.215030 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 12 23:52:02.217704 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Aug 12 23:52:02.235425 systemd[1]: coreos-metadata.service: Deactivated successfully. Aug 12 23:52:02.235646 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Aug 12 23:52:02.237223 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Aug 12 23:52:02.252203 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. 
Aug 12 23:52:02.903842 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Aug 12 23:52:02.905252 systemd[1]: Reached target multi-user.target - Multi-User System. Aug 12 23:52:02.908751 (kubelet)[1522]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Aug 12 23:52:02.911502 systemd[1]: Startup finished in 814ms (kernel) + 5.516s (initrd) + 4.595s (userspace) = 10.927s. Aug 12 23:52:03.442977 kubelet[1522]: E0812 23:52:03.442915 1522 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Aug 12 23:52:03.445621 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Aug 12 23:52:03.445822 systemd[1]: kubelet.service: Failed with result 'exit-code'. Aug 12 23:52:05.806382 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Aug 12 23:52:05.807944 systemd[1]: Started sshd@0-10.0.0.6:22-10.0.0.1:56836.service - OpenSSH per-connection server daemon (10.0.0.1:56836). Aug 12 23:52:05.973543 sshd[1535]: Accepted publickey for core from 10.0.0.1 port 56836 ssh2: RSA SHA256:xv2nBVgCAUDE9/psT+0gyR3NWqhWRcWqt2l4ADAtRXs Aug 12 23:52:05.980478 sshd[1535]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 12 23:52:06.002943 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Aug 12 23:52:06.012147 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Aug 12 23:52:06.014104 systemd-logind[1417]: New session 1 of user core. Aug 12 23:52:06.024333 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. 
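Editorial note: the kubelet failure above is expected on a first boot. kubeadm has not yet run on this node, so `/var/lib/kubelet/config.yaml` does not exist and the kubelet exits with status 1. Once `kubeadm init` or `kubeadm join` runs, it writes a KubeletConfiguration to that path; a minimal, illustrative sketch of such a file (not the exact file kubeadm generates):

```yaml
# Hypothetical minimal /var/lib/kubelet/config.yaml of the kind kubeadm
# writes; until a file like this exists, the kubelet fails as in the log.
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd           # matches SystemdCgroup:true in containerd above
staticPodPath: /etc/kubernetes/manifests
```

The `cgroupDriver: systemd` choice mirrors the `SystemdCgroup:true` runc option visible in the containerd configuration dump earlier in this log; the two must agree or pods fail to start.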
Aug 12 23:52:06.029870 systemd[1]: Starting user@500.service - User Manager for UID 500... Aug 12 23:52:06.035630 (systemd)[1539]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Aug 12 23:52:06.129241 systemd[1539]: Queued start job for default target default.target. Aug 12 23:52:06.141991 systemd[1539]: Created slice app.slice - User Application Slice. Aug 12 23:52:06.142028 systemd[1539]: Reached target paths.target - Paths. Aug 12 23:52:06.142042 systemd[1539]: Reached target timers.target - Timers. Aug 12 23:52:06.143482 systemd[1539]: Starting dbus.socket - D-Bus User Message Bus Socket... Aug 12 23:52:06.158892 systemd[1539]: Listening on dbus.socket - D-Bus User Message Bus Socket. Aug 12 23:52:06.159625 systemd[1539]: Reached target sockets.target - Sockets. Aug 12 23:52:06.159649 systemd[1539]: Reached target basic.target - Basic System. Aug 12 23:52:06.159702 systemd[1539]: Reached target default.target - Main User Target. Aug 12 23:52:06.159731 systemd[1539]: Startup finished in 117ms. Aug 12 23:52:06.159848 systemd[1]: Started user@500.service - User Manager for UID 500. Aug 12 23:52:06.161202 systemd[1]: Started session-1.scope - Session 1 of User core. Aug 12 23:52:06.219556 systemd[1]: Started sshd@1-10.0.0.6:22-10.0.0.1:56860.service - OpenSSH per-connection server daemon (10.0.0.1:56860). Aug 12 23:52:06.257908 sshd[1550]: Accepted publickey for core from 10.0.0.1 port 56860 ssh2: RSA SHA256:xv2nBVgCAUDE9/psT+0gyR3NWqhWRcWqt2l4ADAtRXs Aug 12 23:52:06.260159 sshd[1550]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 12 23:52:06.266482 systemd-logind[1417]: New session 2 of user core. Aug 12 23:52:06.275072 systemd[1]: Started session-2.scope - Session 2 of User core. Aug 12 23:52:06.329731 sshd[1550]: pam_unix(sshd:session): session closed for user core Aug 12 23:52:06.348511 systemd[1]: sshd@1-10.0.0.6:22-10.0.0.1:56860.service: Deactivated successfully. 
Aug 12 23:52:06.350676 systemd[1]: session-2.scope: Deactivated successfully. Aug 12 23:52:06.352882 systemd-logind[1417]: Session 2 logged out. Waiting for processes to exit. Aug 12 23:52:06.366366 systemd[1]: Started sshd@2-10.0.0.6:22-10.0.0.1:56866.service - OpenSSH per-connection server daemon (10.0.0.1:56866). Aug 12 23:52:06.373342 systemd-logind[1417]: Removed session 2. Aug 12 23:52:06.427358 sshd[1557]: Accepted publickey for core from 10.0.0.1 port 56866 ssh2: RSA SHA256:xv2nBVgCAUDE9/psT+0gyR3NWqhWRcWqt2l4ADAtRXs Aug 12 23:52:06.427829 sshd[1557]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 12 23:52:06.432177 systemd-logind[1417]: New session 3 of user core. Aug 12 23:52:06.443041 systemd[1]: Started session-3.scope - Session 3 of User core. Aug 12 23:52:06.493644 sshd[1557]: pam_unix(sshd:session): session closed for user core Aug 12 23:52:06.507405 systemd[1]: sshd@2-10.0.0.6:22-10.0.0.1:56866.service: Deactivated successfully. Aug 12 23:52:06.508925 systemd[1]: session-3.scope: Deactivated successfully. Aug 12 23:52:06.510202 systemd-logind[1417]: Session 3 logged out. Waiting for processes to exit. Aug 12 23:52:06.511438 systemd[1]: Started sshd@3-10.0.0.6:22-10.0.0.1:56868.service - OpenSSH per-connection server daemon (10.0.0.1:56868). Aug 12 23:52:06.512347 systemd-logind[1417]: Removed session 3. Aug 12 23:52:06.548464 sshd[1564]: Accepted publickey for core from 10.0.0.1 port 56868 ssh2: RSA SHA256:xv2nBVgCAUDE9/psT+0gyR3NWqhWRcWqt2l4ADAtRXs Aug 12 23:52:06.550102 sshd[1564]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 12 23:52:06.553879 systemd-logind[1417]: New session 4 of user core. Aug 12 23:52:06.564009 systemd[1]: Started session-4.scope - Session 4 of User core. Aug 12 23:52:06.617754 sshd[1564]: pam_unix(sshd:session): session closed for user core Aug 12 23:52:06.628486 systemd[1]: sshd@3-10.0.0.6:22-10.0.0.1:56868.service: Deactivated successfully. 
Aug 12 23:52:06.631735 systemd[1]: session-4.scope: Deactivated successfully. Aug 12 23:52:06.634331 systemd-logind[1417]: Session 4 logged out. Waiting for processes to exit. Aug 12 23:52:06.644237 systemd[1]: Started sshd@4-10.0.0.6:22-10.0.0.1:56884.service - OpenSSH per-connection server daemon (10.0.0.1:56884). Aug 12 23:52:06.645174 systemd-logind[1417]: Removed session 4. Aug 12 23:52:06.682235 sshd[1571]: Accepted publickey for core from 10.0.0.1 port 56884 ssh2: RSA SHA256:xv2nBVgCAUDE9/psT+0gyR3NWqhWRcWqt2l4ADAtRXs Aug 12 23:52:06.684011 sshd[1571]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 12 23:52:06.688380 systemd-logind[1417]: New session 5 of user core. Aug 12 23:52:06.702993 systemd[1]: Started session-5.scope - Session 5 of User core. Aug 12 23:52:06.788522 sudo[1574]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Aug 12 23:52:06.788885 sudo[1574]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Aug 12 23:52:06.802920 sudo[1574]: pam_unix(sudo:session): session closed for user root Aug 12 23:52:06.806233 sshd[1571]: pam_unix(sshd:session): session closed for user core Aug 12 23:52:06.817980 systemd[1]: sshd@4-10.0.0.6:22-10.0.0.1:56884.service: Deactivated successfully. Aug 12 23:52:06.819909 systemd[1]: session-5.scope: Deactivated successfully. Aug 12 23:52:06.821465 systemd-logind[1417]: Session 5 logged out. Waiting for processes to exit. Aug 12 23:52:06.823831 systemd[1]: Started sshd@5-10.0.0.6:22-10.0.0.1:56892.service - OpenSSH per-connection server daemon (10.0.0.1:56892). Aug 12 23:52:06.824656 systemd-logind[1417]: Removed session 5. 
Aug 12 23:52:06.860688 sshd[1579]: Accepted publickey for core from 10.0.0.1 port 56892 ssh2: RSA SHA256:xv2nBVgCAUDE9/psT+0gyR3NWqhWRcWqt2l4ADAtRXs Aug 12 23:52:06.862356 sshd[1579]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 12 23:52:06.866130 systemd-logind[1417]: New session 6 of user core. Aug 12 23:52:06.883000 systemd[1]: Started session-6.scope - Session 6 of User core. Aug 12 23:52:06.935502 sudo[1583]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Aug 12 23:52:06.935825 sudo[1583]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Aug 12 23:52:06.939536 sudo[1583]: pam_unix(sudo:session): session closed for user root Aug 12 23:52:06.945137 sudo[1582]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Aug 12 23:52:06.945726 sudo[1582]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Aug 12 23:52:06.965194 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Aug 12 23:52:06.968692 auditctl[1586]: No rules Aug 12 23:52:06.969775 systemd[1]: audit-rules.service: Deactivated successfully. Aug 12 23:52:06.970180 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Aug 12 23:52:06.973224 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Aug 12 23:52:07.003956 augenrules[1604]: No rules Aug 12 23:52:07.005687 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Aug 12 23:52:07.008078 sudo[1582]: pam_unix(sudo:session): session closed for user root Aug 12 23:52:07.010123 sshd[1579]: pam_unix(sshd:session): session closed for user core Aug 12 23:52:07.027629 systemd[1]: sshd@5-10.0.0.6:22-10.0.0.1:56892.service: Deactivated successfully. Aug 12 23:52:07.029492 systemd[1]: session-6.scope: Deactivated successfully. 
Aug 12 23:52:07.031004 systemd-logind[1417]: Session 6 logged out. Waiting for processes to exit. Aug 12 23:52:07.044289 systemd[1]: Started sshd@6-10.0.0.6:22-10.0.0.1:56894.service - OpenSSH per-connection server daemon (10.0.0.1:56894). Aug 12 23:52:07.045302 systemd-logind[1417]: Removed session 6. Aug 12 23:52:07.078482 sshd[1612]: Accepted publickey for core from 10.0.0.1 port 56894 ssh2: RSA SHA256:xv2nBVgCAUDE9/psT+0gyR3NWqhWRcWqt2l4ADAtRXs Aug 12 23:52:07.079815 sshd[1612]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 12 23:52:07.083863 systemd-logind[1417]: New session 7 of user core. Aug 12 23:52:07.096007 systemd[1]: Started session-7.scope - Session 7 of User core. Aug 12 23:52:07.146899 sudo[1615]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Aug 12 23:52:07.147221 sudo[1615]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Aug 12 23:52:07.557153 systemd[1]: Starting docker.service - Docker Application Container Engine... Aug 12 23:52:07.557286 (dockerd)[1633]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Aug 12 23:52:08.034920 dockerd[1633]: time="2025-08-12T23:52:08.034767524Z" level=info msg="Starting up" Aug 12 23:52:08.236381 dockerd[1633]: time="2025-08-12T23:52:08.236322583Z" level=info msg="Loading containers: start." Aug 12 23:52:08.363823 kernel: Initializing XFRM netlink socket Aug 12 23:52:08.460228 systemd-networkd[1379]: docker0: Link UP Aug 12 23:52:08.488733 dockerd[1633]: time="2025-08-12T23:52:08.488640595Z" level=info msg="Loading containers: done." Aug 12 23:52:08.516471 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck221563098-merged.mount: Deactivated successfully. 
Aug 12 23:52:08.525306 dockerd[1633]: time="2025-08-12T23:52:08.525239664Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Aug 12 23:52:08.525455 dockerd[1633]: time="2025-08-12T23:52:08.525361157Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Aug 12 23:52:08.525593 dockerd[1633]: time="2025-08-12T23:52:08.525488209Z" level=info msg="Daemon has completed initialization" Aug 12 23:52:08.583607 dockerd[1633]: time="2025-08-12T23:52:08.583471014Z" level=info msg="API listen on /run/docker.sock" Aug 12 23:52:08.583909 systemd[1]: Started docker.service - Docker Application Container Engine. Aug 12 23:52:09.357498 containerd[1436]: time="2025-08-12T23:52:09.357375733Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.11\"" Aug 12 23:52:10.174882 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount988188576.mount: Deactivated successfully. 
Aug 12 23:52:11.140168 containerd[1436]: time="2025-08-12T23:52:11.140098726Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 12 23:52:11.141228 containerd[1436]: time="2025-08-12T23:52:11.141181830Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.11: active requests=0, bytes read=25651815" Aug 12 23:52:11.142017 containerd[1436]: time="2025-08-12T23:52:11.141983229Z" level=info msg="ImageCreate event name:\"sha256:00a68b619a4bfa14c989a2181a7aa0726a5cb1272a7f65394e6a594ad6eade27\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 12 23:52:11.145303 containerd[1436]: time="2025-08-12T23:52:11.145251736Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:a3d1c4440817725a1b503a7ccce94f3dce2b208ebf257b405dc2d97817df3dde\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 12 23:52:11.146809 containerd[1436]: time="2025-08-12T23:52:11.146616063Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.11\" with image id \"sha256:00a68b619a4bfa14c989a2181a7aa0726a5cb1272a7f65394e6a594ad6eade27\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.11\", repo digest \"registry.k8s.io/kube-apiserver@sha256:a3d1c4440817725a1b503a7ccce94f3dce2b208ebf257b405dc2d97817df3dde\", size \"25648613\" in 1.789196939s" Aug 12 23:52:11.146809 containerd[1436]: time="2025-08-12T23:52:11.146665013Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.11\" returns image reference \"sha256:00a68b619a4bfa14c989a2181a7aa0726a5cb1272a7f65394e6a594ad6eade27\"" Aug 12 23:52:11.149984 containerd[1436]: time="2025-08-12T23:52:11.149953795Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.11\"" Aug 12 23:52:12.256828 containerd[1436]: time="2025-08-12T23:52:12.256767392Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.11\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 12 23:52:12.257347 containerd[1436]: time="2025-08-12T23:52:12.257306247Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.11: active requests=0, bytes read=22460285" Aug 12 23:52:12.258305 containerd[1436]: time="2025-08-12T23:52:12.258274300Z" level=info msg="ImageCreate event name:\"sha256:5c5dc52b837451e0fe6108fdfb9cfa431191ce227ce71d103dec8a8c655c4e71\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 12 23:52:12.264205 containerd[1436]: time="2025-08-12T23:52:12.264139163Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:0f19de157f3d251f5ddeb6e9d026895bc55cb02592874b326fa345c57e5e2848\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 12 23:52:12.265419 containerd[1436]: time="2025-08-12T23:52:12.265366206Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.11\" with image id \"sha256:5c5dc52b837451e0fe6108fdfb9cfa431191ce227ce71d103dec8a8c655c4e71\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.11\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:0f19de157f3d251f5ddeb6e9d026895bc55cb02592874b326fa345c57e5e2848\", size \"23996073\" in 1.115372419s" Aug 12 23:52:12.265485 containerd[1436]: time="2025-08-12T23:52:12.265421955Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.11\" returns image reference \"sha256:5c5dc52b837451e0fe6108fdfb9cfa431191ce227ce71d103dec8a8c655c4e71\"" Aug 12 23:52:12.266002 containerd[1436]: time="2025-08-12T23:52:12.265973128Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.11\"" Aug 12 23:52:13.417870 containerd[1436]: time="2025-08-12T23:52:13.417645876Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 12 23:52:13.418781 containerd[1436]: 
time="2025-08-12T23:52:13.418751709Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.11: active requests=0, bytes read=17125091" Aug 12 23:52:13.419465 containerd[1436]: time="2025-08-12T23:52:13.419427702Z" level=info msg="ImageCreate event name:\"sha256:89be0efdc4ab1793b9b1b05e836e33dc50f5b2911b57609b315b58608b2d3746\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 12 23:52:13.422821 containerd[1436]: time="2025-08-12T23:52:13.422749478Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:1a9b59b3bfa6c1f1911f6f865a795620c461d079e413061bb71981cadd67f39d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 12 23:52:13.424693 containerd[1436]: time="2025-08-12T23:52:13.424655361Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.11\" with image id \"sha256:89be0efdc4ab1793b9b1b05e836e33dc50f5b2911b57609b315b58608b2d3746\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.11\", repo digest \"registry.k8s.io/kube-scheduler@sha256:1a9b59b3bfa6c1f1911f6f865a795620c461d079e413061bb71981cadd67f39d\", size \"18660897\" in 1.158641881s" Aug 12 23:52:13.424921 containerd[1436]: time="2025-08-12T23:52:13.424814251Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.11\" returns image reference \"sha256:89be0efdc4ab1793b9b1b05e836e33dc50f5b2911b57609b315b58608b2d3746\"" Aug 12 23:52:13.425371 containerd[1436]: time="2025-08-12T23:52:13.425346631Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.11\"" Aug 12 23:52:13.455116 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Aug 12 23:52:13.469019 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 12 23:52:13.572937 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Aug 12 23:52:13.578282 (kubelet)[1850]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Aug 12 23:52:13.623965 kubelet[1850]: E0812 23:52:13.623908 1850 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Aug 12 23:52:13.627353 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Aug 12 23:52:13.627553 systemd[1]: kubelet.service: Failed with result 'exit-code'. Aug 12 23:52:14.456267 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3077239464.mount: Deactivated successfully. Aug 12 23:52:14.853189 containerd[1436]: time="2025-08-12T23:52:14.853074695Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 12 23:52:14.854728 containerd[1436]: time="2025-08-12T23:52:14.854682563Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.11: active requests=0, bytes read=26915995" Aug 12 23:52:14.855815 containerd[1436]: time="2025-08-12T23:52:14.855754968Z" level=info msg="ImageCreate event name:\"sha256:7d1e7db6660181423f98acbe3a495b3fe5cec9b85cdef245540cc2cb3b180ab0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 12 23:52:14.857876 containerd[1436]: time="2025-08-12T23:52:14.857843348Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:a31da847792c5e7e92e91b78da1ad21d693e4b2b48d0e9f4610c8764dc2a5d79\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 12 23:52:14.858565 containerd[1436]: time="2025-08-12T23:52:14.858497989Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.11\" with image id 
\"sha256:7d1e7db6660181423f98acbe3a495b3fe5cec9b85cdef245540cc2cb3b180ab0\", repo tag \"registry.k8s.io/kube-proxy:v1.31.11\", repo digest \"registry.k8s.io/kube-proxy@sha256:a31da847792c5e7e92e91b78da1ad21d693e4b2b48d0e9f4610c8764dc2a5d79\", size \"26915012\" in 1.433117045s" Aug 12 23:52:14.858565 containerd[1436]: time="2025-08-12T23:52:14.858534742Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.11\" returns image reference \"sha256:7d1e7db6660181423f98acbe3a495b3fe5cec9b85cdef245540cc2cb3b180ab0\"" Aug 12 23:52:14.859220 containerd[1436]: time="2025-08-12T23:52:14.859048249Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Aug 12 23:52:15.481353 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1350643949.mount: Deactivated successfully. Aug 12 23:52:16.114855 containerd[1436]: time="2025-08-12T23:52:16.114797751Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 12 23:52:16.116367 containerd[1436]: time="2025-08-12T23:52:16.116323490Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=16951624" Aug 12 23:52:16.117453 containerd[1436]: time="2025-08-12T23:52:16.117413944Z" level=info msg="ImageCreate event name:\"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 12 23:52:16.121376 containerd[1436]: time="2025-08-12T23:52:16.121330836Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 12 23:52:16.123405 containerd[1436]: time="2025-08-12T23:52:16.123361249Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id 
\"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"16948420\" in 1.264283965s" Aug 12 23:52:16.123454 containerd[1436]: time="2025-08-12T23:52:16.123413361Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\"" Aug 12 23:52:16.123895 containerd[1436]: time="2025-08-12T23:52:16.123873522Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Aug 12 23:52:16.601287 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2638890157.mount: Deactivated successfully. Aug 12 23:52:16.617823 containerd[1436]: time="2025-08-12T23:52:16.617759726Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 12 23:52:16.619455 containerd[1436]: time="2025-08-12T23:52:16.619414843Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268705" Aug 12 23:52:16.620429 containerd[1436]: time="2025-08-12T23:52:16.620362242Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 12 23:52:16.625398 containerd[1436]: time="2025-08-12T23:52:16.623715710Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 12 23:52:16.625398 containerd[1436]: time="2025-08-12T23:52:16.624931742Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo 
digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 501.028386ms" Aug 12 23:52:16.625398 containerd[1436]: time="2025-08-12T23:52:16.624961577Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Aug 12 23:52:16.626001 containerd[1436]: time="2025-08-12T23:52:16.625968285Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" Aug 12 23:52:17.221142 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1157376696.mount: Deactivated successfully. Aug 12 23:52:19.069215 containerd[1436]: time="2025-08-12T23:52:19.069151869Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 12 23:52:19.070131 containerd[1436]: time="2025-08-12T23:52:19.070093403Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=66406467" Aug 12 23:52:19.071163 containerd[1436]: time="2025-08-12T23:52:19.071118244Z" level=info msg="ImageCreate event name:\"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 12 23:52:19.075081 containerd[1436]: time="2025-08-12T23:52:19.075038796Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 12 23:52:19.076388 containerd[1436]: time="2025-08-12T23:52:19.076347833Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"66535646\" in 2.450345593s"
Aug 12 23:52:19.076445 containerd[1436]: time="2025-08-12T23:52:19.076389186Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\"" Aug 12 23:52:23.706386 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Aug 12 23:52:23.714027 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 12 23:52:23.884465 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Aug 12 23:52:23.889427 (kubelet)[2008]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Aug 12 23:52:23.928516 kubelet[2008]: E0812 23:52:23.928454 2008 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Aug 12 23:52:23.931335 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Aug 12 23:52:23.931620 systemd[1]: kubelet.service: Failed with result 'exit-code'. Aug 12 23:52:25.222551 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Aug 12 23:52:25.234287 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 12 23:52:25.254753 systemd[1]: Reloading requested from client PID 2024 ('systemctl') (unit session-7.scope)... Aug 12 23:52:25.254770 systemd[1]: Reloading... Aug 12 23:52:25.322883 zram_generator::config[2065]: No configuration found. Aug 12 23:52:25.451181 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Aug 12 23:52:25.506663 systemd[1]: Reloading finished in 251 ms. Aug 12 23:52:25.545598 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Aug 12 23:52:25.545667 systemd[1]: kubelet.service: Failed with result 'signal'. Aug 12 23:52:25.546846 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Aug 12 23:52:25.550023 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 12 23:52:25.664148 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Aug 12 23:52:25.669154 (kubelet)[2109]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Aug 12 23:52:25.719970 kubelet[2109]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Aug 12 23:52:25.720472 kubelet[2109]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Aug 12 23:52:25.720472 kubelet[2109]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Aug 12 23:52:25.720641 kubelet[2109]: I0812 23:52:25.720455 2109 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Aug 12 23:52:26.478543 kubelet[2109]: I0812 23:52:26.478478 2109 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Aug 12 23:52:26.478543 kubelet[2109]: I0812 23:52:26.478512 2109 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Aug 12 23:52:26.478914 kubelet[2109]: I0812 23:52:26.478770 2109 server.go:934] "Client rotation is on, will bootstrap in background" Aug 12 23:52:26.549898 kubelet[2109]: E0812 23:52:26.549852 2109 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.6:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.6:6443: connect: connection refused" logger="UnhandledError" Aug 12 23:52:26.550686 kubelet[2109]: I0812 23:52:26.550465 2109 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Aug 12 23:52:26.565158 kubelet[2109]: E0812 23:52:26.565004 2109 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Aug 12 23:52:26.565158 kubelet[2109]: I0812 23:52:26.565065 2109 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Aug 12 23:52:26.570078 kubelet[2109]: I0812 23:52:26.570055 2109 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Aug 12 23:52:26.571457 kubelet[2109]: I0812 23:52:26.571419 2109 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Aug 12 23:52:26.571633 kubelet[2109]: I0812 23:52:26.571600 2109 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Aug 12 23:52:26.572038 kubelet[2109]: I0812 23:52:26.571629 2109 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Aug 12 23:52:26.572149 kubelet[2109]: I0812 23:52:26.572102 2109 topology_manager.go:138] "Creating topology manager with none policy" Aug 12 23:52:26.572149 kubelet[2109]: I0812 23:52:26.572114 2109 container_manager_linux.go:300] "Creating device plugin manager" Aug 12 23:52:26.572541 kubelet[2109]: I0812 23:52:26.572508 2109 state_mem.go:36] "Initialized new in-memory state store" Aug 12 23:52:26.575012 kubelet[2109]: I0812 23:52:26.574972 2109 kubelet.go:408] "Attempting to sync node with API server" Aug 12 23:52:26.575012 kubelet[2109]: I0812 23:52:26.575011 2109 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Aug 12 23:52:26.575118 kubelet[2109]: I0812 23:52:26.575042 2109 kubelet.go:314] "Adding apiserver pod source" Aug 12 23:52:26.575257 kubelet[2109]: I0812 23:52:26.575227 2109 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Aug 12 23:52:26.577783 kubelet[2109]: W0812 23:52:26.577483 2109 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.6:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.6:6443: connect: connection refused Aug 12 23:52:26.577783 kubelet[2109]: W0812 23:52:26.577494 2109 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.6:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.6:6443: connect: connection refused Aug 12 23:52:26.577783 kubelet[2109]: E0812 23:52:26.577725 2109 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.6:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.6:6443: connect: connection refused" logger="UnhandledError" Aug 12 23:52:26.577783 kubelet[2109]:
E0812 23:52:26.577727 2109 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.6:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.6:6443: connect: connection refused" logger="UnhandledError" Aug 12 23:52:26.579636 kubelet[2109]: I0812 23:52:26.579343 2109 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Aug 12 23:52:26.580407 kubelet[2109]: I0812 23:52:26.580355 2109 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Aug 12 23:52:26.580748 kubelet[2109]: W0812 23:52:26.580722 2109 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Aug 12 23:52:26.582411 kubelet[2109]: I0812 23:52:26.582380 2109 server.go:1274] "Started kubelet" Aug 12 23:52:26.583960 kubelet[2109]: I0812 23:52:26.582906 2109 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Aug 12 23:52:26.583960 kubelet[2109]: I0812 23:52:26.583100 2109 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Aug 12 23:52:26.583960 kubelet[2109]: I0812 23:52:26.583658 2109 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Aug 12 23:52:26.584597 kubelet[2109]: I0812 23:52:26.584516 2109 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Aug 12 23:52:26.584764 kubelet[2109]: I0812 23:52:26.584740 2109 server.go:449] "Adding debug handlers to kubelet server" Aug 12 23:52:26.586122 kubelet[2109]: I0812 23:52:26.585016 2109 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Aug 12 23:52:26.591066 kubelet[2109]: I0812 23:52:26.591041 2109 
volume_manager.go:289] "Starting Kubelet Volume Manager" Aug 12 23:52:26.591384 kubelet[2109]: I0812 23:52:26.591368 2109 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Aug 12 23:52:26.591510 kubelet[2109]: I0812 23:52:26.591499 2109 reconciler.go:26] "Reconciler: start to sync state" Aug 12 23:52:26.592421 kubelet[2109]: W0812 23:52:26.592369 2109 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.6:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.6:6443: connect: connection refused Aug 12 23:52:26.592574 kubelet[2109]: E0812 23:52:26.592543 2109 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.6:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.6:6443: connect: connection refused" logger="UnhandledError" Aug 12 23:52:26.593082 kubelet[2109]: E0812 23:52:26.593057 2109 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Aug 12 23:52:26.593248 kubelet[2109]: E0812 23:52:26.593223 2109 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.6:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.6:6443: connect: connection refused" interval="200ms" Aug 12 23:52:26.593804 kubelet[2109]: I0812 23:52:26.593763 2109 factory.go:221] Registration of the systemd container factory successfully Aug 12 23:52:26.593912 kubelet[2109]: I0812 23:52:26.593889 2109 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Aug 12 23:52:26.595754 kubelet[2109]: I0812 23:52:26.595725 2109 factory.go:221] Registration of the containerd container factory successfully
Aug 12 23:52:26.597918 kubelet[2109]: E0812 23:52:26.596387 2109 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.6:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.6:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.185b2a1453aab817 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-08-12 23:52:26.582349847 +0000 UTC m=+0.909890143,LastTimestamp:2025-08-12 23:52:26.582349847 +0000 UTC m=+0.909890143,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Aug 12 23:52:26.599495 kubelet[2109]: E0812 23:52:26.599458 2109 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Aug 12 23:52:26.609689 kubelet[2109]: I0812 23:52:26.609624 2109 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Aug 12 23:52:26.611128 kubelet[2109]: I0812 23:52:26.611108 2109 cpu_manager.go:214] "Starting CPU manager" policy="none" Aug 12 23:52:26.611284 kubelet[2109]: I0812 23:52:26.611239 2109 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Aug 12 23:52:26.611369 kubelet[2109]: I0812 23:52:26.611345 2109 state_mem.go:36] "Initialized new in-memory state store" Aug 12 23:52:26.611888 kubelet[2109]: I0812 23:52:26.611453 2109 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Aug 12 23:52:26.611888 kubelet[2109]: I0812 23:52:26.611481 2109 status_manager.go:217] "Starting to sync pod status with apiserver" Aug 12 23:52:26.611888 kubelet[2109]: I0812 23:52:26.611504 2109 kubelet.go:2321] "Starting kubelet main sync loop" Aug 12 23:52:26.611888 kubelet[2109]: E0812 23:52:26.611548 2109 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Aug 12 23:52:26.612730 kubelet[2109]: W0812 23:52:26.612674 2109 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.6:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.6:6443: connect: connection refused Aug 12 23:52:26.612881 kubelet[2109]: E0812 23:52:26.612856 2109 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.6:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.6:6443: connect: connection refused" logger="UnhandledError" Aug 12 23:52:26.694119 kubelet[2109]: E0812 23:52:26.694083 2109 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Aug 12 23:52:26.696677 kubelet[2109]: I0812 23:52:26.696574 2109 policy_none.go:49] "None policy: Start" Aug 12 23:52:26.697403 kubelet[2109]: I0812 23:52:26.697384 2109 memory_manager.go:170] "Starting memorymanager" policy="None" Aug 12 23:52:26.697499 kubelet[2109]: I0812 23:52:26.697412 2109 state_mem.go:35] "Initializing new in-memory state store" Aug 12 23:52:26.712023 kubelet[2109]: E0812 23:52:26.711953 2109 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Aug 12 23:52:26.720878 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Aug 12 23:52:26.733755 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Aug 12 23:52:26.738109 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Aug 12 23:52:26.754924 kubelet[2109]: I0812 23:52:26.754814 2109 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Aug 12 23:52:26.755230 kubelet[2109]: I0812 23:52:26.755170 2109 eviction_manager.go:189] "Eviction manager: starting control loop" Aug 12 23:52:26.755230 kubelet[2109]: I0812 23:52:26.755185 2109 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Aug 12 23:52:26.755464 kubelet[2109]: I0812 23:52:26.755426 2109 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Aug 12 23:52:26.756917 kubelet[2109]: E0812 23:52:26.756813 2109 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Aug 12 23:52:26.793938 kubelet[2109]: E0812 23:52:26.793886 2109 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.6:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.6:6443: connect: connection refused" interval="400ms" Aug 12 23:52:26.857120 kubelet[2109]: I0812 23:52:26.857087 2109 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Aug 12 23:52:26.857859 kubelet[2109]: E0812 23:52:26.857831 2109 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.6:6443/api/v1/nodes\": dial tcp 10.0.0.6:6443: connect: connection refused" node="localhost" Aug 12 23:52:26.921611 systemd[1]: Created slice kubepods-burstable-pod27e4a50e94f48ec00f6bd509cb48ed05.slice - libcontainer container kubepods-burstable-pod27e4a50e94f48ec00f6bd509cb48ed05.slice. 
Aug 12 23:52:26.935059 systemd[1]: Created slice kubepods-burstable-pod71ad98f8457ee6e5585e2d0f105457f7.slice - libcontainer container kubepods-burstable-pod71ad98f8457ee6e5585e2d0f105457f7.slice. Aug 12 23:52:26.938987 systemd[1]: Created slice kubepods-burstable-pod407c569889bb86d746b0274843003fd0.slice - libcontainer container kubepods-burstable-pod407c569889bb86d746b0274843003fd0.slice. Aug 12 23:52:26.993356 kubelet[2109]: I0812 23:52:26.993210 2109 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/71ad98f8457ee6e5585e2d0f105457f7-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"71ad98f8457ee6e5585e2d0f105457f7\") " pod="kube-system/kube-apiserver-localhost" Aug 12 23:52:26.993356 kubelet[2109]: I0812 23:52:26.993247 2109 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/71ad98f8457ee6e5585e2d0f105457f7-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"71ad98f8457ee6e5585e2d0f105457f7\") " pod="kube-system/kube-apiserver-localhost" Aug 12 23:52:26.993356 kubelet[2109]: I0812 23:52:26.993279 2109 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/407c569889bb86d746b0274843003fd0-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"407c569889bb86d746b0274843003fd0\") " pod="kube-system/kube-controller-manager-localhost" Aug 12 23:52:26.993356 kubelet[2109]: I0812 23:52:26.993300 2109 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/407c569889bb86d746b0274843003fd0-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"407c569889bb86d746b0274843003fd0\") " pod="kube-system/kube-controller-manager-localhost" Aug 12 23:52:26.993356 kubelet[2109]: 
I0812 23:52:26.993319 2109 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/407c569889bb86d746b0274843003fd0-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"407c569889bb86d746b0274843003fd0\") " pod="kube-system/kube-controller-manager-localhost" Aug 12 23:52:26.993577 kubelet[2109]: I0812 23:52:26.993334 2109 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/27e4a50e94f48ec00f6bd509cb48ed05-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"27e4a50e94f48ec00f6bd509cb48ed05\") " pod="kube-system/kube-scheduler-localhost" Aug 12 23:52:26.993577 kubelet[2109]: I0812 23:52:26.993350 2109 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/71ad98f8457ee6e5585e2d0f105457f7-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"71ad98f8457ee6e5585e2d0f105457f7\") " pod="kube-system/kube-apiserver-localhost" Aug 12 23:52:26.993577 kubelet[2109]: I0812 23:52:26.993367 2109 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/407c569889bb86d746b0274843003fd0-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"407c569889bb86d746b0274843003fd0\") " pod="kube-system/kube-controller-manager-localhost" Aug 12 23:52:26.993577 kubelet[2109]: I0812 23:52:26.993381 2109 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/407c569889bb86d746b0274843003fd0-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"407c569889bb86d746b0274843003fd0\") " pod="kube-system/kube-controller-manager-localhost" Aug 12 
23:52:27.059730 kubelet[2109]: I0812 23:52:27.059698 2109 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Aug 12 23:52:27.060076 kubelet[2109]: E0812 23:52:27.060031 2109 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.6:6443/api/v1/nodes\": dial tcp 10.0.0.6:6443: connect: connection refused" node="localhost" Aug 12 23:52:27.194696 kubelet[2109]: E0812 23:52:27.194646 2109 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.6:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.6:6443: connect: connection refused" interval="800ms" Aug 12 23:52:27.231954 kubelet[2109]: E0812 23:52:27.231911 2109 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 12 23:52:27.232553 containerd[1436]: time="2025-08-12T23:52:27.232510067Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:27e4a50e94f48ec00f6bd509cb48ed05,Namespace:kube-system,Attempt:0,}" Aug 12 23:52:27.237820 kubelet[2109]: E0812 23:52:27.237770 2109 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 12 23:52:27.238271 containerd[1436]: time="2025-08-12T23:52:27.238235898Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:71ad98f8457ee6e5585e2d0f105457f7,Namespace:kube-system,Attempt:0,}" Aug 12 23:52:27.241901 kubelet[2109]: E0812 23:52:27.241842 2109 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 12 23:52:27.242745 containerd[1436]: time="2025-08-12T23:52:27.242348843Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:407c569889bb86d746b0274843003fd0,Namespace:kube-system,Attempt:0,}" Aug 12 23:52:27.382847 kubelet[2109]: W0812 23:52:27.382551 2109 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.6:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.6:6443: connect: connection refused Aug 12 23:52:27.382847 kubelet[2109]: E0812 23:52:27.382619 2109 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.6:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.6:6443: connect: connection refused" logger="UnhandledError" Aug 12 23:52:27.461614 kubelet[2109]: I0812 23:52:27.461571 2109 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Aug 12 23:52:27.461931 kubelet[2109]: E0812 23:52:27.461908 2109 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.6:6443/api/v1/nodes\": dial tcp 10.0.0.6:6443: connect: connection refused" node="localhost" Aug 12 23:52:27.650168 kubelet[2109]: W0812 23:52:27.650021 2109 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.6:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.6:6443: connect: connection refused Aug 12 23:52:27.650168 kubelet[2109]: E0812 23:52:27.650092 2109 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.6:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.6:6443: connect: connection refused" logger="UnhandledError" Aug 12 23:52:27.740967 systemd[1]: 
var-lib-containerd-tmpmounts-containerd\x2dmount2965719164.mount: Deactivated successfully. Aug 12 23:52:27.749132 containerd[1436]: time="2025-08-12T23:52:27.749082107Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Aug 12 23:52:27.751323 containerd[1436]: time="2025-08-12T23:52:27.751279202Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269175" Aug 12 23:52:27.756427 containerd[1436]: time="2025-08-12T23:52:27.756281080Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Aug 12 23:52:27.759031 containerd[1436]: time="2025-08-12T23:52:27.758839092Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Aug 12 23:52:27.760652 containerd[1436]: time="2025-08-12T23:52:27.760305916Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Aug 12 23:52:27.762013 containerd[1436]: time="2025-08-12T23:52:27.761984354Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Aug 12 23:52:27.763433 containerd[1436]: time="2025-08-12T23:52:27.763385025Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Aug 12 23:52:27.767609 containerd[1436]: time="2025-08-12T23:52:27.767560003Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" 
value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Aug 12 23:52:27.770676 containerd[1436]: time="2025-08-12T23:52:27.770612716Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 538.00802ms" Aug 12 23:52:27.771987 containerd[1436]: time="2025-08-12T23:52:27.771934637Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 533.618868ms" Aug 12 23:52:27.772488 containerd[1436]: time="2025-08-12T23:52:27.772440336Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 530.011462ms" Aug 12 23:52:27.901889 containerd[1436]: time="2025-08-12T23:52:27.901372501Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 12 23:52:27.901889 containerd[1436]: time="2025-08-12T23:52:27.901602713Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 12 23:52:27.901889 containerd[1436]: time="2025-08-12T23:52:27.901663266Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 12 23:52:27.902524 containerd[1436]: time="2025-08-12T23:52:27.902037221Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 12 23:52:27.902568 containerd[1436]: time="2025-08-12T23:52:27.902461930Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 12 23:52:27.902674 containerd[1436]: time="2025-08-12T23:52:27.902583995Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 12 23:52:27.902674 containerd[1436]: time="2025-08-12T23:52:27.902629470Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 12 23:52:27.903667 containerd[1436]: time="2025-08-12T23:52:27.902751215Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 12 23:52:27.904954 containerd[1436]: time="2025-08-12T23:52:27.904267233Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 12 23:52:27.904954 containerd[1436]: time="2025-08-12T23:52:27.904343424Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 12 23:52:27.904954 containerd[1436]: time="2025-08-12T23:52:27.904355742Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 12 23:52:27.904954 containerd[1436]: time="2025-08-12T23:52:27.904491246Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 12 23:52:27.928055 systemd[1]: Started cri-containerd-405ff5d1c5d00184664ba38e64194c491cdf52eff7180fa340b140a8592a5e66.scope - libcontainer container 405ff5d1c5d00184664ba38e64194c491cdf52eff7180fa340b140a8592a5e66. Aug 12 23:52:27.929264 systemd[1]: Started cri-containerd-8aa004b79b85b6bd29f26164a0b6a196f4299c5bc93ecbb45d4568263b70b0f3.scope - libcontainer container 8aa004b79b85b6bd29f26164a0b6a196f4299c5bc93ecbb45d4568263b70b0f3. Aug 12 23:52:27.932991 systemd[1]: Started cri-containerd-29eba051c2181453dcfa9e34eb569908576d5ac34f7be315ecd2edacec4da393.scope - libcontainer container 29eba051c2181453dcfa9e34eb569908576d5ac34f7be315ecd2edacec4da393. Aug 12 23:52:27.965342 containerd[1436]: time="2025-08-12T23:52:27.965295929Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:407c569889bb86d746b0274843003fd0,Namespace:kube-system,Attempt:0,} returns sandbox id \"8aa004b79b85b6bd29f26164a0b6a196f4299c5bc93ecbb45d4568263b70b0f3\"" Aug 12 23:52:27.967070 kubelet[2109]: E0812 23:52:27.967028 2109 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 12 23:52:27.971373 containerd[1436]: time="2025-08-12T23:52:27.970653884Z" level=info msg="CreateContainer within sandbox \"8aa004b79b85b6bd29f26164a0b6a196f4299c5bc93ecbb45d4568263b70b0f3\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Aug 12 23:52:27.971373 containerd[1436]: time="2025-08-12T23:52:27.970997483Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:71ad98f8457ee6e5585e2d0f105457f7,Namespace:kube-system,Attempt:0,} returns sandbox id \"405ff5d1c5d00184664ba38e64194c491cdf52eff7180fa340b140a8592a5e66\"" Aug 12 23:52:27.971577 kubelet[2109]: E0812 23:52:27.971469 2109 dns.go:153] "Nameserver limits exceeded" 
err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 12 23:52:27.974024 containerd[1436]: time="2025-08-12T23:52:27.973972445Z" level=info msg="CreateContainer within sandbox \"405ff5d1c5d00184664ba38e64194c491cdf52eff7180fa340b140a8592a5e66\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Aug 12 23:52:27.976822 containerd[1436]: time="2025-08-12T23:52:27.976631765Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:27e4a50e94f48ec00f6bd509cb48ed05,Namespace:kube-system,Attempt:0,} returns sandbox id \"29eba051c2181453dcfa9e34eb569908576d5ac34f7be315ecd2edacec4da393\"" Aug 12 23:52:27.978377 kubelet[2109]: E0812 23:52:27.978323 2109 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 12 23:52:27.980918 containerd[1436]: time="2025-08-12T23:52:27.980766827Z" level=info msg="CreateContainer within sandbox \"29eba051c2181453dcfa9e34eb569908576d5ac34f7be315ecd2edacec4da393\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Aug 12 23:52:27.995837 kubelet[2109]: E0812 23:52:27.995772 2109 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.6:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.6:6443: connect: connection refused" interval="1.6s" Aug 12 23:52:28.001047 containerd[1436]: time="2025-08-12T23:52:28.000612639Z" level=info msg="CreateContainer within sandbox \"8aa004b79b85b6bd29f26164a0b6a196f4299c5bc93ecbb45d4568263b70b0f3\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"46731126dadb826649e13f66fd63a92b80fcb5fc5281b1821efcc65a1acc739b\"" Aug 12 23:52:28.001326 containerd[1436]: time="2025-08-12T23:52:28.001285598Z" level=info msg="StartContainer for 
\"46731126dadb826649e13f66fd63a92b80fcb5fc5281b1821efcc65a1acc739b\"" Aug 12 23:52:28.008654 containerd[1436]: time="2025-08-12T23:52:28.008519713Z" level=info msg="CreateContainer within sandbox \"29eba051c2181453dcfa9e34eb569908576d5ac34f7be315ecd2edacec4da393\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"a5f68a2093eaba9fbb75a520bb2e20c5ff9f21a11c442f0f1a1e3126a2e89bd9\"" Aug 12 23:52:28.009443 containerd[1436]: time="2025-08-12T23:52:28.009380693Z" level=info msg="CreateContainer within sandbox \"405ff5d1c5d00184664ba38e64194c491cdf52eff7180fa340b140a8592a5e66\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"ba654c53db059e73a1c6517f4f7a85719d608194705fbaef60ddddaea62b2096\"" Aug 12 23:52:28.010051 containerd[1436]: time="2025-08-12T23:52:28.010022458Z" level=info msg="StartContainer for \"ba654c53db059e73a1c6517f4f7a85719d608194705fbaef60ddddaea62b2096\"" Aug 12 23:52:28.010457 containerd[1436]: time="2025-08-12T23:52:28.010034456Z" level=info msg="StartContainer for \"a5f68a2093eaba9fbb75a520bb2e20c5ff9f21a11c442f0f1a1e3126a2e89bd9\"" Aug 12 23:52:28.029970 systemd[1]: Started cri-containerd-46731126dadb826649e13f66fd63a92b80fcb5fc5281b1821efcc65a1acc739b.scope - libcontainer container 46731126dadb826649e13f66fd63a92b80fcb5fc5281b1821efcc65a1acc739b. Aug 12 23:52:28.042051 systemd[1]: Started cri-containerd-a5f68a2093eaba9fbb75a520bb2e20c5ff9f21a11c442f0f1a1e3126a2e89bd9.scope - libcontainer container a5f68a2093eaba9fbb75a520bb2e20c5ff9f21a11c442f0f1a1e3126a2e89bd9. Aug 12 23:52:28.045605 systemd[1]: Started cri-containerd-ba654c53db059e73a1c6517f4f7a85719d608194705fbaef60ddddaea62b2096.scope - libcontainer container ba654c53db059e73a1c6517f4f7a85719d608194705fbaef60ddddaea62b2096. 
Aug 12 23:52:28.069422 kubelet[2109]: W0812 23:52:28.069232 2109 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.6:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.6:6443: connect: connection refused Aug 12 23:52:28.069422 kubelet[2109]: E0812 23:52:28.069308 2109 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.6:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.6:6443: connect: connection refused" logger="UnhandledError" Aug 12 23:52:28.081628 containerd[1436]: time="2025-08-12T23:52:28.081267793Z" level=info msg="StartContainer for \"46731126dadb826649e13f66fd63a92b80fcb5fc5281b1821efcc65a1acc739b\" returns successfully" Aug 12 23:52:28.081850 kubelet[2109]: W0812 23:52:28.081752 2109 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.6:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.6:6443: connect: connection refused Aug 12 23:52:28.081893 kubelet[2109]: E0812 23:52:28.081864 2109 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.6:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.6:6443: connect: connection refused" logger="UnhandledError" Aug 12 23:52:28.094161 containerd[1436]: time="2025-08-12T23:52:28.093994709Z" level=info msg="StartContainer for \"ba654c53db059e73a1c6517f4f7a85719d608194705fbaef60ddddaea62b2096\" returns successfully" Aug 12 23:52:28.113026 containerd[1436]: time="2025-08-12T23:52:28.112902865Z" level=info msg="StartContainer for \"a5f68a2093eaba9fbb75a520bb2e20c5ff9f21a11c442f0f1a1e3126a2e89bd9\" returns successfully" Aug 12 
23:52:28.264224 kubelet[2109]: I0812 23:52:28.263887 2109 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Aug 12 23:52:28.266145 kubelet[2109]: E0812 23:52:28.266102 2109 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.6:6443/api/v1/nodes\": dial tcp 10.0.0.6:6443: connect: connection refused" node="localhost" Aug 12 23:52:28.621737 kubelet[2109]: E0812 23:52:28.621398 2109 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 12 23:52:28.623715 kubelet[2109]: E0812 23:52:28.623689 2109 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 12 23:52:28.625661 kubelet[2109]: E0812 23:52:28.625636 2109 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 12 23:52:29.626910 kubelet[2109]: E0812 23:52:29.626877 2109 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 12 23:52:29.841159 kubelet[2109]: E0812 23:52:29.841098 2109 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Aug 12 23:52:29.867705 kubelet[2109]: I0812 23:52:29.867483 2109 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Aug 12 23:52:29.978740 kubelet[2109]: I0812 23:52:29.978696 2109 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Aug 12 23:52:29.978740 kubelet[2109]: E0812 23:52:29.978739 2109 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"localhost\": node 
\"localhost\" not found" Aug 12 23:52:29.986082 kubelet[2109]: E0812 23:52:29.986046 2109 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Aug 12 23:52:30.086937 kubelet[2109]: E0812 23:52:30.086892 2109 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Aug 12 23:52:30.579211 kubelet[2109]: I0812 23:52:30.579166 2109 apiserver.go:52] "Watching apiserver" Aug 12 23:52:30.591954 kubelet[2109]: I0812 23:52:30.591916 2109 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Aug 12 23:52:32.663750 systemd[1]: Reloading requested from client PID 2385 ('systemctl') (unit session-7.scope)... Aug 12 23:52:32.663767 systemd[1]: Reloading... Aug 12 23:52:32.730824 zram_generator::config[2425]: No configuration found. Aug 12 23:52:32.824964 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Aug 12 23:52:32.837206 kubelet[2109]: E0812 23:52:32.837173 2109 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 12 23:52:32.894100 systemd[1]: Reloading finished in 229 ms. Aug 12 23:52:32.923466 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Aug 12 23:52:32.931923 systemd[1]: kubelet.service: Deactivated successfully. Aug 12 23:52:32.932147 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Aug 12 23:52:32.932202 systemd[1]: kubelet.service: Consumed 1.328s CPU time, 132.6M memory peak, 0B memory swap peak. Aug 12 23:52:32.944125 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 12 23:52:33.048804 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Aug 12 23:52:33.054277 (kubelet)[2466]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Aug 12 23:52:33.111810 kubelet[2466]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Aug 12 23:52:33.111810 kubelet[2466]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Aug 12 23:52:33.111810 kubelet[2466]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Aug 12 23:52:33.112483 kubelet[2466]: I0812 23:52:33.111920 2466 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Aug 12 23:52:33.127327 kubelet[2466]: I0812 23:52:33.126394 2466 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Aug 12 23:52:33.127327 kubelet[2466]: I0812 23:52:33.126425 2466 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Aug 12 23:52:33.127327 kubelet[2466]: I0812 23:52:33.126675 2466 server.go:934] "Client rotation is on, will bootstrap in background" Aug 12 23:52:33.129175 kubelet[2466]: I0812 23:52:33.129131 2466 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
Aug 12 23:52:33.132147 kubelet[2466]: I0812 23:52:33.131726 2466 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Aug 12 23:52:33.141607 kubelet[2466]: E0812 23:52:33.140966 2466 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Aug 12 23:52:33.141607 kubelet[2466]: I0812 23:52:33.141182 2466 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Aug 12 23:52:33.144302 kubelet[2466]: I0812 23:52:33.144268 2466 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Aug 12 23:52:33.145105 kubelet[2466]: I0812 23:52:33.144556 2466 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Aug 12 23:52:33.145105 kubelet[2466]: I0812 23:52:33.144671 2466 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Aug 12 23:52:33.145105 kubelet[2466]: I0812 23:52:33.144697 2466 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" 
nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Aug 12 23:52:33.145105 kubelet[2466]: I0812 23:52:33.144910 2466 topology_manager.go:138] "Creating topology manager with none policy" Aug 12 23:52:33.145327 kubelet[2466]: I0812 23:52:33.144920 2466 container_manager_linux.go:300] "Creating device plugin manager" Aug 12 23:52:33.145327 kubelet[2466]: I0812 23:52:33.144956 2466 state_mem.go:36] "Initialized new in-memory state store" Aug 12 23:52:33.145327 kubelet[2466]: I0812 23:52:33.145070 2466 kubelet.go:408] "Attempting 
to sync node with API server" Aug 12 23:52:33.145327 kubelet[2466]: I0812 23:52:33.145116 2466 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Aug 12 23:52:33.145327 kubelet[2466]: I0812 23:52:33.145140 2466 kubelet.go:314] "Adding apiserver pod source" Aug 12 23:52:33.145327 kubelet[2466]: I0812 23:52:33.145154 2466 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Aug 12 23:52:33.147008 kubelet[2466]: I0812 23:52:33.146948 2466 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Aug 12 23:52:33.147762 kubelet[2466]: I0812 23:52:33.147612 2466 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Aug 12 23:52:33.149401 kubelet[2466]: I0812 23:52:33.148217 2466 server.go:1274] "Started kubelet" Aug 12 23:52:33.151833 kubelet[2466]: I0812 23:52:33.150186 2466 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Aug 12 23:52:33.151833 kubelet[2466]: I0812 23:52:33.150479 2466 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Aug 12 23:52:33.151833 kubelet[2466]: I0812 23:52:33.150536 2466 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Aug 12 23:52:33.151833 kubelet[2466]: I0812 23:52:33.151461 2466 server.go:449] "Adding debug handlers to kubelet server" Aug 12 23:52:33.152692 kubelet[2466]: I0812 23:52:33.152660 2466 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Aug 12 23:52:33.161137 kubelet[2466]: I0812 23:52:33.161095 2466 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Aug 12 23:52:33.162174 kubelet[2466]: I0812 23:52:33.162147 2466 volume_manager.go:289] "Starting Kubelet Volume Manager" Aug 12 23:52:33.164033 kubelet[2466]: E0812 23:52:33.163988 2466 
kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Aug 12 23:52:33.165001 kubelet[2466]: I0812 23:52:33.164970 2466 factory.go:221] Registration of the systemd container factory successfully Aug 12 23:52:33.165250 kubelet[2466]: I0812 23:52:33.165227 2466 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Aug 12 23:52:33.165536 kubelet[2466]: I0812 23:52:33.165349 2466 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Aug 12 23:52:33.165536 kubelet[2466]: I0812 23:52:33.165487 2466 reconciler.go:26] "Reconciler: start to sync state" Aug 12 23:52:33.172620 kubelet[2466]: E0812 23:52:33.172578 2466 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Aug 12 23:52:33.176753 kubelet[2466]: I0812 23:52:33.176612 2466 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Aug 12 23:52:33.178521 kubelet[2466]: I0812 23:52:33.178482 2466 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Aug 12 23:52:33.178521 kubelet[2466]: I0812 23:52:33.178525 2466 status_manager.go:217] "Starting to sync pod status with apiserver" Aug 12 23:52:33.178634 kubelet[2466]: I0812 23:52:33.178552 2466 kubelet.go:2321] "Starting kubelet main sync loop" Aug 12 23:52:33.179106 kubelet[2466]: E0812 23:52:33.179055 2466 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Aug 12 23:52:33.184025 kubelet[2466]: I0812 23:52:33.183986 2466 factory.go:221] Registration of the containerd container factory successfully Aug 12 23:52:33.226501 kubelet[2466]: I0812 23:52:33.226455 2466 cpu_manager.go:214] "Starting CPU manager" policy="none" Aug 12 23:52:33.226501 kubelet[2466]: I0812 23:52:33.226484 2466 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Aug 12 23:52:33.226501 kubelet[2466]: I0812 23:52:33.226505 2466 state_mem.go:36] "Initialized new in-memory state store" Aug 12 23:52:33.226731 kubelet[2466]: I0812 23:52:33.226696 2466 state_mem.go:88] "Updated default CPUSet" cpuSet="" Aug 12 23:52:33.226731 kubelet[2466]: I0812 23:52:33.226716 2466 state_mem.go:96] "Updated CPUSet assignments" assignments={} Aug 12 23:52:33.226830 kubelet[2466]: I0812 23:52:33.226740 2466 policy_none.go:49] "None policy: Start" Aug 12 23:52:33.227417 kubelet[2466]: I0812 23:52:33.227386 2466 memory_manager.go:170] "Starting memorymanager" policy="None" Aug 12 23:52:33.227417 kubelet[2466]: I0812 23:52:33.227415 2466 state_mem.go:35] "Initializing new in-memory state store" Aug 12 23:52:33.227597 kubelet[2466]: I0812 23:52:33.227569 2466 state_mem.go:75] "Updated machine memory state" Aug 12 23:52:33.235243 kubelet[2466]: I0812 23:52:33.235204 2466 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Aug 12 23:52:33.235448 kubelet[2466]: I0812 23:52:33.235402 2466 eviction_manager.go:189] 
"Eviction manager: starting control loop" Aug 12 23:52:33.235448 kubelet[2466]: I0812 23:52:33.235425 2466 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Aug 12 23:52:33.236227 kubelet[2466]: I0812 23:52:33.236021 2466 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Aug 12 23:52:33.290751 kubelet[2466]: E0812 23:52:33.290706 2466 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Aug 12 23:52:33.339610 kubelet[2466]: I0812 23:52:33.339572 2466 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Aug 12 23:52:33.349231 kubelet[2466]: I0812 23:52:33.349082 2466 kubelet_node_status.go:111] "Node was previously registered" node="localhost" Aug 12 23:52:33.349231 kubelet[2466]: I0812 23:52:33.349169 2466 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Aug 12 23:52:33.467460 kubelet[2466]: I0812 23:52:33.467413 2466 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/71ad98f8457ee6e5585e2d0f105457f7-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"71ad98f8457ee6e5585e2d0f105457f7\") " pod="kube-system/kube-apiserver-localhost" Aug 12 23:52:33.467625 kubelet[2466]: I0812 23:52:33.467495 2466 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/407c569889bb86d746b0274843003fd0-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"407c569889bb86d746b0274843003fd0\") " pod="kube-system/kube-controller-manager-localhost" Aug 12 23:52:33.467625 kubelet[2466]: I0812 23:52:33.467523 2466 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: 
\"kubernetes.io/host-path/407c569889bb86d746b0274843003fd0-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"407c569889bb86d746b0274843003fd0\") " pod="kube-system/kube-controller-manager-localhost" Aug 12 23:52:33.467625 kubelet[2466]: I0812 23:52:33.467604 2466 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/407c569889bb86d746b0274843003fd0-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"407c569889bb86d746b0274843003fd0\") " pod="kube-system/kube-controller-manager-localhost" Aug 12 23:52:33.467625 kubelet[2466]: I0812 23:52:33.467624 2466 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/407c569889bb86d746b0274843003fd0-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"407c569889bb86d746b0274843003fd0\") " pod="kube-system/kube-controller-manager-localhost" Aug 12 23:52:33.467723 kubelet[2466]: I0812 23:52:33.467643 2466 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/27e4a50e94f48ec00f6bd509cb48ed05-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"27e4a50e94f48ec00f6bd509cb48ed05\") " pod="kube-system/kube-scheduler-localhost" Aug 12 23:52:33.467723 kubelet[2466]: I0812 23:52:33.467688 2466 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/71ad98f8457ee6e5585e2d0f105457f7-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"71ad98f8457ee6e5585e2d0f105457f7\") " pod="kube-system/kube-apiserver-localhost" Aug 12 23:52:33.467723 kubelet[2466]: I0812 23:52:33.467705 2466 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: 
\"kubernetes.io/host-path/407c569889bb86d746b0274843003fd0-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"407c569889bb86d746b0274843003fd0\") " pod="kube-system/kube-controller-manager-localhost" Aug 12 23:52:33.467816 kubelet[2466]: I0812 23:52:33.467747 2466 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/71ad98f8457ee6e5585e2d0f105457f7-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"71ad98f8457ee6e5585e2d0f105457f7\") " pod="kube-system/kube-apiserver-localhost" Aug 12 23:52:33.591469 kubelet[2466]: E0812 23:52:33.591351 2466 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 12 23:52:33.591469 kubelet[2466]: E0812 23:52:33.591423 2466 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 12 23:52:33.591626 kubelet[2466]: E0812 23:52:33.591578 2466 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 12 23:52:33.662047 sudo[2503]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Aug 12 23:52:33.662343 sudo[2503]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Aug 12 23:52:34.094774 sudo[2503]: pam_unix(sudo:session): session closed for user root Aug 12 23:52:34.147070 kubelet[2466]: I0812 23:52:34.146774 2466 apiserver.go:52] "Watching apiserver" Aug 12 23:52:34.166532 kubelet[2466]: I0812 23:52:34.166475 2466 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Aug 12 23:52:34.201117 kubelet[2466]: E0812 23:52:34.201010 2466 
dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 12 23:52:34.201117 kubelet[2466]: E0812 23:52:34.201010 2466 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 12 23:52:34.209201 kubelet[2466]: E0812 23:52:34.209167 2466 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Aug 12 23:52:34.209590 kubelet[2466]: E0812 23:52:34.209520 2466 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 12 23:52:34.236748 kubelet[2466]: I0812 23:52:34.236654 2466 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.236629438 podStartE2EDuration="1.236629438s" podCreationTimestamp="2025-08-12 23:52:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-12 23:52:34.225898152 +0000 UTC m=+1.165582652" watchObservedRunningTime="2025-08-12 23:52:34.236629438 +0000 UTC m=+1.176313858" Aug 12 23:52:34.236916 kubelet[2466]: I0812 23:52:34.236812 2466 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=2.236807101 podStartE2EDuration="2.236807101s" podCreationTimestamp="2025-08-12 23:52:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-12 23:52:34.236777744 +0000 UTC m=+1.176462204" watchObservedRunningTime="2025-08-12 23:52:34.236807101 +0000 UTC m=+1.176491601" Aug 12 
23:52:35.202907 kubelet[2466]: E0812 23:52:35.202869 2466 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 12 23:52:35.642887 sudo[1615]: pam_unix(sudo:session): session closed for user root Aug 12 23:52:35.645592 sshd[1612]: pam_unix(sshd:session): session closed for user core Aug 12 23:52:35.648839 systemd[1]: sshd@6-10.0.0.6:22-10.0.0.1:56894.service: Deactivated successfully. Aug 12 23:52:35.650967 systemd[1]: session-7.scope: Deactivated successfully. Aug 12 23:52:35.651201 systemd[1]: session-7.scope: Consumed 8.274s CPU time, 151.1M memory peak, 0B memory swap peak. Aug 12 23:52:35.653060 systemd-logind[1417]: Session 7 logged out. Waiting for processes to exit. Aug 12 23:52:35.654823 systemd-logind[1417]: Removed session 7. Aug 12 23:52:37.674980 kubelet[2466]: I0812 23:52:37.674947 2466 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Aug 12 23:52:37.675860 containerd[1436]: time="2025-08-12T23:52:37.675819601Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Aug 12 23:52:37.676451 kubelet[2466]: I0812 23:52:37.676218 2466 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Aug 12 23:52:38.647755 kubelet[2466]: I0812 23:52:38.647618 2466 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=5.647599242 podStartE2EDuration="5.647599242s" podCreationTimestamp="2025-08-12 23:52:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-12 23:52:34.246448692 +0000 UTC m=+1.186133152" watchObservedRunningTime="2025-08-12 23:52:38.647599242 +0000 UTC m=+5.587283702" Aug 12 23:52:38.681682 systemd[1]: Created slice kubepods-besteffort-pod7386e770_124c_4c10_8cff_603c22b8a256.slice - libcontainer container kubepods-besteffort-pod7386e770_124c_4c10_8cff_603c22b8a256.slice. Aug 12 23:52:38.695131 systemd[1]: Created slice kubepods-burstable-pod4b4ecf9a_a078_4ab2_aa30_f10ac9d18a01.slice - libcontainer container kubepods-burstable-pod4b4ecf9a_a078_4ab2_aa30_f10ac9d18a01.slice. 
Aug 12 23:52:38.701627 kubelet[2466]: I0812 23:52:38.701570 2466 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/4b4ecf9a-a078-4ab2-aa30-f10ac9d18a01-cilium-config-path\") pod \"cilium-zg77b\" (UID: \"4b4ecf9a-a078-4ab2-aa30-f10ac9d18a01\") " pod="kube-system/cilium-zg77b" Aug 12 23:52:38.701627 kubelet[2466]: I0812 23:52:38.701626 2466 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hmnhd\" (UniqueName: \"kubernetes.io/projected/7386e770-124c-4c10-8cff-603c22b8a256-kube-api-access-hmnhd\") pod \"kube-proxy-bl9dt\" (UID: \"7386e770-124c-4c10-8cff-603c22b8a256\") " pod="kube-system/kube-proxy-bl9dt" Aug 12 23:52:38.702032 kubelet[2466]: I0812 23:52:38.701649 2466 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/4b4ecf9a-a078-4ab2-aa30-f10ac9d18a01-clustermesh-secrets\") pod \"cilium-zg77b\" (UID: \"4b4ecf9a-a078-4ab2-aa30-f10ac9d18a01\") " pod="kube-system/cilium-zg77b" Aug 12 23:52:38.702032 kubelet[2466]: I0812 23:52:38.701665 2466 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/4b4ecf9a-a078-4ab2-aa30-f10ac9d18a01-host-proc-sys-kernel\") pod \"cilium-zg77b\" (UID: \"4b4ecf9a-a078-4ab2-aa30-f10ac9d18a01\") " pod="kube-system/cilium-zg77b" Aug 12 23:52:38.702032 kubelet[2466]: I0812 23:52:38.701681 2466 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rcqll\" (UniqueName: \"kubernetes.io/projected/4b4ecf9a-a078-4ab2-aa30-f10ac9d18a01-kube-api-access-rcqll\") pod \"cilium-zg77b\" (UID: \"4b4ecf9a-a078-4ab2-aa30-f10ac9d18a01\") " pod="kube-system/cilium-zg77b" Aug 12 23:52:38.702032 kubelet[2466]: I0812 23:52:38.701699 
2466 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/7386e770-124c-4c10-8cff-603c22b8a256-kube-proxy\") pod \"kube-proxy-bl9dt\" (UID: \"7386e770-124c-4c10-8cff-603c22b8a256\") " pod="kube-system/kube-proxy-bl9dt" Aug 12 23:52:38.702032 kubelet[2466]: I0812 23:52:38.701714 2466 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/4b4ecf9a-a078-4ab2-aa30-f10ac9d18a01-cilium-run\") pod \"cilium-zg77b\" (UID: \"4b4ecf9a-a078-4ab2-aa30-f10ac9d18a01\") " pod="kube-system/cilium-zg77b" Aug 12 23:52:38.702192 kubelet[2466]: I0812 23:52:38.701729 2466 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/4b4ecf9a-a078-4ab2-aa30-f10ac9d18a01-cilium-cgroup\") pod \"cilium-zg77b\" (UID: \"4b4ecf9a-a078-4ab2-aa30-f10ac9d18a01\") " pod="kube-system/cilium-zg77b" Aug 12 23:52:38.702192 kubelet[2466]: I0812 23:52:38.701744 2466 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/4b4ecf9a-a078-4ab2-aa30-f10ac9d18a01-etc-cni-netd\") pod \"cilium-zg77b\" (UID: \"4b4ecf9a-a078-4ab2-aa30-f10ac9d18a01\") " pod="kube-system/cilium-zg77b" Aug 12 23:52:38.702192 kubelet[2466]: I0812 23:52:38.701759 2466 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4b4ecf9a-a078-4ab2-aa30-f10ac9d18a01-lib-modules\") pod \"cilium-zg77b\" (UID: \"4b4ecf9a-a078-4ab2-aa30-f10ac9d18a01\") " pod="kube-system/cilium-zg77b" Aug 12 23:52:38.702192 kubelet[2466]: I0812 23:52:38.701773 2466 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: 
\"kubernetes.io/host-path/4b4ecf9a-a078-4ab2-aa30-f10ac9d18a01-xtables-lock\") pod \"cilium-zg77b\" (UID: \"4b4ecf9a-a078-4ab2-aa30-f10ac9d18a01\") " pod="kube-system/cilium-zg77b" Aug 12 23:52:38.702192 kubelet[2466]: I0812 23:52:38.701809 2466 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/4b4ecf9a-a078-4ab2-aa30-f10ac9d18a01-hostproc\") pod \"cilium-zg77b\" (UID: \"4b4ecf9a-a078-4ab2-aa30-f10ac9d18a01\") " pod="kube-system/cilium-zg77b" Aug 12 23:52:38.702192 kubelet[2466]: I0812 23:52:38.701860 2466 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/4b4ecf9a-a078-4ab2-aa30-f10ac9d18a01-cni-path\") pod \"cilium-zg77b\" (UID: \"4b4ecf9a-a078-4ab2-aa30-f10ac9d18a01\") " pod="kube-system/cilium-zg77b" Aug 12 23:52:38.702331 kubelet[2466]: I0812 23:52:38.701886 2466 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/4b4ecf9a-a078-4ab2-aa30-f10ac9d18a01-host-proc-sys-net\") pod \"cilium-zg77b\" (UID: \"4b4ecf9a-a078-4ab2-aa30-f10ac9d18a01\") " pod="kube-system/cilium-zg77b" Aug 12 23:52:38.702331 kubelet[2466]: I0812 23:52:38.701905 2466 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7386e770-124c-4c10-8cff-603c22b8a256-lib-modules\") pod \"kube-proxy-bl9dt\" (UID: \"7386e770-124c-4c10-8cff-603c22b8a256\") " pod="kube-system/kube-proxy-bl9dt" Aug 12 23:52:38.702331 kubelet[2466]: I0812 23:52:38.701922 2466 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/4b4ecf9a-a078-4ab2-aa30-f10ac9d18a01-bpf-maps\") pod \"cilium-zg77b\" (UID: \"4b4ecf9a-a078-4ab2-aa30-f10ac9d18a01\") " 
pod="kube-system/cilium-zg77b" Aug 12 23:52:38.702331 kubelet[2466]: I0812 23:52:38.701939 2466 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7386e770-124c-4c10-8cff-603c22b8a256-xtables-lock\") pod \"kube-proxy-bl9dt\" (UID: \"7386e770-124c-4c10-8cff-603c22b8a256\") " pod="kube-system/kube-proxy-bl9dt" Aug 12 23:52:38.702331 kubelet[2466]: I0812 23:52:38.701955 2466 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/4b4ecf9a-a078-4ab2-aa30-f10ac9d18a01-hubble-tls\") pod \"cilium-zg77b\" (UID: \"4b4ecf9a-a078-4ab2-aa30-f10ac9d18a01\") " pod="kube-system/cilium-zg77b" Aug 12 23:52:38.828321 systemd[1]: Created slice kubepods-besteffort-pod19671571_21af_4027_9427_acde95972777.slice - libcontainer container kubepods-besteffort-pod19671571_21af_4027_9427_acde95972777.slice. Aug 12 23:52:38.903000 kubelet[2466]: I0812 23:52:38.902848 2466 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/19671571-21af-4027-9427-acde95972777-cilium-config-path\") pod \"cilium-operator-5d85765b45-ggkzm\" (UID: \"19671571-21af-4027-9427-acde95972777\") " pod="kube-system/cilium-operator-5d85765b45-ggkzm" Aug 12 23:52:38.903000 kubelet[2466]: I0812 23:52:38.902899 2466 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qtrcz\" (UniqueName: \"kubernetes.io/projected/19671571-21af-4027-9427-acde95972777-kube-api-access-qtrcz\") pod \"cilium-operator-5d85765b45-ggkzm\" (UID: \"19671571-21af-4027-9427-acde95972777\") " pod="kube-system/cilium-operator-5d85765b45-ggkzm" Aug 12 23:52:38.992987 kubelet[2466]: E0812 23:52:38.992863 2466 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers 
have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 12 23:52:38.994095 containerd[1436]: time="2025-08-12T23:52:38.994057442Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-bl9dt,Uid:7386e770-124c-4c10-8cff-603c22b8a256,Namespace:kube-system,Attempt:0,}" Aug 12 23:52:38.999295 kubelet[2466]: E0812 23:52:38.999224 2466 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 12 23:52:39.000237 containerd[1436]: time="2025-08-12T23:52:38.999986378Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-zg77b,Uid:4b4ecf9a-a078-4ab2-aa30-f10ac9d18a01,Namespace:kube-system,Attempt:0,}" Aug 12 23:52:39.033263 containerd[1436]: time="2025-08-12T23:52:39.032426427Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 12 23:52:39.033263 containerd[1436]: time="2025-08-12T23:52:39.032491621Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 12 23:52:39.033263 containerd[1436]: time="2025-08-12T23:52:39.032504460Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 12 23:52:39.033263 containerd[1436]: time="2025-08-12T23:52:39.032653528Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 12 23:52:39.043774 containerd[1436]: time="2025-08-12T23:52:39.043203661Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 12 23:52:39.043774 containerd[1436]: time="2025-08-12T23:52:39.043734857Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 12 23:52:39.043774 containerd[1436]: time="2025-08-12T23:52:39.043748976Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 12 23:52:39.044362 containerd[1436]: time="2025-08-12T23:52:39.044107986Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 12 23:52:39.058023 systemd[1]: Started cri-containerd-9c72ec097bb26a399518db5eb961a0c49ce7cba577bd8b4a33c3f6ab3589a373.scope - libcontainer container 9c72ec097bb26a399518db5eb961a0c49ce7cba577bd8b4a33c3f6ab3589a373. Aug 12 23:52:39.060824 systemd[1]: Started cri-containerd-d145981ccd7704669fb43330719da395e69f608546bcea038e07a2711b4550c8.scope - libcontainer container d145981ccd7704669fb43330719da395e69f608546bcea038e07a2711b4550c8. Aug 12 23:52:39.087895 containerd[1436]: time="2025-08-12T23:52:39.087685444Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-zg77b,Uid:4b4ecf9a-a078-4ab2-aa30-f10ac9d18a01,Namespace:kube-system,Attempt:0,} returns sandbox id \"d145981ccd7704669fb43330719da395e69f608546bcea038e07a2711b4550c8\"" Aug 12 23:52:39.088096 containerd[1436]: time="2025-08-12T23:52:39.087932864Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-bl9dt,Uid:7386e770-124c-4c10-8cff-603c22b8a256,Namespace:kube-system,Attempt:0,} returns sandbox id \"9c72ec097bb26a399518db5eb961a0c49ce7cba577bd8b4a33c3f6ab3589a373\"" Aug 12 23:52:39.089314 kubelet[2466]: E0812 23:52:39.089284 2466 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 12 23:52:39.089528 kubelet[2466]: E0812 23:52:39.089308 2466 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 
1.1.1.1 1.0.0.1 8.8.8.8" Aug 12 23:52:39.091677 containerd[1436]: time="2025-08-12T23:52:39.091616881Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Aug 12 23:52:39.093054 containerd[1436]: time="2025-08-12T23:52:39.092909535Z" level=info msg="CreateContainer within sandbox \"9c72ec097bb26a399518db5eb961a0c49ce7cba577bd8b4a33c3f6ab3589a373\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Aug 12 23:52:39.110093 containerd[1436]: time="2025-08-12T23:52:39.110037327Z" level=info msg="CreateContainer within sandbox \"9c72ec097bb26a399518db5eb961a0c49ce7cba577bd8b4a33c3f6ab3589a373\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"4e567a82a4f7d8ce979f34d7ba09b0aad9114e3d6e668251adcbcc8ab1537827\"" Aug 12 23:52:39.110814 containerd[1436]: time="2025-08-12T23:52:39.110761867Z" level=info msg="StartContainer for \"4e567a82a4f7d8ce979f34d7ba09b0aad9114e3d6e668251adcbcc8ab1537827\"" Aug 12 23:52:39.141873 kubelet[2466]: E0812 23:52:39.141833 2466 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 12 23:52:39.142560 containerd[1436]: time="2025-08-12T23:52:39.142518496Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-ggkzm,Uid:19671571-21af-4027-9427-acde95972777,Namespace:kube-system,Attempt:0,}" Aug 12 23:52:39.145076 systemd[1]: Started cri-containerd-4e567a82a4f7d8ce979f34d7ba09b0aad9114e3d6e668251adcbcc8ab1537827.scope - libcontainer container 4e567a82a4f7d8ce979f34d7ba09b0aad9114e3d6e668251adcbcc8ab1537827. Aug 12 23:52:39.178897 containerd[1436]: time="2025-08-12T23:52:39.177004301Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 12 23:52:39.178897 containerd[1436]: time="2025-08-12T23:52:39.177073696Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 12 23:52:39.178897 containerd[1436]: time="2025-08-12T23:52:39.177646009Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 12 23:52:39.178897 containerd[1436]: time="2025-08-12T23:52:39.177823074Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 12 23:52:39.193486 containerd[1436]: time="2025-08-12T23:52:39.193367956Z" level=info msg="StartContainer for \"4e567a82a4f7d8ce979f34d7ba09b0aad9114e3d6e668251adcbcc8ab1537827\" returns successfully" Aug 12 23:52:39.199033 systemd[1]: Started cri-containerd-21cf9b68e95fdef837ee3d9914b5fec7665e83b0efa7d3655b4a54318f5206b5.scope - libcontainer container 21cf9b68e95fdef837ee3d9914b5fec7665e83b0efa7d3655b4a54318f5206b5. 
Aug 12 23:52:39.213752 kubelet[2466]: E0812 23:52:39.212067 2466 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 12 23:52:39.240807 kubelet[2466]: I0812 23:52:39.238729 2466 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-bl9dt" podStartSLOduration=1.238708309 podStartE2EDuration="1.238708309s" podCreationTimestamp="2025-08-12 23:52:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-12 23:52:39.237524206 +0000 UTC m=+6.177208706" watchObservedRunningTime="2025-08-12 23:52:39.238708309 +0000 UTC m=+6.178392769" Aug 12 23:52:39.244227 containerd[1436]: time="2025-08-12T23:52:39.243624985Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-ggkzm,Uid:19671571-21af-4027-9427-acde95972777,Namespace:kube-system,Attempt:0,} returns sandbox id \"21cf9b68e95fdef837ee3d9914b5fec7665e83b0efa7d3655b4a54318f5206b5\"" Aug 12 23:52:39.245856 kubelet[2466]: E0812 23:52:39.245572 2466 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 12 23:52:40.081072 kubelet[2466]: E0812 23:52:40.081027 2466 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 12 23:52:40.215979 kubelet[2466]: E0812 23:52:40.214081 2466 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 12 23:52:42.419210 kubelet[2466]: E0812 23:52:42.419180 2466 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have 
been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 12 23:52:43.221381 kubelet[2466]: E0812 23:52:43.221230 2466 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 12 23:52:43.978537 kubelet[2466]: E0812 23:52:43.977194 2466 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 12 23:52:45.987501 update_engine[1420]: I20250812 23:52:45.987406 1420 update_attempter.cc:509] Updating boot flags... Aug 12 23:52:46.037824 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (2842) Aug 12 23:52:46.082011 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (2841) Aug 12 23:52:49.688286 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2550823328.mount: Deactivated successfully. 
Aug 12 23:52:51.072237 containerd[1436]: time="2025-08-12T23:52:51.071820825Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 12 23:52:51.073287 containerd[1436]: time="2025-08-12T23:52:51.073252065Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157646710" Aug 12 23:52:51.074301 containerd[1436]: time="2025-08-12T23:52:51.074274767Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 12 23:52:51.076047 containerd[1436]: time="2025-08-12T23:52:51.075913595Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 11.984244758s" Aug 12 23:52:51.076047 containerd[1436]: time="2025-08-12T23:52:51.075954833Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" Aug 12 23:52:51.079388 containerd[1436]: time="2025-08-12T23:52:51.079343763Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Aug 12 23:52:51.098031 containerd[1436]: time="2025-08-12T23:52:51.097989236Z" level=info msg="CreateContainer within sandbox \"d145981ccd7704669fb43330719da395e69f608546bcea038e07a2711b4550c8\" for container 
&ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Aug 12 23:52:51.131434 containerd[1436]: time="2025-08-12T23:52:51.131301165Z" level=info msg="CreateContainer within sandbox \"d145981ccd7704669fb43330719da395e69f608546bcea038e07a2711b4550c8\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"d22ce9b1ff7a05095c56307bb3a45d2248af89820f791580c90cd8ea80a5f0ab\"" Aug 12 23:52:51.135665 containerd[1436]: time="2025-08-12T23:52:51.135624322Z" level=info msg="StartContainer for \"d22ce9b1ff7a05095c56307bb3a45d2248af89820f791580c90cd8ea80a5f0ab\"" Aug 12 23:52:51.158191 systemd[1]: run-containerd-runc-k8s.io-d22ce9b1ff7a05095c56307bb3a45d2248af89820f791580c90cd8ea80a5f0ab-runc.rkhWJZ.mount: Deactivated successfully. Aug 12 23:52:51.171539 systemd[1]: Started cri-containerd-d22ce9b1ff7a05095c56307bb3a45d2248af89820f791580c90cd8ea80a5f0ab.scope - libcontainer container d22ce9b1ff7a05095c56307bb3a45d2248af89820f791580c90cd8ea80a5f0ab. Aug 12 23:52:51.218956 containerd[1436]: time="2025-08-12T23:52:51.218783412Z" level=info msg="StartContainer for \"d22ce9b1ff7a05095c56307bb3a45d2248af89820f791580c90cd8ea80a5f0ab\" returns successfully" Aug 12 23:52:51.245989 kubelet[2466]: E0812 23:52:51.244622 2466 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 12 23:52:51.300756 systemd[1]: cri-containerd-d22ce9b1ff7a05095c56307bb3a45d2248af89820f791580c90cd8ea80a5f0ab.scope: Deactivated successfully. 
Aug 12 23:52:51.526922 containerd[1436]: time="2025-08-12T23:52:51.520911404Z" level=info msg="shim disconnected" id=d22ce9b1ff7a05095c56307bb3a45d2248af89820f791580c90cd8ea80a5f0ab namespace=k8s.io Aug 12 23:52:51.526922 containerd[1436]: time="2025-08-12T23:52:51.526925066Z" level=warning msg="cleaning up after shim disconnected" id=d22ce9b1ff7a05095c56307bb3a45d2248af89820f791580c90cd8ea80a5f0ab namespace=k8s.io Aug 12 23:52:51.527454 containerd[1436]: time="2025-08-12T23:52:51.526944985Z" level=info msg="cleaning up dead shim" namespace=k8s.io Aug 12 23:52:52.136670 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d22ce9b1ff7a05095c56307bb3a45d2248af89820f791580c90cd8ea80a5f0ab-rootfs.mount: Deactivated successfully. Aug 12 23:52:52.258901 kubelet[2466]: E0812 23:52:52.258811 2466 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 12 23:52:52.265878 containerd[1436]: time="2025-08-12T23:52:52.265241817Z" level=info msg="CreateContainer within sandbox \"d145981ccd7704669fb43330719da395e69f608546bcea038e07a2711b4550c8\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Aug 12 23:52:52.308078 containerd[1436]: time="2025-08-12T23:52:52.308025769Z" level=info msg="CreateContainer within sandbox \"d145981ccd7704669fb43330719da395e69f608546bcea038e07a2711b4550c8\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"ec4a60acbaa660eff45c2dc4123855aa476028e8e2436dfdac36a1a87987700a\"" Aug 12 23:52:52.311013 containerd[1436]: time="2025-08-12T23:52:52.310952410Z" level=info msg="StartContainer for \"ec4a60acbaa660eff45c2dc4123855aa476028e8e2436dfdac36a1a87987700a\"" Aug 12 23:52:52.346092 systemd[1]: Started cri-containerd-ec4a60acbaa660eff45c2dc4123855aa476028e8e2436dfdac36a1a87987700a.scope - libcontainer container 
ec4a60acbaa660eff45c2dc4123855aa476028e8e2436dfdac36a1a87987700a. Aug 12 23:52:52.393227 containerd[1436]: time="2025-08-12T23:52:52.393086301Z" level=info msg="StartContainer for \"ec4a60acbaa660eff45c2dc4123855aa476028e8e2436dfdac36a1a87987700a\" returns successfully" Aug 12 23:52:52.417070 systemd[1]: systemd-sysctl.service: Deactivated successfully. Aug 12 23:52:52.417292 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Aug 12 23:52:52.417481 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Aug 12 23:52:52.427984 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Aug 12 23:52:52.428335 systemd[1]: cri-containerd-ec4a60acbaa660eff45c2dc4123855aa476028e8e2436dfdac36a1a87987700a.scope: Deactivated successfully. Aug 12 23:52:52.460259 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Aug 12 23:52:52.467742 containerd[1436]: time="2025-08-12T23:52:52.467678363Z" level=info msg="shim disconnected" id=ec4a60acbaa660eff45c2dc4123855aa476028e8e2436dfdac36a1a87987700a namespace=k8s.io Aug 12 23:52:52.467742 containerd[1436]: time="2025-08-12T23:52:52.467734000Z" level=warning msg="cleaning up after shim disconnected" id=ec4a60acbaa660eff45c2dc4123855aa476028e8e2436dfdac36a1a87987700a namespace=k8s.io Aug 12 23:52:52.467742 containerd[1436]: time="2025-08-12T23:52:52.467742680Z" level=info msg="cleaning up dead shim" namespace=k8s.io Aug 12 23:52:52.852227 containerd[1436]: time="2025-08-12T23:52:52.852158005Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 12 23:52:52.853434 containerd[1436]: time="2025-08-12T23:52:52.853397337Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17135306" Aug 12 
23:52:52.854376 containerd[1436]: time="2025-08-12T23:52:52.854318767Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 12 23:52:52.855823 containerd[1436]: time="2025-08-12T23:52:52.855754169Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 1.776367289s" Aug 12 23:52:52.855823 containerd[1436]: time="2025-08-12T23:52:52.855807366Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" Aug 12 23:52:52.860008 containerd[1436]: time="2025-08-12T23:52:52.859890504Z" level=info msg="CreateContainer within sandbox \"21cf9b68e95fdef837ee3d9914b5fec7665e83b0efa7d3655b4a54318f5206b5\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Aug 12 23:52:52.879553 containerd[1436]: time="2025-08-12T23:52:52.879499517Z" level=info msg="CreateContainer within sandbox \"21cf9b68e95fdef837ee3d9914b5fec7665e83b0efa7d3655b4a54318f5206b5\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"664e3a53526c23b46148b0af554c97172e2336d4c4084b5d47f536c8afced632\"" Aug 12 23:52:52.880091 containerd[1436]: time="2025-08-12T23:52:52.880068006Z" level=info msg="StartContainer for \"664e3a53526c23b46148b0af554c97172e2336d4c4084b5d47f536c8afced632\"" Aug 12 23:52:52.916053 systemd[1]: Started cri-containerd-664e3a53526c23b46148b0af554c97172e2336d4c4084b5d47f536c8afced632.scope - libcontainer container 
664e3a53526c23b46148b0af554c97172e2336d4c4084b5d47f536c8afced632. Aug 12 23:52:52.947891 containerd[1436]: time="2025-08-12T23:52:52.947832920Z" level=info msg="StartContainer for \"664e3a53526c23b46148b0af554c97172e2336d4c4084b5d47f536c8afced632\" returns successfully" Aug 12 23:52:53.134208 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ec4a60acbaa660eff45c2dc4123855aa476028e8e2436dfdac36a1a87987700a-rootfs.mount: Deactivated successfully. Aug 12 23:52:53.262388 kubelet[2466]: E0812 23:52:53.262337 2466 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 12 23:52:53.269943 containerd[1436]: time="2025-08-12T23:52:53.268359255Z" level=info msg="CreateContainer within sandbox \"d145981ccd7704669fb43330719da395e69f608546bcea038e07a2711b4550c8\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Aug 12 23:52:53.270347 kubelet[2466]: E0812 23:52:53.268957 2466 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 12 23:52:53.337850 containerd[1436]: time="2025-08-12T23:52:53.337768076Z" level=info msg="CreateContainer within sandbox \"d145981ccd7704669fb43330719da395e69f608546bcea038e07a2711b4550c8\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"35fdfdcc412b1b03b23dcfbd77eb5ebcfd9dc7b5244ce577576b3545a4b1b568\"" Aug 12 23:52:53.338404 containerd[1436]: time="2025-08-12T23:52:53.338366965Z" level=info msg="StartContainer for \"35fdfdcc412b1b03b23dcfbd77eb5ebcfd9dc7b5244ce577576b3545a4b1b568\"" Aug 12 23:52:53.376005 systemd[1]: Started cri-containerd-35fdfdcc412b1b03b23dcfbd77eb5ebcfd9dc7b5244ce577576b3545a4b1b568.scope - libcontainer container 35fdfdcc412b1b03b23dcfbd77eb5ebcfd9dc7b5244ce577576b3545a4b1b568. 
Aug 12 23:52:53.431650 systemd[1]: cri-containerd-35fdfdcc412b1b03b23dcfbd77eb5ebcfd9dc7b5244ce577576b3545a4b1b568.scope: Deactivated successfully. Aug 12 23:52:53.444382 containerd[1436]: time="2025-08-12T23:52:53.444333580Z" level=info msg="StartContainer for \"35fdfdcc412b1b03b23dcfbd77eb5ebcfd9dc7b5244ce577576b3545a4b1b568\" returns successfully" Aug 12 23:52:53.468137 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-35fdfdcc412b1b03b23dcfbd77eb5ebcfd9dc7b5244ce577576b3545a4b1b568-rootfs.mount: Deactivated successfully. Aug 12 23:52:53.476843 containerd[1436]: time="2025-08-12T23:52:53.475878037Z" level=info msg="shim disconnected" id=35fdfdcc412b1b03b23dcfbd77eb5ebcfd9dc7b5244ce577576b3545a4b1b568 namespace=k8s.io Aug 12 23:52:53.477166 containerd[1436]: time="2025-08-12T23:52:53.476904103Z" level=warning msg="cleaning up after shim disconnected" id=35fdfdcc412b1b03b23dcfbd77eb5ebcfd9dc7b5244ce577576b3545a4b1b568 namespace=k8s.io Aug 12 23:52:53.477166 containerd[1436]: time="2025-08-12T23:52:53.476918182Z" level=info msg="cleaning up dead shim" namespace=k8s.io Aug 12 23:52:54.273233 kubelet[2466]: E0812 23:52:54.272661 2466 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 12 23:52:54.273233 kubelet[2466]: E0812 23:52:54.272712 2466 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 12 23:52:54.279073 containerd[1436]: time="2025-08-12T23:52:54.279017281Z" level=info msg="CreateContainer within sandbox \"d145981ccd7704669fb43330719da395e69f608546bcea038e07a2711b4550c8\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Aug 12 23:52:54.288491 kubelet[2466]: I0812 23:52:54.288372 2466 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="kube-system/cilium-operator-5d85765b45-ggkzm" podStartSLOduration=2.678191231 podStartE2EDuration="16.288349325s" podCreationTimestamp="2025-08-12 23:52:38 +0000 UTC" firstStartedPulling="2025-08-12 23:52:39.248054981 +0000 UTC m=+6.187739441" lastFinishedPulling="2025-08-12 23:52:52.858213115 +0000 UTC m=+19.797897535" observedRunningTime="2025-08-12 23:52:53.301986082 +0000 UTC m=+20.241670542" watchObservedRunningTime="2025-08-12 23:52:54.288349325 +0000 UTC m=+21.228033745" Aug 12 23:52:54.294673 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount166062570.mount: Deactivated successfully. Aug 12 23:52:54.295996 containerd[1436]: time="2025-08-12T23:52:54.295944697Z" level=info msg="CreateContainer within sandbox \"d145981ccd7704669fb43330719da395e69f608546bcea038e07a2711b4550c8\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"2458c311161663e6a6213642d92eb22be83372c118d6b8071b2cc1931ef793f3\"" Aug 12 23:52:54.296831 containerd[1436]: time="2025-08-12T23:52:54.296621702Z" level=info msg="StartContainer for \"2458c311161663e6a6213642d92eb22be83372c118d6b8071b2cc1931ef793f3\"" Aug 12 23:52:54.320963 systemd[1]: Started cri-containerd-2458c311161663e6a6213642d92eb22be83372c118d6b8071b2cc1931ef793f3.scope - libcontainer container 2458c311161663e6a6213642d92eb22be83372c118d6b8071b2cc1931ef793f3. Aug 12 23:52:54.341386 systemd[1]: cri-containerd-2458c311161663e6a6213642d92eb22be83372c118d6b8071b2cc1931ef793f3.scope: Deactivated successfully. 
Aug 12 23:52:54.344372 containerd[1436]: time="2025-08-12T23:52:54.344008563Z" level=info msg="StartContainer for \"2458c311161663e6a6213642d92eb22be83372c118d6b8071b2cc1931ef793f3\" returns successfully" Aug 12 23:52:54.365833 containerd[1436]: time="2025-08-12T23:52:54.365765292Z" level=info msg="shim disconnected" id=2458c311161663e6a6213642d92eb22be83372c118d6b8071b2cc1931ef793f3 namespace=k8s.io Aug 12 23:52:54.366049 containerd[1436]: time="2025-08-12T23:52:54.366031519Z" level=warning msg="cleaning up after shim disconnected" id=2458c311161663e6a6213642d92eb22be83372c118d6b8071b2cc1931ef793f3 namespace=k8s.io Aug 12 23:52:54.366121 containerd[1436]: time="2025-08-12T23:52:54.366108795Z" level=info msg="cleaning up dead shim" namespace=k8s.io Aug 12 23:52:55.278328 kubelet[2466]: E0812 23:52:55.278267 2466 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 12 23:52:55.281376 containerd[1436]: time="2025-08-12T23:52:55.281319672Z" level=info msg="CreateContainer within sandbox \"d145981ccd7704669fb43330719da395e69f608546bcea038e07a2711b4550c8\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Aug 12 23:52:55.291438 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2458c311161663e6a6213642d92eb22be83372c118d6b8071b2cc1931ef793f3-rootfs.mount: Deactivated successfully. 
Aug 12 23:52:55.348589 containerd[1436]: time="2025-08-12T23:52:55.348539867Z" level=info msg="CreateContainer within sandbox \"d145981ccd7704669fb43330719da395e69f608546bcea038e07a2711b4550c8\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"503df6ce8e4c75fe369d8fac0878acb520c428065c376d8f799f7e03061e7c3d\"" Aug 12 23:52:55.349324 containerd[1436]: time="2025-08-12T23:52:55.349291990Z" level=info msg="StartContainer for \"503df6ce8e4c75fe369d8fac0878acb520c428065c376d8f799f7e03061e7c3d\"" Aug 12 23:52:55.380056 systemd[1]: Started cri-containerd-503df6ce8e4c75fe369d8fac0878acb520c428065c376d8f799f7e03061e7c3d.scope - libcontainer container 503df6ce8e4c75fe369d8fac0878acb520c428065c376d8f799f7e03061e7c3d. Aug 12 23:52:55.436969 containerd[1436]: time="2025-08-12T23:52:55.436914776Z" level=info msg="StartContainer for \"503df6ce8e4c75fe369d8fac0878acb520c428065c376d8f799f7e03061e7c3d\" returns successfully" Aug 12 23:52:55.682907 kubelet[2466]: I0812 23:52:55.682784 2466 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Aug 12 23:52:55.746271 systemd[1]: Created slice kubepods-burstable-pod8bbabd98_94a2_43c2_921b_f9ef63fd9d01.slice - libcontainer container kubepods-burstable-pod8bbabd98_94a2_43c2_921b_f9ef63fd9d01.slice. Aug 12 23:52:55.754350 systemd[1]: Created slice kubepods-burstable-pod75a51a54_d8e8_4377_985e_c8a8b2f43520.slice - libcontainer container kubepods-burstable-pod75a51a54_d8e8_4377_985e_c8a8b2f43520.slice. 
Aug 12 23:52:55.907602 kubelet[2466]: I0812 23:52:55.907549 2466 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/75a51a54-d8e8-4377-985e-c8a8b2f43520-config-volume\") pod \"coredns-7c65d6cfc9-pcgq7\" (UID: \"75a51a54-d8e8-4377-985e-c8a8b2f43520\") " pod="kube-system/coredns-7c65d6cfc9-pcgq7" Aug 12 23:52:55.907602 kubelet[2466]: I0812 23:52:55.907600 2466 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8bbabd98-94a2-43c2-921b-f9ef63fd9d01-config-volume\") pod \"coredns-7c65d6cfc9-h7kpb\" (UID: \"8bbabd98-94a2-43c2-921b-f9ef63fd9d01\") " pod="kube-system/coredns-7c65d6cfc9-h7kpb" Aug 12 23:52:55.907775 kubelet[2466]: I0812 23:52:55.907621 2466 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fqcsg\" (UniqueName: \"kubernetes.io/projected/75a51a54-d8e8-4377-985e-c8a8b2f43520-kube-api-access-fqcsg\") pod \"coredns-7c65d6cfc9-pcgq7\" (UID: \"75a51a54-d8e8-4377-985e-c8a8b2f43520\") " pod="kube-system/coredns-7c65d6cfc9-pcgq7" Aug 12 23:52:55.907775 kubelet[2466]: I0812 23:52:55.907650 2466 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-226q2\" (UniqueName: \"kubernetes.io/projected/8bbabd98-94a2-43c2-921b-f9ef63fd9d01-kube-api-access-226q2\") pod \"coredns-7c65d6cfc9-h7kpb\" (UID: \"8bbabd98-94a2-43c2-921b-f9ef63fd9d01\") " pod="kube-system/coredns-7c65d6cfc9-h7kpb" Aug 12 23:52:56.050409 kubelet[2466]: E0812 23:52:56.050365 2466 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 12 23:52:56.052647 containerd[1436]: time="2025-08-12T23:52:56.052398372Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-7c65d6cfc9-h7kpb,Uid:8bbabd98-94a2-43c2-921b-f9ef63fd9d01,Namespace:kube-system,Attempt:0,}" Aug 12 23:52:56.058696 kubelet[2466]: E0812 23:52:56.058393 2466 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 12 23:52:56.058921 containerd[1436]: time="2025-08-12T23:52:56.058867622Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-pcgq7,Uid:75a51a54-d8e8-4377-985e-c8a8b2f43520,Namespace:kube-system,Attempt:0,}" Aug 12 23:52:56.284833 kubelet[2466]: E0812 23:52:56.284487 2466 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 12 23:52:56.326657 kubelet[2466]: I0812 23:52:56.326503 2466 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-zg77b" podStartSLOduration=6.338800486 podStartE2EDuration="18.326483478s" podCreationTimestamp="2025-08-12 23:52:38 +0000 UTC" firstStartedPulling="2025-08-12 23:52:39.091125201 +0000 UTC m=+6.030809661" lastFinishedPulling="2025-08-12 23:52:51.078808193 +0000 UTC m=+18.018492653" observedRunningTime="2025-08-12 23:52:56.325724795 +0000 UTC m=+23.265409255" watchObservedRunningTime="2025-08-12 23:52:56.326483478 +0000 UTC m=+23.266167938" Aug 12 23:52:57.288732 kubelet[2466]: E0812 23:52:57.288653 2466 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 12 23:52:57.892419 systemd-networkd[1379]: cilium_host: Link UP Aug 12 23:52:57.892540 systemd-networkd[1379]: cilium_net: Link UP Aug 12 23:52:57.892682 systemd-networkd[1379]: cilium_net: Gained carrier Aug 12 23:52:57.892821 systemd-networkd[1379]: cilium_host: Gained carrier Aug 12 23:52:57.990596 
systemd-networkd[1379]: cilium_vxlan: Link UP Aug 12 23:52:57.990605 systemd-networkd[1379]: cilium_vxlan: Gained carrier Aug 12 23:52:58.291078 kubelet[2466]: E0812 23:52:58.290895 2466 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 12 23:52:58.447011 systemd-networkd[1379]: cilium_host: Gained IPv6LL Aug 12 23:52:58.485916 kernel: NET: Registered PF_ALG protocol family Aug 12 23:52:58.575028 systemd-networkd[1379]: cilium_net: Gained IPv6LL Aug 12 23:52:59.023040 systemd-networkd[1379]: cilium_vxlan: Gained IPv6LL Aug 12 23:52:59.144514 systemd-networkd[1379]: lxc_health: Link UP Aug 12 23:52:59.151677 systemd-networkd[1379]: lxc_health: Gained carrier Aug 12 23:52:59.295772 systemd-networkd[1379]: lxcbc84b842dacc: Link UP Aug 12 23:52:59.304867 kernel: eth0: renamed from tmpa44b5 Aug 12 23:52:59.312579 systemd-networkd[1379]: lxc7709634271fa: Link UP Aug 12 23:52:59.324415 systemd-networkd[1379]: lxcbc84b842dacc: Gained carrier Aug 12 23:52:59.324844 kernel: eth0: renamed from tmp4ccb4 Aug 12 23:52:59.332351 systemd-networkd[1379]: lxc7709634271fa: Gained carrier Aug 12 23:53:00.751071 systemd-networkd[1379]: lxc_health: Gained IPv6LL Aug 12 23:53:01.010783 kubelet[2466]: E0812 23:53:01.010677 2466 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 12 23:53:01.071119 systemd-networkd[1379]: lxc7709634271fa: Gained IPv6LL Aug 12 23:53:01.263067 systemd-networkd[1379]: lxcbc84b842dacc: Gained IPv6LL Aug 12 23:53:01.304985 kubelet[2466]: E0812 23:53:01.304779 2466 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 12 23:53:02.306906 kubelet[2466]: E0812 23:53:02.306871 2466 dns.go:153] 
"Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 12 23:53:03.119534 containerd[1436]: time="2025-08-12T23:53:03.119321015Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 12 23:53:03.119534 containerd[1436]: time="2025-08-12T23:53:03.119402412Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 12 23:53:03.119534 containerd[1436]: time="2025-08-12T23:53:03.119413771Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 12 23:53:03.120053 containerd[1436]: time="2025-08-12T23:53:03.119543126Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 12 23:53:03.127337 containerd[1436]: time="2025-08-12T23:53:03.127061718Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 12 23:53:03.127337 containerd[1436]: time="2025-08-12T23:53:03.127120356Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 12 23:53:03.127337 containerd[1436]: time="2025-08-12T23:53:03.127131955Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 12 23:53:03.127337 containerd[1436]: time="2025-08-12T23:53:03.127226351Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 12 23:53:03.156035 systemd[1]: Started cri-containerd-4ccb40c55b03b95b31071821fdb8a086f6a9b0372e4f6bfdb2f022d76e5677bf.scope - libcontainer container 4ccb40c55b03b95b31071821fdb8a086f6a9b0372e4f6bfdb2f022d76e5677bf. Aug 12 23:53:03.158514 systemd[1]: Started cri-containerd-a44b5834e999037dff905964e93b578ffd119e363a636bc3a88fc051c66196ba.scope - libcontainer container a44b5834e999037dff905964e93b578ffd119e363a636bc3a88fc051c66196ba. Aug 12 23:53:03.172215 systemd-resolved[1308]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Aug 12 23:53:03.174471 systemd-resolved[1308]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Aug 12 23:53:03.197931 containerd[1436]: time="2025-08-12T23:53:03.197058432Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-h7kpb,Uid:8bbabd98-94a2-43c2-921b-f9ef63fd9d01,Namespace:kube-system,Attempt:0,} returns sandbox id \"a44b5834e999037dff905964e93b578ffd119e363a636bc3a88fc051c66196ba\"" Aug 12 23:53:03.198064 kubelet[2466]: E0812 23:53:03.197641 2466 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 12 23:53:03.200737 containerd[1436]: time="2025-08-12T23:53:03.200682013Z" level=info msg="CreateContainer within sandbox \"a44b5834e999037dff905964e93b578ffd119e363a636bc3a88fc051c66196ba\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Aug 12 23:53:03.201171 containerd[1436]: time="2025-08-12T23:53:03.201143795Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-pcgq7,Uid:75a51a54-d8e8-4377-985e-c8a8b2f43520,Namespace:kube-system,Attempt:0,} returns sandbox id \"4ccb40c55b03b95b31071821fdb8a086f6a9b0372e4f6bfdb2f022d76e5677bf\"" Aug 12 23:53:03.202273 kubelet[2466]: E0812 23:53:03.202240 2466 
dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 12 23:53:03.204009 containerd[1436]: time="2025-08-12T23:53:03.203971647Z" level=info msg="CreateContainer within sandbox \"4ccb40c55b03b95b31071821fdb8a086f6a9b0372e4f6bfdb2f022d76e5677bf\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Aug 12 23:53:03.226185 containerd[1436]: time="2025-08-12T23:53:03.226131397Z" level=info msg="CreateContainer within sandbox \"a44b5834e999037dff905964e93b578ffd119e363a636bc3a88fc051c66196ba\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"5d7aa337bab6eed3ebe1072b241e60654468de224b3ddda99047c0977319c114\"" Aug 12 23:53:03.226595 containerd[1436]: time="2025-08-12T23:53:03.226575380Z" level=info msg="StartContainer for \"5d7aa337bab6eed3ebe1072b241e60654468de224b3ddda99047c0977319c114\"" Aug 12 23:53:03.228814 containerd[1436]: time="2025-08-12T23:53:03.227648778Z" level=info msg="CreateContainer within sandbox \"4ccb40c55b03b95b31071821fdb8a086f6a9b0372e4f6bfdb2f022d76e5677bf\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"4082fc68f1122a550a0cfdc15451e1a40c61501b3dbf9b2cc63f70bc17041f7b\"" Aug 12 23:53:03.229218 containerd[1436]: time="2025-08-12T23:53:03.229193479Z" level=info msg="StartContainer for \"4082fc68f1122a550a0cfdc15451e1a40c61501b3dbf9b2cc63f70bc17041f7b\"" Aug 12 23:53:03.257023 systemd[1]: Started cri-containerd-5d7aa337bab6eed3ebe1072b241e60654468de224b3ddda99047c0977319c114.scope - libcontainer container 5d7aa337bab6eed3ebe1072b241e60654468de224b3ddda99047c0977319c114. Aug 12 23:53:03.260362 systemd[1]: Started cri-containerd-4082fc68f1122a550a0cfdc15451e1a40c61501b3dbf9b2cc63f70bc17041f7b.scope - libcontainer container 4082fc68f1122a550a0cfdc15451e1a40c61501b3dbf9b2cc63f70bc17041f7b. 
Aug 12 23:53:03.321812 containerd[1436]: time="2025-08-12T23:53:03.321742208Z" level=info msg="StartContainer for \"5d7aa337bab6eed3ebe1072b241e60654468de224b3ddda99047c0977319c114\" returns successfully"
Aug 12 23:53:03.321958 containerd[1436]: time="2025-08-12T23:53:03.321777167Z" level=info msg="StartContainer for \"4082fc68f1122a550a0cfdc15451e1a40c61501b3dbf9b2cc63f70bc17041f7b\" returns successfully"
Aug 12 23:53:03.339007 kubelet[2466]: E0812 23:53:03.338851    2466 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 12 23:53:03.341827 kubelet[2466]: E0812 23:53:03.340095    2466 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 12 23:53:03.406083 kubelet[2466]: I0812 23:53:03.405914    2466 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-h7kpb" podStartSLOduration=25.40589342 podStartE2EDuration="25.40589342s" podCreationTimestamp="2025-08-12 23:52:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-12 23:53:03.401952011 +0000 UTC m=+30.341636471" watchObservedRunningTime="2025-08-12 23:53:03.40589342 +0000 UTC m=+30.345577880"
Aug 12 23:53:03.406083 kubelet[2466]: I0812 23:53:03.406021    2466 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-pcgq7" podStartSLOduration=25.406016975 podStartE2EDuration="25.406016975s" podCreationTimestamp="2025-08-12 23:52:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-12 23:53:03.365316976 +0000 UTC m=+30.305001436" watchObservedRunningTime="2025-08-12 23:53:03.406016975 +0000 UTC m=+30.345701435"
Aug 12 23:53:03.554539 systemd[1]: Started sshd@7-10.0.0.6:22-10.0.0.1:36094.service - OpenSSH per-connection server daemon (10.0.0.1:36094).
Aug 12 23:53:03.619954 sshd[3867]: Accepted publickey for core from 10.0.0.1 port 36094 ssh2: RSA SHA256:xv2nBVgCAUDE9/psT+0gyR3NWqhWRcWqt2l4ADAtRXs
Aug 12 23:53:03.622364 sshd[3867]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 12 23:53:03.637553 systemd-logind[1417]: New session 8 of user core.
Aug 12 23:53:03.645038 systemd[1]: Started session-8.scope - Session 8 of User core.
Aug 12 23:53:03.806399 sshd[3867]: pam_unix(sshd:session): session closed for user core
Aug 12 23:53:03.810629 systemd[1]: sshd@7-10.0.0.6:22-10.0.0.1:36094.service: Deactivated successfully.
Aug 12 23:53:03.813109 systemd[1]: session-8.scope: Deactivated successfully.
Aug 12 23:53:03.815551 systemd-logind[1417]: Session 8 logged out. Waiting for processes to exit.
Aug 12 23:53:03.816506 systemd-logind[1417]: Removed session 8.
Aug 12 23:53:04.342248 kubelet[2466]: E0812 23:53:04.342206    2466 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 12 23:53:04.342710 kubelet[2466]: E0812 23:53:04.342432    2466 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 12 23:53:05.343880 kubelet[2466]: E0812 23:53:05.343451    2466 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 12 23:53:05.343880 kubelet[2466]: E0812 23:53:05.343538    2466 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 12 23:53:08.822834 systemd[1]: Started sshd@8-10.0.0.6:22-10.0.0.1:36096.service - OpenSSH per-connection server daemon (10.0.0.1:36096).
Aug 12 23:53:08.866892 sshd[3893]: Accepted publickey for core from 10.0.0.1 port 36096 ssh2: RSA SHA256:xv2nBVgCAUDE9/psT+0gyR3NWqhWRcWqt2l4ADAtRXs
Aug 12 23:53:08.869498 sshd[3893]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 12 23:53:08.875348 systemd-logind[1417]: New session 9 of user core.
Aug 12 23:53:08.885057 systemd[1]: Started session-9.scope - Session 9 of User core.
Aug 12 23:53:09.024153 sshd[3893]: pam_unix(sshd:session): session closed for user core
Aug 12 23:53:09.028634 systemd[1]: sshd@8-10.0.0.6:22-10.0.0.1:36096.service: Deactivated successfully.
Aug 12 23:53:09.030926 systemd[1]: session-9.scope: Deactivated successfully.
Aug 12 23:53:09.031630 systemd-logind[1417]: Session 9 logged out. Waiting for processes to exit.
Aug 12 23:53:09.032627 systemd-logind[1417]: Removed session 9.
Aug 12 23:53:14.035585 systemd[1]: Started sshd@9-10.0.0.6:22-10.0.0.1:33030.service - OpenSSH per-connection server daemon (10.0.0.1:33030).
Aug 12 23:53:14.078849 sshd[3911]: Accepted publickey for core from 10.0.0.1 port 33030 ssh2: RSA SHA256:xv2nBVgCAUDE9/psT+0gyR3NWqhWRcWqt2l4ADAtRXs
Aug 12 23:53:14.079831 sshd[3911]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 12 23:53:14.084708 systemd-logind[1417]: New session 10 of user core.
Aug 12 23:53:14.094006 systemd[1]: Started session-10.scope - Session 10 of User core.
Aug 12 23:53:14.235581 sshd[3911]: pam_unix(sshd:session): session closed for user core
Aug 12 23:53:14.241033 systemd[1]: sshd@9-10.0.0.6:22-10.0.0.1:33030.service: Deactivated successfully.
Aug 12 23:53:14.243853 systemd[1]: session-10.scope: Deactivated successfully.
Aug 12 23:53:14.244998 systemd-logind[1417]: Session 10 logged out. Waiting for processes to exit.
Aug 12 23:53:14.246232 systemd-logind[1417]: Removed session 10.
Aug 12 23:53:19.249784 systemd[1]: Started sshd@10-10.0.0.6:22-10.0.0.1:33040.service - OpenSSH per-connection server daemon (10.0.0.1:33040).
Aug 12 23:53:19.295156 sshd[3926]: Accepted publickey for core from 10.0.0.1 port 33040 ssh2: RSA SHA256:xv2nBVgCAUDE9/psT+0gyR3NWqhWRcWqt2l4ADAtRXs
Aug 12 23:53:19.296890 sshd[3926]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 12 23:53:19.302923 systemd-logind[1417]: New session 11 of user core.
Aug 12 23:53:19.312090 systemd[1]: Started session-11.scope - Session 11 of User core.
Aug 12 23:53:19.482950 sshd[3926]: pam_unix(sshd:session): session closed for user core
Aug 12 23:53:19.496648 systemd[1]: sshd@10-10.0.0.6:22-10.0.0.1:33040.service: Deactivated successfully.
Aug 12 23:53:19.500033 systemd[1]: session-11.scope: Deactivated successfully.
Aug 12 23:53:19.502690 systemd-logind[1417]: Session 11 logged out. Waiting for processes to exit.
Aug 12 23:53:19.518203 systemd[1]: Started sshd@11-10.0.0.6:22-10.0.0.1:33048.service - OpenSSH per-connection server daemon (10.0.0.1:33048).
Aug 12 23:53:19.519429 systemd-logind[1417]: Removed session 11.
Aug 12 23:53:19.559273 sshd[3942]: Accepted publickey for core from 10.0.0.1 port 33048 ssh2: RSA SHA256:xv2nBVgCAUDE9/psT+0gyR3NWqhWRcWqt2l4ADAtRXs
Aug 12 23:53:19.560300 sshd[3942]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 12 23:53:19.567898 systemd-logind[1417]: New session 12 of user core.
Aug 12 23:53:19.581968 systemd[1]: Started session-12.scope - Session 12 of User core.
Aug 12 23:53:19.797830 sshd[3942]: pam_unix(sshd:session): session closed for user core
Aug 12 23:53:19.810966 systemd[1]: sshd@11-10.0.0.6:22-10.0.0.1:33048.service: Deactivated successfully.
Aug 12 23:53:19.815134 systemd[1]: session-12.scope: Deactivated successfully.
Aug 12 23:53:19.818700 systemd-logind[1417]: Session 12 logged out. Waiting for processes to exit.
Aug 12 23:53:19.829392 systemd[1]: Started sshd@12-10.0.0.6:22-10.0.0.1:33052.service - OpenSSH per-connection server daemon (10.0.0.1:33052).
Aug 12 23:53:19.832717 systemd-logind[1417]: Removed session 12.
Aug 12 23:53:19.895472 sshd[3954]: Accepted publickey for core from 10.0.0.1 port 33052 ssh2: RSA SHA256:xv2nBVgCAUDE9/psT+0gyR3NWqhWRcWqt2l4ADAtRXs
Aug 12 23:53:19.897188 sshd[3954]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 12 23:53:19.902100 systemd-logind[1417]: New session 13 of user core.
Aug 12 23:53:19.915088 systemd[1]: Started session-13.scope - Session 13 of User core.
Aug 12 23:53:20.058553 sshd[3954]: pam_unix(sshd:session): session closed for user core
Aug 12 23:53:20.062865 systemd[1]: sshd@12-10.0.0.6:22-10.0.0.1:33052.service: Deactivated successfully.
Aug 12 23:53:20.065103 systemd[1]: session-13.scope: Deactivated successfully.
Aug 12 23:53:20.066045 systemd-logind[1417]: Session 13 logged out. Waiting for processes to exit.
Aug 12 23:53:20.067357 systemd-logind[1417]: Removed session 13.
Aug 12 23:53:25.074595 systemd[1]: Started sshd@13-10.0.0.6:22-10.0.0.1:42792.service - OpenSSH per-connection server daemon (10.0.0.1:42792).
Aug 12 23:53:25.147691 sshd[3969]: Accepted publickey for core from 10.0.0.1 port 42792 ssh2: RSA SHA256:xv2nBVgCAUDE9/psT+0gyR3NWqhWRcWqt2l4ADAtRXs
Aug 12 23:53:25.145641 sshd[3969]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 12 23:53:25.151923 systemd-logind[1417]: New session 14 of user core.
Aug 12 23:53:25.163135 systemd[1]: Started session-14.scope - Session 14 of User core.
Aug 12 23:53:25.343229 sshd[3969]: pam_unix(sshd:session): session closed for user core
Aug 12 23:53:25.348757 systemd[1]: sshd@13-10.0.0.6:22-10.0.0.1:42792.service: Deactivated successfully.
Aug 12 23:53:25.351083 systemd[1]: session-14.scope: Deactivated successfully.
Aug 12 23:53:25.352218 systemd-logind[1417]: Session 14 logged out. Waiting for processes to exit.
Aug 12 23:53:25.354345 systemd-logind[1417]: Removed session 14.
Aug 12 23:53:30.355925 systemd[1]: Started sshd@14-10.0.0.6:22-10.0.0.1:42804.service - OpenSSH per-connection server daemon (10.0.0.1:42804).
Aug 12 23:53:30.401381 sshd[3983]: Accepted publickey for core from 10.0.0.1 port 42804 ssh2: RSA SHA256:xv2nBVgCAUDE9/psT+0gyR3NWqhWRcWqt2l4ADAtRXs
Aug 12 23:53:30.402889 sshd[3983]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 12 23:53:30.407447 systemd-logind[1417]: New session 15 of user core.
Aug 12 23:53:30.414057 systemd[1]: Started session-15.scope - Session 15 of User core.
Aug 12 23:53:30.564707 sshd[3983]: pam_unix(sshd:session): session closed for user core
Aug 12 23:53:30.577403 systemd[1]: sshd@14-10.0.0.6:22-10.0.0.1:42804.service: Deactivated successfully.
Aug 12 23:53:30.580283 systemd[1]: session-15.scope: Deactivated successfully.
Aug 12 23:53:30.583016 systemd-logind[1417]: Session 15 logged out. Waiting for processes to exit.
Aug 12 23:53:30.592236 systemd[1]: Started sshd@15-10.0.0.6:22-10.0.0.1:42818.service - OpenSSH per-connection server daemon (10.0.0.1:42818).
Aug 12 23:53:30.594924 systemd-logind[1417]: Removed session 15.
Aug 12 23:53:30.626701 sshd[3998]: Accepted publickey for core from 10.0.0.1 port 42818 ssh2: RSA SHA256:xv2nBVgCAUDE9/psT+0gyR3NWqhWRcWqt2l4ADAtRXs
Aug 12 23:53:30.628206 sshd[3998]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 12 23:53:30.634610 systemd-logind[1417]: New session 16 of user core.
Aug 12 23:53:30.644068 systemd[1]: Started session-16.scope - Session 16 of User core.
Aug 12 23:53:31.018189 sshd[3998]: pam_unix(sshd:session): session closed for user core
Aug 12 23:53:31.022644 systemd[1]: sshd@15-10.0.0.6:22-10.0.0.1:42818.service: Deactivated successfully.
Aug 12 23:53:31.025066 systemd[1]: session-16.scope: Deactivated successfully.
Aug 12 23:53:31.025992 systemd-logind[1417]: Session 16 logged out. Waiting for processes to exit.
Aug 12 23:53:31.047890 systemd[1]: Started sshd@16-10.0.0.6:22-10.0.0.1:42822.service - OpenSSH per-connection server daemon (10.0.0.1:42822).
Aug 12 23:53:31.048610 systemd-logind[1417]: Removed session 16.
Aug 12 23:53:31.099438 sshd[4010]: Accepted publickey for core from 10.0.0.1 port 42822 ssh2: RSA SHA256:xv2nBVgCAUDE9/psT+0gyR3NWqhWRcWqt2l4ADAtRXs
Aug 12 23:53:31.101321 sshd[4010]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 12 23:53:31.106335 systemd-logind[1417]: New session 17 of user core.
Aug 12 23:53:31.114437 systemd[1]: Started session-17.scope - Session 17 of User core.
Aug 12 23:53:32.594461 sshd[4010]: pam_unix(sshd:session): session closed for user core
Aug 12 23:53:32.613443 systemd[1]: sshd@16-10.0.0.6:22-10.0.0.1:42822.service: Deactivated successfully.
Aug 12 23:53:32.617222 systemd[1]: session-17.scope: Deactivated successfully.
Aug 12 23:53:32.621409 systemd-logind[1417]: Session 17 logged out. Waiting for processes to exit.
Aug 12 23:53:32.634121 systemd[1]: Started sshd@17-10.0.0.6:22-10.0.0.1:60136.service - OpenSSH per-connection server daemon (10.0.0.1:60136).
Aug 12 23:53:32.635868 systemd-logind[1417]: Removed session 17.
Aug 12 23:53:32.673965 sshd[4030]: Accepted publickey for core from 10.0.0.1 port 60136 ssh2: RSA SHA256:xv2nBVgCAUDE9/psT+0gyR3NWqhWRcWqt2l4ADAtRXs
Aug 12 23:53:32.675917 sshd[4030]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 12 23:53:32.684242 systemd-logind[1417]: New session 18 of user core.
Aug 12 23:53:32.696019 systemd[1]: Started session-18.scope - Session 18 of User core.
Aug 12 23:53:32.974556 sshd[4030]: pam_unix(sshd:session): session closed for user core
Aug 12 23:53:32.982768 systemd[1]: sshd@17-10.0.0.6:22-10.0.0.1:60136.service: Deactivated successfully.
Aug 12 23:53:32.988022 systemd[1]: session-18.scope: Deactivated successfully.
Aug 12 23:53:32.989705 systemd-logind[1417]: Session 18 logged out. Waiting for processes to exit.
Aug 12 23:53:33.001420 systemd[1]: Started sshd@18-10.0.0.6:22-10.0.0.1:60146.service - OpenSSH per-connection server daemon (10.0.0.1:60146).
Aug 12 23:53:33.002599 systemd-logind[1417]: Removed session 18.
Aug 12 23:53:33.037991 sshd[4043]: Accepted publickey for core from 10.0.0.1 port 60146 ssh2: RSA SHA256:xv2nBVgCAUDE9/psT+0gyR3NWqhWRcWqt2l4ADAtRXs
Aug 12 23:53:33.038987 sshd[4043]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 12 23:53:33.043883 systemd-logind[1417]: New session 19 of user core.
Aug 12 23:53:33.053031 systemd[1]: Started session-19.scope - Session 19 of User core.
Aug 12 23:53:33.175898 sshd[4043]: pam_unix(sshd:session): session closed for user core
Aug 12 23:53:33.179479 systemd[1]: sshd@18-10.0.0.6:22-10.0.0.1:60146.service: Deactivated successfully.
Aug 12 23:53:33.183416 systemd[1]: session-19.scope: Deactivated successfully.
Aug 12 23:53:33.184189 systemd-logind[1417]: Session 19 logged out. Waiting for processes to exit.
Aug 12 23:53:33.185098 systemd-logind[1417]: Removed session 19.
Aug 12 23:53:38.209129 systemd[1]: Started sshd@19-10.0.0.6:22-10.0.0.1:60160.service - OpenSSH per-connection server daemon (10.0.0.1:60160).
Aug 12 23:53:38.242030 sshd[4060]: Accepted publickey for core from 10.0.0.1 port 60160 ssh2: RSA SHA256:xv2nBVgCAUDE9/psT+0gyR3NWqhWRcWqt2l4ADAtRXs
Aug 12 23:53:38.243601 sshd[4060]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 12 23:53:38.247572 systemd-logind[1417]: New session 20 of user core.
Aug 12 23:53:38.259014 systemd[1]: Started session-20.scope - Session 20 of User core.
Aug 12 23:53:38.375564 sshd[4060]: pam_unix(sshd:session): session closed for user core
Aug 12 23:53:38.381666 systemd[1]: sshd@19-10.0.0.6:22-10.0.0.1:60160.service: Deactivated successfully.
Aug 12 23:53:38.385933 systemd[1]: session-20.scope: Deactivated successfully.
Aug 12 23:53:38.386912 systemd-logind[1417]: Session 20 logged out. Waiting for processes to exit.
Aug 12 23:53:38.388070 systemd-logind[1417]: Removed session 20.
Aug 12 23:53:43.389587 systemd[1]: Started sshd@20-10.0.0.6:22-10.0.0.1:53138.service - OpenSSH per-connection server daemon (10.0.0.1:53138).
Aug 12 23:53:43.430866 sshd[4079]: Accepted publickey for core from 10.0.0.1 port 53138 ssh2: RSA SHA256:xv2nBVgCAUDE9/psT+0gyR3NWqhWRcWqt2l4ADAtRXs
Aug 12 23:53:43.432581 sshd[4079]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 12 23:53:43.437666 systemd-logind[1417]: New session 21 of user core.
Aug 12 23:53:43.447047 systemd[1]: Started session-21.scope - Session 21 of User core.
Aug 12 23:53:43.568899 sshd[4079]: pam_unix(sshd:session): session closed for user core
Aug 12 23:53:43.575060 systemd[1]: sshd@20-10.0.0.6:22-10.0.0.1:53138.service: Deactivated successfully.
Aug 12 23:53:43.577769 systemd[1]: session-21.scope: Deactivated successfully.
Aug 12 23:53:43.581435 systemd-logind[1417]: Session 21 logged out. Waiting for processes to exit.
Aug 12 23:53:43.583520 systemd-logind[1417]: Removed session 21.
Aug 12 23:53:48.584614 systemd[1]: Started sshd@21-10.0.0.6:22-10.0.0.1:53140.service - OpenSSH per-connection server daemon (10.0.0.1:53140).
Aug 12 23:53:48.637699 sshd[4093]: Accepted publickey for core from 10.0.0.1 port 53140 ssh2: RSA SHA256:xv2nBVgCAUDE9/psT+0gyR3NWqhWRcWqt2l4ADAtRXs
Aug 12 23:53:48.639772 sshd[4093]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 12 23:53:48.645231 systemd-logind[1417]: New session 22 of user core.
Aug 12 23:53:48.661446 systemd[1]: Started session-22.scope - Session 22 of User core.
Aug 12 23:53:48.825155 sshd[4093]: pam_unix(sshd:session): session closed for user core
Aug 12 23:53:48.840522 systemd[1]: sshd@21-10.0.0.6:22-10.0.0.1:53140.service: Deactivated successfully.
Aug 12 23:53:48.849327 systemd[1]: session-22.scope: Deactivated successfully.
Aug 12 23:53:48.853815 systemd-logind[1417]: Session 22 logged out. Waiting for processes to exit.
Aug 12 23:53:48.854846 systemd-logind[1417]: Removed session 22.
Aug 12 23:53:52.180460 kubelet[2466]: E0812 23:53:52.180071    2466 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 12 23:53:53.834258 systemd[1]: Started sshd@22-10.0.0.6:22-10.0.0.1:40610.service - OpenSSH per-connection server daemon (10.0.0.1:40610).
Aug 12 23:53:53.882971 sshd[4107]: Accepted publickey for core from 10.0.0.1 port 40610 ssh2: RSA SHA256:xv2nBVgCAUDE9/psT+0gyR3NWqhWRcWqt2l4ADAtRXs
Aug 12 23:53:53.884567 sshd[4107]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 12 23:53:53.891722 systemd-logind[1417]: New session 23 of user core.
Aug 12 23:53:53.901111 systemd[1]: Started session-23.scope - Session 23 of User core.
Aug 12 23:53:54.033018 sshd[4107]: pam_unix(sshd:session): session closed for user core
Aug 12 23:53:54.039050 systemd[1]: sshd@22-10.0.0.6:22-10.0.0.1:40610.service: Deactivated successfully.
Aug 12 23:53:54.041407 systemd[1]: session-23.scope: Deactivated successfully.
Aug 12 23:53:54.043317 systemd-logind[1417]: Session 23 logged out. Waiting for processes to exit.
Aug 12 23:53:54.046241 systemd-logind[1417]: Removed session 23.
Aug 12 23:53:54.179348 kubelet[2466]: E0812 23:53:54.179228    2466 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 12 23:53:56.179778 kubelet[2466]: E0812 23:53:56.179643    2466 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 12 23:53:59.045093 systemd[1]: Started sshd@23-10.0.0.6:22-10.0.0.1:40626.service - OpenSSH per-connection server daemon (10.0.0.1:40626).
Aug 12 23:53:59.091809 sshd[4121]: Accepted publickey for core from 10.0.0.1 port 40626 ssh2: RSA SHA256:xv2nBVgCAUDE9/psT+0gyR3NWqhWRcWqt2l4ADAtRXs
Aug 12 23:53:59.093581 sshd[4121]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 12 23:53:59.097555 systemd-logind[1417]: New session 24 of user core.
Aug 12 23:53:59.114039 systemd[1]: Started session-24.scope - Session 24 of User core.
Aug 12 23:53:59.237150 sshd[4121]: pam_unix(sshd:session): session closed for user core
Aug 12 23:53:59.249618 systemd[1]: sshd@23-10.0.0.6:22-10.0.0.1:40626.service: Deactivated successfully.
Aug 12 23:53:59.252471 systemd[1]: session-24.scope: Deactivated successfully.
Aug 12 23:53:59.254106 systemd-logind[1417]: Session 24 logged out. Waiting for processes to exit.
Aug 12 23:53:59.268180 systemd[1]: Started sshd@24-10.0.0.6:22-10.0.0.1:40638.service - OpenSSH per-connection server daemon (10.0.0.1:40638).
Aug 12 23:53:59.271860 systemd-logind[1417]: Removed session 24.
Aug 12 23:53:59.302262 sshd[4135]: Accepted publickey for core from 10.0.0.1 port 40638 ssh2: RSA SHA256:xv2nBVgCAUDE9/psT+0gyR3NWqhWRcWqt2l4ADAtRXs
Aug 12 23:53:59.303819 sshd[4135]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 12 23:53:59.308756 systemd-logind[1417]: New session 25 of user core.
Aug 12 23:53:59.317001 systemd[1]: Started session-25.scope - Session 25 of User core.
Aug 12 23:54:01.202764 containerd[1436]: time="2025-08-12T23:54:01.202709007Z" level=info msg="StopContainer for \"664e3a53526c23b46148b0af554c97172e2336d4c4084b5d47f536c8afced632\" with timeout 30 (s)"
Aug 12 23:54:01.205133 containerd[1436]: time="2025-08-12T23:54:01.203280564Z" level=info msg="Stop container \"664e3a53526c23b46148b0af554c97172e2336d4c4084b5d47f536c8afced632\" with signal terminated"
Aug 12 23:54:01.214669 systemd[1]: cri-containerd-664e3a53526c23b46148b0af554c97172e2336d4c4084b5d47f536c8afced632.scope: Deactivated successfully.
Aug 12 23:54:01.237929 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-664e3a53526c23b46148b0af554c97172e2336d4c4084b5d47f536c8afced632-rootfs.mount: Deactivated successfully.
Aug 12 23:54:01.248633 containerd[1436]: time="2025-08-12T23:54:01.248547688Z" level=info msg="shim disconnected" id=664e3a53526c23b46148b0af554c97172e2336d4c4084b5d47f536c8afced632 namespace=k8s.io
Aug 12 23:54:01.248633 containerd[1436]: time="2025-08-12T23:54:01.248621768Z" level=warning msg="cleaning up after shim disconnected" id=664e3a53526c23b46148b0af554c97172e2336d4c4084b5d47f536c8afced632 namespace=k8s.io
Aug 12 23:54:01.248633 containerd[1436]: time="2025-08-12T23:54:01.248634648Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Aug 12 23:54:01.254832 containerd[1436]: time="2025-08-12T23:54:01.254234733Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Aug 12 23:54:01.272082 containerd[1436]: time="2025-08-12T23:54:01.271971146Z" level=info msg="StopContainer for \"503df6ce8e4c75fe369d8fac0878acb520c428065c376d8f799f7e03061e7c3d\" with timeout 2 (s)"
Aug 12 23:54:01.272539 containerd[1436]: time="2025-08-12T23:54:01.272513582Z" level=info msg="Stop container \"503df6ce8e4c75fe369d8fac0878acb520c428065c376d8f799f7e03061e7c3d\" with signal terminated"
Aug 12 23:54:01.279707 systemd-networkd[1379]: lxc_health: Link DOWN
Aug 12 23:54:01.279712 systemd-networkd[1379]: lxc_health: Lost carrier
Aug 12 23:54:01.303708 systemd[1]: cri-containerd-503df6ce8e4c75fe369d8fac0878acb520c428065c376d8f799f7e03061e7c3d.scope: Deactivated successfully.
Aug 12 23:54:01.304048 systemd[1]: cri-containerd-503df6ce8e4c75fe369d8fac0878acb520c428065c376d8f799f7e03061e7c3d.scope: Consumed 7.405s CPU time.
Aug 12 23:54:01.324509 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-503df6ce8e4c75fe369d8fac0878acb520c428065c376d8f799f7e03061e7c3d-rootfs.mount: Deactivated successfully.
Aug 12 23:54:01.326223 containerd[1436]: time="2025-08-12T23:54:01.325629539Z" level=info msg="StopContainer for \"664e3a53526c23b46148b0af554c97172e2336d4c4084b5d47f536c8afced632\" returns successfully"
Aug 12 23:54:01.326661 containerd[1436]: time="2025-08-12T23:54:01.326408614Z" level=info msg="StopPodSandbox for \"21cf9b68e95fdef837ee3d9914b5fec7665e83b0efa7d3655b4a54318f5206b5\""
Aug 12 23:54:01.326661 containerd[1436]: time="2025-08-12T23:54:01.326443534Z" level=info msg="Container to stop \"664e3a53526c23b46148b0af554c97172e2336d4c4084b5d47f536c8afced632\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Aug 12 23:54:01.329007 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-21cf9b68e95fdef837ee3d9914b5fec7665e83b0efa7d3655b4a54318f5206b5-shm.mount: Deactivated successfully.
Aug 12 23:54:01.333917 systemd[1]: cri-containerd-21cf9b68e95fdef837ee3d9914b5fec7665e83b0efa7d3655b4a54318f5206b5.scope: Deactivated successfully.
Aug 12 23:54:01.338953 containerd[1436]: time="2025-08-12T23:54:01.338723819Z" level=info msg="shim disconnected" id=503df6ce8e4c75fe369d8fac0878acb520c428065c376d8f799f7e03061e7c3d namespace=k8s.io
Aug 12 23:54:01.339289 containerd[1436]: time="2025-08-12T23:54:01.339226016Z" level=warning msg="cleaning up after shim disconnected" id=503df6ce8e4c75fe369d8fac0878acb520c428065c376d8f799f7e03061e7c3d namespace=k8s.io
Aug 12 23:54:01.339289 containerd[1436]: time="2025-08-12T23:54:01.339246456Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Aug 12 23:54:01.356982 containerd[1436]: time="2025-08-12T23:54:01.356690190Z" level=info msg="StopContainer for \"503df6ce8e4c75fe369d8fac0878acb520c428065c376d8f799f7e03061e7c3d\" returns successfully"
Aug 12 23:54:01.357549 containerd[1436]: time="2025-08-12T23:54:01.357503825Z" level=info msg="StopPodSandbox for \"d145981ccd7704669fb43330719da395e69f608546bcea038e07a2711b4550c8\""
Aug 12 23:54:01.357681 containerd[1436]: time="2025-08-12T23:54:01.357550345Z" level=info msg="Container to stop \"d22ce9b1ff7a05095c56307bb3a45d2248af89820f791580c90cd8ea80a5f0ab\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Aug 12 23:54:01.357731 containerd[1436]: time="2025-08-12T23:54:01.357683824Z" level=info msg="Container to stop \"ec4a60acbaa660eff45c2dc4123855aa476028e8e2436dfdac36a1a87987700a\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Aug 12 23:54:01.357783 containerd[1436]: time="2025-08-12T23:54:01.357697184Z" level=info msg="Container to stop \"35fdfdcc412b1b03b23dcfbd77eb5ebcfd9dc7b5244ce577576b3545a4b1b568\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Aug 12 23:54:01.357876 containerd[1436]: time="2025-08-12T23:54:01.357782183Z" level=info msg="Container to stop \"2458c311161663e6a6213642d92eb22be83372c118d6b8071b2cc1931ef793f3\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Aug 12 23:54:01.357876 containerd[1436]: time="2025-08-12T23:54:01.357810823Z" level=info msg="Container to stop \"503df6ce8e4c75fe369d8fac0878acb520c428065c376d8f799f7e03061e7c3d\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Aug 12 23:54:01.359676 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-d145981ccd7704669fb43330719da395e69f608546bcea038e07a2711b4550c8-shm.mount: Deactivated successfully.
Aug 12 23:54:01.364329 systemd[1]: cri-containerd-d145981ccd7704669fb43330719da395e69f608546bcea038e07a2711b4550c8.scope: Deactivated successfully.
Aug 12 23:54:01.379567 containerd[1436]: time="2025-08-12T23:54:01.379506931Z" level=info msg="shim disconnected" id=21cf9b68e95fdef837ee3d9914b5fec7665e83b0efa7d3655b4a54318f5206b5 namespace=k8s.io
Aug 12 23:54:01.379567 containerd[1436]: time="2025-08-12T23:54:01.379566931Z" level=warning msg="cleaning up after shim disconnected" id=21cf9b68e95fdef837ee3d9914b5fec7665e83b0efa7d3655b4a54318f5206b5 namespace=k8s.io
Aug 12 23:54:01.379567 containerd[1436]: time="2025-08-12T23:54:01.379584731Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Aug 12 23:54:01.397950 containerd[1436]: time="2025-08-12T23:54:01.397753460Z" level=info msg="TearDown network for sandbox \"21cf9b68e95fdef837ee3d9914b5fec7665e83b0efa7d3655b4a54318f5206b5\" successfully"
Aug 12 23:54:01.397950 containerd[1436]: time="2025-08-12T23:54:01.397814380Z" level=info msg="StopPodSandbox for \"21cf9b68e95fdef837ee3d9914b5fec7665e83b0efa7d3655b4a54318f5206b5\" returns successfully"
Aug 12 23:54:01.409255 containerd[1436]: time="2025-08-12T23:54:01.409187071Z" level=info msg="shim disconnected" id=d145981ccd7704669fb43330719da395e69f608546bcea038e07a2711b4550c8 namespace=k8s.io
Aug 12 23:54:01.409458 containerd[1436]: time="2025-08-12T23:54:01.409266550Z" level=warning msg="cleaning up after shim disconnected" id=d145981ccd7704669fb43330719da395e69f608546bcea038e07a2711b4550c8 namespace=k8s.io
Aug 12 23:54:01.409458 containerd[1436]: time="2025-08-12T23:54:01.409277790Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Aug 12 23:54:01.422992 containerd[1436]: time="2025-08-12T23:54:01.422932187Z" level=info msg="TearDown network for sandbox \"d145981ccd7704669fb43330719da395e69f608546bcea038e07a2711b4550c8\" successfully"
Aug 12 23:54:01.422992 containerd[1436]: time="2025-08-12T23:54:01.422969467Z" level=info msg="StopPodSandbox for \"d145981ccd7704669fb43330719da395e69f608546bcea038e07a2711b4550c8\" returns successfully"
Aug 12 23:54:01.473367 kubelet[2466]: I0812 23:54:01.473318    2466 scope.go:117] "RemoveContainer" containerID="503df6ce8e4c75fe369d8fac0878acb520c428065c376d8f799f7e03061e7c3d"
Aug 12 23:54:01.475361 containerd[1436]: time="2025-08-12T23:54:01.475179189Z" level=info msg="RemoveContainer for \"503df6ce8e4c75fe369d8fac0878acb520c428065c376d8f799f7e03061e7c3d\""
Aug 12 23:54:01.478612 containerd[1436]: time="2025-08-12T23:54:01.478554409Z" level=info msg="RemoveContainer for \"503df6ce8e4c75fe369d8fac0878acb520c428065c376d8f799f7e03061e7c3d\" returns successfully"
Aug 12 23:54:01.479096 kubelet[2466]: I0812 23:54:01.479029    2466 scope.go:117] "RemoveContainer" containerID="2458c311161663e6a6213642d92eb22be83372c118d6b8071b2cc1931ef793f3"
Aug 12 23:54:01.480050 containerd[1436]: time="2025-08-12T23:54:01.480006880Z" level=info msg="RemoveContainer for \"2458c311161663e6a6213642d92eb22be83372c118d6b8071b2cc1931ef793f3\""
Aug 12 23:54:01.484757 containerd[1436]: time="2025-08-12T23:54:01.484717211Z" level=info msg="RemoveContainer for \"2458c311161663e6a6213642d92eb22be83372c118d6b8071b2cc1931ef793f3\" returns successfully"
Aug 12 23:54:01.485053 kubelet[2466]: I0812 23:54:01.484958    2466 scope.go:117] "RemoveContainer" containerID="35fdfdcc412b1b03b23dcfbd77eb5ebcfd9dc7b5244ce577576b3545a4b1b568"
Aug 12 23:54:01.486124 containerd[1436]: time="2025-08-12T23:54:01.486084283Z" level=info msg="RemoveContainer for \"35fdfdcc412b1b03b23dcfbd77eb5ebcfd9dc7b5244ce577576b3545a4b1b568\""
Aug 12 23:54:01.488785 containerd[1436]: time="2025-08-12T23:54:01.488747067Z" level=info msg="RemoveContainer for \"35fdfdcc412b1b03b23dcfbd77eb5ebcfd9dc7b5244ce577576b3545a4b1b568\" returns successfully"
Aug 12 23:54:01.489019 kubelet[2466]: I0812 23:54:01.488994    2466 scope.go:117] "RemoveContainer" containerID="ec4a60acbaa660eff45c2dc4123855aa476028e8e2436dfdac36a1a87987700a"
Aug 12 23:54:01.490169 containerd[1436]: time="2025-08-12T23:54:01.490122018Z" level=info msg="RemoveContainer for \"ec4a60acbaa660eff45c2dc4123855aa476028e8e2436dfdac36a1a87987700a\""
Aug 12 23:54:01.492704 containerd[1436]: time="2025-08-12T23:54:01.492657923Z" level=info msg="RemoveContainer for \"ec4a60acbaa660eff45c2dc4123855aa476028e8e2436dfdac36a1a87987700a\" returns successfully"
Aug 12 23:54:01.492956 kubelet[2466]: I0812 23:54:01.492926    2466 scope.go:117] "RemoveContainer" containerID="d22ce9b1ff7a05095c56307bb3a45d2248af89820f791580c90cd8ea80a5f0ab"
Aug 12 23:54:01.494011 containerd[1436]: time="2025-08-12T23:54:01.493948875Z" level=info msg="RemoveContainer for \"d22ce9b1ff7a05095c56307bb3a45d2248af89820f791580c90cd8ea80a5f0ab\""
Aug 12 23:54:01.496498 containerd[1436]: time="2025-08-12T23:54:01.496459900Z" level=info msg="RemoveContainer for \"d22ce9b1ff7a05095c56307bb3a45d2248af89820f791580c90cd8ea80a5f0ab\" returns successfully"
Aug 12 23:54:01.496731 kubelet[2466]: I0812 23:54:01.496714    2466 scope.go:117] "RemoveContainer" containerID="503df6ce8e4c75fe369d8fac0878acb520c428065c376d8f799f7e03061e7c3d"
Aug 12 23:54:01.497154 containerd[1436]: time="2025-08-12T23:54:01.497110496Z" level=error msg="ContainerStatus for \"503df6ce8e4c75fe369d8fac0878acb520c428065c376d8f799f7e03061e7c3d\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"503df6ce8e4c75fe369d8fac0878acb520c428065c376d8f799f7e03061e7c3d\": not found"
Aug 12 23:54:01.498252 kubelet[2466]: I0812 23:54:01.498228    2466 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qtrcz\" (UniqueName: \"kubernetes.io/projected/19671571-21af-4027-9427-acde95972777-kube-api-access-qtrcz\") pod \"19671571-21af-4027-9427-acde95972777\" (UID: \"19671571-21af-4027-9427-acde95972777\") "
Aug 12 23:54:01.498570 kubelet[2466]: I0812 23:54:01.498427    2466 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/19671571-21af-4027-9427-acde95972777-cilium-config-path\") pod \"19671571-21af-4027-9427-acde95972777\" (UID: \"19671571-21af-4027-9427-acde95972777\") "
Aug 12 23:54:01.504389 kubelet[2466]: I0812 23:54:01.504350    2466 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/19671571-21af-4027-9427-acde95972777-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "19671571-21af-4027-9427-acde95972777" (UID: "19671571-21af-4027-9427-acde95972777"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Aug 12 23:54:01.508298 kubelet[2466]: I0812 23:54:01.508248    2466 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/19671571-21af-4027-9427-acde95972777-kube-api-access-qtrcz" (OuterVolumeSpecName: "kube-api-access-qtrcz") pod "19671571-21af-4027-9427-acde95972777" (UID: "19671571-21af-4027-9427-acde95972777"). InnerVolumeSpecName "kube-api-access-qtrcz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Aug 12 23:54:01.508953 kubelet[2466]: E0812 23:54:01.508911    2466 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"503df6ce8e4c75fe369d8fac0878acb520c428065c376d8f799f7e03061e7c3d\": not found" containerID="503df6ce8e4c75fe369d8fac0878acb520c428065c376d8f799f7e03061e7c3d"
Aug 12 23:54:01.509042 kubelet[2466]: I0812 23:54:01.508959    2466 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"503df6ce8e4c75fe369d8fac0878acb520c428065c376d8f799f7e03061e7c3d"} err="failed to get container status \"503df6ce8e4c75fe369d8fac0878acb520c428065c376d8f799f7e03061e7c3d\": rpc error: code = NotFound desc = an error occurred when try to find container \"503df6ce8e4c75fe369d8fac0878acb520c428065c376d8f799f7e03061e7c3d\": not found"
Aug 12 23:54:01.509081 kubelet[2466]: I0812 23:54:01.509046    2466 scope.go:117] "RemoveContainer" containerID="2458c311161663e6a6213642d92eb22be83372c118d6b8071b2cc1931ef793f3"
Aug
12 23:54:01.509368 containerd[1436]: time="2025-08-12T23:54:01.509317141Z" level=error msg="ContainerStatus for \"2458c311161663e6a6213642d92eb22be83372c118d6b8071b2cc1931ef793f3\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"2458c311161663e6a6213642d92eb22be83372c118d6b8071b2cc1931ef793f3\": not found" Aug 12 23:54:01.509607 kubelet[2466]: E0812 23:54:01.509484 2466 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"2458c311161663e6a6213642d92eb22be83372c118d6b8071b2cc1931ef793f3\": not found" containerID="2458c311161663e6a6213642d92eb22be83372c118d6b8071b2cc1931ef793f3" Aug 12 23:54:01.509607 kubelet[2466]: I0812 23:54:01.509512 2466 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"2458c311161663e6a6213642d92eb22be83372c118d6b8071b2cc1931ef793f3"} err="failed to get container status \"2458c311161663e6a6213642d92eb22be83372c118d6b8071b2cc1931ef793f3\": rpc error: code = NotFound desc = an error occurred when try to find container \"2458c311161663e6a6213642d92eb22be83372c118d6b8071b2cc1931ef793f3\": not found" Aug 12 23:54:01.509607 kubelet[2466]: I0812 23:54:01.509527 2466 scope.go:117] "RemoveContainer" containerID="35fdfdcc412b1b03b23dcfbd77eb5ebcfd9dc7b5244ce577576b3545a4b1b568" Aug 12 23:54:01.509811 containerd[1436]: time="2025-08-12T23:54:01.509773099Z" level=error msg="ContainerStatus for \"35fdfdcc412b1b03b23dcfbd77eb5ebcfd9dc7b5244ce577576b3545a4b1b568\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"35fdfdcc412b1b03b23dcfbd77eb5ebcfd9dc7b5244ce577576b3545a4b1b568\": not found" Aug 12 23:54:01.509958 kubelet[2466]: E0812 23:54:01.509931 2466 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container 
\"35fdfdcc412b1b03b23dcfbd77eb5ebcfd9dc7b5244ce577576b3545a4b1b568\": not found" containerID="35fdfdcc412b1b03b23dcfbd77eb5ebcfd9dc7b5244ce577576b3545a4b1b568" Aug 12 23:54:01.509999 kubelet[2466]: I0812 23:54:01.509971 2466 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"35fdfdcc412b1b03b23dcfbd77eb5ebcfd9dc7b5244ce577576b3545a4b1b568"} err="failed to get container status \"35fdfdcc412b1b03b23dcfbd77eb5ebcfd9dc7b5244ce577576b3545a4b1b568\": rpc error: code = NotFound desc = an error occurred when try to find container \"35fdfdcc412b1b03b23dcfbd77eb5ebcfd9dc7b5244ce577576b3545a4b1b568\": not found" Aug 12 23:54:01.509999 kubelet[2466]: I0812 23:54:01.509988 2466 scope.go:117] "RemoveContainer" containerID="ec4a60acbaa660eff45c2dc4123855aa476028e8e2436dfdac36a1a87987700a" Aug 12 23:54:01.510165 containerd[1436]: time="2025-08-12T23:54:01.510138056Z" level=error msg="ContainerStatus for \"ec4a60acbaa660eff45c2dc4123855aa476028e8e2436dfdac36a1a87987700a\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"ec4a60acbaa660eff45c2dc4123855aa476028e8e2436dfdac36a1a87987700a\": not found" Aug 12 23:54:01.510366 kubelet[2466]: E0812 23:54:01.510260 2466 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"ec4a60acbaa660eff45c2dc4123855aa476028e8e2436dfdac36a1a87987700a\": not found" containerID="ec4a60acbaa660eff45c2dc4123855aa476028e8e2436dfdac36a1a87987700a" Aug 12 23:54:01.510366 kubelet[2466]: I0812 23:54:01.510284 2466 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"ec4a60acbaa660eff45c2dc4123855aa476028e8e2436dfdac36a1a87987700a"} err="failed to get container status \"ec4a60acbaa660eff45c2dc4123855aa476028e8e2436dfdac36a1a87987700a\": rpc error: code = NotFound desc = an error occurred when try to find container 
\"ec4a60acbaa660eff45c2dc4123855aa476028e8e2436dfdac36a1a87987700a\": not found" Aug 12 23:54:01.510366 kubelet[2466]: I0812 23:54:01.510301 2466 scope.go:117] "RemoveContainer" containerID="d22ce9b1ff7a05095c56307bb3a45d2248af89820f791580c90cd8ea80a5f0ab" Aug 12 23:54:01.510520 containerd[1436]: time="2025-08-12T23:54:01.510488374Z" level=error msg="ContainerStatus for \"d22ce9b1ff7a05095c56307bb3a45d2248af89820f791580c90cd8ea80a5f0ab\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"d22ce9b1ff7a05095c56307bb3a45d2248af89820f791580c90cd8ea80a5f0ab\": not found" Aug 12 23:54:01.510682 kubelet[2466]: E0812 23:54:01.510629 2466 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"d22ce9b1ff7a05095c56307bb3a45d2248af89820f791580c90cd8ea80a5f0ab\": not found" containerID="d22ce9b1ff7a05095c56307bb3a45d2248af89820f791580c90cd8ea80a5f0ab" Aug 12 23:54:01.510682 kubelet[2466]: I0812 23:54:01.510664 2466 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"d22ce9b1ff7a05095c56307bb3a45d2248af89820f791580c90cd8ea80a5f0ab"} err="failed to get container status \"d22ce9b1ff7a05095c56307bb3a45d2248af89820f791580c90cd8ea80a5f0ab\": rpc error: code = NotFound desc = an error occurred when try to find container \"d22ce9b1ff7a05095c56307bb3a45d2248af89820f791580c90cd8ea80a5f0ab\": not found" Aug 12 23:54:01.510682 kubelet[2466]: I0812 23:54:01.510681 2466 scope.go:117] "RemoveContainer" containerID="664e3a53526c23b46148b0af554c97172e2336d4c4084b5d47f536c8afced632" Aug 12 23:54:01.511725 containerd[1436]: time="2025-08-12T23:54:01.511703527Z" level=info msg="RemoveContainer for \"664e3a53526c23b46148b0af554c97172e2336d4c4084b5d47f536c8afced632\"" Aug 12 23:54:01.514438 containerd[1436]: time="2025-08-12T23:54:01.514401190Z" level=info msg="RemoveContainer for 
\"664e3a53526c23b46148b0af554c97172e2336d4c4084b5d47f536c8afced632\" returns successfully" Aug 12 23:54:01.514723 kubelet[2466]: I0812 23:54:01.514641 2466 scope.go:117] "RemoveContainer" containerID="664e3a53526c23b46148b0af554c97172e2336d4c4084b5d47f536c8afced632" Aug 12 23:54:01.514923 containerd[1436]: time="2025-08-12T23:54:01.514877668Z" level=error msg="ContainerStatus for \"664e3a53526c23b46148b0af554c97172e2336d4c4084b5d47f536c8afced632\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"664e3a53526c23b46148b0af554c97172e2336d4c4084b5d47f536c8afced632\": not found" Aug 12 23:54:01.515127 kubelet[2466]: E0812 23:54:01.515049 2466 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"664e3a53526c23b46148b0af554c97172e2336d4c4084b5d47f536c8afced632\": not found" containerID="664e3a53526c23b46148b0af554c97172e2336d4c4084b5d47f536c8afced632" Aug 12 23:54:01.515127 kubelet[2466]: I0812 23:54:01.515076 2466 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"664e3a53526c23b46148b0af554c97172e2336d4c4084b5d47f536c8afced632"} err="failed to get container status \"664e3a53526c23b46148b0af554c97172e2336d4c4084b5d47f536c8afced632\": rpc error: code = NotFound desc = an error occurred when try to find container \"664e3a53526c23b46148b0af554c97172e2336d4c4084b5d47f536c8afced632\": not found" Aug 12 23:54:01.599900 kubelet[2466]: I0812 23:54:01.599634 2466 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/4b4ecf9a-a078-4ab2-aa30-f10ac9d18a01-cilium-config-path\") pod \"4b4ecf9a-a078-4ab2-aa30-f10ac9d18a01\" (UID: \"4b4ecf9a-a078-4ab2-aa30-f10ac9d18a01\") " Aug 12 23:54:01.599900 kubelet[2466]: I0812 23:54:01.599675 2466 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/4b4ecf9a-a078-4ab2-aa30-f10ac9d18a01-host-proc-sys-kernel\") pod \"4b4ecf9a-a078-4ab2-aa30-f10ac9d18a01\" (UID: \"4b4ecf9a-a078-4ab2-aa30-f10ac9d18a01\") " Aug 12 23:54:01.599900 kubelet[2466]: I0812 23:54:01.599699 2466 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/4b4ecf9a-a078-4ab2-aa30-f10ac9d18a01-cni-path\") pod \"4b4ecf9a-a078-4ab2-aa30-f10ac9d18a01\" (UID: \"4b4ecf9a-a078-4ab2-aa30-f10ac9d18a01\") " Aug 12 23:54:01.599900 kubelet[2466]: I0812 23:54:01.599726 2466 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rcqll\" (UniqueName: \"kubernetes.io/projected/4b4ecf9a-a078-4ab2-aa30-f10ac9d18a01-kube-api-access-rcqll\") pod \"4b4ecf9a-a078-4ab2-aa30-f10ac9d18a01\" (UID: \"4b4ecf9a-a078-4ab2-aa30-f10ac9d18a01\") " Aug 12 23:54:01.599900 kubelet[2466]: I0812 23:54:01.599742 2466 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/4b4ecf9a-a078-4ab2-aa30-f10ac9d18a01-cilium-run\") pod \"4b4ecf9a-a078-4ab2-aa30-f10ac9d18a01\" (UID: \"4b4ecf9a-a078-4ab2-aa30-f10ac9d18a01\") " Aug 12 23:54:01.599900 kubelet[2466]: I0812 23:54:01.599756 2466 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/4b4ecf9a-a078-4ab2-aa30-f10ac9d18a01-host-proc-sys-net\") pod \"4b4ecf9a-a078-4ab2-aa30-f10ac9d18a01\" (UID: \"4b4ecf9a-a078-4ab2-aa30-f10ac9d18a01\") " Aug 12 23:54:01.600157 kubelet[2466]: I0812 23:54:01.599771 2466 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/4b4ecf9a-a078-4ab2-aa30-f10ac9d18a01-etc-cni-netd\") pod \"4b4ecf9a-a078-4ab2-aa30-f10ac9d18a01\" (UID: \"4b4ecf9a-a078-4ab2-aa30-f10ac9d18a01\") " Aug 12 23:54:01.600157 
kubelet[2466]: I0812 23:54:01.599809 2466 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4b4ecf9a-a078-4ab2-aa30-f10ac9d18a01-lib-modules\") pod \"4b4ecf9a-a078-4ab2-aa30-f10ac9d18a01\" (UID: \"4b4ecf9a-a078-4ab2-aa30-f10ac9d18a01\") " Aug 12 23:54:01.600157 kubelet[2466]: I0812 23:54:01.599826 2466 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/4b4ecf9a-a078-4ab2-aa30-f10ac9d18a01-hostproc\") pod \"4b4ecf9a-a078-4ab2-aa30-f10ac9d18a01\" (UID: \"4b4ecf9a-a078-4ab2-aa30-f10ac9d18a01\") " Aug 12 23:54:01.600157 kubelet[2466]: I0812 23:54:01.599844 2466 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/4b4ecf9a-a078-4ab2-aa30-f10ac9d18a01-hubble-tls\") pod \"4b4ecf9a-a078-4ab2-aa30-f10ac9d18a01\" (UID: \"4b4ecf9a-a078-4ab2-aa30-f10ac9d18a01\") " Aug 12 23:54:01.600157 kubelet[2466]: I0812 23:54:01.599858 2466 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/4b4ecf9a-a078-4ab2-aa30-f10ac9d18a01-cilium-cgroup\") pod \"4b4ecf9a-a078-4ab2-aa30-f10ac9d18a01\" (UID: \"4b4ecf9a-a078-4ab2-aa30-f10ac9d18a01\") " Aug 12 23:54:01.600157 kubelet[2466]: I0812 23:54:01.599873 2466 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4b4ecf9a-a078-4ab2-aa30-f10ac9d18a01-xtables-lock\") pod \"4b4ecf9a-a078-4ab2-aa30-f10ac9d18a01\" (UID: \"4b4ecf9a-a078-4ab2-aa30-f10ac9d18a01\") " Aug 12 23:54:01.600301 kubelet[2466]: I0812 23:54:01.599892 2466 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/4b4ecf9a-a078-4ab2-aa30-f10ac9d18a01-clustermesh-secrets\") pod 
\"4b4ecf9a-a078-4ab2-aa30-f10ac9d18a01\" (UID: \"4b4ecf9a-a078-4ab2-aa30-f10ac9d18a01\") " Aug 12 23:54:01.600301 kubelet[2466]: I0812 23:54:01.599907 2466 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/4b4ecf9a-a078-4ab2-aa30-f10ac9d18a01-bpf-maps\") pod \"4b4ecf9a-a078-4ab2-aa30-f10ac9d18a01\" (UID: \"4b4ecf9a-a078-4ab2-aa30-f10ac9d18a01\") " Aug 12 23:54:01.600301 kubelet[2466]: I0812 23:54:01.599940 2466 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qtrcz\" (UniqueName: \"kubernetes.io/projected/19671571-21af-4027-9427-acde95972777-kube-api-access-qtrcz\") on node \"localhost\" DevicePath \"\"" Aug 12 23:54:01.600301 kubelet[2466]: I0812 23:54:01.599931 2466 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4b4ecf9a-a078-4ab2-aa30-f10ac9d18a01-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "4b4ecf9a-a078-4ab2-aa30-f10ac9d18a01" (UID: "4b4ecf9a-a078-4ab2-aa30-f10ac9d18a01"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Aug 12 23:54:01.600301 kubelet[2466]: I0812 23:54:01.599953 2466 reconciler_common.go:293] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/19671571-21af-4027-9427-acde95972777-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Aug 12 23:54:01.600301 kubelet[2466]: I0812 23:54:01.599982 2466 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4b4ecf9a-a078-4ab2-aa30-f10ac9d18a01-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "4b4ecf9a-a078-4ab2-aa30-f10ac9d18a01" (UID: "4b4ecf9a-a078-4ab2-aa30-f10ac9d18a01"). InnerVolumeSpecName "bpf-maps". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Aug 12 23:54:01.600440 kubelet[2466]: I0812 23:54:01.600002 2466 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4b4ecf9a-a078-4ab2-aa30-f10ac9d18a01-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "4b4ecf9a-a078-4ab2-aa30-f10ac9d18a01" (UID: "4b4ecf9a-a078-4ab2-aa30-f10ac9d18a01"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Aug 12 23:54:01.600440 kubelet[2466]: I0812 23:54:01.600017 2466 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4b4ecf9a-a078-4ab2-aa30-f10ac9d18a01-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "4b4ecf9a-a078-4ab2-aa30-f10ac9d18a01" (UID: "4b4ecf9a-a078-4ab2-aa30-f10ac9d18a01"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Aug 12 23:54:01.600440 kubelet[2466]: I0812 23:54:01.600031 2466 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4b4ecf9a-a078-4ab2-aa30-f10ac9d18a01-hostproc" (OuterVolumeSpecName: "hostproc") pod "4b4ecf9a-a078-4ab2-aa30-f10ac9d18a01" (UID: "4b4ecf9a-a078-4ab2-aa30-f10ac9d18a01"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Aug 12 23:54:01.600440 kubelet[2466]: I0812 23:54:01.600335 2466 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4b4ecf9a-a078-4ab2-aa30-f10ac9d18a01-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "4b4ecf9a-a078-4ab2-aa30-f10ac9d18a01" (UID: "4b4ecf9a-a078-4ab2-aa30-f10ac9d18a01"). InnerVolumeSpecName "cilium-cgroup". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Aug 12 23:54:01.600440 kubelet[2466]: I0812 23:54:01.600365 2466 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4b4ecf9a-a078-4ab2-aa30-f10ac9d18a01-cni-path" (OuterVolumeSpecName: "cni-path") pod "4b4ecf9a-a078-4ab2-aa30-f10ac9d18a01" (UID: "4b4ecf9a-a078-4ab2-aa30-f10ac9d18a01"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Aug 12 23:54:01.600552 kubelet[2466]: I0812 23:54:01.600390 2466 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4b4ecf9a-a078-4ab2-aa30-f10ac9d18a01-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "4b4ecf9a-a078-4ab2-aa30-f10ac9d18a01" (UID: "4b4ecf9a-a078-4ab2-aa30-f10ac9d18a01"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Aug 12 23:54:01.602151 kubelet[2466]: I0812 23:54:01.600608 2466 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4b4ecf9a-a078-4ab2-aa30-f10ac9d18a01-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "4b4ecf9a-a078-4ab2-aa30-f10ac9d18a01" (UID: "4b4ecf9a-a078-4ab2-aa30-f10ac9d18a01"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Aug 12 23:54:01.602390 kubelet[2466]: I0812 23:54:01.601494 2466 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4b4ecf9a-a078-4ab2-aa30-f10ac9d18a01-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "4b4ecf9a-a078-4ab2-aa30-f10ac9d18a01" (UID: "4b4ecf9a-a078-4ab2-aa30-f10ac9d18a01"). InnerVolumeSpecName "cilium-run". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Aug 12 23:54:01.602435 kubelet[2466]: I0812 23:54:01.601913 2466 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4b4ecf9a-a078-4ab2-aa30-f10ac9d18a01-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "4b4ecf9a-a078-4ab2-aa30-f10ac9d18a01" (UID: "4b4ecf9a-a078-4ab2-aa30-f10ac9d18a01"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Aug 12 23:54:01.602651 kubelet[2466]: I0812 23:54:01.602612 2466 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4b4ecf9a-a078-4ab2-aa30-f10ac9d18a01-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "4b4ecf9a-a078-4ab2-aa30-f10ac9d18a01" (UID: "4b4ecf9a-a078-4ab2-aa30-f10ac9d18a01"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Aug 12 23:54:01.603016 kubelet[2466]: I0812 23:54:01.602967 2466 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4b4ecf9a-a078-4ab2-aa30-f10ac9d18a01-kube-api-access-rcqll" (OuterVolumeSpecName: "kube-api-access-rcqll") pod "4b4ecf9a-a078-4ab2-aa30-f10ac9d18a01" (UID: "4b4ecf9a-a078-4ab2-aa30-f10ac9d18a01"). InnerVolumeSpecName "kube-api-access-rcqll". PluginName "kubernetes.io/projected", VolumeGidValue "" Aug 12 23:54:01.603357 kubelet[2466]: I0812 23:54:01.603264 2466 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4b4ecf9a-a078-4ab2-aa30-f10ac9d18a01-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "4b4ecf9a-a078-4ab2-aa30-f10ac9d18a01" (UID: "4b4ecf9a-a078-4ab2-aa30-f10ac9d18a01"). InnerVolumeSpecName "clustermesh-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Aug 12 23:54:01.701198 kubelet[2466]: I0812 23:54:01.701134 2466 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/4b4ecf9a-a078-4ab2-aa30-f10ac9d18a01-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Aug 12 23:54:01.701198 kubelet[2466]: I0812 23:54:01.701198 2466 reconciler_common.go:293] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/4b4ecf9a-a078-4ab2-aa30-f10ac9d18a01-cni-path\") on node \"localhost\" DevicePath \"\"" Aug 12 23:54:01.701198 kubelet[2466]: I0812 23:54:01.701209 2466 reconciler_common.go:293] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/4b4ecf9a-a078-4ab2-aa30-f10ac9d18a01-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Aug 12 23:54:01.701413 kubelet[2466]: I0812 23:54:01.701218 2466 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rcqll\" (UniqueName: \"kubernetes.io/projected/4b4ecf9a-a078-4ab2-aa30-f10ac9d18a01-kube-api-access-rcqll\") on node \"localhost\" DevicePath \"\"" Aug 12 23:54:01.701413 kubelet[2466]: I0812 23:54:01.701228 2466 reconciler_common.go:293] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/4b4ecf9a-a078-4ab2-aa30-f10ac9d18a01-cilium-run\") on node \"localhost\" DevicePath \"\"" Aug 12 23:54:01.701413 kubelet[2466]: I0812 23:54:01.701236 2466 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/4b4ecf9a-a078-4ab2-aa30-f10ac9d18a01-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Aug 12 23:54:01.701413 kubelet[2466]: I0812 23:54:01.701243 2466 reconciler_common.go:293] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/4b4ecf9a-a078-4ab2-aa30-f10ac9d18a01-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Aug 12 23:54:01.701413 
kubelet[2466]: I0812 23:54:01.701253 2466 reconciler_common.go:293] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4b4ecf9a-a078-4ab2-aa30-f10ac9d18a01-lib-modules\") on node \"localhost\" DevicePath \"\"" Aug 12 23:54:01.701413 kubelet[2466]: I0812 23:54:01.701263 2466 reconciler_common.go:293] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/4b4ecf9a-a078-4ab2-aa30-f10ac9d18a01-hostproc\") on node \"localhost\" DevicePath \"\"" Aug 12 23:54:01.701413 kubelet[2466]: I0812 23:54:01.701270 2466 reconciler_common.go:293] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/4b4ecf9a-a078-4ab2-aa30-f10ac9d18a01-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Aug 12 23:54:01.701413 kubelet[2466]: I0812 23:54:01.701278 2466 reconciler_common.go:293] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4b4ecf9a-a078-4ab2-aa30-f10ac9d18a01-xtables-lock\") on node \"localhost\" DevicePath \"\"" Aug 12 23:54:01.701617 kubelet[2466]: I0812 23:54:01.701285 2466 reconciler_common.go:293] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/4b4ecf9a-a078-4ab2-aa30-f10ac9d18a01-hubble-tls\") on node \"localhost\" DevicePath \"\"" Aug 12 23:54:01.701617 kubelet[2466]: I0812 23:54:01.701293 2466 reconciler_common.go:293] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/4b4ecf9a-a078-4ab2-aa30-f10ac9d18a01-bpf-maps\") on node \"localhost\" DevicePath \"\"" Aug 12 23:54:01.701617 kubelet[2466]: I0812 23:54:01.701301 2466 reconciler_common.go:293] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/4b4ecf9a-a078-4ab2-aa30-f10ac9d18a01-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Aug 12 23:54:01.779289 systemd[1]: Removed slice kubepods-burstable-pod4b4ecf9a_a078_4ab2_aa30_f10ac9d18a01.slice - libcontainer container 
kubepods-burstable-pod4b4ecf9a_a078_4ab2_aa30_f10ac9d18a01.slice. Aug 12 23:54:01.781937 systemd[1]: kubepods-burstable-pod4b4ecf9a_a078_4ab2_aa30_f10ac9d18a01.slice: Consumed 7.586s CPU time. Aug 12 23:54:01.784775 systemd[1]: Removed slice kubepods-besteffort-pod19671571_21af_4027_9427_acde95972777.slice - libcontainer container kubepods-besteffort-pod19671571_21af_4027_9427_acde95972777.slice. Aug 12 23:54:02.218565 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-21cf9b68e95fdef837ee3d9914b5fec7665e83b0efa7d3655b4a54318f5206b5-rootfs.mount: Deactivated successfully. Aug 12 23:54:02.218677 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d145981ccd7704669fb43330719da395e69f608546bcea038e07a2711b4550c8-rootfs.mount: Deactivated successfully. Aug 12 23:54:02.218729 systemd[1]: var-lib-kubelet-pods-19671571\x2d21af\x2d4027\x2d9427\x2dacde95972777-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dqtrcz.mount: Deactivated successfully. Aug 12 23:54:02.218781 systemd[1]: var-lib-kubelet-pods-4b4ecf9a\x2da078\x2d4ab2\x2daa30\x2df10ac9d18a01-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2drcqll.mount: Deactivated successfully. Aug 12 23:54:02.218857 systemd[1]: var-lib-kubelet-pods-4b4ecf9a\x2da078\x2d4ab2\x2daa30\x2df10ac9d18a01-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Aug 12 23:54:02.218909 systemd[1]: var-lib-kubelet-pods-4b4ecf9a\x2da078\x2d4ab2\x2daa30\x2df10ac9d18a01-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Aug 12 23:54:03.142456 sshd[4135]: pam_unix(sshd:session): session closed for user core Aug 12 23:54:03.153987 systemd[1]: sshd@24-10.0.0.6:22-10.0.0.1:40638.service: Deactivated successfully. Aug 12 23:54:03.156569 systemd[1]: session-25.scope: Deactivated successfully. Aug 12 23:54:03.156742 systemd[1]: session-25.scope: Consumed 1.165s CPU time. Aug 12 23:54:03.158251 systemd-logind[1417]: Session 25 logged out. 
Waiting for processes to exit. Aug 12 23:54:03.164143 systemd[1]: Started sshd@25-10.0.0.6:22-10.0.0.1:38202.service - OpenSSH per-connection server daemon (10.0.0.1:38202). Aug 12 23:54:03.165064 systemd-logind[1417]: Removed session 25. Aug 12 23:54:03.182406 kubelet[2466]: I0812 23:54:03.182284 2466 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="19671571-21af-4027-9427-acde95972777" path="/var/lib/kubelet/pods/19671571-21af-4027-9427-acde95972777/volumes" Aug 12 23:54:03.183227 kubelet[2466]: I0812 23:54:03.183142 2466 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4b4ecf9a-a078-4ab2-aa30-f10ac9d18a01" path="/var/lib/kubelet/pods/4b4ecf9a-a078-4ab2-aa30-f10ac9d18a01/volumes" Aug 12 23:54:03.206040 sshd[4296]: Accepted publickey for core from 10.0.0.1 port 38202 ssh2: RSA SHA256:xv2nBVgCAUDE9/psT+0gyR3NWqhWRcWqt2l4ADAtRXs Aug 12 23:54:03.208013 sshd[4296]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 12 23:54:03.212445 systemd-logind[1417]: New session 26 of user core. Aug 12 23:54:03.221034 systemd[1]: Started session-26.scope - Session 26 of User core. 
Aug 12 23:54:03.265244 kubelet[2466]: E0812 23:54:03.265175 2466 kubelet.go:2902] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Aug 12 23:54:04.523124 kubelet[2466]: I0812 23:54:04.522639 2466 setters.go:600] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-12T23:54:04Z","lastTransitionTime":"2025-08-12T23:54:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Aug 12 23:54:04.640434 sshd[4296]: pam_unix(sshd:session): session closed for user core Aug 12 23:54:04.649161 kubelet[2466]: E0812 23:54:04.648591 2466 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="4b4ecf9a-a078-4ab2-aa30-f10ac9d18a01" containerName="mount-cgroup" Aug 12 23:54:04.649161 kubelet[2466]: E0812 23:54:04.648624 2466 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="4b4ecf9a-a078-4ab2-aa30-f10ac9d18a01" containerName="apply-sysctl-overwrites" Aug 12 23:54:04.649161 kubelet[2466]: E0812 23:54:04.648632 2466 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="4b4ecf9a-a078-4ab2-aa30-f10ac9d18a01" containerName="clean-cilium-state" Aug 12 23:54:04.649161 kubelet[2466]: E0812 23:54:04.648639 2466 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="4b4ecf9a-a078-4ab2-aa30-f10ac9d18a01" containerName="cilium-agent" Aug 12 23:54:04.649161 kubelet[2466]: E0812 23:54:04.648645 2466 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="19671571-21af-4027-9427-acde95972777" containerName="cilium-operator" Aug 12 23:54:04.649161 kubelet[2466]: E0812 23:54:04.648651 2466 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="4b4ecf9a-a078-4ab2-aa30-f10ac9d18a01" containerName="mount-bpf-fs" Aug 12 
23:54:04.649161 kubelet[2466]: I0812 23:54:04.648676 2466 memory_manager.go:354] "RemoveStaleState removing state" podUID="4b4ecf9a-a078-4ab2-aa30-f10ac9d18a01" containerName="cilium-agent" Aug 12 23:54:04.649161 kubelet[2466]: I0812 23:54:04.648683 2466 memory_manager.go:354] "RemoveStaleState removing state" podUID="19671571-21af-4027-9427-acde95972777" containerName="cilium-operator" Aug 12 23:54:04.649269 systemd[1]: sshd@25-10.0.0.6:22-10.0.0.1:38202.service: Deactivated successfully. Aug 12 23:54:04.652573 systemd[1]: session-26.scope: Deactivated successfully. Aug 12 23:54:04.654258 systemd[1]: session-26.scope: Consumed 1.300s CPU time. Aug 12 23:54:04.656727 systemd-logind[1417]: Session 26 logged out. Waiting for processes to exit. Aug 12 23:54:04.667332 systemd[1]: Started sshd@26-10.0.0.6:22-10.0.0.1:38204.service - OpenSSH per-connection server daemon (10.0.0.1:38204). Aug 12 23:54:04.675299 systemd-logind[1417]: Removed session 26. Aug 12 23:54:04.683705 systemd[1]: Created slice kubepods-burstable-pod741ac7bd_444a_4c8c_b718_2dcf4e074d48.slice - libcontainer container kubepods-burstable-pod741ac7bd_444a_4c8c_b718_2dcf4e074d48.slice. Aug 12 23:54:04.716682 sshd[4309]: Accepted publickey for core from 10.0.0.1 port 38204 ssh2: RSA SHA256:xv2nBVgCAUDE9/psT+0gyR3NWqhWRcWqt2l4ADAtRXs Aug 12 23:54:04.718785 sshd[4309]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 12 23:54:04.723281 systemd-logind[1417]: New session 27 of user core. Aug 12 23:54:04.739063 systemd[1]: Started session-27.scope - Session 27 of User core. Aug 12 23:54:04.792805 sshd[4309]: pam_unix(sshd:session): session closed for user core Aug 12 23:54:04.803641 systemd[1]: sshd@26-10.0.0.6:22-10.0.0.1:38204.service: Deactivated successfully. Aug 12 23:54:04.805528 systemd[1]: session-27.scope: Deactivated successfully. Aug 12 23:54:04.807073 systemd-logind[1417]: Session 27 logged out. Waiting for processes to exit. 
Aug 12 23:54:04.818166 systemd[1]: Started sshd@27-10.0.0.6:22-10.0.0.1:38214.service - OpenSSH per-connection server daemon (10.0.0.1:38214).
Aug 12 23:54:04.819222 systemd-logind[1417]: Removed session 27.
Aug 12 23:54:04.824976 kubelet[2466]: I0812 23:54:04.824942 2466 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/741ac7bd-444a-4c8c-b718-2dcf4e074d48-host-proc-sys-kernel\") pod \"cilium-pdlfs\" (UID: \"741ac7bd-444a-4c8c-b718-2dcf4e074d48\") " pod="kube-system/cilium-pdlfs"
Aug 12 23:54:04.825203 kubelet[2466]: I0812 23:54:04.824983 2466 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/741ac7bd-444a-4c8c-b718-2dcf4e074d48-cilium-run\") pod \"cilium-pdlfs\" (UID: \"741ac7bd-444a-4c8c-b718-2dcf4e074d48\") " pod="kube-system/cilium-pdlfs"
Aug 12 23:54:04.825203 kubelet[2466]: I0812 23:54:04.825008 2466 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/741ac7bd-444a-4c8c-b718-2dcf4e074d48-bpf-maps\") pod \"cilium-pdlfs\" (UID: \"741ac7bd-444a-4c8c-b718-2dcf4e074d48\") " pod="kube-system/cilium-pdlfs"
Aug 12 23:54:04.825203 kubelet[2466]: I0812 23:54:04.825026 2466 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/741ac7bd-444a-4c8c-b718-2dcf4e074d48-xtables-lock\") pod \"cilium-pdlfs\" (UID: \"741ac7bd-444a-4c8c-b718-2dcf4e074d48\") " pod="kube-system/cilium-pdlfs"
Aug 12 23:54:04.825203 kubelet[2466]: I0812 23:54:04.825045 2466 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/741ac7bd-444a-4c8c-b718-2dcf4e074d48-host-proc-sys-net\") pod \"cilium-pdlfs\" (UID: \"741ac7bd-444a-4c8c-b718-2dcf4e074d48\") " pod="kube-system/cilium-pdlfs"
Aug 12 23:54:04.825203 kubelet[2466]: I0812 23:54:04.825061 2466 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/741ac7bd-444a-4c8c-b718-2dcf4e074d48-lib-modules\") pod \"cilium-pdlfs\" (UID: \"741ac7bd-444a-4c8c-b718-2dcf4e074d48\") " pod="kube-system/cilium-pdlfs"
Aug 12 23:54:04.825203 kubelet[2466]: I0812 23:54:04.825077 2466 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/741ac7bd-444a-4c8c-b718-2dcf4e074d48-clustermesh-secrets\") pod \"cilium-pdlfs\" (UID: \"741ac7bd-444a-4c8c-b718-2dcf4e074d48\") " pod="kube-system/cilium-pdlfs"
Aug 12 23:54:04.826458 kubelet[2466]: I0812 23:54:04.826425 2466 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-znzk4\" (UniqueName: \"kubernetes.io/projected/741ac7bd-444a-4c8c-b718-2dcf4e074d48-kube-api-access-znzk4\") pod \"cilium-pdlfs\" (UID: \"741ac7bd-444a-4c8c-b718-2dcf4e074d48\") " pod="kube-system/cilium-pdlfs"
Aug 12 23:54:04.826517 kubelet[2466]: I0812 23:54:04.826466 2466 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/741ac7bd-444a-4c8c-b718-2dcf4e074d48-hostproc\") pod \"cilium-pdlfs\" (UID: \"741ac7bd-444a-4c8c-b718-2dcf4e074d48\") " pod="kube-system/cilium-pdlfs"
Aug 12 23:54:04.826517 kubelet[2466]: I0812 23:54:04.826485 2466 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/741ac7bd-444a-4c8c-b718-2dcf4e074d48-cilium-config-path\") pod \"cilium-pdlfs\" (UID: \"741ac7bd-444a-4c8c-b718-2dcf4e074d48\") " pod="kube-system/cilium-pdlfs"
Aug 12 23:54:04.826517 kubelet[2466]: I0812 23:54:04.826506 2466 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/741ac7bd-444a-4c8c-b718-2dcf4e074d48-cilium-ipsec-secrets\") pod \"cilium-pdlfs\" (UID: \"741ac7bd-444a-4c8c-b718-2dcf4e074d48\") " pod="kube-system/cilium-pdlfs"
Aug 12 23:54:04.826628 kubelet[2466]: I0812 23:54:04.826521 2466 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/741ac7bd-444a-4c8c-b718-2dcf4e074d48-hubble-tls\") pod \"cilium-pdlfs\" (UID: \"741ac7bd-444a-4c8c-b718-2dcf4e074d48\") " pod="kube-system/cilium-pdlfs"
Aug 12 23:54:04.826628 kubelet[2466]: I0812 23:54:04.826572 2466 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/741ac7bd-444a-4c8c-b718-2dcf4e074d48-cilium-cgroup\") pod \"cilium-pdlfs\" (UID: \"741ac7bd-444a-4c8c-b718-2dcf4e074d48\") " pod="kube-system/cilium-pdlfs"
Aug 12 23:54:04.826628 kubelet[2466]: I0812 23:54:04.826591 2466 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/741ac7bd-444a-4c8c-b718-2dcf4e074d48-cni-path\") pod \"cilium-pdlfs\" (UID: \"741ac7bd-444a-4c8c-b718-2dcf4e074d48\") " pod="kube-system/cilium-pdlfs"
Aug 12 23:54:04.826628 kubelet[2466]: I0812 23:54:04.826607 2466 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/741ac7bd-444a-4c8c-b718-2dcf4e074d48-etc-cni-netd\") pod \"cilium-pdlfs\" (UID: \"741ac7bd-444a-4c8c-b718-2dcf4e074d48\") " pod="kube-system/cilium-pdlfs"
Aug 12 23:54:04.860266 sshd[4317]: Accepted publickey for core from 10.0.0.1 port 38214 ssh2: RSA SHA256:xv2nBVgCAUDE9/psT+0gyR3NWqhWRcWqt2l4ADAtRXs
Aug 12 23:54:04.862407 sshd[4317]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 12 23:54:04.868468 systemd-logind[1417]: New session 28 of user core.
Aug 12 23:54:04.876196 systemd[1]: Started session-28.scope - Session 28 of User core.
Aug 12 23:54:04.990245 kubelet[2466]: E0812 23:54:04.990198 2466 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 12 23:54:04.991173 containerd[1436]: time="2025-08-12T23:54:04.991132587Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-pdlfs,Uid:741ac7bd-444a-4c8c-b718-2dcf4e074d48,Namespace:kube-system,Attempt:0,}"
Aug 12 23:54:05.088459 containerd[1436]: time="2025-08-12T23:54:05.087224231Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Aug 12 23:54:05.088459 containerd[1436]: time="2025-08-12T23:54:05.087938747Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Aug 12 23:54:05.088459 containerd[1436]: time="2025-08-12T23:54:05.087953827Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 12 23:54:05.088459 containerd[1436]: time="2025-08-12T23:54:05.088264865Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 12 23:54:05.114055 systemd[1]: Started cri-containerd-77d102d7d252bffb793dd9d283796d729e507f371cdb94c118362dd45fa4a418.scope - libcontainer container 77d102d7d252bffb793dd9d283796d729e507f371cdb94c118362dd45fa4a418.
Aug 12 23:54:05.150194 containerd[1436]: time="2025-08-12T23:54:05.150124774Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-pdlfs,Uid:741ac7bd-444a-4c8c-b718-2dcf4e074d48,Namespace:kube-system,Attempt:0,} returns sandbox id \"77d102d7d252bffb793dd9d283796d729e507f371cdb94c118362dd45fa4a418\""
Aug 12 23:54:05.151119 kubelet[2466]: E0812 23:54:05.151018 2466 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 12 23:54:05.153835 containerd[1436]: time="2025-08-12T23:54:05.153173677Z" level=info msg="CreateContainer within sandbox \"77d102d7d252bffb793dd9d283796d729e507f371cdb94c118362dd45fa4a418\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Aug 12 23:54:05.209099 containerd[1436]: time="2025-08-12T23:54:05.209040458Z" level=info msg="CreateContainer within sandbox \"77d102d7d252bffb793dd9d283796d729e507f371cdb94c118362dd45fa4a418\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"09425844d6991d1170ac62d126b7dd209964eaf10824040732179f46467eb3a1\""
Aug 12 23:54:05.209694 containerd[1436]: time="2025-08-12T23:54:05.209656095Z" level=info msg="StartContainer for \"09425844d6991d1170ac62d126b7dd209964eaf10824040732179f46467eb3a1\""
Aug 12 23:54:05.243103 systemd[1]: Started cri-containerd-09425844d6991d1170ac62d126b7dd209964eaf10824040732179f46467eb3a1.scope - libcontainer container 09425844d6991d1170ac62d126b7dd209964eaf10824040732179f46467eb3a1.
Aug 12 23:54:05.286214 containerd[1436]: time="2025-08-12T23:54:05.286160645Z" level=info msg="StartContainer for \"09425844d6991d1170ac62d126b7dd209964eaf10824040732179f46467eb3a1\" returns successfully"
Aug 12 23:54:05.297859 systemd[1]: cri-containerd-09425844d6991d1170ac62d126b7dd209964eaf10824040732179f46467eb3a1.scope: Deactivated successfully.
Aug 12 23:54:05.349042 containerd[1436]: time="2025-08-12T23:54:05.348899308Z" level=info msg="shim disconnected" id=09425844d6991d1170ac62d126b7dd209964eaf10824040732179f46467eb3a1 namespace=k8s.io
Aug 12 23:54:05.349603 containerd[1436]: time="2025-08-12T23:54:05.349419546Z" level=warning msg="cleaning up after shim disconnected" id=09425844d6991d1170ac62d126b7dd209964eaf10824040732179f46467eb3a1 namespace=k8s.io
Aug 12 23:54:05.349603 containerd[1436]: time="2025-08-12T23:54:05.349442666Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Aug 12 23:54:05.492081 kubelet[2466]: E0812 23:54:05.491648 2466 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 12 23:54:05.493897 containerd[1436]: time="2025-08-12T23:54:05.493755612Z" level=info msg="CreateContainer within sandbox \"77d102d7d252bffb793dd9d283796d729e507f371cdb94c118362dd45fa4a418\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Aug 12 23:54:05.513255 containerd[1436]: time="2025-08-12T23:54:05.513180988Z" level=info msg="CreateContainer within sandbox \"77d102d7d252bffb793dd9d283796d729e507f371cdb94c118362dd45fa4a418\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"d0941b1e5f8d4cde84ec8ab26f7964df15d095011008c2a65e37f1538617a149\""
Aug 12 23:54:05.514999 containerd[1436]: time="2025-08-12T23:54:05.513957424Z" level=info msg="StartContainer for \"d0941b1e5f8d4cde84ec8ab26f7964df15d095011008c2a65e37f1538617a149\""
Aug 12 23:54:05.564041 systemd[1]: Started cri-containerd-d0941b1e5f8d4cde84ec8ab26f7964df15d095011008c2a65e37f1538617a149.scope - libcontainer container d0941b1e5f8d4cde84ec8ab26f7964df15d095011008c2a65e37f1538617a149.
Aug 12 23:54:05.588861 containerd[1436]: time="2025-08-12T23:54:05.588770343Z" level=info msg="StartContainer for \"d0941b1e5f8d4cde84ec8ab26f7964df15d095011008c2a65e37f1538617a149\" returns successfully"
Aug 12 23:54:05.597836 systemd[1]: cri-containerd-d0941b1e5f8d4cde84ec8ab26f7964df15d095011008c2a65e37f1538617a149.scope: Deactivated successfully.
Aug 12 23:54:05.627053 containerd[1436]: time="2025-08-12T23:54:05.626903619Z" level=info msg="shim disconnected" id=d0941b1e5f8d4cde84ec8ab26f7964df15d095011008c2a65e37f1538617a149 namespace=k8s.io
Aug 12 23:54:05.627053 containerd[1436]: time="2025-08-12T23:54:05.626966618Z" level=warning msg="cleaning up after shim disconnected" id=d0941b1e5f8d4cde84ec8ab26f7964df15d095011008c2a65e37f1538617a149 namespace=k8s.io
Aug 12 23:54:05.627053 containerd[1436]: time="2025-08-12T23:54:05.626977418Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Aug 12 23:54:06.498052 kubelet[2466]: E0812 23:54:06.496348 2466 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 12 23:54:06.502233 containerd[1436]: time="2025-08-12T23:54:06.502179292Z" level=info msg="CreateContainer within sandbox \"77d102d7d252bffb793dd9d283796d729e507f371cdb94c118362dd45fa4a418\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Aug 12 23:54:06.528650 containerd[1436]: time="2025-08-12T23:54:06.528559075Z" level=info msg="CreateContainer within sandbox \"77d102d7d252bffb793dd9d283796d729e507f371cdb94c118362dd45fa4a418\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"e07e062eeee114e4f9fe3455b5b5267de4089ef75b8fd8f293252307326be2a6\""
Aug 12 23:54:06.530058 containerd[1436]: time="2025-08-12T23:54:06.530021307Z" level=info msg="StartContainer for \"e07e062eeee114e4f9fe3455b5b5267de4089ef75b8fd8f293252307326be2a6\""
Aug 12 23:54:06.579027 systemd[1]: Started cri-containerd-e07e062eeee114e4f9fe3455b5b5267de4089ef75b8fd8f293252307326be2a6.scope - libcontainer container e07e062eeee114e4f9fe3455b5b5267de4089ef75b8fd8f293252307326be2a6.
Aug 12 23:54:06.609466 containerd[1436]: time="2025-08-12T23:54:06.609420255Z" level=info msg="StartContainer for \"e07e062eeee114e4f9fe3455b5b5267de4089ef75b8fd8f293252307326be2a6\" returns successfully"
Aug 12 23:54:06.611599 systemd[1]: cri-containerd-e07e062eeee114e4f9fe3455b5b5267de4089ef75b8fd8f293252307326be2a6.scope: Deactivated successfully.
Aug 12 23:54:06.644692 containerd[1436]: time="2025-08-12T23:54:06.644600553Z" level=info msg="shim disconnected" id=e07e062eeee114e4f9fe3455b5b5267de4089ef75b8fd8f293252307326be2a6 namespace=k8s.io
Aug 12 23:54:06.644692 containerd[1436]: time="2025-08-12T23:54:06.644681312Z" level=warning msg="cleaning up after shim disconnected" id=e07e062eeee114e4f9fe3455b5b5267de4089ef75b8fd8f293252307326be2a6 namespace=k8s.io
Aug 12 23:54:06.644692 containerd[1436]: time="2025-08-12T23:54:06.644690832Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Aug 12 23:54:06.655598 containerd[1436]: time="2025-08-12T23:54:06.655544816Z" level=warning msg="cleanup warnings time=\"2025-08-12T23:54:06Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Aug 12 23:54:06.932029 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e07e062eeee114e4f9fe3455b5b5267de4089ef75b8fd8f293252307326be2a6-rootfs.mount: Deactivated successfully.
Aug 12 23:54:07.505011 kubelet[2466]: E0812 23:54:07.504947 2466 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 12 23:54:07.510887 containerd[1436]: time="2025-08-12T23:54:07.508995423Z" level=info msg="CreateContainer within sandbox \"77d102d7d252bffb793dd9d283796d729e507f371cdb94c118362dd45fa4a418\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Aug 12 23:54:07.539460 containerd[1436]: time="2025-08-12T23:54:07.539391910Z" level=info msg="CreateContainer within sandbox \"77d102d7d252bffb793dd9d283796d729e507f371cdb94c118362dd45fa4a418\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"73e044b1f1debdc411b01aab5f072a09217e00eefe931125b88181426da7fb79\""
Aug 12 23:54:07.540699 containerd[1436]: time="2025-08-12T23:54:07.540654544Z" level=info msg="StartContainer for \"73e044b1f1debdc411b01aab5f072a09217e00eefe931125b88181426da7fb79\""
Aug 12 23:54:07.576156 systemd[1]: Started cri-containerd-73e044b1f1debdc411b01aab5f072a09217e00eefe931125b88181426da7fb79.scope - libcontainer container 73e044b1f1debdc411b01aab5f072a09217e00eefe931125b88181426da7fb79.
Aug 12 23:54:07.607715 systemd[1]: cri-containerd-73e044b1f1debdc411b01aab5f072a09217e00eefe931125b88181426da7fb79.scope: Deactivated successfully.
Aug 12 23:54:07.615215 containerd[1436]: time="2025-08-12T23:54:07.615059410Z" level=info msg="StartContainer for \"73e044b1f1debdc411b01aab5f072a09217e00eefe931125b88181426da7fb79\" returns successfully"
Aug 12 23:54:07.640042 containerd[1436]: time="2025-08-12T23:54:07.639962764Z" level=info msg="shim disconnected" id=73e044b1f1debdc411b01aab5f072a09217e00eefe931125b88181426da7fb79 namespace=k8s.io
Aug 12 23:54:07.640430 containerd[1436]: time="2025-08-12T23:54:07.640257603Z" level=warning msg="cleaning up after shim disconnected" id=73e044b1f1debdc411b01aab5f072a09217e00eefe931125b88181426da7fb79 namespace=k8s.io
Aug 12 23:54:07.640430 containerd[1436]: time="2025-08-12T23:54:07.640273523Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Aug 12 23:54:07.932039 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-73e044b1f1debdc411b01aab5f072a09217e00eefe931125b88181426da7fb79-rootfs.mount: Deactivated successfully.
Aug 12 23:54:08.266355 kubelet[2466]: E0812 23:54:08.266303 2466 kubelet.go:2902] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Aug 12 23:54:08.509417 kubelet[2466]: E0812 23:54:08.509360 2466 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 12 23:54:08.529727 containerd[1436]: time="2025-08-12T23:54:08.529606892Z" level=info msg="CreateContainer within sandbox \"77d102d7d252bffb793dd9d283796d729e507f371cdb94c118362dd45fa4a418\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Aug 12 23:54:08.545592 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount340468025.mount: Deactivated successfully.
Aug 12 23:54:08.554559 containerd[1436]: time="2025-08-12T23:54:08.554468771Z" level=info msg="CreateContainer within sandbox \"77d102d7d252bffb793dd9d283796d729e507f371cdb94c118362dd45fa4a418\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"910004b8bc590fc80fcacf426f8a34b5f95e80547a958c2f4dac45dc3d44d1a5\""
Aug 12 23:54:08.555848 containerd[1436]: time="2025-08-12T23:54:08.555574325Z" level=info msg="StartContainer for \"910004b8bc590fc80fcacf426f8a34b5f95e80547a958c2f4dac45dc3d44d1a5\""
Aug 12 23:54:08.594070 systemd[1]: Started cri-containerd-910004b8bc590fc80fcacf426f8a34b5f95e80547a958c2f4dac45dc3d44d1a5.scope - libcontainer container 910004b8bc590fc80fcacf426f8a34b5f95e80547a958c2f4dac45dc3d44d1a5.
Aug 12 23:54:08.632496 containerd[1436]: time="2025-08-12T23:54:08.632359230Z" level=info msg="StartContainer for \"910004b8bc590fc80fcacf426f8a34b5f95e80547a958c2f4dac45dc3d44d1a5\" returns successfully"
Aug 12 23:54:09.011841 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce))
Aug 12 23:54:09.179648 kubelet[2466]: E0812 23:54:09.179612 2466 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 12 23:54:09.514042 kubelet[2466]: E0812 23:54:09.514007 2466 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 12 23:54:09.532345 kubelet[2466]: I0812 23:54:09.532286 2466 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-pdlfs" podStartSLOduration=5.532269388 podStartE2EDuration="5.532269388s" podCreationTimestamp="2025-08-12 23:54:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-12 23:54:09.532095708 +0000 UTC m=+96.471780208" watchObservedRunningTime="2025-08-12 23:54:09.532269388 +0000 UTC m=+96.471953848"
Aug 12 23:54:10.992430 kubelet[2466]: E0812 23:54:10.991859 2466 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 12 23:54:11.180689 kubelet[2466]: E0812 23:54:11.180403 2466 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-7c65d6cfc9-pcgq7" podUID="75a51a54-d8e8-4377-985e-c8a8b2f43520"
Aug 12 23:54:11.320452 systemd[1]: run-containerd-runc-k8s.io-910004b8bc590fc80fcacf426f8a34b5f95e80547a958c2f4dac45dc3d44d1a5-runc.J89gOd.mount: Deactivated successfully.
Aug 12 23:54:12.200868 systemd-networkd[1379]: lxc_health: Link UP
Aug 12 23:54:12.210262 systemd-networkd[1379]: lxc_health: Gained carrier
Aug 12 23:54:12.993810 kubelet[2466]: E0812 23:54:12.993747 2466 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 12 23:54:13.180105 kubelet[2466]: E0812 23:54:13.180045 2466 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-7c65d6cfc9-pcgq7" podUID="75a51a54-d8e8-4377-985e-c8a8b2f43520"
Aug 12 23:54:13.525854 kubelet[2466]: E0812 23:54:13.525159 2466 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 12 23:54:13.966946 systemd-networkd[1379]: lxc_health: Gained IPv6LL
Aug 12 23:54:14.526708 kubelet[2466]: E0812 23:54:14.526661 2466 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 12 23:54:15.180335 kubelet[2466]: E0812 23:54:15.179919 2466 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 12 23:54:17.811044 sshd[4317]: pam_unix(sshd:session): session closed for user core
Aug 12 23:54:17.815358 systemd[1]: sshd@27-10.0.0.6:22-10.0.0.1:38214.service: Deactivated successfully.
Aug 12 23:54:17.818992 systemd[1]: session-28.scope: Deactivated successfully.
Aug 12 23:54:17.821733 systemd-logind[1417]: Session 28 logged out. Waiting for processes to exit.
Aug 12 23:54:17.822879 systemd-logind[1417]: Removed session 28.