Jul 11 00:10:48.886250 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Jul 11 00:10:48.886269 kernel: Linux version 6.6.96-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Thu Jul 10 22:41:52 -00 2025
Jul 11 00:10:48.886279 kernel: KASLR enabled
Jul 11 00:10:48.886284 kernel: efi: EFI v2.7 by EDK II
Jul 11 00:10:48.886290 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdba86018 ACPI 2.0=0xd9710018 RNG=0xd971e498 MEMRESERVE=0xd9b43d18
Jul 11 00:10:48.886296 kernel: random: crng init done
Jul 11 00:10:48.886303 kernel: ACPI: Early table checksum verification disabled
Jul 11 00:10:48.886309 kernel: ACPI: RSDP 0x00000000D9710018 000024 (v02 BOCHS )
Jul 11 00:10:48.886315 kernel: ACPI: XSDT 0x00000000D971FE98 000064 (v01 BOCHS BXPC 00000001 01000013)
Jul 11 00:10:48.886322 kernel: ACPI: FACP 0x00000000D971FA98 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Jul 11 00:10:48.886328 kernel: ACPI: DSDT 0x00000000D9717518 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jul 11 00:10:48.886334 kernel: ACPI: APIC 0x00000000D971FC18 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Jul 11 00:10:48.886340 kernel: ACPI: PPTT 0x00000000D971D898 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jul 11 00:10:48.886346 kernel: ACPI: GTDT 0x00000000D971E818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jul 11 00:10:48.886439 kernel: ACPI: MCFG 0x00000000D971E918 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 11 00:10:48.886449 kernel: ACPI: SPCR 0x00000000D971FF98 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jul 11 00:10:48.886456 kernel: ACPI: DBG2 0x00000000D971E418 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Jul 11 00:10:48.886462 kernel: ACPI: IORT 0x00000000D971E718 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jul 11 00:10:48.886468 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
Jul 11 00:10:48.886474 kernel: NUMA: Failed to initialise from firmware
Jul 11 00:10:48.886481 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
Jul 11 00:10:48.886487 kernel: NUMA: NODE_DATA [mem 0xdc958800-0xdc95dfff]
Jul 11 00:10:48.886494 kernel: Zone ranges:
Jul 11 00:10:48.886500 kernel:   DMA      [mem 0x0000000040000000-0x00000000dcffffff]
Jul 11 00:10:48.886507 kernel:   DMA32    empty
Jul 11 00:10:48.886514 kernel:   Normal   empty
Jul 11 00:10:48.886520 kernel: Movable zone start for each node
Jul 11 00:10:48.886526 kernel: Early memory node ranges
Jul 11 00:10:48.886533 kernel:   node   0: [mem 0x0000000040000000-0x00000000d976ffff]
Jul 11 00:10:48.886539 kernel:   node   0: [mem 0x00000000d9770000-0x00000000d9b3ffff]
Jul 11 00:10:48.886545 kernel:   node   0: [mem 0x00000000d9b40000-0x00000000dce1ffff]
Jul 11 00:10:48.886552 kernel:   node   0: [mem 0x00000000dce20000-0x00000000dceaffff]
Jul 11 00:10:48.886558 kernel:   node   0: [mem 0x00000000dceb0000-0x00000000dcebffff]
Jul 11 00:10:48.886565 kernel:   node   0: [mem 0x00000000dcec0000-0x00000000dcfdffff]
Jul 11 00:10:48.886571 kernel:   node   0: [mem 0x00000000dcfe0000-0x00000000dcffffff]
Jul 11 00:10:48.886577 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
Jul 11 00:10:48.886584 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
Jul 11 00:10:48.886591 kernel: psci: probing for conduit method from ACPI.
Jul 11 00:10:48.886597 kernel: psci: PSCIv1.1 detected in firmware.
Jul 11 00:10:48.886604 kernel: psci: Using standard PSCI v0.2 function IDs
Jul 11 00:10:48.886613 kernel: psci: Trusted OS migration not required
Jul 11 00:10:48.886619 kernel: psci: SMC Calling Convention v1.1
Jul 11 00:10:48.886626 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Jul 11 00:10:48.886634 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976
Jul 11 00:10:48.886641 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096
Jul 11 00:10:48.886648 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
Jul 11 00:10:48.886655 kernel: Detected PIPT I-cache on CPU0
Jul 11 00:10:48.886662 kernel: CPU features: detected: GIC system register CPU interface
Jul 11 00:10:48.886669 kernel: CPU features: detected: Hardware dirty bit management
Jul 11 00:10:48.886676 kernel: CPU features: detected: Spectre-v4
Jul 11 00:10:48.886682 kernel: CPU features: detected: Spectre-BHB
Jul 11 00:10:48.886689 kernel: CPU features: kernel page table isolation forced ON by KASLR
Jul 11 00:10:48.886696 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Jul 11 00:10:48.886705 kernel: CPU features: detected: ARM erratum 1418040
Jul 11 00:10:48.886712 kernel: CPU features: detected: SSBS not fully self-synchronizing
Jul 11 00:10:48.886719 kernel: alternatives: applying boot alternatives
Jul 11 00:10:48.886727 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=1479f76954ab5eb3c0ce800eb2a80ad04b273ff773a5af5c1fe82fb8feef2990
Jul 11 00:10:48.886734 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jul 11 00:10:48.886742 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jul 11 00:10:48.886759 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jul 11 00:10:48.886766 kernel: Fallback order for Node 0: 0
Jul 11 00:10:48.886773 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024
Jul 11 00:10:48.886780 kernel: Policy zone: DMA
Jul 11 00:10:48.886788 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jul 11 00:10:48.886805 kernel: software IO TLB: area num 4.
Jul 11 00:10:48.886816 kernel: software IO TLB: mapped [mem 0x00000000d2e00000-0x00000000d6e00000] (64MB)
Jul 11 00:10:48.886823 kernel: Memory: 2386404K/2572288K available (10304K kernel code, 2186K rwdata, 8108K rodata, 39424K init, 897K bss, 185884K reserved, 0K cma-reserved)
Jul 11 00:10:48.886831 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Jul 11 00:10:48.886838 kernel: rcu: Preemptible hierarchical RCU implementation.
Jul 11 00:10:48.886845 kernel: rcu: RCU event tracing is enabled.
Jul 11 00:10:48.886853 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Jul 11 00:10:48.886860 kernel: Trampoline variant of Tasks RCU enabled.
Jul 11 00:10:48.886867 kernel: Tracing variant of Tasks RCU enabled.
Jul 11 00:10:48.886876 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jul 11 00:10:48.886883 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Jul 11 00:10:48.886890 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Jul 11 00:10:48.886898 kernel: GICv3: 256 SPIs implemented
Jul 11 00:10:48.886907 kernel: GICv3: 0 Extended SPIs implemented
Jul 11 00:10:48.886916 kernel: Root IRQ handler: gic_handle_irq
Jul 11 00:10:48.886925 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Jul 11 00:10:48.886932 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Jul 11 00:10:48.886939 kernel: ITS [mem 0x08080000-0x0809ffff]
Jul 11 00:10:48.886946 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400c0000 (indirect, esz 8, psz 64K, shr 1)
Jul 11 00:10:48.886953 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400d0000 (flat, esz 8, psz 64K, shr 1)
Jul 11 00:10:48.886960 kernel: GICv3: using LPI property table @0x00000000400f0000
Jul 11 00:10:48.886967 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040100000
Jul 11 00:10:48.886976 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jul 11 00:10:48.886984 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 11 00:10:48.886991 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Jul 11 00:10:48.886998 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Jul 11 00:10:48.887005 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Jul 11 00:10:48.887012 kernel: arm-pv: using stolen time PV
Jul 11 00:10:48.887038 kernel: Console: colour dummy device 80x25
Jul 11 00:10:48.887046 kernel: ACPI: Core revision 20230628
Jul 11 00:10:48.887056 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Jul 11 00:10:48.887062 kernel: pid_max: default: 32768 minimum: 301
Jul 11 00:10:48.887069 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jul 11 00:10:48.887077 kernel: landlock: Up and running.
Jul 11 00:10:48.887084 kernel: SELinux:  Initializing.
Jul 11 00:10:48.887091 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jul 11 00:10:48.887098 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jul 11 00:10:48.887105 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jul 11 00:10:48.887112 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jul 11 00:10:48.887118 kernel: rcu: Hierarchical SRCU implementation.
Jul 11 00:10:48.887125 kernel: rcu: Max phase no-delay instances is 400.
Jul 11 00:10:48.887132 kernel: Platform MSI: ITS@0x8080000 domain created
Jul 11 00:10:48.887140 kernel: PCI/MSI: ITS@0x8080000 domain created
Jul 11 00:10:48.887147 kernel: Remapping and enabling EFI services.
Jul 11 00:10:48.887154 kernel: smp: Bringing up secondary CPUs ...
Jul 11 00:10:48.887161 kernel: Detected PIPT I-cache on CPU1
Jul 11 00:10:48.887168 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Jul 11 00:10:48.887175 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040110000
Jul 11 00:10:48.887181 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 11 00:10:48.887188 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Jul 11 00:10:48.887195 kernel: Detected PIPT I-cache on CPU2
Jul 11 00:10:48.887202 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
Jul 11 00:10:48.887210 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040120000
Jul 11 00:10:48.887217 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 11 00:10:48.887228 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
Jul 11 00:10:48.887236 kernel: Detected PIPT I-cache on CPU3
Jul 11 00:10:48.887244 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
Jul 11 00:10:48.887251 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040130000
Jul 11 00:10:48.887258 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 11 00:10:48.887265 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
Jul 11 00:10:48.887272 kernel: smp: Brought up 1 node, 4 CPUs
Jul 11 00:10:48.887281 kernel: SMP: Total of 4 processors activated.
Jul 11 00:10:48.887288 kernel: CPU features: detected: 32-bit EL0 Support
Jul 11 00:10:48.887295 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Jul 11 00:10:48.887302 kernel: CPU features: detected: Common not Private translations
Jul 11 00:10:48.887309 kernel: CPU features: detected: CRC32 instructions
Jul 11 00:10:48.887316 kernel: CPU features: detected: Enhanced Virtualization Traps
Jul 11 00:10:48.887323 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Jul 11 00:10:48.887331 kernel: CPU features: detected: LSE atomic instructions
Jul 11 00:10:48.887339 kernel: CPU features: detected: Privileged Access Never
Jul 11 00:10:48.887346 kernel: CPU features: detected: RAS Extension Support
Jul 11 00:10:48.887360 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Jul 11 00:10:48.887368 kernel: CPU: All CPU(s) started at EL1
Jul 11 00:10:48.887375 kernel: alternatives: applying system-wide alternatives
Jul 11 00:10:48.887382 kernel: devtmpfs: initialized
Jul 11 00:10:48.887389 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jul 11 00:10:48.887397 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Jul 11 00:10:48.887404 kernel: pinctrl core: initialized pinctrl subsystem
Jul 11 00:10:48.887413 kernel: SMBIOS 3.0.0 present.
Jul 11 00:10:48.887420 kernel: DMI: QEMU KVM Virtual Machine, BIOS edk2-20230524-3.fc38 05/24/2023
Jul 11 00:10:48.887427 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jul 11 00:10:48.887434 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Jul 11 00:10:48.887441 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Jul 11 00:10:48.887449 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Jul 11 00:10:48.887456 kernel: audit: initializing netlink subsys (disabled)
Jul 11 00:10:48.887463 kernel: audit: type=2000 audit(0.024:1): state=initialized audit_enabled=0 res=1
Jul 11 00:10:48.887471 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jul 11 00:10:48.887479 kernel: cpuidle: using governor menu
Jul 11 00:10:48.887487 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Jul 11 00:10:48.887494 kernel: ASID allocator initialised with 32768 entries
Jul 11 00:10:48.887501 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jul 11 00:10:48.887508 kernel: Serial: AMBA PL011 UART driver
Jul 11 00:10:48.887516 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Jul 11 00:10:48.887523 kernel: Modules: 0 pages in range for non-PLT usage
Jul 11 00:10:48.887530 kernel: Modules: 509008 pages in range for PLT usage
Jul 11 00:10:48.887537 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jul 11 00:10:48.887545 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Jul 11 00:10:48.887553 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Jul 11 00:10:48.887560 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Jul 11 00:10:48.887567 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jul 11 00:10:48.887574 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Jul 11 00:10:48.887581 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Jul 11 00:10:48.887589 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Jul 11 00:10:48.887596 kernel: ACPI: Added _OSI(Module Device)
Jul 11 00:10:48.887609 kernel: ACPI: Added _OSI(Processor Device)
Jul 11 00:10:48.887617 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jul 11 00:10:48.887624 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jul 11 00:10:48.887632 kernel: ACPI: Interpreter enabled
Jul 11 00:10:48.887639 kernel: ACPI: Using GIC for interrupt routing
Jul 11 00:10:48.887646 kernel: ACPI: MCFG table detected, 1 entries
Jul 11 00:10:48.887653 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Jul 11 00:10:48.887660 kernel: printk: console [ttyAMA0] enabled
Jul 11 00:10:48.887668 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jul 11 00:10:48.887800 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jul 11 00:10:48.887875 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Jul 11 00:10:48.887938 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Jul 11 00:10:48.888001 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Jul 11 00:10:48.888076 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Jul 11 00:10:48.888086 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Jul 11 00:10:48.888093 kernel: PCI host bridge to bus 0000:00
Jul 11 00:10:48.888160 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Jul 11 00:10:48.888223 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Jul 11 00:10:48.888282 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Jul 11 00:10:48.888340 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jul 11 00:10:48.888455 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000
Jul 11 00:10:48.888532 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00
Jul 11 00:10:48.888598 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f]
Jul 11 00:10:48.888665 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff]
Jul 11 00:10:48.888730 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref]
Jul 11 00:10:48.888806 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref]
Jul 11 00:10:48.888871 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff]
Jul 11 00:10:48.888935 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f]
Jul 11 00:10:48.888994 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Jul 11 00:10:48.889049 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Jul 11 00:10:48.889109 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Jul 11 00:10:48.889119 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Jul 11 00:10:48.889126 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Jul 11 00:10:48.889133 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Jul 11 00:10:48.889141 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Jul 11 00:10:48.889148 kernel: iommu: Default domain type: Translated
Jul 11 00:10:48.889155 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Jul 11 00:10:48.889162 kernel: efivars: Registered efivars operations
Jul 11 00:10:48.889169 kernel: vgaarb: loaded
Jul 11 00:10:48.889178 kernel: clocksource: Switched to clocksource arch_sys_counter
Jul 11 00:10:48.889185 kernel: VFS: Disk quotas dquot_6.6.0
Jul 11 00:10:48.889193 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jul 11 00:10:48.889200 kernel: pnp: PnP ACPI init
Jul 11 00:10:48.889268 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Jul 11 00:10:48.889278 kernel: pnp: PnP ACPI: found 1 devices
Jul 11 00:10:48.889285 kernel: NET: Registered PF_INET protocol family
Jul 11 00:10:48.889293 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jul 11 00:10:48.889302 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jul 11 00:10:48.889309 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jul 11 00:10:48.889317 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jul 11 00:10:48.889324 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jul 11 00:10:48.889331 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jul 11 00:10:48.889339 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jul 11 00:10:48.889346 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jul 11 00:10:48.889379 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jul 11 00:10:48.889387 kernel: PCI: CLS 0 bytes, default 64
Jul 11 00:10:48.889397 kernel: kvm [1]: HYP mode not available
Jul 11 00:10:48.889404 kernel: Initialise system trusted keyrings
Jul 11 00:10:48.889412 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jul 11 00:10:48.889419 kernel: Key type asymmetric registered
Jul 11 00:10:48.889426 kernel: Asymmetric key parser 'x509' registered
Jul 11 00:10:48.889433 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Jul 11 00:10:48.889440 kernel: io scheduler mq-deadline registered
Jul 11 00:10:48.889448 kernel: io scheduler kyber registered
Jul 11 00:10:48.889455 kernel: io scheduler bfq registered
Jul 11 00:10:48.889464 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Jul 11 00:10:48.889471 kernel: ACPI: button: Power Button [PWRB]
Jul 11 00:10:48.889479 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Jul 11 00:10:48.889554 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
Jul 11 00:10:48.889564 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jul 11 00:10:48.889571 kernel: thunder_xcv, ver 1.0
Jul 11 00:10:48.889578 kernel: thunder_bgx, ver 1.0
Jul 11 00:10:48.889585 kernel: nicpf, ver 1.0
Jul 11 00:10:48.889593 kernel: nicvf, ver 1.0
Jul 11 00:10:48.889669 kernel: rtc-efi rtc-efi.0: registered as rtc0
Jul 11 00:10:48.889744 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-07-11T00:10:48 UTC (1752192648)
Jul 11 00:10:48.889763 kernel: hid: raw HID events driver (C) Jiri Kosina
Jul 11 00:10:48.889771 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available
Jul 11 00:10:48.889779 kernel: watchdog: Delayed init of the lockup detector failed: -19
Jul 11 00:10:48.889786 kernel: watchdog: Hard watchdog permanently disabled
Jul 11 00:10:48.889793 kernel: NET: Registered PF_INET6 protocol family
Jul 11 00:10:48.889801 kernel: Segment Routing with IPv6
Jul 11 00:10:48.889810 kernel: In-situ OAM (IOAM) with IPv6
Jul 11 00:10:48.889817 kernel: NET: Registered PF_PACKET protocol family
Jul 11 00:10:48.889824 kernel: Key type dns_resolver registered
Jul 11 00:10:48.889832 kernel: registered taskstats version 1
Jul 11 00:10:48.889839 kernel: Loading compiled-in X.509 certificates
Jul 11 00:10:48.889846 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.96-flatcar: 9d58afa0c1753353480d5539f26f662c9ce000cb'
Jul 11 00:10:48.889853 kernel: Key type .fscrypt registered
Jul 11 00:10:48.889860 kernel: Key type fscrypt-provisioning registered
Jul 11 00:10:48.889867 kernel: ima: No TPM chip found, activating TPM-bypass!
Jul 11 00:10:48.889876 kernel: ima: Allocated hash algorithm: sha1
Jul 11 00:10:48.889883 kernel: ima: No architecture policies found
Jul 11 00:10:48.889890 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Jul 11 00:10:48.889897 kernel: clk: Disabling unused clocks
Jul 11 00:10:48.889905 kernel: Freeing unused kernel memory: 39424K
Jul 11 00:10:48.889912 kernel: Run /init as init process
Jul 11 00:10:48.889919 kernel:   with arguments:
Jul 11 00:10:48.889926 kernel:     /init
Jul 11 00:10:48.889933 kernel:   with environment:
Jul 11 00:10:48.889941 kernel:     HOME=/
Jul 11 00:10:48.889948 kernel:     TERM=linux
Jul 11 00:10:48.889955 kernel:     BOOT_IMAGE=/flatcar/vmlinuz-a
Jul 11 00:10:48.889964 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jul 11 00:10:48.889973 systemd[1]: Detected virtualization kvm.
Jul 11 00:10:48.889980 systemd[1]: Detected architecture arm64.
Jul 11 00:10:48.889988 systemd[1]: Running in initrd.
Jul 11 00:10:48.889997 systemd[1]: No hostname configured, using default hostname.
Jul 11 00:10:48.890004 systemd[1]: Hostname set to .
Jul 11 00:10:48.890012 systemd[1]: Initializing machine ID from VM UUID.
Jul 11 00:10:48.890020 systemd[1]: Queued start job for default target initrd.target.
Jul 11 00:10:48.890027 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 11 00:10:48.890035 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 11 00:10:48.890043 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jul 11 00:10:48.890051 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jul 11 00:10:48.890060 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jul 11 00:10:48.890068 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jul 11 00:10:48.890077 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jul 11 00:10:48.890085 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jul 11 00:10:48.890093 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 11 00:10:48.890101 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jul 11 00:10:48.890108 systemd[1]: Reached target paths.target - Path Units.
Jul 11 00:10:48.890117 systemd[1]: Reached target slices.target - Slice Units.
Jul 11 00:10:48.890125 systemd[1]: Reached target swap.target - Swaps.
Jul 11 00:10:48.890132 systemd[1]: Reached target timers.target - Timer Units.
Jul 11 00:10:48.890140 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jul 11 00:10:48.890147 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jul 11 00:10:48.890155 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jul 11 00:10:48.890164 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jul 11 00:10:48.890172 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jul 11 00:10:48.890179 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jul 11 00:10:48.890188 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 11 00:10:48.890196 systemd[1]: Reached target sockets.target - Socket Units.
Jul 11 00:10:48.890204 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jul 11 00:10:48.890212 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jul 11 00:10:48.890221 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jul 11 00:10:48.890230 systemd[1]: Starting systemd-fsck-usr.service...
Jul 11 00:10:48.890240 systemd[1]: Starting systemd-journald.service - Journal Service...
Jul 11 00:10:48.890248 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jul 11 00:10:48.890257 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 11 00:10:48.890265 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jul 11 00:10:48.890272 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 11 00:10:48.890280 systemd[1]: Finished systemd-fsck-usr.service.
Jul 11 00:10:48.890288 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jul 11 00:10:48.890297 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 11 00:10:48.890305 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jul 11 00:10:48.890330 systemd-journald[237]: Collecting audit messages is disabled.
Jul 11 00:10:48.890348 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jul 11 00:10:48.890365 kernel: Bridge firewalling registered
Jul 11 00:10:48.890374 systemd-journald[237]: Journal started
Jul 11 00:10:48.890392 systemd-journald[237]: Runtime Journal (/run/log/journal/e249803290a0401092d6ebc0a6e55b29) is 5.9M, max 47.3M, 41.4M free.
Jul 11 00:10:48.899469 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jul 11 00:10:48.873087 systemd-modules-load[238]: Inserted module 'overlay'
Jul 11 00:10:48.890190 systemd-modules-load[238]: Inserted module 'br_netfilter'
Jul 11 00:10:48.902841 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jul 11 00:10:48.902858 systemd[1]: Started systemd-journald.service - Journal Service.
Jul 11 00:10:48.903759 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jul 11 00:10:48.908197 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jul 11 00:10:48.910393 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jul 11 00:10:48.912110 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 11 00:10:48.918564 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jul 11 00:10:48.920281 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 11 00:10:48.922070 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jul 11 00:10:48.924577 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jul 11 00:10:48.926936 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jul 11 00:10:48.939376 dracut-cmdline[275]: dracut-dracut-053
Jul 11 00:10:48.940009 dracut-cmdline[275]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=1479f76954ab5eb3c0ce800eb2a80ad04b273ff773a5af5c1fe82fb8feef2990
Jul 11 00:10:48.953861 systemd-resolved[276]: Positive Trust Anchors:
Jul 11 00:10:48.953876 systemd-resolved[276]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jul 11 00:10:48.953907 systemd-resolved[276]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jul 11 00:10:48.958528 systemd-resolved[276]: Defaulting to hostname 'linux'.
Jul 11 00:10:48.959629 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jul 11 00:10:48.961309 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jul 11 00:10:49.000378 kernel: SCSI subsystem initialized
Jul 11 00:10:49.004371 kernel: Loading iSCSI transport class v2.0-870.
Jul 11 00:10:49.013387 kernel: iscsi: registered transport (tcp)
Jul 11 00:10:49.026379 kernel: iscsi: registered transport (qla4xxx)
Jul 11 00:10:49.026395 kernel: QLogic iSCSI HBA Driver
Jul 11 00:10:49.066012 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jul 11 00:10:49.077484 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jul 11 00:10:49.094377 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jul 11 00:10:49.094416 kernel: device-mapper: uevent: version 1.0.3
Jul 11 00:10:49.096376 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Jul 11 00:10:49.144379 kernel: raid6: neonx8   gen() 15766 MB/s
Jul 11 00:10:49.161366 kernel: raid6: neonx4   gen() 15666 MB/s
Jul 11 00:10:49.178368 kernel: raid6: neonx2   gen() 13226 MB/s
Jul 11 00:10:49.195374 kernel: raid6: neonx1   gen() 10486 MB/s
Jul 11 00:10:49.212362 kernel: raid6: int64x8  gen()  6949 MB/s
Jul 11 00:10:49.229372 kernel: raid6: int64x4  gen()  7324 MB/s
Jul 11 00:10:49.246372 kernel: raid6: int64x2  gen()  6120 MB/s
Jul 11 00:10:49.263373 kernel: raid6: int64x1  gen()  5047 MB/s
Jul 11 00:10:49.263398 kernel: raid6: using algorithm neonx8 gen() 15766 MB/s
Jul 11 00:10:49.280368 kernel: raid6: .... xor() 11912 MB/s, rmw enabled
Jul 11 00:10:49.280381 kernel: raid6: using neon recovery algorithm
Jul 11 00:10:49.285702 kernel: xor: measuring software checksum speed
Jul 11 00:10:49.285719 kernel:    8regs           : 19254 MB/sec
Jul 11 00:10:49.285728 kernel:    32regs          : 19482 MB/sec
Jul 11 00:10:49.286597 kernel:    arm64_neon      : 26322 MB/sec
Jul 11 00:10:49.286621 kernel: xor: using function: arm64_neon (26322 MB/sec)
Jul 11 00:10:49.337378 kernel: Btrfs loaded, zoned=no, fsverity=no
Jul 11 00:10:49.349389 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jul 11 00:10:49.357557 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 11 00:10:49.367780 systemd-udevd[460]: Using default interface naming scheme 'v255'.
Jul 11 00:10:49.370901 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 11 00:10:49.383590 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jul 11 00:10:49.394778 dracut-pre-trigger[468]: rd.md=0: removing MD RAID activation
Jul 11 00:10:49.419976 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jul 11 00:10:49.435495 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jul 11 00:10:49.473438 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 11 00:10:49.478564 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jul 11 00:10:49.492431 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jul 11 00:10:49.493709 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jul 11 00:10:49.495195 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 11 00:10:49.496689 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jul 11 00:10:49.506486 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jul 11 00:10:49.514539 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues
Jul 11 00:10:49.514881 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Jul 11 00:10:49.515805 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jul 11 00:10:49.519544 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jul 11 00:10:49.519639 kernel: GPT:9289727 != 19775487
Jul 11 00:10:49.519650 kernel: GPT:Alternate GPT header not at the end of the disk.
Jul 11 00:10:49.519660 kernel: GPT:9289727 != 19775487
Jul 11 00:10:49.522362 kernel: GPT: Use GNU Parted to correct GPT errors.
Jul 11 00:10:49.522386 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jul 11 00:10:49.525323 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jul 11 00:10:49.525444 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 11 00:10:49.527963 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jul 11 00:10:49.528726 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jul 11 00:10:49.528854 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jul 11 00:10:49.530500 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jul 11 00:10:49.539631 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 11 00:10:49.545400 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by (udev-worker) (515)
Jul 11 00:10:49.547378 kernel: BTRFS: device fsid f5d5cad7-cb7a-4b07-bec7-847b84711ad7 devid 1 transid 39 /dev/vda3 scanned by (udev-worker) (518)
Jul 11 00:10:49.553628 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Jul 11 00:10:49.557376 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 11 00:10:49.564606 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Jul 11 00:10:49.571525 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jul 11 00:10:49.574967 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Jul 11 00:10:49.575863 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Jul 11 00:10:49.588546 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jul 11 00:10:49.589973 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jul 11 00:10:49.596009 disk-uuid[549]: Primary Header is updated.
Jul 11 00:10:49.596009 disk-uuid[549]: Secondary Entries is updated.
Jul 11 00:10:49.596009 disk-uuid[549]: Secondary Header is updated.
Jul 11 00:10:49.601388 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jul 11 00:10:49.612658 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 11 00:10:50.613467 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jul 11 00:10:50.613892 disk-uuid[550]: The operation has completed successfully.
Jul 11 00:10:50.631318 systemd[1]: disk-uuid.service: Deactivated successfully.
Jul 11 00:10:50.631423 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jul 11 00:10:50.657487 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jul 11 00:10:50.660074 sh[571]: Success
Jul 11 00:10:50.672445 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Jul 11 00:10:50.699286 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jul 11 00:10:50.712662 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jul 11 00:10:50.714530 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jul 11 00:10:50.723827 kernel: BTRFS info (device dm-0): first mount of filesystem f5d5cad7-cb7a-4b07-bec7-847b84711ad7
Jul 11 00:10:50.723855 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Jul 11 00:10:50.726116 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Jul 11 00:10:50.726146 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jul 11 00:10:50.726157 kernel: BTRFS info (device dm-0): using free space tree
Jul 11 00:10:50.729284 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jul 11 00:10:50.730323 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jul 11 00:10:50.737466 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jul 11 00:10:50.738675 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jul 11 00:10:50.745823 kernel: BTRFS info (device vda6): first mount of filesystem 183e1727-cabf-4be9-ba6e-b2af88e10184
Jul 11 00:10:50.745854 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Jul 11 00:10:50.745864 kernel: BTRFS info (device vda6): using free space tree
Jul 11 00:10:50.748524 kernel: BTRFS info (device vda6): auto enabling async discard
Jul 11 00:10:50.754782 systemd[1]: mnt-oem.mount: Deactivated successfully.
Jul 11 00:10:50.756405 kernel: BTRFS info (device vda6): last unmount of filesystem 183e1727-cabf-4be9-ba6e-b2af88e10184
Jul 11 00:10:50.761688 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jul 11 00:10:50.767531 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jul 11 00:10:50.828257 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jul 11 00:10:50.842502 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jul 11 00:10:50.869273 ignition[662]: Ignition 2.19.0
Jul 11 00:10:50.869283 ignition[662]: Stage: fetch-offline
Jul 11 00:10:50.869321 ignition[662]: no configs at "/usr/lib/ignition/base.d"
Jul 11 00:10:50.869346 ignition[662]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 11 00:10:50.869502 ignition[662]: parsed url from cmdline: ""
Jul 11 00:10:50.869507 ignition[662]: no config URL provided
Jul 11 00:10:50.869512 ignition[662]: reading system config file "/usr/lib/ignition/user.ign"
Jul 11 00:10:50.869519 ignition[662]: no config at "/usr/lib/ignition/user.ign"
Jul 11 00:10:50.869538 ignition[662]: op(1): [started] loading QEMU firmware config module
Jul 11 00:10:50.869542 ignition[662]: op(1): executing: "modprobe" "qemu_fw_cfg"
Jul 11 00:10:50.871859 systemd-networkd[765]: lo: Link UP
Jul 11 00:10:50.871862 systemd-networkd[765]: lo: Gained carrier
Jul 11 00:10:50.872790 systemd-networkd[765]: Enumeration completed
Jul 11 00:10:50.873845 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jul 11 00:10:50.874569 systemd-networkd[765]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 11 00:10:50.874576 systemd-networkd[765]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jul 11 00:10:50.875285 systemd[1]: Reached target network.target - Network.
Jul 11 00:10:50.877443 systemd-networkd[765]: eth0: Link UP
Jul 11 00:10:50.877446 systemd-networkd[765]: eth0: Gained carrier
Jul 11 00:10:50.877453 systemd-networkd[765]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 11 00:10:50.877751 ignition[662]: op(1): [finished] loading QEMU firmware config module
Jul 11 00:10:50.898400 systemd-networkd[765]: eth0: DHCPv4 address 10.0.0.50/16, gateway 10.0.0.1 acquired from 10.0.0.1
Jul 11 00:10:50.923959 ignition[662]: parsing config with SHA512: 9dd3f178ec5ab275c8d1e6fffd167a091f95ed3a5537213bda4ee70bfd66b8cdd3c6094bc0487eacc5bb36ad2ed7f1e3c7f01c9d908780341345e36d99def406
Jul 11 00:10:50.929661 unknown[662]: fetched base config from "system"
Jul 11 00:10:50.929670 unknown[662]: fetched user config from "qemu"
Jul 11 00:10:50.930131 ignition[662]: fetch-offline: fetch-offline passed
Jul 11 00:10:50.930191 ignition[662]: Ignition finished successfully
Jul 11 00:10:50.931809 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jul 11 00:10:50.932849 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Jul 11 00:10:50.940558 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jul 11 00:10:50.950404 ignition[771]: Ignition 2.19.0
Jul 11 00:10:50.950414 ignition[771]: Stage: kargs
Jul 11 00:10:50.950566 ignition[771]: no configs at "/usr/lib/ignition/base.d"
Jul 11 00:10:50.950575 ignition[771]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 11 00:10:50.951535 ignition[771]: kargs: kargs passed
Jul 11 00:10:50.951579 ignition[771]: Ignition finished successfully
Jul 11 00:10:50.954359 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jul 11 00:10:50.961502 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jul 11 00:10:50.970881 ignition[780]: Ignition 2.19.0
Jul 11 00:10:50.970890 ignition[780]: Stage: disks
Jul 11 00:10:50.971036 ignition[780]: no configs at "/usr/lib/ignition/base.d"
Jul 11 00:10:50.971045 ignition[780]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 11 00:10:50.971946 ignition[780]: disks: disks passed
Jul 11 00:10:50.971985 ignition[780]: Ignition finished successfully
Jul 11 00:10:50.973731 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jul 11 00:10:50.974910 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jul 11 00:10:50.976219 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jul 11 00:10:50.977366 systemd[1]: Reached target local-fs.target - Local File Systems.
Jul 11 00:10:50.978670 systemd[1]: Reached target sysinit.target - System Initialization.
Jul 11 00:10:50.979769 systemd[1]: Reached target basic.target - Basic System.
Jul 11 00:10:50.991480 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jul 11 00:10:51.000434 systemd-fsck[791]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Jul 11 00:10:51.004304 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jul 11 00:10:51.007504 systemd[1]: Mounting sysroot.mount - /sysroot...
Jul 11 00:10:51.051375 kernel: EXT4-fs (vda9): mounted filesystem a2a437d1-0a8e-46b9-88bf-4a47ff29fe90 r/w with ordered data mode. Quota mode: none.
Jul 11 00:10:51.052003 systemd[1]: Mounted sysroot.mount - /sysroot.
Jul 11 00:10:51.053000 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jul 11 00:10:51.060482 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jul 11 00:10:51.061909 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jul 11 00:10:51.063504 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Jul 11 00:10:51.063542 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jul 11 00:10:51.063564 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jul 11 00:10:51.067536 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jul 11 00:10:51.068473 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by mount (799)
Jul 11 00:10:51.071579 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jul 11 00:10:51.071656 kernel: BTRFS info (device vda6): first mount of filesystem 183e1727-cabf-4be9-ba6e-b2af88e10184
Jul 11 00:10:51.071670 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Jul 11 00:10:51.071685 kernel: BTRFS info (device vda6): using free space tree
Jul 11 00:10:51.075381 kernel: BTRFS info (device vda6): auto enabling async discard
Jul 11 00:10:51.075891 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jul 11 00:10:51.114284 initrd-setup-root[823]: cut: /sysroot/etc/passwd: No such file or directory
Jul 11 00:10:51.118198 initrd-setup-root[830]: cut: /sysroot/etc/group: No such file or directory
Jul 11 00:10:51.122164 initrd-setup-root[837]: cut: /sysroot/etc/shadow: No such file or directory
Jul 11 00:10:51.125412 initrd-setup-root[844]: cut: /sysroot/etc/gshadow: No such file or directory
Jul 11 00:10:51.188820 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jul 11 00:10:51.200432 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jul 11 00:10:51.202718 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jul 11 00:10:51.206374 kernel: BTRFS info (device vda6): last unmount of filesystem 183e1727-cabf-4be9-ba6e-b2af88e10184
Jul 11 00:10:51.221628 ignition[912]: INFO : Ignition 2.19.0
Jul 11 00:10:51.221628 ignition[912]: INFO : Stage: mount
Jul 11 00:10:51.223971 ignition[912]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 11 00:10:51.223971 ignition[912]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 11 00:10:51.223971 ignition[912]: INFO : mount: mount passed
Jul 11 00:10:51.223971 ignition[912]: INFO : Ignition finished successfully
Jul 11 00:10:51.221715 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jul 11 00:10:51.223799 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jul 11 00:10:51.229440 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jul 11 00:10:51.723462 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jul 11 00:10:51.733525 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jul 11 00:10:51.738361 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by mount (925)
Jul 11 00:10:51.740512 kernel: BTRFS info (device vda6): first mount of filesystem 183e1727-cabf-4be9-ba6e-b2af88e10184
Jul 11 00:10:51.740528 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Jul 11 00:10:51.740538 kernel: BTRFS info (device vda6): using free space tree
Jul 11 00:10:51.742370 kernel: BTRFS info (device vda6): auto enabling async discard
Jul 11 00:10:51.743465 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jul 11 00:10:51.759716 ignition[943]: INFO : Ignition 2.19.0
Jul 11 00:10:51.759716 ignition[943]: INFO : Stage: files
Jul 11 00:10:51.761045 ignition[943]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 11 00:10:51.761045 ignition[943]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 11 00:10:51.761045 ignition[943]: DEBUG : files: compiled without relabeling support, skipping
Jul 11 00:10:51.763562 ignition[943]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jul 11 00:10:51.763562 ignition[943]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jul 11 00:10:51.763562 ignition[943]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jul 11 00:10:51.766450 ignition[943]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jul 11 00:10:51.766450 ignition[943]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jul 11 00:10:51.766450 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1"
Jul 11 00:10:51.766450 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1"
Jul 11 00:10:51.766450 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Jul 11 00:10:51.766450 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1
Jul 11 00:10:51.764051 unknown[943]: wrote ssh authorized keys file for user: core
Jul 11 00:10:52.366114 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Jul 11 00:10:52.770489 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Jul 11 00:10:52.770489 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Jul 11 00:10:52.773231 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1
Jul 11 00:10:52.839609 systemd-networkd[765]: eth0: Gained IPv6LL
Jul 11 00:10:53.140375 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET result: OK
Jul 11 00:10:53.279779 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Jul 11 00:10:53.281695 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/install.sh"
Jul 11 00:10:53.281695 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/install.sh"
Jul 11 00:10:53.281695 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nginx.yaml"
Jul 11 00:10:53.281695 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jul 11 00:10:53.281695 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jul 11 00:10:53.281695 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jul 11 00:10:53.281695 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jul 11 00:10:53.281695 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jul 11 00:10:53.281695 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jul 11 00:10:53.281695 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jul 11 00:10:53.281695 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw"
Jul 11 00:10:53.281695 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw"
Jul 11 00:10:53.281695 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(c): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw"
Jul 11 00:10:53.281695 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(c): GET https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-arm64.raw: attempt #1
Jul 11 00:10:53.716420 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(c): GET result: OK
Jul 11 00:10:54.111638 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(c): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw"
Jul 11 00:10:54.111638 ignition[943]: INFO : files: op(d): [started] processing unit "containerd.service"
Jul 11 00:10:54.115174 ignition[943]: INFO : files: op(d): op(e): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Jul 11 00:10:54.115174 ignition[943]: INFO : files: op(d): op(e): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Jul 11 00:10:54.115174 ignition[943]: INFO : files: op(d): [finished] processing unit "containerd.service"
Jul 11 00:10:54.115174 ignition[943]: INFO : files: op(f): [started] processing unit "prepare-helm.service"
Jul 11 00:10:54.115174 ignition[943]: INFO : files: op(f): op(10): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jul 11 00:10:54.115174 ignition[943]: INFO : files: op(f): op(10): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jul 11 00:10:54.115174 ignition[943]: INFO : files: op(f): [finished] processing unit "prepare-helm.service"
Jul 11 00:10:54.115174 ignition[943]: INFO : files: op(11): [started] processing unit "coreos-metadata.service"
Jul 11 00:10:54.115174 ignition[943]: INFO : files: op(11): op(12): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Jul 11 00:10:54.115174 ignition[943]: INFO : files: op(11): op(12): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Jul 11 00:10:54.115174 ignition[943]: INFO : files: op(11): [finished] processing unit "coreos-metadata.service"
Jul 11 00:10:54.115174 ignition[943]: INFO : files: op(13): [started] setting preset to disabled for "coreos-metadata.service"
Jul 11 00:10:54.136716 ignition[943]: INFO : files: op(13): op(14): [started] removing enablement symlink(s) for "coreos-metadata.service"
Jul 11 00:10:54.139968 ignition[943]: INFO : files: op(13): op(14): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Jul 11 00:10:54.141491 ignition[943]: INFO : files: op(13): [finished] setting preset to disabled for "coreos-metadata.service"
Jul 11 00:10:54.141491 ignition[943]: INFO : files: op(15): [started] setting preset to enabled for "prepare-helm.service"
Jul 11 00:10:54.141491 ignition[943]: INFO : files: op(15): [finished] setting preset to enabled for "prepare-helm.service"
Jul 11 00:10:54.141491 ignition[943]: INFO : files: createResultFile: createFiles: op(16): [started] writing file "/sysroot/etc/.ignition-result.json"
Jul 11 00:10:54.141491 ignition[943]: INFO : files: createResultFile: createFiles: op(16): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jul 11 00:10:54.141491 ignition[943]: INFO : files: files passed
Jul 11 00:10:54.141491 ignition[943]: INFO : Ignition finished successfully
Jul 11 00:10:54.142101 systemd[1]: Finished ignition-files.service - Ignition (files).
Jul 11 00:10:54.151510 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jul 11 00:10:54.152931 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jul 11 00:10:54.153928 systemd[1]: ignition-quench.service: Deactivated successfully.
Jul 11 00:10:54.154026 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jul 11 00:10:54.160028 initrd-setup-root-after-ignition[970]: grep: /sysroot/oem/oem-release: No such file or directory
Jul 11 00:10:54.162992 initrd-setup-root-after-ignition[972]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jul 11 00:10:54.162992 initrd-setup-root-after-ignition[972]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jul 11 00:10:54.165715 initrd-setup-root-after-ignition[976]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jul 11 00:10:54.168436 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jul 11 00:10:54.170412 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jul 11 00:10:54.179508 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jul 11 00:10:54.197590 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jul 11 00:10:54.197696 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jul 11 00:10:54.199566 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jul 11 00:10:54.201101 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jul 11 00:10:54.202745 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jul 11 00:10:54.203476 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jul 11 00:10:54.218028 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jul 11 00:10:54.231489 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jul 11 00:10:54.239334 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jul 11 00:10:54.240526 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 11 00:10:54.242219 systemd[1]: Stopped target timers.target - Timer Units.
Jul 11 00:10:54.243683 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jul 11 00:10:54.243812 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jul 11 00:10:54.245854 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jul 11 00:10:54.247537 systemd[1]: Stopped target basic.target - Basic System.
Jul 11 00:10:54.248903 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jul 11 00:10:54.250401 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jul 11 00:10:54.252102 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jul 11 00:10:54.253850 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jul 11 00:10:54.255433 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jul 11 00:10:54.257217 systemd[1]: Stopped target sysinit.target - System Initialization.
Jul 11 00:10:54.258974 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jul 11 00:10:54.260525 systemd[1]: Stopped target swap.target - Swaps.
Jul 11 00:10:54.261778 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jul 11 00:10:54.261901 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jul 11 00:10:54.263810 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jul 11 00:10:54.265469 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 11 00:10:54.267206 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jul 11 00:10:54.270423 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 11 00:10:54.271642 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jul 11 00:10:54.271771 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jul 11 00:10:54.273969 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jul 11 00:10:54.274089 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jul 11 00:10:54.275812 systemd[1]: Stopped target paths.target - Path Units.
Jul 11 00:10:54.277138 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jul 11 00:10:54.281409 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 11 00:10:54.282634 systemd[1]: Stopped target slices.target - Slice Units.
Jul 11 00:10:54.284308 systemd[1]: Stopped target sockets.target - Socket Units.
Jul 11 00:10:54.285670 systemd[1]: iscsid.socket: Deactivated successfully.
Jul 11 00:10:54.285773 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jul 11 00:10:54.287089 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jul 11 00:10:54.287172 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jul 11 00:10:54.288496 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jul 11 00:10:54.288605 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jul 11 00:10:54.290144 systemd[1]: ignition-files.service: Deactivated successfully.
Jul 11 00:10:54.290249 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jul 11 00:10:54.302595 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jul 11 00:10:54.303504 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jul 11 00:10:54.303639 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 11 00:10:54.306126 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jul 11 00:10:54.307702 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jul 11 00:10:54.307843 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 11 00:10:54.309455 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jul 11 00:10:54.309575 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jul 11 00:10:54.313866 ignition[996]: INFO : Ignition 2.19.0
Jul 11 00:10:54.313866 ignition[996]: INFO : Stage: umount
Jul 11 00:10:54.313866 ignition[996]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 11 00:10:54.313866 ignition[996]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 11 00:10:54.319153 ignition[996]: INFO : umount: umount passed
Jul 11 00:10:54.319153 ignition[996]: INFO : Ignition finished successfully
Jul 11 00:10:54.315110 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jul 11 00:10:54.315201 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jul 11 00:10:54.318926 systemd[1]: ignition-mount.service: Deactivated successfully.
Jul 11 00:10:54.319013 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jul 11 00:10:54.322243 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jul 11 00:10:54.322617 systemd[1]: Stopped target network.target - Network.
Jul 11 00:10:54.323813 systemd[1]: ignition-disks.service: Deactivated successfully.
Jul 11 00:10:54.323866 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jul 11 00:10:54.325915 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jul 11 00:10:54.325959 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jul 11 00:10:54.327433 systemd[1]: ignition-setup.service: Deactivated successfully.
Jul 11 00:10:54.327473 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Jul 11 00:10:54.328972 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Jul 11 00:10:54.329014 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Jul 11 00:10:54.330656 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Jul 11 00:10:54.332123 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Jul 11 00:10:54.333774 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jul 11 00:10:54.333862 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Jul 11 00:10:54.335413 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jul 11 00:10:54.335497 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Jul 11 00:10:54.340423 systemd-networkd[765]: eth0: DHCPv6 lease lost
Jul 11 00:10:54.342198 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jul 11 00:10:54.342313 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Jul 11 00:10:54.344371 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jul 11 00:10:54.344403 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Jul 11 00:10:54.355454 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Jul 11 00:10:54.356091 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jul 11 00:10:54.356147 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jul 11 00:10:54.358228 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 11 00:10:54.359987 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jul 11 00:10:54.360305 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Jul 11 00:10:54.363887 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jul 11 00:10:54.363954 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jul 11 00:10:54.364829 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jul 11 00:10:54.364869 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Jul 11 00:10:54.366168 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jul 11 00:10:54.366205 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Jul 11 00:10:54.369601 systemd[1]: network-cleanup.service: Deactivated successfully.
Jul 11 00:10:54.369708 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Jul 11 00:10:54.373561 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jul 11 00:10:54.374329 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 11 00:10:54.376426 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jul 11 00:10:54.376473 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Jul 11 00:10:54.378044 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jul 11 00:10:54.378075 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 11 00:10:54.378831 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jul 11 00:10:54.378875 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Jul 11 00:10:54.381333 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jul 11 00:10:54.381395 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Jul 11 00:10:54.383940 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jul 11 00:10:54.383985 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 11 00:10:54.397505 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Jul 11 00:10:54.398318 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jul 11 00:10:54.398395 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 11 00:10:54.400327 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jul 11 00:10:54.400417 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jul 11 00:10:54.402503 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jul 11 00:10:54.403406 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Jul 11 00:10:54.405436 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Jul 11 00:10:54.407667 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Jul 11 00:10:54.416825 systemd[1]: Switching root.
Jul 11 00:10:54.434926 systemd-journald[237]: Journal stopped
Jul 11 00:10:55.146648 systemd-journald[237]: Received SIGTERM from PID 1 (systemd).
Jul 11 00:10:55.146710 kernel: SELinux: policy capability network_peer_controls=1
Jul 11 00:10:55.146738 kernel: SELinux: policy capability open_perms=1
Jul 11 00:10:55.146752 kernel: SELinux: policy capability extended_socket_class=1
Jul 11 00:10:55.146765 kernel: SELinux: policy capability always_check_network=0
Jul 11 00:10:55.146774 kernel: SELinux: policy capability cgroup_seclabel=1
Jul 11 00:10:55.146784 kernel: SELinux: policy capability nnp_nosuid_transition=1
Jul 11 00:10:55.146793 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Jul 11 00:10:55.146803 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Jul 11 00:10:55.146812 kernel: audit: type=1403 audit(1752192654.629:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Jul 11 00:10:55.146823 systemd[1]: Successfully loaded SELinux policy in 31.158ms.
Jul 11 00:10:55.146839 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 9.418ms.
Jul 11 00:10:55.146851 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jul 11 00:10:55.146863 systemd[1]: Detected virtualization kvm.
Jul 11 00:10:55.146874 systemd[1]: Detected architecture arm64.
Jul 11 00:10:55.146884 systemd[1]: Detected first boot.
Jul 11 00:10:55.146894 systemd[1]: Initializing machine ID from VM UUID.
Jul 11 00:10:55.146905 zram_generator::config[1061]: No configuration found.
Jul 11 00:10:55.146916 systemd[1]: Populated /etc with preset unit settings.
Jul 11 00:10:55.146930 systemd[1]: Queued start job for default target multi-user.target.
Jul 11 00:10:55.146941 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Jul 11 00:10:55.146953 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Jul 11 00:10:55.146964 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Jul 11 00:10:55.146974 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Jul 11 00:10:55.146985 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Jul 11 00:10:55.146995 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Jul 11 00:10:55.147005 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Jul 11 00:10:55.147016 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Jul 11 00:10:55.147026 systemd[1]: Created slice user.slice - User and Session Slice.
Jul 11 00:10:55.147038 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 11 00:10:55.147051 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 11 00:10:55.147061 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Jul 11 00:10:55.147071 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Jul 11 00:10:55.147082 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Jul 11 00:10:55.147092 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jul 11 00:10:55.147103 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0...
Jul 11 00:10:55.147114 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 11 00:10:55.147124 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Jul 11 00:10:55.147136 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 11 00:10:55.147146 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jul 11 00:10:55.147157 systemd[1]: Reached target slices.target - Slice Units.
Jul 11 00:10:55.147168 systemd[1]: Reached target swap.target - Swaps.
Jul 11 00:10:55.147178 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Jul 11 00:10:55.147188 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Jul 11 00:10:55.147199 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jul 11 00:10:55.147209 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jul 11 00:10:55.147219 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jul 11 00:10:55.147231 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jul 11 00:10:55.147242 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 11 00:10:55.147252 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Jul 11 00:10:55.147263 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Jul 11 00:10:55.147274 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Jul 11 00:10:55.147284 systemd[1]: Mounting media.mount - External Media Directory...
Jul 11 00:10:55.147295 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Jul 11 00:10:55.147305 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Jul 11 00:10:55.147315 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Jul 11 00:10:55.147327 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Jul 11 00:10:55.147338 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 11 00:10:55.147359 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jul 11 00:10:55.147375 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Jul 11 00:10:55.147385 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jul 11 00:10:55.147395 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jul 11 00:10:55.147405 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jul 11 00:10:55.147416 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Jul 11 00:10:55.147428 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jul 11 00:10:55.147440 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Jul 11 00:10:55.147450 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
Jul 11 00:10:55.147461 systemd[1]: systemd-journald.service: (This warning is only shown for the first unit using IP firewalling.)
Jul 11 00:10:55.147472 systemd[1]: Starting systemd-journald.service - Journal Service...
Jul 11 00:10:55.147482 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jul 11 00:10:55.147493 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jul 11 00:10:55.147503 kernel: ACPI: bus type drm_connector registered
Jul 11 00:10:55.147515 kernel: fuse: init (API version 7.39)
Jul 11 00:10:55.147526 kernel: loop: module loaded
Jul 11 00:10:55.147536 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Jul 11 00:10:55.147546 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jul 11 00:10:55.147556 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Jul 11 00:10:55.147571 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Jul 11 00:10:55.147581 systemd[1]: Mounted media.mount - External Media Directory.
Jul 11 00:10:55.147591 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Jul 11 00:10:55.147602 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Jul 11 00:10:55.147612 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Jul 11 00:10:55.147644 systemd-journald[1139]: Collecting audit messages is disabled.
Jul 11 00:10:55.147666 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 11 00:10:55.147677 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jul 11 00:10:55.147687 systemd-journald[1139]: Journal started
Jul 11 00:10:55.147708 systemd-journald[1139]: Runtime Journal (/run/log/journal/e249803290a0401092d6ebc0a6e55b29) is 5.9M, max 47.3M, 41.4M free.
Jul 11 00:10:55.148697 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Jul 11 00:10:55.151375 systemd[1]: Started systemd-journald.service - Journal Service.
Jul 11 00:10:55.152031 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 11 00:10:55.152184 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jul 11 00:10:55.153474 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Jul 11 00:10:55.154544 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jul 11 00:10:55.154699 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jul 11 00:10:55.155874 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 11 00:10:55.156038 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jul 11 00:10:55.157198 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jul 11 00:10:55.157455 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Jul 11 00:10:55.158495 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 11 00:10:55.158705 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jul 11 00:10:55.160109 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jul 11 00:10:55.161347 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jul 11 00:10:55.162565 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Jul 11 00:10:55.173627 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jul 11 00:10:55.179504 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Jul 11 00:10:55.181275 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Jul 11 00:10:55.182130 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Jul 11 00:10:55.184520 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Jul 11 00:10:55.187512 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Jul 11 00:10:55.188526 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jul 11 00:10:55.190533 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Jul 11 00:10:55.191436 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jul 11 00:10:55.195223 systemd-journald[1139]: Time spent on flushing to /var/log/journal/e249803290a0401092d6ebc0a6e55b29 is 15.197ms for 844 entries.
Jul 11 00:10:55.195223 systemd-journald[1139]: System Journal (/var/log/journal/e249803290a0401092d6ebc0a6e55b29) is 8.0M, max 195.6M, 187.6M free.
Jul 11 00:10:55.219681 systemd-journald[1139]: Received client request to flush runtime journal.
Jul 11 00:10:55.192579 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jul 11 00:10:55.196504 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jul 11 00:10:55.198645 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 11 00:10:55.199835 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Jul 11 00:10:55.200946 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Jul 11 00:10:55.209759 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Jul 11 00:10:55.213922 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Jul 11 00:10:55.214990 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Jul 11 00:10:55.216637 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jul 11 00:10:55.224638 systemd-tmpfiles[1193]: ACLs are not supported, ignoring.
Jul 11 00:10:55.224657 systemd-tmpfiles[1193]: ACLs are not supported, ignoring.
Jul 11 00:10:55.230291 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Jul 11 00:10:55.231616 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jul 11 00:10:55.236536 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Jul 11 00:10:55.239298 udevadm[1197]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Jul 11 00:10:55.256916 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Jul 11 00:10:55.266497 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jul 11 00:10:55.277005 systemd-tmpfiles[1214]: ACLs are not supported, ignoring.
Jul 11 00:10:55.277025 systemd-tmpfiles[1214]: ACLs are not supported, ignoring.
Jul 11 00:10:55.280538 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 11 00:10:55.616054 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Jul 11 00:10:55.632489 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 11 00:10:55.652018 systemd-udevd[1220]: Using default interface naming scheme 'v255'.
Jul 11 00:10:55.664138 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 11 00:10:55.672495 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jul 11 00:10:55.689512 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Jul 11 00:10:55.692287 systemd[1]: Found device dev-ttyAMA0.device - /dev/ttyAMA0.
Jul 11 00:10:55.725427 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (1224)
Jul 11 00:10:55.751146 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Jul 11 00:10:55.768832 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jul 11 00:10:55.795615 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 11 00:10:55.802677 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Jul 11 00:10:55.805984 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Jul 11 00:10:55.819600 systemd-networkd[1227]: lo: Link UP
Jul 11 00:10:55.819609 systemd-networkd[1227]: lo: Gained carrier
Jul 11 00:10:55.820480 systemd-networkd[1227]: Enumeration completed
Jul 11 00:10:55.821141 systemd-networkd[1227]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 11 00:10:55.821145 systemd-networkd[1227]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jul 11 00:10:55.821776 systemd-networkd[1227]: eth0: Link UP
Jul 11 00:10:55.821786 systemd-networkd[1227]: eth0: Gained carrier
Jul 11 00:10:55.821798 systemd-networkd[1227]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 11 00:10:55.821807 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jul 11 00:10:55.824068 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Jul 11 00:10:55.834778 lvm[1256]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jul 11 00:10:55.839260 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 11 00:10:55.844463 systemd-networkd[1227]: eth0: DHCPv4 address 10.0.0.50/16, gateway 10.0.0.1 acquired from 10.0.0.1
Jul 11 00:10:55.863580 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Jul 11 00:10:55.864623 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jul 11 00:10:55.878512 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Jul 11 00:10:55.881710 lvm[1266]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jul 11 00:10:55.924654 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Jul 11 00:10:55.925696 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jul 11 00:10:55.926596 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jul 11 00:10:55.926625 systemd[1]: Reached target local-fs.target - Local File Systems.
Jul 11 00:10:55.927306 systemd[1]: Reached target machines.target - Containers.
Jul 11 00:10:55.928957 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Jul 11 00:10:55.940464 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Jul 11 00:10:55.942274 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Jul 11 00:10:55.943191 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 11 00:10:55.944042 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Jul 11 00:10:55.947445 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Jul 11 00:10:55.950828 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Jul 11 00:10:55.952298 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Jul 11 00:10:55.958776 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Jul 11 00:10:55.962369 kernel: loop0: detected capacity change from 0 to 114432
Jul 11 00:10:55.969011 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Jul 11 00:10:55.969653 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Jul 11 00:10:55.974731 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Jul 11 00:10:56.014372 kernel: loop1: detected capacity change from 0 to 203944
Jul 11 00:10:56.043368 kernel: loop2: detected capacity change from 0 to 114328
Jul 11 00:10:56.076388 kernel: loop3: detected capacity change from 0 to 114432
Jul 11 00:10:56.084387 kernel: loop4: detected capacity change from 0 to 203944
Jul 11 00:10:56.094380 kernel: loop5: detected capacity change from 0 to 114328
Jul 11 00:10:56.100630 (sd-merge)[1287]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Jul 11 00:10:56.101061 (sd-merge)[1287]: Merged extensions into '/usr'.
Jul 11 00:10:56.104499 systemd[1]: Reloading requested from client PID 1274 ('systemd-sysext') (unit systemd-sysext.service)...
Jul 11 00:10:56.104514 systemd[1]: Reloading...
Jul 11 00:10:56.148880 zram_generator::config[1315]: No configuration found.
Jul 11 00:10:56.174159 ldconfig[1270]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Jul 11 00:10:56.241821 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 11 00:10:56.284903 systemd[1]: Reloading finished in 180 ms.
Jul 11 00:10:56.300143 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Jul 11 00:10:56.301326 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Jul 11 00:10:56.321503 systemd[1]: Starting ensure-sysext.service...
Jul 11 00:10:56.323143 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jul 11 00:10:56.328163 systemd[1]: Reloading requested from client PID 1357 ('systemctl') (unit ensure-sysext.service)...
Jul 11 00:10:56.328257 systemd[1]: Reloading...
Jul 11 00:10:56.338839 systemd-tmpfiles[1358]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Jul 11 00:10:56.339098 systemd-tmpfiles[1358]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Jul 11 00:10:56.339740 systemd-tmpfiles[1358]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Jul 11 00:10:56.339956 systemd-tmpfiles[1358]: ACLs are not supported, ignoring.
Jul 11 00:10:56.340000 systemd-tmpfiles[1358]: ACLs are not supported, ignoring.
Jul 11 00:10:56.342194 systemd-tmpfiles[1358]: Detected autofs mount point /boot during canonicalization of boot.
Jul 11 00:10:56.342208 systemd-tmpfiles[1358]: Skipping /boot
Jul 11 00:10:56.349107 systemd-tmpfiles[1358]: Detected autofs mount point /boot during canonicalization of boot.
Jul 11 00:10:56.349125 systemd-tmpfiles[1358]: Skipping /boot
Jul 11 00:10:56.364651 zram_generator::config[1383]: No configuration found.
Jul 11 00:10:56.458973 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 11 00:10:56.502371 systemd[1]: Reloading finished in 173 ms.
Jul 11 00:10:56.521997 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jul 11 00:10:56.542646 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Jul 11 00:10:56.544966 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Jul 11 00:10:56.546987 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Jul 11 00:10:56.551653 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jul 11 00:10:56.556673 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Jul 11 00:10:56.559625 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 11 00:10:56.564637 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jul 11 00:10:56.568667 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jul 11 00:10:56.570390 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jul 11 00:10:56.573519 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 11 00:10:56.574323 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 11 00:10:56.574497 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jul 11 00:10:56.577867 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 11 00:10:56.578003 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jul 11 00:10:56.582282 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Jul 11 00:10:56.586811 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 11 00:10:56.586967 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jul 11 00:10:56.590736 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 11 00:10:56.603596 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jul 11 00:10:56.605435 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jul 11 00:10:56.606301 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 11 00:10:56.610311 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Jul 11 00:10:56.611863 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Jul 11 00:10:56.613662 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Jul 11 00:10:56.615036 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 11 00:10:56.615170 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jul 11 00:10:56.616412 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 11 00:10:56.616573 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jul 11 00:10:56.620167 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Jul 11 00:10:56.626450 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 11 00:10:56.630554 systemd-resolved[1433]: Positive Trust Anchors:
Jul 11 00:10:56.630572 systemd-resolved[1433]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jul 11 00:10:56.637585 augenrules[1477]: No rules
Jul 11 00:10:56.630608 systemd-resolved[1433]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jul 11 00:10:56.636742 systemd-resolved[1433]: Defaulting to hostname 'linux'.
Jul 11 00:10:56.637596 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jul 11 00:10:56.639282 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jul 11 00:10:56.640915 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jul 11 00:10:56.642733 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jul 11 00:10:56.643681 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 11 00:10:56.643740 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jul 11 00:10:56.644033 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jul 11 00:10:56.645296 systemd[1]: Finished ensure-sysext.service.
Jul 11 00:10:56.646243 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Jul 11 00:10:56.647419 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 11 00:10:56.647542 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jul 11 00:10:56.648716 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jul 11 00:10:56.648861 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jul 11 00:10:56.649904 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 11 00:10:56.650035 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jul 11 00:10:56.651214 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 11 00:10:56.651400 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jul 11 00:10:56.657182 systemd[1]: Reached target network.target - Network.
Jul 11 00:10:56.657964 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jul 11 00:10:56.658858 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jul 11 00:10:56.658921 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jul 11 00:10:56.671514 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Jul 11 00:10:56.713074 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Jul 11 00:10:56.713823 systemd-timesyncd[1498]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Jul 11 00:10:56.713878 systemd-timesyncd[1498]: Initial clock synchronization to Fri 2025-07-11 00:10:56.345511 UTC.
Jul 11 00:10:56.715002 systemd[1]: Reached target sysinit.target - System Initialization.
Jul 11 00:10:56.716038 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Jul 11 00:10:56.717101 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Jul 11 00:10:56.718145 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Jul 11 00:10:56.719245 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Jul 11 00:10:56.719373 systemd[1]: Reached target paths.target - Path Units.
Jul 11 00:10:56.720110 systemd[1]: Reached target time-set.target - System Time Set.
Jul 11 00:10:56.721129 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Jul 11 00:10:56.722176 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Jul 11 00:10:56.723139 systemd[1]: Reached target timers.target - Timer Units.
Jul 11 00:10:56.724565 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Jul 11 00:10:56.726851 systemd[1]: Starting docker.socket - Docker Socket for the API...
Jul 11 00:10:56.729043 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Jul 11 00:10:56.733364 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Jul 11 00:10:56.734203 systemd[1]: Reached target sockets.target - Socket Units.
Jul 11 00:10:56.734993 systemd[1]: Reached target basic.target - Basic System.
Jul 11 00:10:56.735847 systemd[1]: System is tainted: cgroupsv1
Jul 11 00:10:56.735897 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Jul 11 00:10:56.735918 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Jul 11 00:10:56.737044 systemd[1]: Starting containerd.service - containerd container runtime...
Jul 11 00:10:56.738894 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Jul 11 00:10:56.741166 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Jul 11 00:10:56.744716 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Jul 11 00:10:56.745503 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Jul 11 00:10:56.749534 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Jul 11 00:10:56.753555 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Jul 11 00:10:56.758662 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Jul 11 00:10:56.759102 jq[1504]: false
Jul 11 00:10:56.762947 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Jul 11 00:10:56.766792 systemd[1]: Starting systemd-logind.service - User Login Management...
Jul 11 00:10:56.770054 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Jul 11 00:10:56.771179 extend-filesystems[1506]: Found loop3
Jul 11 00:10:56.772005 extend-filesystems[1506]: Found loop4
Jul 11 00:10:56.772630 extend-filesystems[1506]: Found loop5
Jul 11 00:10:56.773228 extend-filesystems[1506]: Found vda
Jul 11 00:10:56.774095 extend-filesystems[1506]: Found vda1
Jul 11 00:10:56.774095 extend-filesystems[1506]: Found vda2
Jul 11 00:10:56.774095 extend-filesystems[1506]: Found vda3
Jul 11 00:10:56.774095 extend-filesystems[1506]: Found usr
Jul 11 00:10:56.774095 extend-filesystems[1506]: Found vda4
Jul 11 00:10:56.774095 extend-filesystems[1506]: Found vda6
Jul 11 00:10:56.774095 extend-filesystems[1506]: Found vda7
Jul 11 00:10:56.774095 extend-filesystems[1506]: Found vda9
Jul 11 00:10:56.774095 extend-filesystems[1506]: Checking size of /dev/vda9
Jul 11 00:10:56.775865 systemd[1]: Starting update-engine.service - Update Engine...
Jul 11 00:10:56.778101 dbus-daemon[1503]: [system] SELinux support is enabled
Jul 11 00:10:56.778174 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Jul 11 00:10:56.781622 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Jul 11 00:10:56.791129 jq[1526]: true
Jul 11 00:10:56.787677 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Jul 11 00:10:56.787899 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Jul 11 00:10:56.788139 systemd[1]: motdgen.service: Deactivated successfully.
Jul 11 00:10:56.788318 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Jul 11 00:10:56.792866 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Jul 11 00:10:56.793079 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Jul 11 00:10:56.808954 (ntainerd)[1534]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Jul 11 00:10:56.809922 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Jul 11 00:10:56.809959 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Jul 11 00:10:56.811885 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Jul 11 00:10:56.811909 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Jul 11 00:10:56.812166 extend-filesystems[1506]: Resized partition /dev/vda9
Jul 11 00:10:56.823147 tar[1530]: linux-arm64/helm
Jul 11 00:10:56.823447 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
Jul 11 00:10:56.823471 extend-filesystems[1547]: resize2fs 1.47.1 (20-May-2024)
Jul 11 00:10:56.824371 jq[1533]: true
Jul 11 00:10:56.834206 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (1226)
Jul 11 00:10:56.849361 update_engine[1521]: I20250711 00:10:56.848515 1521 main.cc:92] Flatcar Update Engine starting
Jul 11 00:10:56.851436 systemd-logind[1517]: Watching system buttons on /dev/input/event0 (Power Button)
Jul 11 00:10:56.851954 systemd-logind[1517]: New seat seat0.
Jul 11 00:10:56.853987 kernel: EXT4-fs (vda9): resized filesystem to 1864699
Jul 11 00:10:56.856824 systemd[1]: Started systemd-logind.service - User Login Management.
Jul 11 00:10:56.860007 update_engine[1521]: I20250711 00:10:56.859337 1521 update_check_scheduler.cc:74] Next update check in 5m13s
Jul 11 00:10:56.859524 systemd[1]: Started update-engine.service - Update Engine.
Jul 11 00:10:56.863841 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Jul 11 00:10:56.870874 extend-filesystems[1547]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Jul 11 00:10:56.870874 extend-filesystems[1547]: old_desc_blocks = 1, new_desc_blocks = 1
Jul 11 00:10:56.870874 extend-filesystems[1547]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
Jul 11 00:10:56.874838 extend-filesystems[1506]: Resized filesystem in /dev/vda9
Jul 11 00:10:56.874625 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Jul 11 00:10:56.876015 systemd[1]: extend-filesystems.service: Deactivated successfully.
Jul 11 00:10:56.876242 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Jul 11 00:10:56.928760 locksmithd[1558]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Jul 11 00:10:56.944051 bash[1572]: Updated "/home/core/.ssh/authorized_keys"
Jul 11 00:10:56.945131 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Jul 11 00:10:56.954997 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Jul 11 00:10:57.031778 containerd[1534]: time="2025-07-11T00:10:57.031656807Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21
Jul 11 00:10:57.056774 containerd[1534]: time="2025-07-11T00:10:57.056674877Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Jul 11 00:10:57.058092 containerd[1534]: time="2025-07-11T00:10:57.058058026Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.96-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Jul 11 00:10:57.059114 containerd[1534]: time="2025-07-11T00:10:57.058202878Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Jul 11 00:10:57.059114 containerd[1534]: time="2025-07-11T00:10:57.058237641Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Jul 11 00:10:57.059114 containerd[1534]: time="2025-07-11T00:10:57.058390811Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Jul 11 00:10:57.059114 containerd[1534]: time="2025-07-11T00:10:57.058407601Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Jul 11 00:10:57.059114 containerd[1534]: time="2025-07-11T00:10:57.058453621Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Jul 11 00:10:57.059114 containerd[1534]: time="2025-07-11T00:10:57.058466022Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Jul 11 00:10:57.059114 containerd[1534]: time="2025-07-11T00:10:57.058639989Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jul 11 00:10:57.059114 containerd[1534]: time="2025-07-11T00:10:57.058654184Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Jul 11 00:10:57.059114 containerd[1534]: time="2025-07-11T00:10:57.058665632Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Jul 11 00:10:57.059114 containerd[1534]: time="2025-07-11T00:10:57.058674981Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Jul 11 00:10:57.059114 containerd[1534]: time="2025-07-11T00:10:57.058740386Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Jul 11 00:10:57.059114 containerd[1534]: time="2025-07-11T00:10:57.058924846Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Jul 11 00:10:57.059347 containerd[1534]: time="2025-07-11T00:10:57.059039399Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jul 11 00:10:57.060335 containerd[1534]: time="2025-07-11T00:10:57.060310666Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Jul 11 00:10:57.060728 containerd[1534]: time="2025-07-11T00:10:57.060706490Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Jul 11 00:10:57.060905 containerd[1534]: time="2025-07-11T00:10:57.060887745Z" level=info msg="metadata content store policy set" policy=shared
Jul 11 00:10:57.063882 containerd[1534]: time="2025-07-11T00:10:57.063857507Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Jul 11 00:10:57.064038 containerd[1534]: time="2025-07-11T00:10:57.064022392Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Jul 11 00:10:57.064154 containerd[1534]: time="2025-07-11T00:10:57.064140227Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Jul 11 00:10:57.064271 containerd[1534]: time="2025-07-11T00:10:57.064255773Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Jul 11 00:10:57.064549 containerd[1534]: time="2025-07-11T00:10:57.064318506Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Jul 11 00:10:57.064549 containerd[1534]: time="2025-07-11T00:10:57.064479079Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Jul 11 00:10:57.065072 containerd[1534]: time="2025-07-11T00:10:57.065041276Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Jul 11 00:10:57.065191 containerd[1534]: time="2025-07-11T00:10:57.065170749Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Jul 11 00:10:57.065217 containerd[1534]: time="2025-07-11T00:10:57.065193072Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Jul 11 00:10:57.065217 containerd[1534]: time="2025-07-11T00:10:57.065206161Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Jul 11 00:10:57.065256 containerd[1534]: time="2025-07-11T00:10:57.065223599Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Jul 11 00:10:57.065256 containerd[1534]: time="2025-07-11T00:10:57.065236688Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Jul 11 00:10:57.065256 containerd[1534]: time="2025-07-11T00:10:57.065247983Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Jul 11 00:10:57.065317 containerd[1534]: time="2025-07-11T00:10:57.065260575Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Jul 11 00:10:57.065317 containerd[1534]: time="2025-07-11T00:10:57.065274351Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Jul 11 00:10:57.065317 containerd[1534]: time="2025-07-11T00:10:57.065287249Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Jul 11 00:10:57.065317 containerd[1534]: time="2025-07-11T00:10:57.065299154Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Jul 11 00:10:57.065317 containerd[1534]: time="2025-07-11T00:10:57.065310144Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Jul 11 00:10:57.065418 containerd[1534]: time="2025-07-11T00:10:57.065340290Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Jul 11 00:10:57.065418 containerd[1534]: time="2025-07-11T00:10:57.065365322Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Jul 11 00:10:57.065418 containerd[1534]: time="2025-07-11T00:10:57.065377609Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Jul 11 00:10:57.065418 containerd[1534]: time="2025-07-11T00:10:57.065389438Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Jul 11 00:10:57.065418 containerd[1534]: time="2025-07-11T00:10:57.065403672Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Jul 11 00:10:57.065501 containerd[1534]: time="2025-07-11T00:10:57.065418935Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Jul 11 00:10:57.065501 containerd[1534]: time="2025-07-11T00:10:57.065430001Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Jul 11 00:10:57.065501 containerd[1534]: time="2025-07-11T00:10:57.065440953Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Jul 11 00:10:57.065501 containerd[1534]: time="2025-07-11T00:10:57.065452477Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Jul 11 00:10:57.065501 containerd[1534]: time="2025-07-11T00:10:57.065465031Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Jul 11 00:10:57.065501 containerd[1534]: time="2025-07-11T00:10:57.065474876Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Jul 11 00:10:57.065501 containerd[1534]: time="2025-07-11T00:10:57.065487927Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Jul 11 00:10:57.065501 containerd[1534]: time="2025-07-11T00:10:57.065498840Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Jul 11 00:10:57.065633 containerd[1534]: time="2025-07-11T00:10:57.065515859Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Jul 11 00:10:57.065633 containerd[1534]: time="2025-07-11T00:10:57.065534175Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Jul 11 00:10:57.065633 containerd[1534]: time="2025-07-11T00:10:57.065544707Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Jul 11 00:10:57.065633 containerd[1534]: time="2025-07-11T00:10:57.065556002Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Jul 11 00:10:57.065950 containerd[1534]: time="2025-07-11T00:10:57.065669716Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Jul 11 00:10:57.065950 containerd[1534]: time="2025-07-11T00:10:57.065687880Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Jul 11 00:10:57.065950 containerd[1534]: time="2025-07-11T00:10:57.065697763Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Jul 11 00:10:57.065950 containerd[1534]: time="2025-07-11T00:10:57.065709783Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Jul 11 00:10:57.065950 containerd[1534]: time="2025-07-11T00:10:57.065718789Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Jul 11 00:10:57.065950 containerd[1534]: time="2025-07-11T00:10:57.065729778Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Jul 11 00:10:57.065950 containerd[1534]: time="2025-07-11T00:10:57.065738746Z" level=info msg="NRI interface is disabled by configuration."
Jul 11 00:10:57.065950 containerd[1534]: time="2025-07-11T00:10:57.065752292Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Jul 11 00:10:57.066241 containerd[1534]: time="2025-07-11T00:10:57.066054702Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Jul 11 00:10:57.066241 containerd[1534]: time="2025-07-11T00:10:57.066107362Z" level=info msg="Connect containerd service"
Jul 11 00:10:57.066241 containerd[1534]: time="2025-07-11T00:10:57.066132089Z" level=info msg="using legacy CRI server"
Jul 11 00:10:57.066241 containerd[1534]: time="2025-07-11T00:10:57.066138194Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Jul 11 00:10:57.066241 containerd[1534]: time="2025-07-11T00:10:57.066210429Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Jul 11 00:10:57.066783 containerd[1534]: time="2025-07-11T00:10:57.066757667Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jul 11 00:10:57.067128 containerd[1534]: time="2025-07-11T00:10:57.067039739Z" level=info msg="Start subscribing containerd event"
Jul 11 00:10:57.067242 containerd[1534]: time="2025-07-11T00:10:57.067194626Z" level=info msg="Start recovering state"
Jul 11 00:10:57.067394 containerd[1534]: time="2025-07-11T00:10:57.067317078Z" level=info msg="Start event monitor"
Jul 11 00:10:57.067694 containerd[1534]: time="2025-07-11T00:10:57.067479712Z" level=info msg="Start snapshots syncer"
Jul 11 00:10:57.067694 containerd[1534]: time="2025-07-11T00:10:57.067511575Z" level=info msg="Start cni network conf syncer for default"
Jul 11 00:10:57.067694 containerd[1534]: time="2025-07-11T00:10:57.067519436Z" level=info msg="Start streaming server"
Jul 11 00:10:57.067694 containerd[1534]: time="2025-07-11T00:10:57.067197335Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Jul 11 00:10:57.067694 containerd[1534]: time="2025-07-11T00:10:57.067651008Z" level=info msg=serving... address=/run/containerd/containerd.sock
Jul 11 00:10:57.067793 containerd[1534]: time="2025-07-11T00:10:57.067698935Z" level=info msg="containerd successfully booted in 0.038203s"
Jul 11 00:10:57.067829 systemd[1]: Started containerd.service - containerd container runtime.
Jul 11 00:10:57.194413 tar[1530]: linux-arm64/LICENSE
Jul 11 00:10:57.194906 tar[1530]: linux-arm64/README.md
Jul 11 00:10:57.207632 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Jul 11 00:10:57.290718 sshd_keygen[1525]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Jul 11 00:10:57.308235 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Jul 11 00:10:57.316646 systemd[1]: Starting issuegen.service - Generate /run/issue...
Jul 11 00:10:57.321462 systemd[1]: issuegen.service: Deactivated successfully.
Jul 11 00:10:57.321666 systemd[1]: Finished issuegen.service - Generate /run/issue.
Jul 11 00:10:57.323894 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Jul 11 00:10:57.333812 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Jul 11 00:10:57.346590 systemd[1]: Started getty@tty1.service - Getty on tty1.
Jul 11 00:10:57.348163 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0.
Jul 11 00:10:57.349093 systemd[1]: Reached target getty.target - Login Prompts.
Jul 11 00:10:57.447449 systemd-networkd[1227]: eth0: Gained IPv6LL
Jul 11 00:10:57.450085 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Jul 11 00:10:57.451605 systemd[1]: Reached target network-online.target - Network is Online.
Jul 11 00:10:57.465550 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent...
Jul 11 00:10:57.467711 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 11 00:10:57.469562 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Jul 11 00:10:57.484848 systemd[1]: coreos-metadata.service: Deactivated successfully.
Jul 11 00:10:57.485104 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent.
Jul 11 00:10:57.486395 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Jul 11 00:10:57.490846 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Jul 11 00:10:57.983161 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 11 00:10:57.984464 systemd[1]: Reached target multi-user.target - Multi-User System.
Jul 11 00:10:57.986649 (kubelet)[1642]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jul 11 00:10:57.988407 systemd[1]: Startup finished in 6.462s (kernel) + 3.389s (userspace) = 9.852s.
Jul 11 00:10:58.384308 kubelet[1642]: E0711 00:10:58.384189 1642 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jul 11 00:10:58.386197 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jul 11 00:10:58.386439 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jul 11 00:11:00.779462 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Jul 11 00:11:00.792558 systemd[1]: Started sshd@0-10.0.0.50:22-10.0.0.1:36592.service - OpenSSH per-connection server daemon (10.0.0.1:36592).
Jul 11 00:11:00.839219 sshd[1655]: Accepted publickey for core from 10.0.0.1 port 36592 ssh2: RSA SHA256:GK2LEBRiSxxQSb7NJczWsRz9vp5Z0addujXbSKx/c/M
Jul 11 00:11:00.840958 sshd[1655]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 11 00:11:00.847937 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Jul 11 00:11:00.857555 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Jul 11 00:11:00.859411 systemd-logind[1517]: New session 1 of user core.
Jul 11 00:11:00.865910 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Jul 11 00:11:00.871500 systemd[1]: Starting user@500.service - User Manager for UID 500...
Jul 11 00:11:00.877299 (systemd)[1661]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Jul 11 00:11:00.963583 systemd[1661]: Queued start job for default target default.target.
Jul 11 00:11:00.963912 systemd[1661]: Created slice app.slice - User Application Slice.
Jul 11 00:11:00.963934 systemd[1661]: Reached target paths.target - Paths.
Jul 11 00:11:00.963944 systemd[1661]: Reached target timers.target - Timers.
Jul 11 00:11:00.976423 systemd[1661]: Starting dbus.socket - D-Bus User Message Bus Socket...
Jul 11 00:11:00.981603 systemd[1661]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Jul 11 00:11:00.981659 systemd[1661]: Reached target sockets.target - Sockets.
Jul 11 00:11:00.981669 systemd[1661]: Reached target basic.target - Basic System.
Jul 11 00:11:00.981702 systemd[1661]: Reached target default.target - Main User Target.
Jul 11 00:11:00.981724 systemd[1661]: Startup finished in 99ms.
Jul 11 00:11:00.982081 systemd[1]: Started user@500.service - User Manager for UID 500.
Jul 11 00:11:00.983335 systemd[1]: Started session-1.scope - Session 1 of User core.
Jul 11 00:11:01.045627 systemd[1]: Started sshd@1-10.0.0.50:22-10.0.0.1:36596.service - OpenSSH per-connection server daemon (10.0.0.1:36596).
Jul 11 00:11:01.080134 sshd[1673]: Accepted publickey for core from 10.0.0.1 port 36596 ssh2: RSA SHA256:GK2LEBRiSxxQSb7NJczWsRz9vp5Z0addujXbSKx/c/M
Jul 11 00:11:01.081408 sshd[1673]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 11 00:11:01.085223 systemd-logind[1517]: New session 2 of user core.
Jul 11 00:11:01.095610 systemd[1]: Started session-2.scope - Session 2 of User core.
Jul 11 00:11:01.145517 sshd[1673]: pam_unix(sshd:session): session closed for user core
Jul 11 00:11:01.155560 systemd[1]: Started sshd@2-10.0.0.50:22-10.0.0.1:36604.service - OpenSSH per-connection server daemon (10.0.0.1:36604).
Jul 11 00:11:01.155898 systemd[1]: sshd@1-10.0.0.50:22-10.0.0.1:36596.service: Deactivated successfully.
Jul 11 00:11:01.158104 systemd-logind[1517]: Session 2 logged out. Waiting for processes to exit.
Jul 11 00:11:01.158224 systemd[1]: session-2.scope: Deactivated successfully.
Jul 11 00:11:01.159379 systemd-logind[1517]: Removed session 2.
Jul 11 00:11:01.185708 sshd[1678]: Accepted publickey for core from 10.0.0.1 port 36604 ssh2: RSA SHA256:GK2LEBRiSxxQSb7NJczWsRz9vp5Z0addujXbSKx/c/M
Jul 11 00:11:01.186821 sshd[1678]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 11 00:11:01.190628 systemd-logind[1517]: New session 3 of user core.
Jul 11 00:11:01.207579 systemd[1]: Started session-3.scope - Session 3 of User core.
Jul 11 00:11:01.254219 sshd[1678]: pam_unix(sshd:session): session closed for user core
Jul 11 00:11:01.265566 systemd[1]: Started sshd@3-10.0.0.50:22-10.0.0.1:36618.service - OpenSSH per-connection server daemon (10.0.0.1:36618).
Jul 11 00:11:01.265904 systemd[1]: sshd@2-10.0.0.50:22-10.0.0.1:36604.service: Deactivated successfully.
Jul 11 00:11:01.267524 systemd-logind[1517]: Session 3 logged out. Waiting for processes to exit.
Jul 11 00:11:01.268054 systemd[1]: session-3.scope: Deactivated successfully.
Jul 11 00:11:01.269254 systemd-logind[1517]: Removed session 3.
Jul 11 00:11:01.296146 sshd[1686]: Accepted publickey for core from 10.0.0.1 port 36618 ssh2: RSA SHA256:GK2LEBRiSxxQSb7NJczWsRz9vp5Z0addujXbSKx/c/M
Jul 11 00:11:01.297388 sshd[1686]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 11 00:11:01.301026 systemd-logind[1517]: New session 4 of user core.
Jul 11 00:11:01.316639 systemd[1]: Started session-4.scope - Session 4 of User core.
Jul 11 00:11:01.367518 sshd[1686]: pam_unix(sshd:session): session closed for user core
Jul 11 00:11:01.379559 systemd[1]: Started sshd@4-10.0.0.50:22-10.0.0.1:36630.service - OpenSSH per-connection server daemon (10.0.0.1:36630).
Jul 11 00:11:01.379894 systemd[1]: sshd@3-10.0.0.50:22-10.0.0.1:36618.service: Deactivated successfully.
Jul 11 00:11:01.382212 systemd[1]: session-4.scope: Deactivated successfully.
Jul 11 00:11:01.382213 systemd-logind[1517]: Session 4 logged out. Waiting for processes to exit.
Jul 11 00:11:01.383308 systemd-logind[1517]: Removed session 4.
Jul 11 00:11:01.409827 sshd[1694]: Accepted publickey for core from 10.0.0.1 port 36630 ssh2: RSA SHA256:GK2LEBRiSxxQSb7NJczWsRz9vp5Z0addujXbSKx/c/M
Jul 11 00:11:01.410945 sshd[1694]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 11 00:11:01.413947 systemd-logind[1517]: New session 5 of user core.
Jul 11 00:11:01.425541 systemd[1]: Started session-5.scope - Session 5 of User core.
Jul 11 00:11:01.489392 sudo[1701]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Jul 11 00:11:01.491503 sudo[1701]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jul 11 00:11:01.505058 sudo[1701]: pam_unix(sudo:session): session closed for user root
Jul 11 00:11:01.506694 sshd[1694]: pam_unix(sshd:session): session closed for user core
Jul 11 00:11:01.519606 systemd[1]: Started sshd@5-10.0.0.50:22-10.0.0.1:36646.service - OpenSSH per-connection server daemon (10.0.0.1:36646).
Jul 11 00:11:01.519919 systemd[1]: sshd@4-10.0.0.50:22-10.0.0.1:36630.service: Deactivated successfully.
Jul 11 00:11:01.522039 systemd[1]: session-5.scope: Deactivated successfully.
Jul 11 00:11:01.522573 systemd-logind[1517]: Session 5 logged out. Waiting for processes to exit.
Jul 11 00:11:01.523383 systemd-logind[1517]: Removed session 5.
Jul 11 00:11:01.549969 sshd[1703]: Accepted publickey for core from 10.0.0.1 port 36646 ssh2: RSA SHA256:GK2LEBRiSxxQSb7NJczWsRz9vp5Z0addujXbSKx/c/M
Jul 11 00:11:01.551091 sshd[1703]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 11 00:11:01.554405 systemd-logind[1517]: New session 6 of user core.
Jul 11 00:11:01.566540 systemd[1]: Started session-6.scope - Session 6 of User core.
Jul 11 00:11:01.614666 sudo[1711]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Jul 11 00:11:01.615158 sudo[1711]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jul 11 00:11:01.617929 sudo[1711]: pam_unix(sudo:session): session closed for user root
Jul 11 00:11:01.621926 sudo[1710]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules
Jul 11 00:11:01.622171 sudo[1710]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jul 11 00:11:01.641545 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules...
Jul 11 00:11:01.642765 auditctl[1714]: No rules
Jul 11 00:11:01.643484 systemd[1]: audit-rules.service: Deactivated successfully.
Jul 11 00:11:01.643685 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules.
Jul 11 00:11:01.645093 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Jul 11 00:11:01.666312 augenrules[1733]: No rules
Jul 11 00:11:01.667548 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Jul 11 00:11:01.669504 sudo[1710]: pam_unix(sudo:session): session closed for user root
Jul 11 00:11:01.670867 sshd[1703]: pam_unix(sshd:session): session closed for user core
Jul 11 00:11:01.679542 systemd[1]: Started sshd@6-10.0.0.50:22-10.0.0.1:36652.service - OpenSSH per-connection server daemon (10.0.0.1:36652).
Jul 11 00:11:01.679862 systemd[1]: sshd@5-10.0.0.50:22-10.0.0.1:36646.service: Deactivated successfully.
Jul 11 00:11:01.681466 systemd-logind[1517]: Session 6 logged out. Waiting for processes to exit.
Jul 11 00:11:01.682003 systemd[1]: session-6.scope: Deactivated successfully.
Jul 11 00:11:01.683111 systemd-logind[1517]: Removed session 6.
Jul 11 00:11:01.711170 sshd[1739]: Accepted publickey for core from 10.0.0.1 port 36652 ssh2: RSA SHA256:GK2LEBRiSxxQSb7NJczWsRz9vp5Z0addujXbSKx/c/M
Jul 11 00:11:01.712234 sshd[1739]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 11 00:11:01.715281 systemd-logind[1517]: New session 7 of user core.
Jul 11 00:11:01.728604 systemd[1]: Started session-7.scope - Session 7 of User core.
Jul 11 00:11:01.776758 sudo[1746]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Jul 11 00:11:01.777014 sudo[1746]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jul 11 00:11:02.089559 systemd[1]: Starting docker.service - Docker Application Container Engine...
Jul 11 00:11:02.089781 (dockerd)[1764]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Jul 11 00:11:02.335534 dockerd[1764]: time="2025-07-11T00:11:02.335480278Z" level=info msg="Starting up"
Jul 11 00:11:02.587020 dockerd[1764]: time="2025-07-11T00:11:02.586685079Z" level=info msg="Loading containers: start."
Jul 11 00:11:02.677371 kernel: Initializing XFRM netlink socket
Jul 11 00:11:02.735136 systemd-networkd[1227]: docker0: Link UP
Jul 11 00:11:02.751410 dockerd[1764]: time="2025-07-11T00:11:02.751369791Z" level=info msg="Loading containers: done."
Jul 11 00:11:02.765553 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1008122331-merged.mount: Deactivated successfully.
Jul 11 00:11:02.766263 dockerd[1764]: time="2025-07-11T00:11:02.765840760Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Jul 11 00:11:02.766263 dockerd[1764]: time="2025-07-11T00:11:02.765921449Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0
Jul 11 00:11:02.766263 dockerd[1764]: time="2025-07-11T00:11:02.766008778Z" level=info msg="Daemon has completed initialization"
Jul 11 00:11:02.793259 dockerd[1764]: time="2025-07-11T00:11:02.792929817Z" level=info msg="API listen on /run/docker.sock"
Jul 11 00:11:02.793257 systemd[1]: Started docker.service - Docker Application Container Engine.
Jul 11 00:11:03.322239 containerd[1534]: time="2025-07-11T00:11:03.322186718Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.10\""
Jul 11 00:11:04.013529 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount578863123.mount: Deactivated successfully.
Jul 11 00:11:05.346758 containerd[1534]: time="2025-07-11T00:11:05.346690382Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 11 00:11:05.347249 containerd[1534]: time="2025-07-11T00:11:05.347219677Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.10: active requests=0, bytes read=25651795"
Jul 11 00:11:05.347960 containerd[1534]: time="2025-07-11T00:11:05.347925377Z" level=info msg="ImageCreate event name:\"sha256:8907c2d36348551c1038e24ef688f6830681069380376707e55518007a20a86c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 11 00:11:05.350928 containerd[1534]: time="2025-07-11T00:11:05.350874115Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:083d7d64af31cd090f870eb49fb815e6bb42c175fc602ee9dae2f28f082bd4dc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 11 00:11:05.352197 containerd[1534]: time="2025-07-11T00:11:05.352056318Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.10\" with image id \"sha256:8907c2d36348551c1038e24ef688f6830681069380376707e55518007a20a86c\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.10\", repo digest \"registry.k8s.io/kube-apiserver@sha256:083d7d64af31cd090f870eb49fb815e6bb42c175fc602ee9dae2f28f082bd4dc\", size \"25648593\" in 2.029817444s"
Jul 11 00:11:05.352197 containerd[1534]: time="2025-07-11T00:11:05.352094268Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.10\" returns image reference \"sha256:8907c2d36348551c1038e24ef688f6830681069380376707e55518007a20a86c\""
Jul 11 00:11:05.355129 containerd[1534]: time="2025-07-11T00:11:05.355090247Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.10\""
Jul 11 00:11:06.579010 containerd[1534]: time="2025-07-11T00:11:06.578961583Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 11 00:11:06.580015 containerd[1534]: time="2025-07-11T00:11:06.579989795Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.10: active requests=0, bytes read=22459679"
Jul 11 00:11:06.580680 containerd[1534]: time="2025-07-11T00:11:06.580657742Z" level=info msg="ImageCreate event name:\"sha256:0f640d6889416d515a0ac4de1c26f4d80134c47641ff464abc831560a951175f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 11 00:11:06.584233 containerd[1534]: time="2025-07-11T00:11:06.584195185Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:3c67387d023c6114879f1e817669fd641797d30f117230682faf3930ecaaf0fe\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 11 00:11:06.585889 containerd[1534]: time="2025-07-11T00:11:06.585846770Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.10\" with image id \"sha256:0f640d6889416d515a0ac4de1c26f4d80134c47641ff464abc831560a951175f\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.10\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:3c67387d023c6114879f1e817669fd641797d30f117230682faf3930ecaaf0fe\", size \"23995467\" in 1.230703361s"
Jul 11 00:11:06.585889 containerd[1534]: time="2025-07-11T00:11:06.585886019Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.10\" returns image reference \"sha256:0f640d6889416d515a0ac4de1c26f4d80134c47641ff464abc831560a951175f\""
Jul 11 00:11:06.586445 containerd[1534]: time="2025-07-11T00:11:06.586378548Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.10\""
Jul 11 00:11:07.751583 containerd[1534]: time="2025-07-11T00:11:07.751537087Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 11 00:11:07.752168 containerd[1534]: time="2025-07-11T00:11:07.752141638Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.10: active requests=0, bytes read=17125068"
Jul 11 00:11:07.752952 containerd[1534]: time="2025-07-11T00:11:07.752925313Z" level=info msg="ImageCreate event name:\"sha256:23d79b83d912e2633bcb4f9f7b8b46024893e11d492a4249d8f1f8c9a26b7b2c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 11 00:11:07.755690 containerd[1534]: time="2025-07-11T00:11:07.755663354Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:284dc2a5cf6afc9b76e39ad4b79c680c23d289488517643b28784a06d0141272\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 11 00:11:07.756803 containerd[1534]: time="2025-07-11T00:11:07.756766592Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.10\" with image id \"sha256:23d79b83d912e2633bcb4f9f7b8b46024893e11d492a4249d8f1f8c9a26b7b2c\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.10\", repo digest \"registry.k8s.io/kube-scheduler@sha256:284dc2a5cf6afc9b76e39ad4b79c680c23d289488517643b28784a06d0141272\", size \"18660874\" in 1.17035509s"
Jul 11 00:11:07.756803 containerd[1534]: time="2025-07-11T00:11:07.756800299Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.10\" returns image reference \"sha256:23d79b83d912e2633bcb4f9f7b8b46024893e11d492a4249d8f1f8c9a26b7b2c\""
Jul 11 00:11:07.757445 containerd[1534]: time="2025-07-11T00:11:07.757271998Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.10\""
Jul 11 00:11:08.472460 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Jul 11 00:11:08.489563 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 11 00:11:08.589203 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 11 00:11:08.594336 (kubelet)[1990]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jul 11 00:11:08.636238 kubelet[1990]: E0711 00:11:08.636189 1990 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jul 11 00:11:08.638679 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jul 11 00:11:08.638816 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jul 11 00:11:08.731713 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4187919525.mount: Deactivated successfully.
Jul 11 00:11:09.068031 containerd[1534]: time="2025-07-11T00:11:09.067750262Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 11 00:11:09.068867 containerd[1534]: time="2025-07-11T00:11:09.068667123Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.10: active requests=0, bytes read=26915959"
Jul 11 00:11:09.069458 containerd[1534]: time="2025-07-11T00:11:09.069422099Z" level=info msg="ImageCreate event name:\"sha256:dde5ff0da443b455e81aefc7bf6a216fdd659d1cbe13b8e8ac8129c3ecd27f89\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 11 00:11:09.071596 containerd[1534]: time="2025-07-11T00:11:09.071563264Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:bcbb293812bdf587b28ea98369a8c347ca84884160046296761acdf12b27029d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 11 00:11:09.072257 containerd[1534]: time="2025-07-11T00:11:09.072172166Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.10\" with image id \"sha256:dde5ff0da443b455e81aefc7bf6a216fdd659d1cbe13b8e8ac8129c3ecd27f89\", repo tag \"registry.k8s.io/kube-proxy:v1.31.10\", repo digest \"registry.k8s.io/kube-proxy@sha256:bcbb293812bdf587b28ea98369a8c347ca84884160046296761acdf12b27029d\", size \"26914976\" in 1.3148658s"
Jul 11 00:11:09.072257 containerd[1534]: time="2025-07-11T00:11:09.072207476Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.10\" returns image reference \"sha256:dde5ff0da443b455e81aefc7bf6a216fdd659d1cbe13b8e8ac8129c3ecd27f89\""
Jul 11 00:11:09.072750 containerd[1534]: time="2025-07-11T00:11:09.072726103Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\""
Jul 11 00:11:09.680293 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3466835630.mount: Deactivated successfully.
Jul 11 00:11:10.629945 containerd[1534]: time="2025-07-11T00:11:10.629892742Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 11 00:11:10.631116 containerd[1534]: time="2025-07-11T00:11:10.631086182Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=16951624"
Jul 11 00:11:10.631860 containerd[1534]: time="2025-07-11T00:11:10.631834542Z" level=info msg="ImageCreate event name:\"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 11 00:11:10.637375 containerd[1534]: time="2025-07-11T00:11:10.635764841Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 11 00:11:10.637375 containerd[1534]: time="2025-07-11T00:11:10.636944078Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"16948420\" in 1.564189671s"
Jul 11 00:11:10.637375 containerd[1534]: time="2025-07-11T00:11:10.636971414Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\""
Jul 11 00:11:10.637903 containerd[1534]: time="2025-07-11T00:11:10.637877008Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Jul 11 00:11:11.194775 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3566712311.mount: Deactivated successfully.
Jul 11 00:11:11.199284 containerd[1534]: time="2025-07-11T00:11:11.199238157Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 11 00:11:11.200008 containerd[1534]: time="2025-07-11T00:11:11.199968019Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268705"
Jul 11 00:11:11.200584 containerd[1534]: time="2025-07-11T00:11:11.200549026Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 11 00:11:11.202955 containerd[1534]: time="2025-07-11T00:11:11.202920872Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 11 00:11:11.203945 containerd[1534]: time="2025-07-11T00:11:11.203914529Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 566.001884ms"
Jul 11 00:11:11.203982 containerd[1534]: time="2025-07-11T00:11:11.203949360Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\""
Jul 11 00:11:11.204693 containerd[1534]: time="2025-07-11T00:11:11.204667149Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\""
Jul 11 00:11:11.851705 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1283566538.mount: Deactivated successfully.
Jul 11 00:11:13.959256 containerd[1534]: time="2025-07-11T00:11:13.959212532Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 11 00:11:13.960452 containerd[1534]: time="2025-07-11T00:11:13.960421804Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=66406467"
Jul 11 00:11:13.961387 containerd[1534]: time="2025-07-11T00:11:13.961331594Z" level=info msg="ImageCreate event name:\"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 11 00:11:13.965249 containerd[1534]: time="2025-07-11T00:11:13.965215095Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 11 00:11:13.966585 containerd[1534]: time="2025-07-11T00:11:13.966541369Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"66535646\" in 2.761836512s"
Jul 11 00:11:13.966635 containerd[1534]: time="2025-07-11T00:11:13.966592092Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\""
Jul 11 00:11:18.889149 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Jul 11 00:11:18.899776 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 11 00:11:19.076520 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 11 00:11:19.079359 (kubelet)[2149]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jul 11 00:11:19.113818 kubelet[2149]: E0711 00:11:19.113763 2149 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jul 11 00:11:19.116265 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jul 11 00:11:19.116467 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jul 11 00:11:19.770863 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 11 00:11:19.779626 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 11 00:11:19.799294 systemd[1]: Reloading requested from client PID 2166 ('systemctl') (unit session-7.scope)...
Jul 11 00:11:19.799311 systemd[1]: Reloading...
Jul 11 00:11:19.851451 zram_generator::config[2205]: No configuration found.
Jul 11 00:11:20.034782 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 11 00:11:20.085418 systemd[1]: Reloading finished in 285 ms.
Jul 11 00:11:20.127814 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Jul 11 00:11:20.127889 systemd[1]: kubelet.service: Failed with result 'signal'.
Jul 11 00:11:20.128141 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 11 00:11:20.130306 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 11 00:11:20.227508 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 11 00:11:20.231098 (kubelet)[2263]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Jul 11 00:11:20.263755 kubelet[2263]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jul 11 00:11:20.263755 kubelet[2263]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Jul 11 00:11:20.263755 kubelet[2263]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jul 11 00:11:20.263755 kubelet[2263]: I0711 00:11:20.263612 2263 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jul 11 00:11:20.821618 kubelet[2263]: I0711 00:11:20.821570 2263 server.go:491] "Kubelet version" kubeletVersion="v1.31.8"
Jul 11 00:11:20.821618 kubelet[2263]: I0711 00:11:20.821605 2263 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jul 11 00:11:20.821859 kubelet[2263]: I0711 00:11:20.821833 2263 server.go:934] "Client rotation is on, will bootstrap in background"
Jul 11 00:11:20.851126 kubelet[2263]: E0711 00:11:20.851071 2263 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.50:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.50:6443: connect: connection refused" logger="UnhandledError"
Jul 11 00:11:20.851572 kubelet[2263]: I0711 00:11:20.851508 2263 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jul 11 00:11:20.858998 kubelet[2263]: E0711 00:11:20.858947 2263 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Jul 11 00:11:20.858998 kubelet[2263]: I0711 00:11:20.858975 2263 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Jul 11 00:11:20.862226 kubelet[2263]: I0711 00:11:20.862210 2263 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Jul 11 00:11:20.863204 kubelet[2263]: I0711 00:11:20.863173 2263 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Jul 11 00:11:20.863339 kubelet[2263]: I0711 00:11:20.863309 2263 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jul 11 00:11:20.863511 kubelet[2263]: I0711 00:11:20.863337 2263 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1}
Jul 11 00:11:20.863595 kubelet[2263]: I0711 00:11:20.863576 2263 topology_manager.go:138] "Creating topology manager with none policy"
Jul 11 00:11:20.863595 kubelet[2263]: I0711 00:11:20.863585 2263 container_manager_linux.go:300] "Creating device plugin manager"
Jul 11 00:11:20.863815 kubelet[2263]: I0711 00:11:20.863793 2263 state_mem.go:36] "Initialized new in-memory state store"
Jul 11 00:11:20.867447 kubelet[2263]: I0711 00:11:20.867423 2263 kubelet.go:408] "Attempting to sync node with API server"
Jul 11 00:11:20.867478 kubelet[2263]: I0711 00:11:20.867451 2263 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests"
Jul 11 00:11:20.867478 kubelet[2263]: I0711 00:11:20.867470 2263 kubelet.go:314] "Adding apiserver pod source"
Jul 11 00:11:20.867516 kubelet[2263]: I0711 00:11:20.867479 2263 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jul 11 00:11:20.870595 kubelet[2263]: W0711 00:11:20.870502 2263 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.50:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.50:6443: connect: connection refused
Jul 11 00:11:20.870595 kubelet[2263]: E0711 00:11:20.870573 2263 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.50:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.50:6443: connect: connection refused" logger="UnhandledError"
Jul 11 00:11:20.870709 kubelet[2263]: W0711 00:11:20.870603 2263 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.50:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.50:6443: connect: connection refused
Jul 11 00:11:20.870709 kubelet[2263]: E0711 00:11:20.870686 2263 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.50:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.50:6443: connect: connection refused" logger="UnhandledError"
Jul 11 00:11:20.871463 kubelet[2263]: I0711 00:11:20.871442 2263 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Jul 11 00:11:20.872160 kubelet[2263]: I0711 00:11:20.872145 2263 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Jul 11 00:11:20.872326 kubelet[2263]: W0711 00:11:20.872314 2263 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Jul 11 00:11:20.873960 kubelet[2263]: I0711 00:11:20.873945 2263 server.go:1274] "Started kubelet"
Jul 11 00:11:20.875517 kubelet[2263]: I0711 00:11:20.874680 2263 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jul 11 00:11:20.875517 kubelet[2263]: I0711 00:11:20.874551 2263 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Jul 11 00:11:20.875517 kubelet[2263]: I0711 00:11:20.874957 2263 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jul 11 00:11:20.875814 kubelet[2263]: I0711 00:11:20.875781 2263 server.go:449] "Adding debug handlers to kubelet server"
Jul 11 00:11:20.877397 kubelet[2263]: I0711 00:11:20.877288 2263 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jul 11 00:11:20.878400 kubelet[2263]: I0711 00:11:20.877929 2263 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Jul 11 00:11:20.879455 kubelet[2263]: E0711 00:11:20.879424 2263 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Jul 11 00:11:20.879993 kubelet[2263]: I0711 00:11:20.879757 2263 volume_manager.go:289] "Starting Kubelet Volume Manager"
Jul 11 00:11:20.880198 kubelet[2263]: I0711 00:11:20.880186 2263 desired_state_of_world_populator.go:147] "Desired state populator starts to run"
Jul 11 00:11:20.880367 kubelet[2263]: I0711 00:11:20.880341 2263 reconciler.go:26] "Reconciler: start to sync state"
Jul 11 00:11:20.880489 kubelet[2263]: W0711 00:11:20.880175 2263 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.50:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.50:6443: connect: connection refused
Jul 11 00:11:20.880607 kubelet[2263]: E0711 00:11:20.880588 2263 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.50:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.50:6443: connect: connection refused" logger="UnhandledError"
Jul 11 00:11:20.880659 kubelet[2263]: I0711 00:11:20.880393 2263 factory.go:221] Registration of the systemd container factory successfully
Jul 11 00:11:20.880801 kubelet[2263]: I0711 00:11:20.880773 2263 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Jul 11 00:11:20.880968 kubelet[2263]: E0711 00:11:20.880930 2263 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.50:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.50:6443: connect: connection refused" interval="200ms"
Jul 11 00:11:20.881018 kubelet[2263]: E0711 00:11:20.880833 2263 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Jul 11 00:11:20.882395 kubelet[2263]: I0711 00:11:20.882380 2263 factory.go:221] Registration of the containerd container factory successfully
Jul 11 00:11:20.885118 kubelet[2263]: E0711 00:11:20.881470 2263 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.50:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.50:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.185109f5b18797ff default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-07-11 00:11:20.873924607 +0000 UTC m=+0.640109892,LastTimestamp:2025-07-11 00:11:20.873924607 +0000 UTC m=+0.640109892,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Jul 11 00:11:20.893608 kubelet[2263]: I0711 00:11:20.893564 2263 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Jul 11 00:11:20.894818 kubelet[2263]: I0711 00:11:20.894798 2263 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Jul 11 00:11:20.894882 kubelet[2263]: I0711 00:11:20.894822 2263 status_manager.go:217] "Starting to sync pod status with apiserver"
Jul 11 00:11:20.894882 kubelet[2263]: I0711 00:11:20.894837 2263 kubelet.go:2321] "Starting kubelet main sync loop"
Jul 11 00:11:20.894882 kubelet[2263]: E0711 00:11:20.894873 2263 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Jul 11 00:11:20.895220 kubelet[2263]: W0711 00:11:20.895196 2263 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.50:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.50:6443: connect: connection refused
Jul 11 00:11:20.895257 kubelet[2263]: E0711 00:11:20.895228 2263 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.50:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.50:6443: connect: connection refused" logger="UnhandledError"
Jul 11 00:11:20.899252 kubelet[2263]: I0711 00:11:20.899220 2263 cpu_manager.go:214] "Starting CPU manager" policy="none"
Jul 11 00:11:20.899252 kubelet[2263]: I0711 00:11:20.899237 2263 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Jul 11 00:11:20.899252 kubelet[2263]: I0711 00:11:20.899256 2263 state_mem.go:36] "Initialized new in-memory state store"
Jul 11 00:11:20.971651 kubelet[2263]: I0711 00:11:20.971626 2263 policy_none.go:49] "None policy: Start"
Jul 11 00:11:20.972460 kubelet[2263]: I0711 00:11:20.972441 2263 memory_manager.go:170] "Starting memorymanager" policy="None"
Jul 11 00:11:20.972516 kubelet[2263]: I0711 00:11:20.972470 2263 state_mem.go:35] "Initializing new in-memory state store"
Jul 11 00:11:20.976671 kubelet[2263]: I0711 00:11:20.976520 2263 manager.go:513] "Failed
to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 11 00:11:20.976730 kubelet[2263]: I0711 00:11:20.976703 2263 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 11 00:11:20.976766 kubelet[2263]: I0711 00:11:20.976724 2263 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 11 00:11:20.977106 kubelet[2263]: I0711 00:11:20.976971 2263 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 11 00:11:20.978211 kubelet[2263]: E0711 00:11:20.978175 2263 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jul 11 00:11:21.077863 kubelet[2263]: I0711 00:11:21.077765 2263 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jul 11 00:11:21.078635 kubelet[2263]: E0711 00:11:21.078591 2263 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.50:6443/api/v1/nodes\": dial tcp 10.0.0.50:6443: connect: connection refused" node="localhost" Jul 11 00:11:21.082172 kubelet[2263]: E0711 00:11:21.082135 2263 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.50:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.50:6443: connect: connection refused" interval="400ms" Jul 11 00:11:21.181670 kubelet[2263]: I0711 00:11:21.181543 2263 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/340b4bd135cb22647000a879e7534cb6-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"340b4bd135cb22647000a879e7534cb6\") " pod="kube-system/kube-apiserver-localhost" Jul 11 00:11:21.181670 kubelet[2263]: I0711 00:11:21.181573 2263 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" 
(UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 11 00:11:21.181670 kubelet[2263]: I0711 00:11:21.181589 2263 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/340b4bd135cb22647000a879e7534cb6-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"340b4bd135cb22647000a879e7534cb6\") " pod="kube-system/kube-apiserver-localhost" Jul 11 00:11:21.181670 kubelet[2263]: I0711 00:11:21.181616 2263 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/340b4bd135cb22647000a879e7534cb6-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"340b4bd135cb22647000a879e7534cb6\") " pod="kube-system/kube-apiserver-localhost" Jul 11 00:11:21.181670 kubelet[2263]: I0711 00:11:21.181634 2263 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 11 00:11:21.181915 kubelet[2263]: I0711 00:11:21.181649 2263 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 11 00:11:21.181915 kubelet[2263]: I0711 00:11:21.181698 2263 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" 
(UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 11 00:11:21.181987 kubelet[2263]: I0711 00:11:21.181753 2263 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 11 00:11:21.181987 kubelet[2263]: I0711 00:11:21.181956 2263 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b35b56493416c25588cb530e37ffc065-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"b35b56493416c25588cb530e37ffc065\") " pod="kube-system/kube-scheduler-localhost" Jul 11 00:11:21.280464 kubelet[2263]: I0711 00:11:21.280435 2263 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jul 11 00:11:21.280762 kubelet[2263]: E0711 00:11:21.280699 2263 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.50:6443/api/v1/nodes\": dial tcp 10.0.0.50:6443: connect: connection refused" node="localhost" Jul 11 00:11:21.300119 kubelet[2263]: E0711 00:11:21.300086 2263 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:11:21.300717 containerd[1534]: time="2025-07-11T00:11:21.300676722Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:340b4bd135cb22647000a879e7534cb6,Namespace:kube-system,Attempt:0,}" Jul 11 00:11:21.301111 kubelet[2263]: E0711 00:11:21.301068 2263 dns.go:153] "Nameserver 
limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:11:21.301562 containerd[1534]: time="2025-07-11T00:11:21.301389988Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:3f04709fe51ae4ab5abd58e8da771b74,Namespace:kube-system,Attempt:0,}" Jul 11 00:11:21.301998 kubelet[2263]: E0711 00:11:21.301796 2263 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:11:21.302490 containerd[1534]: time="2025-07-11T00:11:21.302093392Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:b35b56493416c25588cb530e37ffc065,Namespace:kube-system,Attempt:0,}" Jul 11 00:11:21.482554 kubelet[2263]: E0711 00:11:21.482471 2263 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.50:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.50:6443: connect: connection refused" interval="800ms" Jul 11 00:11:21.593386 kubelet[2263]: E0711 00:11:21.593273 2263 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.50:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.50:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.185109f5b18797ff default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-07-11 00:11:20.873924607 +0000 UTC m=+0.640109892,LastTimestamp:2025-07-11 00:11:20.873924607 +0000 UTC m=+0.640109892,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jul 11 00:11:21.682595 kubelet[2263]: I0711 00:11:21.682556 2263 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jul 11 00:11:21.682909 kubelet[2263]: E0711 00:11:21.682880 2263 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.50:6443/api/v1/nodes\": dial tcp 10.0.0.50:6443: connect: connection refused" node="localhost" Jul 11 00:11:21.717530 kubelet[2263]: W0711 00:11:21.717425 2263 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.50:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.50:6443: connect: connection refused Jul 11 00:11:21.717530 kubelet[2263]: E0711 00:11:21.717497 2263 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.50:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.50:6443: connect: connection refused" logger="UnhandledError" Jul 11 00:11:21.813931 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount651482274.mount: Deactivated successfully. 
Jul 11 00:11:21.818711 containerd[1534]: time="2025-07-11T00:11:21.818659383Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 11 00:11:21.820805 containerd[1534]: time="2025-07-11T00:11:21.820768558Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jul 11 00:11:21.822205 containerd[1534]: time="2025-07-11T00:11:21.822153647Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 11 00:11:21.823308 containerd[1534]: time="2025-07-11T00:11:21.823277025Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 11 00:11:21.824654 containerd[1534]: time="2025-07-11T00:11:21.824583501Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269175" Jul 11 00:11:21.824868 containerd[1534]: time="2025-07-11T00:11:21.824836947Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 11 00:11:21.825322 containerd[1534]: time="2025-07-11T00:11:21.825295170Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jul 11 00:11:21.826497 containerd[1534]: time="2025-07-11T00:11:21.826432722Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 11 00:11:21.827158 
containerd[1534]: time="2025-07-11T00:11:21.826910428Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 525.468218ms" Jul 11 00:11:21.830651 containerd[1534]: time="2025-07-11T00:11:21.830456635Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 527.936641ms" Jul 11 00:11:21.833368 containerd[1534]: time="2025-07-11T00:11:21.833280831Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 532.529609ms" Jul 11 00:11:21.966170 containerd[1534]: time="2025-07-11T00:11:21.966029501Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 11 00:11:21.966170 containerd[1534]: time="2025-07-11T00:11:21.966144327Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 11 00:11:21.966170 containerd[1534]: time="2025-07-11T00:11:21.966167364Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 11 00:11:21.966342 containerd[1534]: time="2025-07-11T00:11:21.966257595Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 11 00:11:21.966758 containerd[1534]: time="2025-07-11T00:11:21.965860537Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 11 00:11:21.966758 containerd[1534]: time="2025-07-11T00:11:21.966507966Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 11 00:11:21.966758 containerd[1534]: time="2025-07-11T00:11:21.966523857Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 11 00:11:21.966758 containerd[1534]: time="2025-07-11T00:11:21.966612052Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 11 00:11:21.966868 containerd[1534]: time="2025-07-11T00:11:21.966645389Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 11 00:11:21.966868 containerd[1534]: time="2025-07-11T00:11:21.966759017Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 11 00:11:21.966868 containerd[1534]: time="2025-07-11T00:11:21.966781016Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 11 00:11:21.966935 containerd[1534]: time="2025-07-11T00:11:21.966859668Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 11 00:11:22.020095 containerd[1534]: time="2025-07-11T00:11:22.020048542Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:b35b56493416c25588cb530e37ffc065,Namespace:kube-system,Attempt:0,} returns sandbox id \"82ab200d966e23ea351bdc69134e3e4d4e2d72d5990f87860fbf7f353b0853dc\"" Jul 11 00:11:22.021285 kubelet[2263]: E0711 00:11:22.021258 2263 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:11:22.022811 containerd[1534]: time="2025-07-11T00:11:22.022782948Z" level=info msg="CreateContainer within sandbox \"82ab200d966e23ea351bdc69134e3e4d4e2d72d5990f87860fbf7f353b0853dc\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jul 11 00:11:22.023724 containerd[1534]: time="2025-07-11T00:11:22.023661909Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:340b4bd135cb22647000a879e7534cb6,Namespace:kube-system,Attempt:0,} returns sandbox id \"bd03b3e87c32d79822121217ddf49f3dd219885a98840c7624e4b9626fa32113\"" Jul 11 00:11:22.024811 kubelet[2263]: E0711 00:11:22.024704 2263 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:11:22.025648 containerd[1534]: time="2025-07-11T00:11:22.025561920Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:3f04709fe51ae4ab5abd58e8da771b74,Namespace:kube-system,Attempt:0,} returns sandbox id \"ff366c1a2fed2866414525a30014836a65e3712d8ede132458c3cb120e55ced1\"" Jul 11 00:11:22.026236 kubelet[2263]: E0711 00:11:22.026209 2263 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 
8.8.8.8" Jul 11 00:11:22.027240 containerd[1534]: time="2025-07-11T00:11:22.026950128Z" level=info msg="CreateContainer within sandbox \"bd03b3e87c32d79822121217ddf49f3dd219885a98840c7624e4b9626fa32113\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jul 11 00:11:22.028118 containerd[1534]: time="2025-07-11T00:11:22.028089823Z" level=info msg="CreateContainer within sandbox \"ff366c1a2fed2866414525a30014836a65e3712d8ede132458c3cb120e55ced1\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jul 11 00:11:22.042336 containerd[1534]: time="2025-07-11T00:11:22.042290584Z" level=info msg="CreateContainer within sandbox \"82ab200d966e23ea351bdc69134e3e4d4e2d72d5990f87860fbf7f353b0853dc\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"43fb0e70f8ab392cba723444c8dfadf0db45223aeafa1b3451f02cf0e214e5f8\"" Jul 11 00:11:22.042817 containerd[1534]: time="2025-07-11T00:11:22.042786293Z" level=info msg="StartContainer for \"43fb0e70f8ab392cba723444c8dfadf0db45223aeafa1b3451f02cf0e214e5f8\"" Jul 11 00:11:22.047247 containerd[1534]: time="2025-07-11T00:11:22.047214766Z" level=info msg="CreateContainer within sandbox \"ff366c1a2fed2866414525a30014836a65e3712d8ede132458c3cb120e55ced1\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"fd9683e6abc8b97c69db596e6f16f8039814794f6580d5afde6aa1acb3e30fb0\"" Jul 11 00:11:22.047699 containerd[1534]: time="2025-07-11T00:11:22.047648137Z" level=info msg="StartContainer for \"fd9683e6abc8b97c69db596e6f16f8039814794f6580d5afde6aa1acb3e30fb0\"" Jul 11 00:11:22.048458 containerd[1534]: time="2025-07-11T00:11:22.048341163Z" level=info msg="CreateContainer within sandbox \"bd03b3e87c32d79822121217ddf49f3dd219885a98840c7624e4b9626fa32113\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"f05b8c33b8e118324f076c29f7230e2df26614cba8ac58c5dd1cb36231144967\"" Jul 11 00:11:22.048875 containerd[1534]: 
time="2025-07-11T00:11:22.048846855Z" level=info msg="StartContainer for \"f05b8c33b8e118324f076c29f7230e2df26614cba8ac58c5dd1cb36231144967\"" Jul 11 00:11:22.096168 containerd[1534]: time="2025-07-11T00:11:22.096041383Z" level=info msg="StartContainer for \"fd9683e6abc8b97c69db596e6f16f8039814794f6580d5afde6aa1acb3e30fb0\" returns successfully" Jul 11 00:11:22.125992 containerd[1534]: time="2025-07-11T00:11:22.123215793Z" level=info msg="StartContainer for \"43fb0e70f8ab392cba723444c8dfadf0db45223aeafa1b3451f02cf0e214e5f8\" returns successfully" Jul 11 00:11:22.125992 containerd[1534]: time="2025-07-11T00:11:22.123216712Z" level=info msg="StartContainer for \"f05b8c33b8e118324f076c29f7230e2df26614cba8ac58c5dd1cb36231144967\" returns successfully" Jul 11 00:11:22.216047 kubelet[2263]: W0711 00:11:22.215837 2263 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.50:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.50:6443: connect: connection refused Jul 11 00:11:22.216047 kubelet[2263]: E0711 00:11:22.215918 2263 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.50:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.50:6443: connect: connection refused" logger="UnhandledError" Jul 11 00:11:22.484909 kubelet[2263]: I0711 00:11:22.484785 2263 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jul 11 00:11:22.903120 kubelet[2263]: E0711 00:11:22.902937 2263 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:11:22.905412 kubelet[2263]: E0711 00:11:22.905297 2263 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, 
the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:11:22.907486 kubelet[2263]: E0711 00:11:22.907370 2263 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:11:23.797273 kubelet[2263]: E0711 00:11:23.797236 2263 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Jul 11 00:11:23.872533 kubelet[2263]: I0711 00:11:23.872448 2263 apiserver.go:52] "Watching apiserver" Jul 11 00:11:23.880865 kubelet[2263]: I0711 00:11:23.880832 2263 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Jul 11 00:11:23.909207 kubelet[2263]: E0711 00:11:23.909179 2263 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:11:23.983898 kubelet[2263]: I0711 00:11:23.983745 2263 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Jul 11 00:11:23.983898 kubelet[2263]: E0711 00:11:23.983775 2263 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Jul 11 00:11:24.919832 kubelet[2263]: E0711 00:11:24.919803 2263 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:11:25.428921 kubelet[2263]: E0711 00:11:25.428876 2263 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:11:25.694674 systemd[1]: Reloading requested from client PID 2539 ('systemctl') (unit session-7.scope)... Jul 11 00:11:25.694691 systemd[1]: Reloading... 
Jul 11 00:11:25.756382 zram_generator::config[2579]: No configuration found. Jul 11 00:11:25.911891 kubelet[2263]: E0711 00:11:25.911787 2263 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:11:25.912755 kubelet[2263]: E0711 00:11:25.912729 2263 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:11:25.935175 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 11 00:11:25.996841 systemd[1]: Reloading finished in 301 ms. Jul 11 00:11:26.028907 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jul 11 00:11:26.047144 systemd[1]: kubelet.service: Deactivated successfully. Jul 11 00:11:26.047501 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 11 00:11:26.055574 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 11 00:11:26.158093 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 11 00:11:26.161834 (kubelet)[2631]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 11 00:11:26.194867 kubelet[2631]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 11 00:11:26.194867 kubelet[2631]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. 
Jul 11 00:11:26.194867 kubelet[2631]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 11 00:11:26.194867 kubelet[2631]: I0711 00:11:26.194845 2631 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 11 00:11:26.203544 kubelet[2631]: I0711 00:11:26.203383 2631 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Jul 11 00:11:26.203544 kubelet[2631]: I0711 00:11:26.203414 2631 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 11 00:11:26.203705 kubelet[2631]: I0711 00:11:26.203687 2631 server.go:934] "Client rotation is on, will bootstrap in background" Jul 11 00:11:26.205228 kubelet[2631]: I0711 00:11:26.205206 2631 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jul 11 00:11:26.207890 kubelet[2631]: I0711 00:11:26.207825 2631 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 11 00:11:26.210484 kubelet[2631]: E0711 00:11:26.210459 2631 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jul 11 00:11:26.210484 kubelet[2631]: I0711 00:11:26.210483 2631 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jul 11 00:11:26.212762 kubelet[2631]: I0711 00:11:26.212746 2631 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jul 11 00:11:26.213085 kubelet[2631]: I0711 00:11:26.213073 2631 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Jul 11 00:11:26.213192 kubelet[2631]: I0711 00:11:26.213171 2631 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 11 00:11:26.213344 kubelet[2631]: I0711 00:11:26.213193 2631 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicy
Options":null,"CgroupVersion":1} Jul 11 00:11:26.213435 kubelet[2631]: I0711 00:11:26.213367 2631 topology_manager.go:138] "Creating topology manager with none policy" Jul 11 00:11:26.213435 kubelet[2631]: I0711 00:11:26.213378 2631 container_manager_linux.go:300] "Creating device plugin manager" Jul 11 00:11:26.213435 kubelet[2631]: I0711 00:11:26.213410 2631 state_mem.go:36] "Initialized new in-memory state store" Jul 11 00:11:26.213507 kubelet[2631]: I0711 00:11:26.213492 2631 kubelet.go:408] "Attempting to sync node with API server" Jul 11 00:11:26.213507 kubelet[2631]: I0711 00:11:26.213506 2631 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 11 00:11:26.213548 kubelet[2631]: I0711 00:11:26.213524 2631 kubelet.go:314] "Adding apiserver pod source" Jul 11 00:11:26.213548 kubelet[2631]: I0711 00:11:26.213538 2631 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 11 00:11:26.220678 kubelet[2631]: I0711 00:11:26.214636 2631 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jul 11 00:11:26.220678 kubelet[2631]: I0711 00:11:26.215188 2631 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jul 11 00:11:26.220678 kubelet[2631]: I0711 00:11:26.215599 2631 server.go:1274] "Started kubelet" Jul 11 00:11:26.220678 kubelet[2631]: I0711 00:11:26.216315 2631 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 11 00:11:26.220678 kubelet[2631]: I0711 00:11:26.217739 2631 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 11 00:11:26.222092 kubelet[2631]: I0711 00:11:26.222065 2631 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 11 00:11:26.222163 kubelet[2631]: I0711 00:11:26.222133 2631 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jul 11 
00:11:26.223955 kubelet[2631]: I0711 00:11:26.223919 2631 server.go:449] "Adding debug handlers to kubelet server" Jul 11 00:11:26.225637 kubelet[2631]: I0711 00:11:26.225600 2631 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 11 00:11:26.226553 kubelet[2631]: I0711 00:11:26.226403 2631 volume_manager.go:289] "Starting Kubelet Volume Manager" Jul 11 00:11:26.226673 kubelet[2631]: E0711 00:11:26.226649 2631 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 11 00:11:26.226740 kubelet[2631]: I0711 00:11:26.226725 2631 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Jul 11 00:11:26.226881 kubelet[2631]: I0711 00:11:26.226855 2631 reconciler.go:26] "Reconciler: start to sync state" Jul 11 00:11:26.228404 kubelet[2631]: I0711 00:11:26.228299 2631 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 11 00:11:26.230582 kubelet[2631]: I0711 00:11:26.230452 2631 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jul 11 00:11:26.230582 kubelet[2631]: I0711 00:11:26.230559 2631 factory.go:221] Registration of the containerd container factory successfully Jul 11 00:11:26.230582 kubelet[2631]: I0711 00:11:26.230570 2631 factory.go:221] Registration of the systemd container factory successfully Jul 11 00:11:26.233070 kubelet[2631]: I0711 00:11:26.232775 2631 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jul 11 00:11:26.233070 kubelet[2631]: I0711 00:11:26.232799 2631 status_manager.go:217] "Starting to sync pod status with apiserver" Jul 11 00:11:26.233070 kubelet[2631]: I0711 00:11:26.232817 2631 kubelet.go:2321] "Starting kubelet main sync loop" Jul 11 00:11:26.233070 kubelet[2631]: E0711 00:11:26.232863 2631 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 11 00:11:26.277634 kubelet[2631]: I0711 00:11:26.277536 2631 cpu_manager.go:214] "Starting CPU manager" policy="none" Jul 11 00:11:26.277634 kubelet[2631]: I0711 00:11:26.277558 2631 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jul 11 00:11:26.277634 kubelet[2631]: I0711 00:11:26.277579 2631 state_mem.go:36] "Initialized new in-memory state store" Jul 11 00:11:26.277772 kubelet[2631]: I0711 00:11:26.277722 2631 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jul 11 00:11:26.277772 kubelet[2631]: I0711 00:11:26.277734 2631 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jul 11 00:11:26.277772 kubelet[2631]: I0711 00:11:26.277751 2631 policy_none.go:49] "None policy: Start" Jul 11 00:11:26.279103 kubelet[2631]: I0711 00:11:26.279048 2631 memory_manager.go:170] "Starting memorymanager" policy="None" Jul 11 00:11:26.279103 kubelet[2631]: I0711 00:11:26.279078 2631 state_mem.go:35] "Initializing new in-memory state store" Jul 11 00:11:26.279258 kubelet[2631]: I0711 00:11:26.279221 2631 state_mem.go:75] "Updated machine memory state" Jul 11 00:11:26.281102 kubelet[2631]: I0711 00:11:26.280272 2631 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 11 00:11:26.281102 kubelet[2631]: I0711 00:11:26.280466 2631 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 11 00:11:26.281102 kubelet[2631]: I0711 00:11:26.280478 2631 container_log_manager.go:189] 
"Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 11 00:11:26.281102 kubelet[2631]: I0711 00:11:26.280654 2631 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 11 00:11:26.338906 kubelet[2631]: E0711 00:11:26.338858 2631 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Jul 11 00:11:26.339485 kubelet[2631]: E0711 00:11:26.339464 2631 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Jul 11 00:11:26.384962 kubelet[2631]: I0711 00:11:26.384936 2631 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jul 11 00:11:26.391972 kubelet[2631]: I0711 00:11:26.391947 2631 kubelet_node_status.go:111] "Node was previously registered" node="localhost" Jul 11 00:11:26.392084 kubelet[2631]: I0711 00:11:26.392019 2631 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Jul 11 00:11:26.529168 kubelet[2631]: I0711 00:11:26.528754 2631 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/340b4bd135cb22647000a879e7534cb6-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"340b4bd135cb22647000a879e7534cb6\") " pod="kube-system/kube-apiserver-localhost" Jul 11 00:11:26.529168 kubelet[2631]: I0711 00:11:26.528795 2631 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 11 00:11:26.529168 kubelet[2631]: I0711 00:11:26.528821 2631 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 11 00:11:26.529168 kubelet[2631]: I0711 00:11:26.528838 2631 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b35b56493416c25588cb530e37ffc065-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"b35b56493416c25588cb530e37ffc065\") " pod="kube-system/kube-scheduler-localhost" Jul 11 00:11:26.529168 kubelet[2631]: I0711 00:11:26.528854 2631 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/340b4bd135cb22647000a879e7534cb6-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"340b4bd135cb22647000a879e7534cb6\") " pod="kube-system/kube-apiserver-localhost" Jul 11 00:11:26.529337 kubelet[2631]: I0711 00:11:26.528868 2631 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/340b4bd135cb22647000a879e7534cb6-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"340b4bd135cb22647000a879e7534cb6\") " pod="kube-system/kube-apiserver-localhost" Jul 11 00:11:26.529337 kubelet[2631]: I0711 00:11:26.528883 2631 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 11 00:11:26.529337 kubelet[2631]: I0711 00:11:26.528898 2631 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 11 00:11:26.529337 kubelet[2631]: I0711 00:11:26.528911 2631 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 11 00:11:26.639661 kubelet[2631]: E0711 00:11:26.639621 2631 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:11:26.639770 kubelet[2631]: E0711 00:11:26.639677 2631 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:11:26.639770 kubelet[2631]: E0711 00:11:26.639727 2631 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:11:26.697888 sudo[2666]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Jul 11 00:11:26.698325 sudo[2666]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Jul 11 00:11:27.128909 sudo[2666]: pam_unix(sudo:session): session closed for user root Jul 11 00:11:27.214902 kubelet[2631]: I0711 00:11:27.214644 2631 apiserver.go:52] "Watching apiserver" Jul 11 00:11:27.229378 kubelet[2631]: I0711 00:11:27.227376 2631 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Jul 11 00:11:27.259405 kubelet[2631]: 
E0711 00:11:27.259375 2631 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:11:27.264321 kubelet[2631]: E0711 00:11:27.264255 2631 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Jul 11 00:11:27.264785 kubelet[2631]: E0711 00:11:27.264748 2631 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Jul 11 00:11:27.264927 kubelet[2631]: E0711 00:11:27.264855 2631 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:11:27.264927 kubelet[2631]: E0711 00:11:27.264898 2631 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:11:27.286375 kubelet[2631]: I0711 00:11:27.285258 2631 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=3.284873549 podStartE2EDuration="3.284873549s" podCreationTimestamp="2025-07-11 00:11:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-11 00:11:27.278424198 +0000 UTC m=+1.113222678" watchObservedRunningTime="2025-07-11 00:11:27.284873549 +0000 UTC m=+1.119672029" Jul 11 00:11:27.286375 kubelet[2631]: I0711 00:11:27.285423 2631 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.285417893 podStartE2EDuration="1.285417893s" podCreationTimestamp="2025-07-11 00:11:26 +0000 UTC" firstStartedPulling="0001-01-01 
00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-11 00:11:27.285374329 +0000 UTC m=+1.120172809" watchObservedRunningTime="2025-07-11 00:11:27.285417893 +0000 UTC m=+1.120216373" Jul 11 00:11:27.300548 kubelet[2631]: I0711 00:11:27.300476 2631 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=2.300464194 podStartE2EDuration="2.300464194s" podCreationTimestamp="2025-07-11 00:11:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-11 00:11:27.293036543 +0000 UTC m=+1.127835063" watchObservedRunningTime="2025-07-11 00:11:27.300464194 +0000 UTC m=+1.135262674" Jul 11 00:11:28.260627 kubelet[2631]: E0711 00:11:28.260567 2631 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:11:28.260946 kubelet[2631]: E0711 00:11:28.260643 2631 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:11:28.698571 sudo[1746]: pam_unix(sudo:session): session closed for user root Jul 11 00:11:28.700699 sshd[1739]: pam_unix(sshd:session): session closed for user core Jul 11 00:11:28.704028 systemd-logind[1517]: Session 7 logged out. Waiting for processes to exit. Jul 11 00:11:28.704988 systemd[1]: sshd@6-10.0.0.50:22-10.0.0.1:36652.service: Deactivated successfully. Jul 11 00:11:28.707289 systemd[1]: session-7.scope: Deactivated successfully. Jul 11 00:11:28.708198 systemd-logind[1517]: Removed session 7. 
Jul 11 00:11:29.262330 kubelet[2631]: E0711 00:11:29.262270 2631 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:11:30.486873 kubelet[2631]: I0711 00:11:30.486834 2631 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jul 11 00:11:30.487588 containerd[1534]: time="2025-07-11T00:11:30.487500076Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jul 11 00:11:30.487878 kubelet[2631]: I0711 00:11:30.487704 2631 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jul 11 00:11:31.478431 kubelet[2631]: I0711 00:11:31.474809 2631 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/98d7c007-f381-4c30-bc42-3c41fe72d679-clustermesh-secrets\") pod \"cilium-np75t\" (UID: \"98d7c007-f381-4c30-bc42-3c41fe72d679\") " pod="kube-system/cilium-np75t" Jul 11 00:11:31.478431 kubelet[2631]: I0711 00:11:31.474860 2631 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/98d7c007-f381-4c30-bc42-3c41fe72d679-cilium-run\") pod \"cilium-np75t\" (UID: \"98d7c007-f381-4c30-bc42-3c41fe72d679\") " pod="kube-system/cilium-np75t" Jul 11 00:11:31.478431 kubelet[2631]: I0711 00:11:31.474880 2631 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/98d7c007-f381-4c30-bc42-3c41fe72d679-hostproc\") pod \"cilium-np75t\" (UID: \"98d7c007-f381-4c30-bc42-3c41fe72d679\") " pod="kube-system/cilium-np75t" Jul 11 00:11:31.478431 kubelet[2631]: I0711 00:11:31.474899 2631 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/98d7c007-f381-4c30-bc42-3c41fe72d679-xtables-lock\") pod \"cilium-np75t\" (UID: \"98d7c007-f381-4c30-bc42-3c41fe72d679\") " pod="kube-system/cilium-np75t" Jul 11 00:11:31.478431 kubelet[2631]: I0711 00:11:31.474924 2631 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mxx5t\" (UniqueName: \"kubernetes.io/projected/19fd3d2a-ae07-4244-ba9d-05f812d627d6-kube-api-access-mxx5t\") pod \"kube-proxy-x2xgw\" (UID: \"19fd3d2a-ae07-4244-ba9d-05f812d627d6\") " pod="kube-system/kube-proxy-x2xgw" Jul 11 00:11:31.478747 kubelet[2631]: I0711 00:11:31.474941 2631 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t8lnq\" (UniqueName: \"kubernetes.io/projected/98d7c007-f381-4c30-bc42-3c41fe72d679-kube-api-access-t8lnq\") pod \"cilium-np75t\" (UID: \"98d7c007-f381-4c30-bc42-3c41fe72d679\") " pod="kube-system/cilium-np75t" Jul 11 00:11:31.478747 kubelet[2631]: I0711 00:11:31.474960 2631 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/98d7c007-f381-4c30-bc42-3c41fe72d679-cni-path\") pod \"cilium-np75t\" (UID: \"98d7c007-f381-4c30-bc42-3c41fe72d679\") " pod="kube-system/cilium-np75t" Jul 11 00:11:31.478747 kubelet[2631]: I0711 00:11:31.474981 2631 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/98d7c007-f381-4c30-bc42-3c41fe72d679-hubble-tls\") pod \"cilium-np75t\" (UID: \"98d7c007-f381-4c30-bc42-3c41fe72d679\") " pod="kube-system/cilium-np75t" Jul 11 00:11:31.478747 kubelet[2631]: I0711 00:11:31.474999 2631 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: 
\"kubernetes.io/configmap/98d7c007-f381-4c30-bc42-3c41fe72d679-cilium-config-path\") pod \"cilium-np75t\" (UID: \"98d7c007-f381-4c30-bc42-3c41fe72d679\") " pod="kube-system/cilium-np75t" Jul 11 00:11:31.478747 kubelet[2631]: I0711 00:11:31.475014 2631 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/19fd3d2a-ae07-4244-ba9d-05f812d627d6-xtables-lock\") pod \"kube-proxy-x2xgw\" (UID: \"19fd3d2a-ae07-4244-ba9d-05f812d627d6\") " pod="kube-system/kube-proxy-x2xgw" Jul 11 00:11:31.478747 kubelet[2631]: I0711 00:11:31.475031 2631 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/19fd3d2a-ae07-4244-ba9d-05f812d627d6-kube-proxy\") pod \"kube-proxy-x2xgw\" (UID: \"19fd3d2a-ae07-4244-ba9d-05f812d627d6\") " pod="kube-system/kube-proxy-x2xgw" Jul 11 00:11:31.478886 kubelet[2631]: I0711 00:11:31.475051 2631 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/19fd3d2a-ae07-4244-ba9d-05f812d627d6-lib-modules\") pod \"kube-proxy-x2xgw\" (UID: \"19fd3d2a-ae07-4244-ba9d-05f812d627d6\") " pod="kube-system/kube-proxy-x2xgw" Jul 11 00:11:31.478886 kubelet[2631]: I0711 00:11:31.475069 2631 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/98d7c007-f381-4c30-bc42-3c41fe72d679-etc-cni-netd\") pod \"cilium-np75t\" (UID: \"98d7c007-f381-4c30-bc42-3c41fe72d679\") " pod="kube-system/cilium-np75t" Jul 11 00:11:31.478886 kubelet[2631]: I0711 00:11:31.475091 2631 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/98d7c007-f381-4c30-bc42-3c41fe72d679-host-proc-sys-net\") pod \"cilium-np75t\" (UID: 
\"98d7c007-f381-4c30-bc42-3c41fe72d679\") " pod="kube-system/cilium-np75t" Jul 11 00:11:31.478886 kubelet[2631]: I0711 00:11:31.475106 2631 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/98d7c007-f381-4c30-bc42-3c41fe72d679-host-proc-sys-kernel\") pod \"cilium-np75t\" (UID: \"98d7c007-f381-4c30-bc42-3c41fe72d679\") " pod="kube-system/cilium-np75t" Jul 11 00:11:31.478886 kubelet[2631]: I0711 00:11:31.475126 2631 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/98d7c007-f381-4c30-bc42-3c41fe72d679-bpf-maps\") pod \"cilium-np75t\" (UID: \"98d7c007-f381-4c30-bc42-3c41fe72d679\") " pod="kube-system/cilium-np75t" Jul 11 00:11:31.478886 kubelet[2631]: I0711 00:11:31.475145 2631 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/98d7c007-f381-4c30-bc42-3c41fe72d679-cilium-cgroup\") pod \"cilium-np75t\" (UID: \"98d7c007-f381-4c30-bc42-3c41fe72d679\") " pod="kube-system/cilium-np75t" Jul 11 00:11:31.479002 kubelet[2631]: I0711 00:11:31.475165 2631 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/98d7c007-f381-4c30-bc42-3c41fe72d679-lib-modules\") pod \"cilium-np75t\" (UID: \"98d7c007-f381-4c30-bc42-3c41fe72d679\") " pod="kube-system/cilium-np75t" Jul 11 00:11:31.537929 kubelet[2631]: E0711 00:11:31.537891 2631 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:11:31.677757 kubelet[2631]: I0711 00:11:31.677701 2631 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vctr9\" 
(UniqueName: \"kubernetes.io/projected/a16dffc6-d0f3-43c2-bddb-2878eb043b7c-kube-api-access-vctr9\") pod \"cilium-operator-5d85765b45-7wt4n\" (UID: \"a16dffc6-d0f3-43c2-bddb-2878eb043b7c\") " pod="kube-system/cilium-operator-5d85765b45-7wt4n" Jul 11 00:11:31.677757 kubelet[2631]: I0711 00:11:31.677748 2631 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a16dffc6-d0f3-43c2-bddb-2878eb043b7c-cilium-config-path\") pod \"cilium-operator-5d85765b45-7wt4n\" (UID: \"a16dffc6-d0f3-43c2-bddb-2878eb043b7c\") " pod="kube-system/cilium-operator-5d85765b45-7wt4n" Jul 11 00:11:31.728765 kubelet[2631]: E0711 00:11:31.728678 2631 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:11:31.729061 kubelet[2631]: E0711 00:11:31.728826 2631 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:11:31.729825 containerd[1534]: time="2025-07-11T00:11:31.729271617Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-np75t,Uid:98d7c007-f381-4c30-bc42-3c41fe72d679,Namespace:kube-system,Attempt:0,}" Jul 11 00:11:31.729825 containerd[1534]: time="2025-07-11T00:11:31.729380690Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-x2xgw,Uid:19fd3d2a-ae07-4244-ba9d-05f812d627d6,Namespace:kube-system,Attempt:0,}" Jul 11 00:11:31.750545 containerd[1534]: time="2025-07-11T00:11:31.750461228Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 11 00:11:31.750545 containerd[1534]: time="2025-07-11T00:11:31.750515684Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 11 00:11:31.750723 containerd[1534]: time="2025-07-11T00:11:31.750530008Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 11 00:11:31.750723 containerd[1534]: time="2025-07-11T00:11:31.750618195Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 11 00:11:31.755767 containerd[1534]: time="2025-07-11T00:11:31.755695152Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 11 00:11:31.755767 containerd[1534]: time="2025-07-11T00:11:31.755748808Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 11 00:11:31.755906 containerd[1534]: time="2025-07-11T00:11:31.755766453Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 11 00:11:31.755906 containerd[1534]: time="2025-07-11T00:11:31.755858520Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 11 00:11:31.784026 containerd[1534]: time="2025-07-11T00:11:31.783993486Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-x2xgw,Uid:19fd3d2a-ae07-4244-ba9d-05f812d627d6,Namespace:kube-system,Attempt:0,} returns sandbox id \"47d7f796bea025d2716f8af6c2fff9d93272c9ddc5738a85ae06dce5a49af7e0\"" Jul 11 00:11:31.784946 kubelet[2631]: E0711 00:11:31.784755 2631 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:11:31.786499 containerd[1534]: time="2025-07-11T00:11:31.786473947Z" level=info msg="CreateContainer within sandbox \"47d7f796bea025d2716f8af6c2fff9d93272c9ddc5738a85ae06dce5a49af7e0\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jul 11 00:11:31.791692 containerd[1534]: time="2025-07-11T00:11:31.791646772Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-np75t,Uid:98d7c007-f381-4c30-bc42-3c41fe72d679,Namespace:kube-system,Attempt:0,} returns sandbox id \"a6e9a7a915de369f086fa68e8acf2e1e6a82c3186dc40d5153cc166570164a2e\"" Jul 11 00:11:31.792592 kubelet[2631]: E0711 00:11:31.792550 2631 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:11:31.793518 containerd[1534]: time="2025-07-11T00:11:31.793491203Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Jul 11 00:11:31.802322 containerd[1534]: time="2025-07-11T00:11:31.802244458Z" level=info msg="CreateContainer within sandbox \"47d7f796bea025d2716f8af6c2fff9d93272c9ddc5738a85ae06dce5a49af7e0\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"f4832f5892f086defe51fd262e57afadff5ec60f7902eb7de6989d05ae4345ae\"" Jul 11 00:11:31.802826 
containerd[1534]: time="2025-07-11T00:11:31.802793702Z" level=info msg="StartContainer for \"f4832f5892f086defe51fd262e57afadff5ec60f7902eb7de6989d05ae4345ae\"" Jul 11 00:11:31.845729 containerd[1534]: time="2025-07-11T00:11:31.845681435Z" level=info msg="StartContainer for \"f4832f5892f086defe51fd262e57afadff5ec60f7902eb7de6989d05ae4345ae\" returns successfully" Jul 11 00:11:31.925202 kubelet[2631]: E0711 00:11:31.925098 2631 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:11:31.926735 containerd[1534]: time="2025-07-11T00:11:31.926674752Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-7wt4n,Uid:a16dffc6-d0f3-43c2-bddb-2878eb043b7c,Namespace:kube-system,Attempt:0,}" Jul 11 00:11:31.964977 containerd[1534]: time="2025-07-11T00:11:31.964848837Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 11 00:11:31.964977 containerd[1534]: time="2025-07-11T00:11:31.964925740Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 11 00:11:31.964977 containerd[1534]: time="2025-07-11T00:11:31.964937183Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 11 00:11:31.965634 containerd[1534]: time="2025-07-11T00:11:31.965556528Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 11 00:11:32.011339 containerd[1534]: time="2025-07-11T00:11:32.010849208Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-7wt4n,Uid:a16dffc6-d0f3-43c2-bddb-2878eb043b7c,Namespace:kube-system,Attempt:0,} returns sandbox id \"2439f8111171e65f54fa23c1a9057f6913241c07dd1749d720c5bcfea54d59a9\"" Jul 11 00:11:32.012385 kubelet[2631]: E0711 00:11:32.012020 2631 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:11:32.186011 kubelet[2631]: E0711 00:11:32.185895 2631 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:11:32.268924 kubelet[2631]: E0711 00:11:32.268810 2631 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:11:32.270153 kubelet[2631]: E0711 00:11:32.270136 2631 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:11:32.279047 kubelet[2631]: I0711 00:11:32.279004 2631 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-x2xgw" podStartSLOduration=1.278992415 podStartE2EDuration="1.278992415s" podCreationTimestamp="2025-07-11 00:11:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-11 00:11:32.278760189 +0000 UTC m=+6.113558669" watchObservedRunningTime="2025-07-11 00:11:32.278992415 +0000 UTC m=+6.113790895" Jul 11 00:11:34.874535 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount7621011.mount: 
Deactivated successfully. Jul 11 00:11:36.095836 containerd[1534]: time="2025-07-11T00:11:36.095783515Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:11:36.096221 containerd[1534]: time="2025-07-11T00:11:36.096187887Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157646710" Jul 11 00:11:36.097012 containerd[1534]: time="2025-07-11T00:11:36.096979746Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:11:36.098500 containerd[1534]: time="2025-07-11T00:11:36.098464122Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 4.304939949s" Jul 11 00:11:36.098538 containerd[1534]: time="2025-07-11T00:11:36.098504532Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" Jul 11 00:11:36.107228 containerd[1534]: time="2025-07-11T00:11:36.107203623Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Jul 11 00:11:36.118614 containerd[1534]: time="2025-07-11T00:11:36.118516387Z" level=info msg="CreateContainer within sandbox 
\"a6e9a7a915de369f086fa68e8acf2e1e6a82c3186dc40d5153cc166570164a2e\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jul 11 00:11:36.136646 containerd[1534]: time="2025-07-11T00:11:36.136526989Z" level=info msg="CreateContainer within sandbox \"a6e9a7a915de369f086fa68e8acf2e1e6a82c3186dc40d5153cc166570164a2e\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"c7f22f2e048c17110319fc29e27824ec5d39144197fd0fee1a749260ae3e0318\"" Jul 11 00:11:36.137020 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1907120208.mount: Deactivated successfully. Jul 11 00:11:36.139414 containerd[1534]: time="2025-07-11T00:11:36.139018393Z" level=info msg="StartContainer for \"c7f22f2e048c17110319fc29e27824ec5d39144197fd0fee1a749260ae3e0318\"" Jul 11 00:11:36.260264 containerd[1534]: time="2025-07-11T00:11:36.260185654Z" level=info msg="StartContainer for \"c7f22f2e048c17110319fc29e27824ec5d39144197fd0fee1a749260ae3e0318\" returns successfully" Jul 11 00:11:36.280882 kubelet[2631]: E0711 00:11:36.280853 2631 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:11:36.287155 containerd[1534]: time="2025-07-11T00:11:36.283337701Z" level=info msg="shim disconnected" id=c7f22f2e048c17110319fc29e27824ec5d39144197fd0fee1a749260ae3e0318 namespace=k8s.io Jul 11 00:11:36.287155 containerd[1534]: time="2025-07-11T00:11:36.287144924Z" level=warning msg="cleaning up after shim disconnected" id=c7f22f2e048c17110319fc29e27824ec5d39144197fd0fee1a749260ae3e0318 namespace=k8s.io Jul 11 00:11:36.287155 containerd[1534]: time="2025-07-11T00:11:36.287157086Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 11 00:11:37.134746 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c7f22f2e048c17110319fc29e27824ec5d39144197fd0fee1a749260ae3e0318-rootfs.mount: Deactivated successfully. 
Jul 11 00:11:37.161730 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3695622632.mount: Deactivated successfully. Jul 11 00:11:37.283330 kubelet[2631]: E0711 00:11:37.283297 2631 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:11:37.285578 containerd[1534]: time="2025-07-11T00:11:37.285455762Z" level=info msg="CreateContainer within sandbox \"a6e9a7a915de369f086fa68e8acf2e1e6a82c3186dc40d5153cc166570164a2e\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jul 11 00:11:37.298247 containerd[1534]: time="2025-07-11T00:11:37.298202740Z" level=info msg="CreateContainer within sandbox \"a6e9a7a915de369f086fa68e8acf2e1e6a82c3186dc40d5153cc166570164a2e\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"6f73e7ec0c34651874d53b8edd31f7afd60b5b4df8a9966020d95fe8c012dd4f\"" Jul 11 00:11:37.300068 containerd[1534]: time="2025-07-11T00:11:37.300033453Z" level=info msg="StartContainer for \"6f73e7ec0c34651874d53b8edd31f7afd60b5b4df8a9966020d95fe8c012dd4f\"" Jul 11 00:11:37.348115 containerd[1534]: time="2025-07-11T00:11:37.347999756Z" level=info msg="StartContainer for \"6f73e7ec0c34651874d53b8edd31f7afd60b5b4df8a9966020d95fe8c012dd4f\" returns successfully" Jul 11 00:11:37.373383 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jul 11 00:11:37.373638 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jul 11 00:11:37.373694 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Jul 11 00:11:37.381497 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jul 11 00:11:37.400778 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
Jul 11 00:11:37.429199 containerd[1534]: time="2025-07-11T00:11:37.429146226Z" level=info msg="shim disconnected" id=6f73e7ec0c34651874d53b8edd31f7afd60b5b4df8a9966020d95fe8c012dd4f namespace=k8s.io Jul 11 00:11:37.429579 containerd[1534]: time="2025-07-11T00:11:37.429403721Z" level=warning msg="cleaning up after shim disconnected" id=6f73e7ec0c34651874d53b8edd31f7afd60b5b4df8a9966020d95fe8c012dd4f namespace=k8s.io Jul 11 00:11:37.429579 containerd[1534]: time="2025-07-11T00:11:37.429422645Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 11 00:11:37.484987 containerd[1534]: time="2025-07-11T00:11:37.484763892Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:11:37.485505 containerd[1534]: time="2025-07-11T00:11:37.485479686Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17135306" Jul 11 00:11:37.488805 containerd[1534]: time="2025-07-11T00:11:37.487662115Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:11:37.489202 containerd[1534]: time="2025-07-11T00:11:37.489160837Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 1.381919685s" Jul 11 00:11:37.489252 containerd[1534]: time="2025-07-11T00:11:37.489207767Z" level=info msg="PullImage 
\"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" Jul 11 00:11:37.491553 containerd[1534]: time="2025-07-11T00:11:37.491523344Z" level=info msg="CreateContainer within sandbox \"2439f8111171e65f54fa23c1a9057f6913241c07dd1749d720c5bcfea54d59a9\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Jul 11 00:11:37.501668 containerd[1534]: time="2025-07-11T00:11:37.501589306Z" level=info msg="CreateContainer within sandbox \"2439f8111171e65f54fa23c1a9057f6913241c07dd1749d720c5bcfea54d59a9\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"913c861cb1678f94fc1a1b2d7c5a72f8376db4f71a5d2cae8e234c89f65d1c44\"" Jul 11 00:11:37.502371 containerd[1534]: time="2025-07-11T00:11:37.502195716Z" level=info msg="StartContainer for \"913c861cb1678f94fc1a1b2d7c5a72f8376db4f71a5d2cae8e234c89f65d1c44\"" Jul 11 00:11:37.544168 containerd[1534]: time="2025-07-11T00:11:37.544121242Z" level=info msg="StartContainer for \"913c861cb1678f94fc1a1b2d7c5a72f8376db4f71a5d2cae8e234c89f65d1c44\" returns successfully" Jul 11 00:11:38.135849 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6f73e7ec0c34651874d53b8edd31f7afd60b5b4df8a9966020d95fe8c012dd4f-rootfs.mount: Deactivated successfully. 
Jul 11 00:11:38.290874 kubelet[2631]: E0711 00:11:38.290827 2631 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:11:38.294854 containerd[1534]: time="2025-07-11T00:11:38.294817952Z" level=info msg="CreateContainer within sandbox \"a6e9a7a915de369f086fa68e8acf2e1e6a82c3186dc40d5153cc166570164a2e\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jul 11 00:11:38.295170 kubelet[2631]: E0711 00:11:38.295088 2631 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:11:38.309928 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2799823239.mount: Deactivated successfully. Jul 11 00:11:38.317194 kubelet[2631]: I0711 00:11:38.317121 2631 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-5d85765b45-7wt4n" podStartSLOduration=1.840271904 podStartE2EDuration="7.317104011s" podCreationTimestamp="2025-07-11 00:11:31 +0000 UTC" firstStartedPulling="2025-07-11 00:11:32.013275133 +0000 UTC m=+5.848073613" lastFinishedPulling="2025-07-11 00:11:37.49010728 +0000 UTC m=+11.324905720" observedRunningTime="2025-07-11 00:11:38.316225352 +0000 UTC m=+12.151023832" watchObservedRunningTime="2025-07-11 00:11:38.317104011 +0000 UTC m=+12.151902491" Jul 11 00:11:38.328300 containerd[1534]: time="2025-07-11T00:11:38.328244240Z" level=info msg="CreateContainer within sandbox \"a6e9a7a915de369f086fa68e8acf2e1e6a82c3186dc40d5153cc166570164a2e\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"33c14f3689eee73409de1cb8b77f6a7305e969fc614a2c3a904c842af3184c97\"" Jul 11 00:11:38.328925 containerd[1534]: time="2025-07-11T00:11:38.328888572Z" level=info msg="StartContainer for 
\"33c14f3689eee73409de1cb8b77f6a7305e969fc614a2c3a904c842af3184c97\"" Jul 11 00:11:38.368743 containerd[1534]: time="2025-07-11T00:11:38.368704762Z" level=info msg="StartContainer for \"33c14f3689eee73409de1cb8b77f6a7305e969fc614a2c3a904c842af3184c97\" returns successfully" Jul 11 00:11:38.431789 containerd[1534]: time="2025-07-11T00:11:38.431638742Z" level=info msg="shim disconnected" id=33c14f3689eee73409de1cb8b77f6a7305e969fc614a2c3a904c842af3184c97 namespace=k8s.io Jul 11 00:11:38.431789 containerd[1534]: time="2025-07-11T00:11:38.431709236Z" level=warning msg="cleaning up after shim disconnected" id=33c14f3689eee73409de1cb8b77f6a7305e969fc614a2c3a904c842af3184c97 namespace=k8s.io Jul 11 00:11:38.431789 containerd[1534]: time="2025-07-11T00:11:38.431720678Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 11 00:11:39.134860 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-33c14f3689eee73409de1cb8b77f6a7305e969fc614a2c3a904c842af3184c97-rootfs.mount: Deactivated successfully. 
Jul 11 00:11:39.182100 kubelet[2631]: E0711 00:11:39.182020 2631 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:11:39.299301 kubelet[2631]: E0711 00:11:39.299222 2631 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:11:39.300250 kubelet[2631]: E0711 00:11:39.299891 2631 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:11:39.302407 containerd[1534]: time="2025-07-11T00:11:39.302131330Z" level=info msg="CreateContainer within sandbox \"a6e9a7a915de369f086fa68e8acf2e1e6a82c3186dc40d5153cc166570164a2e\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jul 11 00:11:39.315625 containerd[1534]: time="2025-07-11T00:11:39.315571928Z" level=info msg="CreateContainer within sandbox \"a6e9a7a915de369f086fa68e8acf2e1e6a82c3186dc40d5153cc166570164a2e\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"ca0c7002848b558d6ff4dfbc34c53bb37f66cacdd2535fd28c70e9fd8c22fac8\"" Jul 11 00:11:39.317634 containerd[1534]: time="2025-07-11T00:11:39.316923029Z" level=info msg="StartContainer for \"ca0c7002848b558d6ff4dfbc34c53bb37f66cacdd2535fd28c70e9fd8c22fac8\"" Jul 11 00:11:39.360670 containerd[1534]: time="2025-07-11T00:11:39.360624676Z" level=info msg="StartContainer for \"ca0c7002848b558d6ff4dfbc34c53bb37f66cacdd2535fd28c70e9fd8c22fac8\" returns successfully" Jul 11 00:11:39.380108 containerd[1534]: time="2025-07-11T00:11:39.380026987Z" level=info msg="shim disconnected" id=ca0c7002848b558d6ff4dfbc34c53bb37f66cacdd2535fd28c70e9fd8c22fac8 namespace=k8s.io Jul 11 00:11:39.380108 containerd[1534]: time="2025-07-11T00:11:39.380080757Z" 
level=warning msg="cleaning up after shim disconnected" id=ca0c7002848b558d6ff4dfbc34c53bb37f66cacdd2535fd28c70e9fd8c22fac8 namespace=k8s.io Jul 11 00:11:39.380108 containerd[1534]: time="2025-07-11T00:11:39.380090199Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 11 00:11:40.134997 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ca0c7002848b558d6ff4dfbc34c53bb37f66cacdd2535fd28c70e9fd8c22fac8-rootfs.mount: Deactivated successfully. Jul 11 00:11:40.302427 kubelet[2631]: E0711 00:11:40.302403 2631 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:11:40.304425 containerd[1534]: time="2025-07-11T00:11:40.304391425Z" level=info msg="CreateContainer within sandbox \"a6e9a7a915de369f086fa68e8acf2e1e6a82c3186dc40d5153cc166570164a2e\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jul 11 00:11:40.338581 containerd[1534]: time="2025-07-11T00:11:40.338528811Z" level=info msg="CreateContainer within sandbox \"a6e9a7a915de369f086fa68e8acf2e1e6a82c3186dc40d5153cc166570164a2e\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"8bc299b9c5bb1e34badfdfa9985fcc00bc092bf7567aa0d4a468c7d3cd2dbb04\"" Jul 11 00:11:40.339012 containerd[1534]: time="2025-07-11T00:11:40.338983494Z" level=info msg="StartContainer for \"8bc299b9c5bb1e34badfdfa9985fcc00bc092bf7567aa0d4a468c7d3cd2dbb04\"" Jul 11 00:11:40.379507 containerd[1534]: time="2025-07-11T00:11:40.379452482Z" level=info msg="StartContainer for \"8bc299b9c5bb1e34badfdfa9985fcc00bc092bf7567aa0d4a468c7d3cd2dbb04\" returns successfully" Jul 11 00:11:40.553459 kubelet[2631]: I0711 00:11:40.553345 2631 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Jul 11 00:11:40.634570 kubelet[2631]: I0711 00:11:40.634510 2631 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"config-volume\" (UniqueName: \"kubernetes.io/configmap/628dbdd1-aa1b-4194-941b-9028bd7f4da7-config-volume\") pod \"coredns-7c65d6cfc9-f4ddw\" (UID: \"628dbdd1-aa1b-4194-941b-9028bd7f4da7\") " pod="kube-system/coredns-7c65d6cfc9-f4ddw" Jul 11 00:11:40.634570 kubelet[2631]: I0711 00:11:40.634555 2631 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2d2zh\" (UniqueName: \"kubernetes.io/projected/628dbdd1-aa1b-4194-941b-9028bd7f4da7-kube-api-access-2d2zh\") pod \"coredns-7c65d6cfc9-f4ddw\" (UID: \"628dbdd1-aa1b-4194-941b-9028bd7f4da7\") " pod="kube-system/coredns-7c65d6cfc9-f4ddw" Jul 11 00:11:40.634570 kubelet[2631]: I0711 00:11:40.634583 2631 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qn9nt\" (UniqueName: \"kubernetes.io/projected/2aa8fc18-2432-42be-8c1f-b8bf88223ab1-kube-api-access-qn9nt\") pod \"coredns-7c65d6cfc9-dvp56\" (UID: \"2aa8fc18-2432-42be-8c1f-b8bf88223ab1\") " pod="kube-system/coredns-7c65d6cfc9-dvp56" Jul 11 00:11:40.634847 kubelet[2631]: I0711 00:11:40.634613 2631 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2aa8fc18-2432-42be-8c1f-b8bf88223ab1-config-volume\") pod \"coredns-7c65d6cfc9-dvp56\" (UID: \"2aa8fc18-2432-42be-8c1f-b8bf88223ab1\") " pod="kube-system/coredns-7c65d6cfc9-dvp56" Jul 11 00:11:40.886664 kubelet[2631]: E0711 00:11:40.886619 2631 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:11:40.887377 containerd[1534]: time="2025-07-11T00:11:40.887304774Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-f4ddw,Uid:628dbdd1-aa1b-4194-941b-9028bd7f4da7,Namespace:kube-system,Attempt:0,}" Jul 11 00:11:40.891407 kubelet[2631]: E0711 00:11:40.891201 
2631 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:11:40.891865 containerd[1534]: time="2025-07-11T00:11:40.891824923Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-dvp56,Uid:2aa8fc18-2432-42be-8c1f-b8bf88223ab1,Namespace:kube-system,Attempt:0,}" Jul 11 00:11:41.307880 kubelet[2631]: E0711 00:11:41.307770 2631 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:11:41.547410 kubelet[2631]: E0711 00:11:41.547344 2631 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:11:41.557011 kubelet[2631]: I0711 00:11:41.556943 2631 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-np75t" podStartSLOduration=6.243010214 podStartE2EDuration="10.556926427s" podCreationTimestamp="2025-07-11 00:11:31 +0000 UTC" firstStartedPulling="2025-07-11 00:11:31.793120052 +0000 UTC m=+5.627918572" lastFinishedPulling="2025-07-11 00:11:36.107036305 +0000 UTC m=+9.941834785" observedRunningTime="2025-07-11 00:11:41.32815729 +0000 UTC m=+15.162955770" watchObservedRunningTime="2025-07-11 00:11:41.556926427 +0000 UTC m=+15.391724907" Jul 11 00:11:42.149622 update_engine[1521]: I20250711 00:11:42.149544 1521 update_attempter.cc:509] Updating boot flags... 
Jul 11 00:11:42.170421 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (3472) Jul 11 00:11:42.198237 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (3472) Jul 11 00:11:42.221917 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (3472) Jul 11 00:11:42.309144 kubelet[2631]: E0711 00:11:42.309111 2631 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:11:42.583116 systemd-networkd[1227]: cilium_host: Link UP Jul 11 00:11:42.583243 systemd-networkd[1227]: cilium_net: Link UP Jul 11 00:11:42.583246 systemd-networkd[1227]: cilium_net: Gained carrier Jul 11 00:11:42.583421 systemd-networkd[1227]: cilium_host: Gained carrier Jul 11 00:11:42.666098 systemd-networkd[1227]: cilium_vxlan: Link UP Jul 11 00:11:42.666105 systemd-networkd[1227]: cilium_vxlan: Gained carrier Jul 11 00:11:42.959403 kernel: NET: Registered PF_ALG protocol family Jul 11 00:11:43.207544 systemd-networkd[1227]: cilium_net: Gained IPv6LL Jul 11 00:11:43.311452 kubelet[2631]: E0711 00:11:43.311169 2631 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:11:43.520867 systemd-networkd[1227]: lxc_health: Link UP Jul 11 00:11:43.530138 systemd-networkd[1227]: lxc_health: Gained carrier Jul 11 00:11:43.591479 systemd-networkd[1227]: cilium_host: Gained IPv6LL Jul 11 00:11:44.017624 systemd-networkd[1227]: lxcc351fb2ed11a: Link UP Jul 11 00:11:44.018743 systemd-networkd[1227]: lxc95016065eecc: Link UP Jul 11 00:11:44.038394 kernel: eth0: renamed from tmp9887d Jul 11 00:11:44.046396 systemd-networkd[1227]: lxcc351fb2ed11a: Gained carrier Jul 11 00:11:44.049415 kernel: eth0: renamed from tmpe92ff Jul 11 
00:11:44.056906 systemd-networkd[1227]: lxc95016065eecc: Gained carrier Jul 11 00:11:44.313295 kubelet[2631]: E0711 00:11:44.313179 2631 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:11:44.681346 systemd-networkd[1227]: cilium_vxlan: Gained IPv6LL Jul 11 00:11:45.255509 systemd-networkd[1227]: lxcc351fb2ed11a: Gained IPv6LL Jul 11 00:11:45.319455 systemd-networkd[1227]: lxc_health: Gained IPv6LL Jul 11 00:11:45.767499 systemd-networkd[1227]: lxc95016065eecc: Gained IPv6LL Jul 11 00:11:47.545858 containerd[1534]: time="2025-07-11T00:11:47.545726047Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 11 00:11:47.545858 containerd[1534]: time="2025-07-11T00:11:47.545777773Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 11 00:11:47.545858 containerd[1534]: time="2025-07-11T00:11:47.545788575Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 11 00:11:47.546706 containerd[1534]: time="2025-07-11T00:11:47.546545433Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 11 00:11:47.546912 containerd[1534]: time="2025-07-11T00:11:47.546841192Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 11 00:11:47.546959 containerd[1534]: time="2025-07-11T00:11:47.546897879Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 11 00:11:47.546959 containerd[1534]: time="2025-07-11T00:11:47.546915122Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 11 00:11:47.547029 containerd[1534]: time="2025-07-11T00:11:47.546991852Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 11 00:11:47.568633 systemd-resolved[1433]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 11 00:11:47.568768 systemd-resolved[1433]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 11 00:11:47.587475 containerd[1534]: time="2025-07-11T00:11:47.587435123Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-dvp56,Uid:2aa8fc18-2432-42be-8c1f-b8bf88223ab1,Namespace:kube-system,Attempt:0,} returns sandbox id \"9887dd10d349399de91bfd580fb1df242a788fe4264654cd51ded7483ce915b5\"" Jul 11 00:11:47.588231 kubelet[2631]: E0711 00:11:47.588072 2631 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:11:47.588523 containerd[1534]: time="2025-07-11T00:11:47.588234748Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-f4ddw,Uid:628dbdd1-aa1b-4194-941b-9028bd7f4da7,Namespace:kube-system,Attempt:0,} returns sandbox id \"e92ffde3a3e6cff8efb633040259952bf1e54e6f348542073e91e24ad4004786\"" Jul 11 00:11:47.588984 kubelet[2631]: E0711 00:11:47.588964 2631 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:11:47.589727 containerd[1534]: time="2025-07-11T00:11:47.589701939Z" level=info msg="CreateContainer 
within sandbox \"9887dd10d349399de91bfd580fb1df242a788fe4264654cd51ded7483ce915b5\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 11 00:11:47.590783 containerd[1534]: time="2025-07-11T00:11:47.590754476Z" level=info msg="CreateContainer within sandbox \"e92ffde3a3e6cff8efb633040259952bf1e54e6f348542073e91e24ad4004786\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 11 00:11:47.606383 containerd[1534]: time="2025-07-11T00:11:47.605275369Z" level=info msg="CreateContainer within sandbox \"e92ffde3a3e6cff8efb633040259952bf1e54e6f348542073e91e24ad4004786\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"699b2056f06526efce48e0a38f498fbde5bc0a315aae983a7889c846f745e96e\"" Jul 11 00:11:47.606383 containerd[1534]: time="2025-07-11T00:11:47.605984661Z" level=info msg="StartContainer for \"699b2056f06526efce48e0a38f498fbde5bc0a315aae983a7889c846f745e96e\"" Jul 11 00:11:47.611741 containerd[1534]: time="2025-07-11T00:11:47.611707447Z" level=info msg="CreateContainer within sandbox \"9887dd10d349399de91bfd580fb1df242a788fe4264654cd51ded7483ce915b5\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"4f5390137b658cc9d927536693d6573dc96f25550dd0e4ed2125efc11777ab17\"" Jul 11 00:11:47.612361 containerd[1534]: time="2025-07-11T00:11:47.612324488Z" level=info msg="StartContainer for \"4f5390137b658cc9d927536693d6573dc96f25550dd0e4ed2125efc11777ab17\"" Jul 11 00:11:47.659678 containerd[1534]: time="2025-07-11T00:11:47.657832380Z" level=info msg="StartContainer for \"699b2056f06526efce48e0a38f498fbde5bc0a315aae983a7889c846f745e96e\" returns successfully" Jul 11 00:11:47.667106 containerd[1534]: time="2025-07-11T00:11:47.667072264Z" level=info msg="StartContainer for \"4f5390137b658cc9d927536693d6573dc96f25550dd0e4ed2125efc11777ab17\" returns successfully" Jul 11 00:11:48.320197 kubelet[2631]: E0711 00:11:48.319783 2631 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some 
nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:11:48.320197 kubelet[2631]: E0711 00:11:48.325046 2631 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:11:48.333035 kubelet[2631]: I0711 00:11:48.332967 2631 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-dvp56" podStartSLOduration=17.332951969 podStartE2EDuration="17.332951969s" podCreationTimestamp="2025-07-11 00:11:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-11 00:11:48.331710894 +0000 UTC m=+22.166509374" watchObservedRunningTime="2025-07-11 00:11:48.332951969 +0000 UTC m=+22.167750409" Jul 11 00:11:48.342786 kubelet[2631]: I0711 00:11:48.342524 2631 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-f4ddw" podStartSLOduration=17.34243815 podStartE2EDuration="17.34243815s" podCreationTimestamp="2025-07-11 00:11:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-11 00:11:48.342220603 +0000 UTC m=+22.177019083" watchObservedRunningTime="2025-07-11 00:11:48.34243815 +0000 UTC m=+22.177236630" Jul 11 00:11:49.257992 kubelet[2631]: I0711 00:11:49.257870 2631 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 11 00:11:49.258881 kubelet[2631]: E0711 00:11:49.258483 2631 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:11:49.327659 kubelet[2631]: E0711 00:11:49.326797 2631 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been 
omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:11:49.327659 kubelet[2631]: E0711 00:11:49.326967 2631 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:11:49.331405 kubelet[2631]: E0711 00:11:49.331348 2631 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:11:50.328643 kubelet[2631]: E0711 00:11:50.328604 2631 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:11:50.328995 kubelet[2631]: E0711 00:11:50.328613 2631 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:11:52.890604 systemd[1]: Started sshd@7-10.0.0.50:22-10.0.0.1:41438.service - OpenSSH per-connection server daemon (10.0.0.1:41438). Jul 11 00:11:52.922913 sshd[4027]: Accepted publickey for core from 10.0.0.1 port 41438 ssh2: RSA SHA256:GK2LEBRiSxxQSb7NJczWsRz9vp5Z0addujXbSKx/c/M Jul 11 00:11:52.926148 sshd[4027]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 00:11:52.930083 systemd-logind[1517]: New session 8 of user core. Jul 11 00:11:52.938647 systemd[1]: Started session-8.scope - Session 8 of User core. Jul 11 00:11:53.058023 sshd[4027]: pam_unix(sshd:session): session closed for user core Jul 11 00:11:53.061791 systemd[1]: sshd@7-10.0.0.50:22-10.0.0.1:41438.service: Deactivated successfully. Jul 11 00:11:53.062147 systemd-logind[1517]: Session 8 logged out. Waiting for processes to exit. Jul 11 00:11:53.065096 systemd[1]: session-8.scope: Deactivated successfully. 
Jul 11 00:11:53.066659 systemd-logind[1517]: Removed session 8. Jul 11 00:11:58.071646 systemd[1]: Started sshd@8-10.0.0.50:22-10.0.0.1:41442.service - OpenSSH per-connection server daemon (10.0.0.1:41442). Jul 11 00:11:58.102801 sshd[4044]: Accepted publickey for core from 10.0.0.1 port 41442 ssh2: RSA SHA256:GK2LEBRiSxxQSb7NJczWsRz9vp5Z0addujXbSKx/c/M Jul 11 00:11:58.104022 sshd[4044]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 00:11:58.108056 systemd-logind[1517]: New session 9 of user core. Jul 11 00:11:58.116602 systemd[1]: Started session-9.scope - Session 9 of User core. Jul 11 00:11:58.224112 sshd[4044]: pam_unix(sshd:session): session closed for user core Jul 11 00:11:58.227471 systemd[1]: sshd@8-10.0.0.50:22-10.0.0.1:41442.service: Deactivated successfully. Jul 11 00:11:58.229551 systemd[1]: session-9.scope: Deactivated successfully. Jul 11 00:11:58.229573 systemd-logind[1517]: Session 9 logged out. Waiting for processes to exit. Jul 11 00:11:58.231777 systemd-logind[1517]: Removed session 9. Jul 11 00:12:03.243808 systemd[1]: Started sshd@9-10.0.0.50:22-10.0.0.1:54642.service - OpenSSH per-connection server daemon (10.0.0.1:54642). Jul 11 00:12:03.275795 sshd[4063]: Accepted publickey for core from 10.0.0.1 port 54642 ssh2: RSA SHA256:GK2LEBRiSxxQSb7NJczWsRz9vp5Z0addujXbSKx/c/M Jul 11 00:12:03.277205 sshd[4063]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 00:12:03.281073 systemd-logind[1517]: New session 10 of user core. Jul 11 00:12:03.298731 systemd[1]: Started session-10.scope - Session 10 of User core. Jul 11 00:12:03.407232 sshd[4063]: pam_unix(sshd:session): session closed for user core Jul 11 00:12:03.418591 systemd[1]: Started sshd@10-10.0.0.50:22-10.0.0.1:54648.service - OpenSSH per-connection server daemon (10.0.0.1:54648). Jul 11 00:12:03.418983 systemd[1]: sshd@9-10.0.0.50:22-10.0.0.1:54642.service: Deactivated successfully. 
Jul 11 00:12:03.420937 systemd[1]: session-10.scope: Deactivated successfully. Jul 11 00:12:03.422321 systemd-logind[1517]: Session 10 logged out. Waiting for processes to exit. Jul 11 00:12:03.424756 systemd-logind[1517]: Removed session 10. Jul 11 00:12:03.450291 sshd[4078]: Accepted publickey for core from 10.0.0.1 port 54648 ssh2: RSA SHA256:GK2LEBRiSxxQSb7NJczWsRz9vp5Z0addujXbSKx/c/M Jul 11 00:12:03.451505 sshd[4078]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 00:12:03.455950 systemd-logind[1517]: New session 11 of user core. Jul 11 00:12:03.464603 systemd[1]: Started session-11.scope - Session 11 of User core. Jul 11 00:12:03.611593 sshd[4078]: pam_unix(sshd:session): session closed for user core Jul 11 00:12:03.621632 systemd[1]: Started sshd@11-10.0.0.50:22-10.0.0.1:54662.service - OpenSSH per-connection server daemon (10.0.0.1:54662). Jul 11 00:12:03.622038 systemd[1]: sshd@10-10.0.0.50:22-10.0.0.1:54648.service: Deactivated successfully. Jul 11 00:12:03.623520 systemd[1]: session-11.scope: Deactivated successfully. Jul 11 00:12:03.627816 systemd-logind[1517]: Session 11 logged out. Waiting for processes to exit. Jul 11 00:12:03.629839 systemd-logind[1517]: Removed session 11. Jul 11 00:12:03.659800 sshd[4091]: Accepted publickey for core from 10.0.0.1 port 54662 ssh2: RSA SHA256:GK2LEBRiSxxQSb7NJczWsRz9vp5Z0addujXbSKx/c/M Jul 11 00:12:03.661169 sshd[4091]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 00:12:03.665225 systemd-logind[1517]: New session 12 of user core. Jul 11 00:12:03.678629 systemd[1]: Started session-12.scope - Session 12 of User core. Jul 11 00:12:03.794930 sshd[4091]: pam_unix(sshd:session): session closed for user core Jul 11 00:12:03.797894 systemd[1]: sshd@11-10.0.0.50:22-10.0.0.1:54662.service: Deactivated successfully. Jul 11 00:12:03.799746 systemd-logind[1517]: Session 12 logged out. Waiting for processes to exit. 
Jul 11 00:12:03.799880 systemd[1]: session-12.scope: Deactivated successfully.
Jul 11 00:12:03.801490 systemd-logind[1517]: Removed session 12.
Jul 11 00:12:08.810609 systemd[1]: Started sshd@12-10.0.0.50:22-10.0.0.1:54668.service - OpenSSH per-connection server daemon (10.0.0.1:54668).
Jul 11 00:12:08.842531 sshd[4110]: Accepted publickey for core from 10.0.0.1 port 54668 ssh2: RSA SHA256:GK2LEBRiSxxQSb7NJczWsRz9vp5Z0addujXbSKx/c/M
Jul 11 00:12:08.843710 sshd[4110]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 11 00:12:08.847454 systemd-logind[1517]: New session 13 of user core.
Jul 11 00:12:08.855713 systemd[1]: Started session-13.scope - Session 13 of User core.
Jul 11 00:12:08.963338 sshd[4110]: pam_unix(sshd:session): session closed for user core
Jul 11 00:12:08.966506 systemd[1]: sshd@12-10.0.0.50:22-10.0.0.1:54668.service: Deactivated successfully.
Jul 11 00:12:08.968501 systemd[1]: session-13.scope: Deactivated successfully.
Jul 11 00:12:08.968519 systemd-logind[1517]: Session 13 logged out. Waiting for processes to exit.
Jul 11 00:12:08.969710 systemd-logind[1517]: Removed session 13.
Jul 11 00:12:13.977590 systemd[1]: Started sshd@13-10.0.0.50:22-10.0.0.1:40324.service - OpenSSH per-connection server daemon (10.0.0.1:40324).
Jul 11 00:12:14.008709 sshd[4126]: Accepted publickey for core from 10.0.0.1 port 40324 ssh2: RSA SHA256:GK2LEBRiSxxQSb7NJczWsRz9vp5Z0addujXbSKx/c/M
Jul 11 00:12:14.009904 sshd[4126]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 11 00:12:14.013869 systemd-logind[1517]: New session 14 of user core.
Jul 11 00:12:14.024638 systemd[1]: Started session-14.scope - Session 14 of User core.
Jul 11 00:12:14.132640 sshd[4126]: pam_unix(sshd:session): session closed for user core
Jul 11 00:12:14.142582 systemd[1]: Started sshd@14-10.0.0.50:22-10.0.0.1:40326.service - OpenSSH per-connection server daemon (10.0.0.1:40326).
Jul 11 00:12:14.142952 systemd[1]: sshd@13-10.0.0.50:22-10.0.0.1:40324.service: Deactivated successfully.
Jul 11 00:12:14.145519 systemd[1]: session-14.scope: Deactivated successfully.
Jul 11 00:12:14.145692 systemd-logind[1517]: Session 14 logged out. Waiting for processes to exit.
Jul 11 00:12:14.146698 systemd-logind[1517]: Removed session 14.
Jul 11 00:12:14.173496 sshd[4139]: Accepted publickey for core from 10.0.0.1 port 40326 ssh2: RSA SHA256:GK2LEBRiSxxQSb7NJczWsRz9vp5Z0addujXbSKx/c/M
Jul 11 00:12:14.174704 sshd[4139]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 11 00:12:14.178448 systemd-logind[1517]: New session 15 of user core.
Jul 11 00:12:14.187706 systemd[1]: Started session-15.scope - Session 15 of User core.
Jul 11 00:12:14.410852 sshd[4139]: pam_unix(sshd:session): session closed for user core
Jul 11 00:12:14.420640 systemd[1]: Started sshd@15-10.0.0.50:22-10.0.0.1:40328.service - OpenSSH per-connection server daemon (10.0.0.1:40328).
Jul 11 00:12:14.421022 systemd[1]: sshd@14-10.0.0.50:22-10.0.0.1:40326.service: Deactivated successfully.
Jul 11 00:12:14.423441 systemd[1]: session-15.scope: Deactivated successfully.
Jul 11 00:12:14.423946 systemd-logind[1517]: Session 15 logged out. Waiting for processes to exit.
Jul 11 00:12:14.425074 systemd-logind[1517]: Removed session 15.
Jul 11 00:12:14.456487 sshd[4152]: Accepted publickey for core from 10.0.0.1 port 40328 ssh2: RSA SHA256:GK2LEBRiSxxQSb7NJczWsRz9vp5Z0addujXbSKx/c/M
Jul 11 00:12:14.457793 sshd[4152]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 11 00:12:14.461277 systemd-logind[1517]: New session 16 of user core.
Jul 11 00:12:14.475630 systemd[1]: Started session-16.scope - Session 16 of User core.
Jul 11 00:12:15.732883 sshd[4152]: pam_unix(sshd:session): session closed for user core
Jul 11 00:12:15.741689 systemd[1]: Started sshd@16-10.0.0.50:22-10.0.0.1:40342.service - OpenSSH per-connection server daemon (10.0.0.1:40342).
Jul 11 00:12:15.742125 systemd[1]: sshd@15-10.0.0.50:22-10.0.0.1:40328.service: Deactivated successfully.
Jul 11 00:12:15.746886 systemd[1]: session-16.scope: Deactivated successfully.
Jul 11 00:12:15.749992 systemd-logind[1517]: Session 16 logged out. Waiting for processes to exit.
Jul 11 00:12:15.751835 systemd-logind[1517]: Removed session 16.
Jul 11 00:12:15.781510 sshd[4172]: Accepted publickey for core from 10.0.0.1 port 40342 ssh2: RSA SHA256:GK2LEBRiSxxQSb7NJczWsRz9vp5Z0addujXbSKx/c/M
Jul 11 00:12:15.782713 sshd[4172]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 11 00:12:15.786849 systemd-logind[1517]: New session 17 of user core.
Jul 11 00:12:15.802614 systemd[1]: Started session-17.scope - Session 17 of User core.
Jul 11 00:12:16.011799 sshd[4172]: pam_unix(sshd:session): session closed for user core
Jul 11 00:12:16.020603 systemd[1]: Started sshd@17-10.0.0.50:22-10.0.0.1:40356.service - OpenSSH per-connection server daemon (10.0.0.1:40356).
Jul 11 00:12:16.020980 systemd[1]: sshd@16-10.0.0.50:22-10.0.0.1:40342.service: Deactivated successfully.
Jul 11 00:12:16.024863 systemd[1]: session-17.scope: Deactivated successfully.
Jul 11 00:12:16.026017 systemd-logind[1517]: Session 17 logged out. Waiting for processes to exit.
Jul 11 00:12:16.026983 systemd-logind[1517]: Removed session 17.
Jul 11 00:12:16.053250 sshd[4187]: Accepted publickey for core from 10.0.0.1 port 40356 ssh2: RSA SHA256:GK2LEBRiSxxQSb7NJczWsRz9vp5Z0addujXbSKx/c/M
Jul 11 00:12:16.054556 sshd[4187]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 11 00:12:16.058143 systemd-logind[1517]: New session 18 of user core.
Jul 11 00:12:16.064652 systemd[1]: Started session-18.scope - Session 18 of User core.
Jul 11 00:12:16.168712 sshd[4187]: pam_unix(sshd:session): session closed for user core
Jul 11 00:12:16.171980 systemd-logind[1517]: Session 18 logged out. Waiting for processes to exit.
Jul 11 00:12:16.172450 systemd[1]: sshd@17-10.0.0.50:22-10.0.0.1:40356.service: Deactivated successfully.
Jul 11 00:12:16.174419 systemd[1]: session-18.scope: Deactivated successfully.
Jul 11 00:12:16.175868 systemd-logind[1517]: Removed session 18.
Jul 11 00:12:21.180604 systemd[1]: Started sshd@18-10.0.0.50:22-10.0.0.1:40366.service - OpenSSH per-connection server daemon (10.0.0.1:40366).
Jul 11 00:12:21.211451 sshd[4209]: Accepted publickey for core from 10.0.0.1 port 40366 ssh2: RSA SHA256:GK2LEBRiSxxQSb7NJczWsRz9vp5Z0addujXbSKx/c/M
Jul 11 00:12:21.212656 sshd[4209]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 11 00:12:21.216120 systemd-logind[1517]: New session 19 of user core.
Jul 11 00:12:21.224604 systemd[1]: Started session-19.scope - Session 19 of User core.
Jul 11 00:12:21.328460 sshd[4209]: pam_unix(sshd:session): session closed for user core
Jul 11 00:12:21.332173 systemd[1]: sshd@18-10.0.0.50:22-10.0.0.1:40366.service: Deactivated successfully.
Jul 11 00:12:21.335936 systemd[1]: session-19.scope: Deactivated successfully.
Jul 11 00:12:21.336674 systemd-logind[1517]: Session 19 logged out. Waiting for processes to exit.
Jul 11 00:12:21.337407 systemd-logind[1517]: Removed session 19.
Jul 11 00:12:26.344635 systemd[1]: Started sshd@19-10.0.0.50:22-10.0.0.1:53982.service - OpenSSH per-connection server daemon (10.0.0.1:53982).
Jul 11 00:12:26.375518 sshd[4226]: Accepted publickey for core from 10.0.0.1 port 53982 ssh2: RSA SHA256:GK2LEBRiSxxQSb7NJczWsRz9vp5Z0addujXbSKx/c/M
Jul 11 00:12:26.376643 sshd[4226]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 11 00:12:26.380404 systemd-logind[1517]: New session 20 of user core.
Jul 11 00:12:26.384647 systemd[1]: Started session-20.scope - Session 20 of User core.
Jul 11 00:12:26.487887 sshd[4226]: pam_unix(sshd:session): session closed for user core
Jul 11 00:12:26.491658 systemd[1]: sshd@19-10.0.0.50:22-10.0.0.1:53982.service: Deactivated successfully.
Jul 11 00:12:26.494874 systemd[1]: session-20.scope: Deactivated successfully.
Jul 11 00:12:26.495623 systemd-logind[1517]: Session 20 logged out. Waiting for processes to exit.
Jul 11 00:12:26.496585 systemd-logind[1517]: Removed session 20.
Jul 11 00:12:31.501593 systemd[1]: Started sshd@20-10.0.0.50:22-10.0.0.1:53992.service - OpenSSH per-connection server daemon (10.0.0.1:53992).
Jul 11 00:12:31.532249 sshd[4241]: Accepted publickey for core from 10.0.0.1 port 53992 ssh2: RSA SHA256:GK2LEBRiSxxQSb7NJczWsRz9vp5Z0addujXbSKx/c/M
Jul 11 00:12:31.533430 sshd[4241]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 11 00:12:31.537474 systemd-logind[1517]: New session 21 of user core.
Jul 11 00:12:31.546590 systemd[1]: Started session-21.scope - Session 21 of User core.
Jul 11 00:12:31.651184 sshd[4241]: pam_unix(sshd:session): session closed for user core
Jul 11 00:12:31.662679 systemd[1]: Started sshd@21-10.0.0.50:22-10.0.0.1:54004.service - OpenSSH per-connection server daemon (10.0.0.1:54004).
Jul 11 00:12:31.663024 systemd[1]: sshd@20-10.0.0.50:22-10.0.0.1:53992.service: Deactivated successfully.
Jul 11 00:12:31.664933 systemd-logind[1517]: Session 21 logged out. Waiting for processes to exit.
Jul 11 00:12:31.665569 systemd[1]: session-21.scope: Deactivated successfully.
Jul 11 00:12:31.666971 systemd-logind[1517]: Removed session 21.
Jul 11 00:12:31.693409 sshd[4253]: Accepted publickey for core from 10.0.0.1 port 54004 ssh2: RSA SHA256:GK2LEBRiSxxQSb7NJczWsRz9vp5Z0addujXbSKx/c/M
Jul 11 00:12:31.694526 sshd[4253]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 11 00:12:31.698570 systemd-logind[1517]: New session 22 of user core.
Jul 11 00:12:31.712642 systemd[1]: Started session-22.scope - Session 22 of User core.
Jul 11 00:12:33.568692 containerd[1534]: time="2025-07-11T00:12:33.568642973Z" level=info msg="StopContainer for \"913c861cb1678f94fc1a1b2d7c5a72f8376db4f71a5d2cae8e234c89f65d1c44\" with timeout 30 (s)"
Jul 11 00:12:33.569087 containerd[1534]: time="2025-07-11T00:12:33.568963704Z" level=info msg="Stop container \"913c861cb1678f94fc1a1b2d7c5a72f8376db4f71a5d2cae8e234c89f65d1c44\" with signal terminated"
Jul 11 00:12:33.599279 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-913c861cb1678f94fc1a1b2d7c5a72f8376db4f71a5d2cae8e234c89f65d1c44-rootfs.mount: Deactivated successfully.
Jul 11 00:12:33.602266 containerd[1534]: time="2025-07-11T00:12:33.602225349Z" level=info msg="StopContainer for \"8bc299b9c5bb1e34badfdfa9985fcc00bc092bf7567aa0d4a468c7d3cd2dbb04\" with timeout 2 (s)"
Jul 11 00:12:33.602579 containerd[1534]: time="2025-07-11T00:12:33.602546039Z" level=info msg="Stop container \"8bc299b9c5bb1e34badfdfa9985fcc00bc092bf7567aa0d4a468c7d3cd2dbb04\" with signal terminated"
Jul 11 00:12:33.605528 containerd[1534]: time="2025-07-11T00:12:33.605469458Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jul 11 00:12:33.607401 containerd[1534]: time="2025-07-11T00:12:33.606734661Z" level=info msg="shim disconnected" id=913c861cb1678f94fc1a1b2d7c5a72f8376db4f71a5d2cae8e234c89f65d1c44 namespace=k8s.io
Jul 11 00:12:33.607401 containerd[1534]: time="2025-07-11T00:12:33.606779383Z" level=warning msg="cleaning up after shim disconnected" id=913c861cb1678f94fc1a1b2d7c5a72f8376db4f71a5d2cae8e234c89f65d1c44 namespace=k8s.io
Jul 11 00:12:33.607401 containerd[1534]: time="2025-07-11T00:12:33.606790263Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 11 00:12:33.608894 systemd-networkd[1227]: lxc_health: Link DOWN
Jul 11 00:12:33.608901 systemd-networkd[1227]: lxc_health: Lost carrier
Jul 11 00:12:33.650200 containerd[1534]: time="2025-07-11T00:12:33.650149969Z" level=info msg="StopContainer for \"913c861cb1678f94fc1a1b2d7c5a72f8376db4f71a5d2cae8e234c89f65d1c44\" returns successfully"
Jul 11 00:12:33.651035 containerd[1534]: time="2025-07-11T00:12:33.651009758Z" level=info msg="StopPodSandbox for \"2439f8111171e65f54fa23c1a9057f6913241c07dd1749d720c5bcfea54d59a9\""
Jul 11 00:12:33.651077 containerd[1534]: time="2025-07-11T00:12:33.651048159Z" level=info msg="Container to stop \"913c861cb1678f94fc1a1b2d7c5a72f8376db4f71a5d2cae8e234c89f65d1c44\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jul 11 00:12:33.652932 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-2439f8111171e65f54fa23c1a9057f6913241c07dd1749d720c5bcfea54d59a9-shm.mount: Deactivated successfully.
Jul 11 00:12:33.665888 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8bc299b9c5bb1e34badfdfa9985fcc00bc092bf7567aa0d4a468c7d3cd2dbb04-rootfs.mount: Deactivated successfully.
Jul 11 00:12:33.671140 containerd[1534]: time="2025-07-11T00:12:33.671076116Z" level=info msg="shim disconnected" id=8bc299b9c5bb1e34badfdfa9985fcc00bc092bf7567aa0d4a468c7d3cd2dbb04 namespace=k8s.io
Jul 11 00:12:33.671140 containerd[1534]: time="2025-07-11T00:12:33.671138799Z" level=warning msg="cleaning up after shim disconnected" id=8bc299b9c5bb1e34badfdfa9985fcc00bc092bf7567aa0d4a468c7d3cd2dbb04 namespace=k8s.io
Jul 11 00:12:33.671430 containerd[1534]: time="2025-07-11T00:12:33.671149159Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 11 00:12:33.686093 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2439f8111171e65f54fa23c1a9057f6913241c07dd1749d720c5bcfea54d59a9-rootfs.mount: Deactivated successfully.
Jul 11 00:12:33.687598 containerd[1534]: time="2025-07-11T00:12:33.687466071Z" level=info msg="StopContainer for \"8bc299b9c5bb1e34badfdfa9985fcc00bc092bf7567aa0d4a468c7d3cd2dbb04\" returns successfully"
Jul 11 00:12:33.688384 containerd[1534]: time="2025-07-11T00:12:33.688289978Z" level=info msg="StopPodSandbox for \"a6e9a7a915de369f086fa68e8acf2e1e6a82c3186dc40d5153cc166570164a2e\""
Jul 11 00:12:33.688470 containerd[1534]: time="2025-07-11T00:12:33.688395742Z" level=info msg="shim disconnected" id=2439f8111171e65f54fa23c1a9057f6913241c07dd1749d720c5bcfea54d59a9 namespace=k8s.io
Jul 11 00:12:33.688470 containerd[1534]: time="2025-07-11T00:12:33.688437983Z" level=warning msg="cleaning up after shim disconnected" id=2439f8111171e65f54fa23c1a9057f6913241c07dd1749d720c5bcfea54d59a9 namespace=k8s.io
Jul 11 00:12:33.688470 containerd[1534]: time="2025-07-11T00:12:33.688445944Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 11 00:12:33.706614 containerd[1534]: time="2025-07-11T00:12:33.706561036Z" level=warning msg="cleanup warnings time=\"2025-07-11T00:12:33Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Jul 11 00:12:33.707771 containerd[1534]: time="2025-07-11T00:12:33.707738396Z" level=info msg="TearDown network for sandbox \"2439f8111171e65f54fa23c1a9057f6913241c07dd1749d720c5bcfea54d59a9\" successfully"
Jul 11 00:12:33.707771 containerd[1534]: time="2025-07-11T00:12:33.707764157Z" level=info msg="StopPodSandbox for \"2439f8111171e65f54fa23c1a9057f6913241c07dd1749d720c5bcfea54d59a9\" returns successfully"
Jul 11 00:12:33.707872 containerd[1534]: time="2025-07-11T00:12:33.707817559Z" level=info msg="Container to stop \"ca0c7002848b558d6ff4dfbc34c53bb37f66cacdd2535fd28c70e9fd8c22fac8\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jul 11 00:12:33.707872 containerd[1534]: time="2025-07-11T00:12:33.707842600Z" level=info msg="Container to stop \"c7f22f2e048c17110319fc29e27824ec5d39144197fd0fee1a749260ae3e0318\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jul 11 00:12:33.707872 containerd[1534]: time="2025-07-11T00:12:33.707853360Z" level=info msg="Container to stop \"6f73e7ec0c34651874d53b8edd31f7afd60b5b4df8a9966020d95fe8c012dd4f\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jul 11 00:12:33.707872 containerd[1534]: time="2025-07-11T00:12:33.707863080Z" level=info msg="Container to stop \"33c14f3689eee73409de1cb8b77f6a7305e969fc614a2c3a904c842af3184c97\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jul 11 00:12:33.707872 containerd[1534]: time="2025-07-11T00:12:33.707872881Z" level=info msg="Container to stop \"8bc299b9c5bb1e34badfdfa9985fcc00bc092bf7567aa0d4a468c7d3cd2dbb04\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jul 11 00:12:33.737148 kubelet[2631]: I0711 00:12:33.737107 2631 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a16dffc6-d0f3-43c2-bddb-2878eb043b7c-cilium-config-path\") pod \"a16dffc6-d0f3-43c2-bddb-2878eb043b7c\" (UID: \"a16dffc6-d0f3-43c2-bddb-2878eb043b7c\") "
Jul 11 00:12:33.737148 kubelet[2631]: I0711 00:12:33.737150 2631 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vctr9\" (UniqueName: \"kubernetes.io/projected/a16dffc6-d0f3-43c2-bddb-2878eb043b7c-kube-api-access-vctr9\") pod \"a16dffc6-d0f3-43c2-bddb-2878eb043b7c\" (UID: \"a16dffc6-d0f3-43c2-bddb-2878eb043b7c\") "
Jul 11 00:12:33.737648 containerd[1534]: time="2025-07-11T00:12:33.737480802Z" level=info msg="shim disconnected" id=a6e9a7a915de369f086fa68e8acf2e1e6a82c3186dc40d5153cc166570164a2e namespace=k8s.io
Jul 11 00:12:33.737648 containerd[1534]: time="2025-07-11T00:12:33.737527803Z" level=warning msg="cleaning up after shim disconnected" id=a6e9a7a915de369f086fa68e8acf2e1e6a82c3186dc40d5153cc166570164a2e namespace=k8s.io
Jul 11 00:12:33.737648 containerd[1534]: time="2025-07-11T00:12:33.737535803Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 11 00:12:33.745011 kubelet[2631]: I0711 00:12:33.744714 2631 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a16dffc6-d0f3-43c2-bddb-2878eb043b7c-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "a16dffc6-d0f3-43c2-bddb-2878eb043b7c" (UID: "a16dffc6-d0f3-43c2-bddb-2878eb043b7c"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jul 11 00:12:33.746502 kubelet[2631]: I0711 00:12:33.745592 2631 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a16dffc6-d0f3-43c2-bddb-2878eb043b7c-kube-api-access-vctr9" (OuterVolumeSpecName: "kube-api-access-vctr9") pod "a16dffc6-d0f3-43c2-bddb-2878eb043b7c" (UID: "a16dffc6-d0f3-43c2-bddb-2878eb043b7c"). InnerVolumeSpecName "kube-api-access-vctr9". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jul 11 00:12:33.749145 containerd[1534]: time="2025-07-11T00:12:33.749111195Z" level=info msg="TearDown network for sandbox \"a6e9a7a915de369f086fa68e8acf2e1e6a82c3186dc40d5153cc166570164a2e\" successfully"
Jul 11 00:12:33.749241 containerd[1534]: time="2025-07-11T00:12:33.749226919Z" level=info msg="StopPodSandbox for \"a6e9a7a915de369f086fa68e8acf2e1e6a82c3186dc40d5153cc166570164a2e\" returns successfully"
Jul 11 00:12:33.837445 kubelet[2631]: I0711 00:12:33.837277 2631 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/98d7c007-f381-4c30-bc42-3c41fe72d679-cilium-config-path\") pod \"98d7c007-f381-4c30-bc42-3c41fe72d679\" (UID: \"98d7c007-f381-4c30-bc42-3c41fe72d679\") "
Jul 11 00:12:33.837445 kubelet[2631]: I0711 00:12:33.837339 2631 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/98d7c007-f381-4c30-bc42-3c41fe72d679-lib-modules\") pod \"98d7c007-f381-4c30-bc42-3c41fe72d679\" (UID: \"98d7c007-f381-4c30-bc42-3c41fe72d679\") "
Jul 11 00:12:33.837445 kubelet[2631]: I0711 00:12:33.837395 2631 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/98d7c007-f381-4c30-bc42-3c41fe72d679-hostproc\") pod \"98d7c007-f381-4c30-bc42-3c41fe72d679\" (UID: \"98d7c007-f381-4c30-bc42-3c41fe72d679\") "
Jul 11 00:12:33.837445 kubelet[2631]: I0711 00:12:33.837420 2631 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/98d7c007-f381-4c30-bc42-3c41fe72d679-cni-path\") pod \"98d7c007-f381-4c30-bc42-3c41fe72d679\" (UID: \"98d7c007-f381-4c30-bc42-3c41fe72d679\") "
Jul 11 00:12:33.837445 kubelet[2631]: I0711 00:12:33.837445 2631 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/98d7c007-f381-4c30-bc42-3c41fe72d679-etc-cni-netd\") pod \"98d7c007-f381-4c30-bc42-3c41fe72d679\" (UID: \"98d7c007-f381-4c30-bc42-3c41fe72d679\") "
Jul 11 00:12:33.837726 kubelet[2631]: I0711 00:12:33.837478 2631 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/98d7c007-f381-4c30-bc42-3c41fe72d679-hubble-tls\") pod \"98d7c007-f381-4c30-bc42-3c41fe72d679\" (UID: \"98d7c007-f381-4c30-bc42-3c41fe72d679\") "
Jul 11 00:12:33.837726 kubelet[2631]: I0711 00:12:33.837504 2631 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/98d7c007-f381-4c30-bc42-3c41fe72d679-xtables-lock\") pod \"98d7c007-f381-4c30-bc42-3c41fe72d679\" (UID: \"98d7c007-f381-4c30-bc42-3c41fe72d679\") "
Jul 11 00:12:33.837726 kubelet[2631]: I0711 00:12:33.837527 2631 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/98d7c007-f381-4c30-bc42-3c41fe72d679-cilium-cgroup\") pod \"98d7c007-f381-4c30-bc42-3c41fe72d679\" (UID: \"98d7c007-f381-4c30-bc42-3c41fe72d679\") "
Jul 11 00:12:33.837726 kubelet[2631]: I0711 00:12:33.837541 2631 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/98d7c007-f381-4c30-bc42-3c41fe72d679-cilium-run\") pod \"98d7c007-f381-4c30-bc42-3c41fe72d679\" (UID: \"98d7c007-f381-4c30-bc42-3c41fe72d679\") "
Jul 11 00:12:33.837726 kubelet[2631]: I0711 00:12:33.837556 2631 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/98d7c007-f381-4c30-bc42-3c41fe72d679-host-proc-sys-kernel\") pod \"98d7c007-f381-4c30-bc42-3c41fe72d679\" (UID: \"98d7c007-f381-4c30-bc42-3c41fe72d679\") "
Jul 11 00:12:33.837726 kubelet[2631]: I0711 00:12:33.837570 2631 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/98d7c007-f381-4c30-bc42-3c41fe72d679-bpf-maps\") pod \"98d7c007-f381-4c30-bc42-3c41fe72d679\" (UID: \"98d7c007-f381-4c30-bc42-3c41fe72d679\") "
Jul 11 00:12:33.837852 kubelet[2631]: I0711 00:12:33.837595 2631 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/98d7c007-f381-4c30-bc42-3c41fe72d679-clustermesh-secrets\") pod \"98d7c007-f381-4c30-bc42-3c41fe72d679\" (UID: \"98d7c007-f381-4c30-bc42-3c41fe72d679\") "
Jul 11 00:12:33.837852 kubelet[2631]: I0711 00:12:33.837611 2631 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t8lnq\" (UniqueName: \"kubernetes.io/projected/98d7c007-f381-4c30-bc42-3c41fe72d679-kube-api-access-t8lnq\") pod \"98d7c007-f381-4c30-bc42-3c41fe72d679\" (UID: \"98d7c007-f381-4c30-bc42-3c41fe72d679\") "
Jul 11 00:12:33.837852 kubelet[2631]: I0711 00:12:33.837624 2631 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/98d7c007-f381-4c30-bc42-3c41fe72d679-host-proc-sys-net\") pod \"98d7c007-f381-4c30-bc42-3c41fe72d679\" (UID: \"98d7c007-f381-4c30-bc42-3c41fe72d679\") "
Jul 11 00:12:33.837852 kubelet[2631]: I0711 00:12:33.837651 2631 reconciler_common.go:293] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a16dffc6-d0f3-43c2-bddb-2878eb043b7c-cilium-config-path\") on node \"localhost\" DevicePath \"\""
Jul 11 00:12:33.837852 kubelet[2631]: I0711 00:12:33.837662 2631 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vctr9\" (UniqueName: \"kubernetes.io/projected/a16dffc6-d0f3-43c2-bddb-2878eb043b7c-kube-api-access-vctr9\") on node \"localhost\" DevicePath \"\""
Jul 11 00:12:33.837852 kubelet[2631]: I0711 00:12:33.837688 2631 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/98d7c007-f381-4c30-bc42-3c41fe72d679-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "98d7c007-f381-4c30-bc42-3c41fe72d679" (UID: "98d7c007-f381-4c30-bc42-3c41fe72d679"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jul 11 00:12:33.837978 kubelet[2631]: I0711 00:12:33.837717 2631 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/98d7c007-f381-4c30-bc42-3c41fe72d679-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "98d7c007-f381-4c30-bc42-3c41fe72d679" (UID: "98d7c007-f381-4c30-bc42-3c41fe72d679"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jul 11 00:12:33.837978 kubelet[2631]: I0711 00:12:33.837730 2631 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/98d7c007-f381-4c30-bc42-3c41fe72d679-hostproc" (OuterVolumeSpecName: "hostproc") pod "98d7c007-f381-4c30-bc42-3c41fe72d679" (UID: "98d7c007-f381-4c30-bc42-3c41fe72d679"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jul 11 00:12:33.837978 kubelet[2631]: I0711 00:12:33.837762 2631 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/98d7c007-f381-4c30-bc42-3c41fe72d679-cni-path" (OuterVolumeSpecName: "cni-path") pod "98d7c007-f381-4c30-bc42-3c41fe72d679" (UID: "98d7c007-f381-4c30-bc42-3c41fe72d679"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jul 11 00:12:33.837978 kubelet[2631]: I0711 00:12:33.837775 2631 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/98d7c007-f381-4c30-bc42-3c41fe72d679-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "98d7c007-f381-4c30-bc42-3c41fe72d679" (UID: "98d7c007-f381-4c30-bc42-3c41fe72d679"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jul 11 00:12:33.840539 kubelet[2631]: I0711 00:12:33.838400 2631 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/98d7c007-f381-4c30-bc42-3c41fe72d679-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "98d7c007-f381-4c30-bc42-3c41fe72d679" (UID: "98d7c007-f381-4c30-bc42-3c41fe72d679"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jul 11 00:12:33.840539 kubelet[2631]: I0711 00:12:33.838422 2631 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/98d7c007-f381-4c30-bc42-3c41fe72d679-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "98d7c007-f381-4c30-bc42-3c41fe72d679" (UID: "98d7c007-f381-4c30-bc42-3c41fe72d679"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jul 11 00:12:33.840539 kubelet[2631]: I0711 00:12:33.838429 2631 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/98d7c007-f381-4c30-bc42-3c41fe72d679-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "98d7c007-f381-4c30-bc42-3c41fe72d679" (UID: "98d7c007-f381-4c30-bc42-3c41fe72d679"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jul 11 00:12:33.840539 kubelet[2631]: I0711 00:12:33.838460 2631 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/98d7c007-f381-4c30-bc42-3c41fe72d679-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "98d7c007-f381-4c30-bc42-3c41fe72d679" (UID: "98d7c007-f381-4c30-bc42-3c41fe72d679"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jul 11 00:12:33.840539 kubelet[2631]: I0711 00:12:33.838476 2631 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/98d7c007-f381-4c30-bc42-3c41fe72d679-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "98d7c007-f381-4c30-bc42-3c41fe72d679" (UID: "98d7c007-f381-4c30-bc42-3c41fe72d679"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jul 11 00:12:33.840951 kubelet[2631]: I0711 00:12:33.840375 2631 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/98d7c007-f381-4c30-bc42-3c41fe72d679-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "98d7c007-f381-4c30-bc42-3c41fe72d679" (UID: "98d7c007-f381-4c30-bc42-3c41fe72d679"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jul 11 00:12:33.840951 kubelet[2631]: I0711 00:12:33.840761 2631 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/98d7c007-f381-4c30-bc42-3c41fe72d679-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "98d7c007-f381-4c30-bc42-3c41fe72d679" (UID: "98d7c007-f381-4c30-bc42-3c41fe72d679"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jul 11 00:12:33.840951 kubelet[2631]: I0711 00:12:33.840912 2631 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/98d7c007-f381-4c30-bc42-3c41fe72d679-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "98d7c007-f381-4c30-bc42-3c41fe72d679" (UID: "98d7c007-f381-4c30-bc42-3c41fe72d679"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jul 11 00:12:33.841319 kubelet[2631]: I0711 00:12:33.841279 2631 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/98d7c007-f381-4c30-bc42-3c41fe72d679-kube-api-access-t8lnq" (OuterVolumeSpecName: "kube-api-access-t8lnq") pod "98d7c007-f381-4c30-bc42-3c41fe72d679" (UID: "98d7c007-f381-4c30-bc42-3c41fe72d679"). InnerVolumeSpecName "kube-api-access-t8lnq". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jul 11 00:12:33.938621 kubelet[2631]: I0711 00:12:33.938587 2631 reconciler_common.go:293] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/98d7c007-f381-4c30-bc42-3c41fe72d679-clustermesh-secrets\") on node \"localhost\" DevicePath \"\""
Jul 11 00:12:33.938621 kubelet[2631]: I0711 00:12:33.938615 2631 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t8lnq\" (UniqueName: \"kubernetes.io/projected/98d7c007-f381-4c30-bc42-3c41fe72d679-kube-api-access-t8lnq\") on node \"localhost\" DevicePath \"\""
Jul 11 00:12:33.938621 kubelet[2631]: I0711 00:12:33.938627 2631 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/98d7c007-f381-4c30-bc42-3c41fe72d679-host-proc-sys-net\") on node \"localhost\" DevicePath \"\""
Jul 11 00:12:33.938761 kubelet[2631]: I0711 00:12:33.938656 2631 reconciler_common.go:293] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/98d7c007-f381-4c30-bc42-3c41fe72d679-cilium-config-path\") on node \"localhost\" DevicePath \"\""
Jul 11 00:12:33.938761 kubelet[2631]: I0711 00:12:33.938665 2631 reconciler_common.go:293] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/98d7c007-f381-4c30-bc42-3c41fe72d679-lib-modules\") on node \"localhost\" DevicePath \"\""
Jul 11 00:12:33.938761 kubelet[2631]: I0711 00:12:33.938673 2631 reconciler_common.go:293] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/98d7c007-f381-4c30-bc42-3c41fe72d679-hostproc\") on node \"localhost\" DevicePath \"\""
Jul 11 00:12:33.938761 kubelet[2631]: I0711 00:12:33.938681 2631 reconciler_common.go:293] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/98d7c007-f381-4c30-bc42-3c41fe72d679-cni-path\") on node \"localhost\" DevicePath \"\""
Jul 11 00:12:33.938761 kubelet[2631]: I0711 00:12:33.938689 2631 reconciler_common.go:293] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/98d7c007-f381-4c30-bc42-3c41fe72d679-etc-cni-netd\") on node \"localhost\" DevicePath \"\""
Jul 11 00:12:33.938761 kubelet[2631]: I0711 00:12:33.938696 2631 reconciler_common.go:293] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/98d7c007-f381-4c30-bc42-3c41fe72d679-hubble-tls\") on node \"localhost\" DevicePath \"\""
Jul 11 00:12:33.938761 kubelet[2631]: I0711 00:12:33.938703 2631 reconciler_common.go:293] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/98d7c007-f381-4c30-bc42-3c41fe72d679-xtables-lock\") on node \"localhost\" DevicePath \"\""
Jul 11 00:12:33.938761 kubelet[2631]: I0711 00:12:33.938710 2631 reconciler_common.go:293] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/98d7c007-f381-4c30-bc42-3c41fe72d679-cilium-cgroup\") on node \"localhost\" DevicePath \"\""
Jul 11 00:12:33.938916 kubelet[2631]: I0711 00:12:33.938717 2631 reconciler_common.go:293] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/98d7c007-f381-4c30-bc42-3c41fe72d679-cilium-run\") on node \"localhost\" DevicePath \"\""
Jul 11 00:12:33.938916 kubelet[2631]: I0711 00:12:33.938724 2631 reconciler_common.go:293] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/98d7c007-f381-4c30-bc42-3c41fe72d679-bpf-maps\") on node \"localhost\" DevicePath \"\""
Jul 11 00:12:33.938916 kubelet[2631]: I0711 00:12:33.938731 2631 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/98d7c007-f381-4c30-bc42-3c41fe72d679-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\""
Jul 11 00:12:34.429174 kubelet[2631]: I0711 00:12:34.429148 2631 scope.go:117] "RemoveContainer" containerID="8bc299b9c5bb1e34badfdfa9985fcc00bc092bf7567aa0d4a468c7d3cd2dbb04"
Jul 11 00:12:34.432347 containerd[1534]: time="2025-07-11T00:12:34.431864127Z" level=info msg="RemoveContainer for \"8bc299b9c5bb1e34badfdfa9985fcc00bc092bf7567aa0d4a468c7d3cd2dbb04\""
Jul 11 00:12:34.436731 containerd[1534]: time="2025-07-11T00:12:34.436701391Z" level=info msg="RemoveContainer for \"8bc299b9c5bb1e34badfdfa9985fcc00bc092bf7567aa0d4a468c7d3cd2dbb04\" returns successfully"
Jul 11 00:12:34.437024 kubelet[2631]: I0711 00:12:34.436930 2631 scope.go:117] "RemoveContainer" containerID="ca0c7002848b558d6ff4dfbc34c53bb37f66cacdd2535fd28c70e9fd8c22fac8"
Jul 11 00:12:34.437846 containerd[1534]: time="2025-07-11T00:12:34.437823590Z" level=info msg="RemoveContainer for \"ca0c7002848b558d6ff4dfbc34c53bb37f66cacdd2535fd28c70e9fd8c22fac8\""
Jul 11 00:12:34.439946 containerd[1534]: time="2025-07-11T00:12:34.439919794Z" level=info msg="RemoveContainer for \"ca0c7002848b558d6ff4dfbc34c53bb37f66cacdd2535fd28c70e9fd8c22fac8\" returns successfully"
Jul 11 00:12:34.440158 kubelet[2631]: I0711 00:12:34.440067 2631 scope.go:117] "RemoveContainer" containerID="33c14f3689eee73409de1cb8b77f6a7305e969fc614a2c3a904c842af3184c97"
Jul 11 00:12:34.441167 containerd[1534]: time="2025-07-11T00:12:34.440955796Z" level=info msg="RemoveContainer for \"33c14f3689eee73409de1cb8b77f6a7305e969fc614a2c3a904c842af3184c97\""
Jul 11 00:12:34.443615 containerd[1534]: time="2025-07-11T00:12:34.443580220Z" level=info msg="RemoveContainer for \"33c14f3689eee73409de1cb8b77f6a7305e969fc614a2c3a904c842af3184c97\" returns successfully"
Jul 11 00:12:34.443888 kubelet[2631]: I0711 00:12:34.443867 2631 scope.go:117] "RemoveContainer" containerID="6f73e7ec0c34651874d53b8edd31f7afd60b5b4df8a9966020d95fe8c012dd4f"
Jul 11 00:12:34.444750 containerd[1534]: time="2025-07-11T00:12:34.444731218Z" level=info msg="RemoveContainer for \"6f73e7ec0c34651874d53b8edd31f7afd60b5b4df8a9966020d95fe8c012dd4f\""
Jul 11 00:12:34.447197 containerd[1534]: time="2025-07-11T00:12:34.447165850Z" level=info msg="RemoveContainer for \"6f73e7ec0c34651874d53b8edd31f7afd60b5b4df8a9966020d95fe8c012dd4f\" returns successfully"
Jul 11 00:12:34.447433 kubelet[2631]: I0711 00:12:34.447379 2631 scope.go:117] "RemoveContainer" containerID="c7f22f2e048c17110319fc29e27824ec5d39144197fd0fee1a749260ae3e0318"
Jul 11 00:12:34.448981 containerd[1534]: time="2025-07-11T00:12:34.448523280Z" level=info msg="RemoveContainer for \"c7f22f2e048c17110319fc29e27824ec5d39144197fd0fee1a749260ae3e0318\""
Jul 11 00:12:34.450684 containerd[1534]: time="2025-07-11T00:12:34.450635403Z" level=info msg="RemoveContainer for \"c7f22f2e048c17110319fc29e27824ec5d39144197fd0fee1a749260ae3e0318\" returns successfully"
Jul 11 00:12:34.450941 kubelet[2631]: I0711 00:12:34.450779 2631 scope.go:117] "RemoveContainer" containerID="8bc299b9c5bb1e34badfdfa9985fcc00bc092bf7567aa0d4a468c7d3cd2dbb04"
Jul 11 00:12:34.451115 containerd[1534]: time="2025-07-11T00:12:34.450951192Z" level=error msg="ContainerStatus for \"8bc299b9c5bb1e34badfdfa9985fcc00bc092bf7567aa0d4a468c7d3cd2dbb04\" failed"
error="rpc error: code = NotFound desc = an error occurred when try to find container \"8bc299b9c5bb1e34badfdfa9985fcc00bc092bf7567aa0d4a468c7d3cd2dbb04\": not found" Jul 11 00:12:34.457376 kubelet[2631]: E0711 00:12:34.457339 2631 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"8bc299b9c5bb1e34badfdfa9985fcc00bc092bf7567aa0d4a468c7d3cd2dbb04\": not found" containerID="8bc299b9c5bb1e34badfdfa9985fcc00bc092bf7567aa0d4a468c7d3cd2dbb04" Jul 11 00:12:34.457638 kubelet[2631]: I0711 00:12:34.457472 2631 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"8bc299b9c5bb1e34badfdfa9985fcc00bc092bf7567aa0d4a468c7d3cd2dbb04"} err="failed to get container status \"8bc299b9c5bb1e34badfdfa9985fcc00bc092bf7567aa0d4a468c7d3cd2dbb04\": rpc error: code = NotFound desc = an error occurred when try to find container \"8bc299b9c5bb1e34badfdfa9985fcc00bc092bf7567aa0d4a468c7d3cd2dbb04\": not found" Jul 11 00:12:34.457638 kubelet[2631]: I0711 00:12:34.457550 2631 scope.go:117] "RemoveContainer" containerID="ca0c7002848b558d6ff4dfbc34c53bb37f66cacdd2535fd28c70e9fd8c22fac8" Jul 11 00:12:34.457781 containerd[1534]: time="2025-07-11T00:12:34.457748384Z" level=error msg="ContainerStatus for \"ca0c7002848b558d6ff4dfbc34c53bb37f66cacdd2535fd28c70e9fd8c22fac8\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"ca0c7002848b558d6ff4dfbc34c53bb37f66cacdd2535fd28c70e9fd8c22fac8\": not found" Jul 11 00:12:34.457889 kubelet[2631]: E0711 00:12:34.457856 2631 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"ca0c7002848b558d6ff4dfbc34c53bb37f66cacdd2535fd28c70e9fd8c22fac8\": not found" containerID="ca0c7002848b558d6ff4dfbc34c53bb37f66cacdd2535fd28c70e9fd8c22fac8" Jul 11 00:12:34.457934 kubelet[2631]: I0711 00:12:34.457881 2631 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"ca0c7002848b558d6ff4dfbc34c53bb37f66cacdd2535fd28c70e9fd8c22fac8"} err="failed to get container status \"ca0c7002848b558d6ff4dfbc34c53bb37f66cacdd2535fd28c70e9fd8c22fac8\": rpc error: code = NotFound desc = an error occurred when try to find container \"ca0c7002848b558d6ff4dfbc34c53bb37f66cacdd2535fd28c70e9fd8c22fac8\": not found" Jul 11 00:12:34.457934 kubelet[2631]: I0711 00:12:34.457915 2631 scope.go:117] "RemoveContainer" containerID="33c14f3689eee73409de1cb8b77f6a7305e969fc614a2c3a904c842af3184c97" Jul 11 00:12:34.458094 containerd[1534]: time="2025-07-11T00:12:34.458060493Z" level=error msg="ContainerStatus for \"33c14f3689eee73409de1cb8b77f6a7305e969fc614a2c3a904c842af3184c97\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"33c14f3689eee73409de1cb8b77f6a7305e969fc614a2c3a904c842af3184c97\": not found" Jul 11 00:12:34.458307 kubelet[2631]: E0711 00:12:34.458177 2631 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"33c14f3689eee73409de1cb8b77f6a7305e969fc614a2c3a904c842af3184c97\": not found" containerID="33c14f3689eee73409de1cb8b77f6a7305e969fc614a2c3a904c842af3184c97" Jul 11 00:12:34.458307 kubelet[2631]: I0711 00:12:34.458199 2631 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"33c14f3689eee73409de1cb8b77f6a7305e969fc614a2c3a904c842af3184c97"} err="failed to get container status \"33c14f3689eee73409de1cb8b77f6a7305e969fc614a2c3a904c842af3184c97\": rpc error: code = NotFound desc = an error occurred when try to find container \"33c14f3689eee73409de1cb8b77f6a7305e969fc614a2c3a904c842af3184c97\": not found" Jul 11 00:12:34.458307 kubelet[2631]: I0711 00:12:34.458214 2631 scope.go:117] "RemoveContainer" 
containerID="6f73e7ec0c34651874d53b8edd31f7afd60b5b4df8a9966020d95fe8c012dd4f" Jul 11 00:12:34.458625 containerd[1534]: time="2025-07-11T00:12:34.458546275Z" level=error msg="ContainerStatus for \"6f73e7ec0c34651874d53b8edd31f7afd60b5b4df8a9966020d95fe8c012dd4f\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"6f73e7ec0c34651874d53b8edd31f7afd60b5b4df8a9966020d95fe8c012dd4f\": not found" Jul 11 00:12:34.458677 kubelet[2631]: E0711 00:12:34.458653 2631 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"6f73e7ec0c34651874d53b8edd31f7afd60b5b4df8a9966020d95fe8c012dd4f\": not found" containerID="6f73e7ec0c34651874d53b8edd31f7afd60b5b4df8a9966020d95fe8c012dd4f" Jul 11 00:12:34.458677 kubelet[2631]: I0711 00:12:34.458669 2631 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"6f73e7ec0c34651874d53b8edd31f7afd60b5b4df8a9966020d95fe8c012dd4f"} err="failed to get container status \"6f73e7ec0c34651874d53b8edd31f7afd60b5b4df8a9966020d95fe8c012dd4f\": rpc error: code = NotFound desc = an error occurred when try to find container \"6f73e7ec0c34651874d53b8edd31f7afd60b5b4df8a9966020d95fe8c012dd4f\": not found" Jul 11 00:12:34.458737 kubelet[2631]: I0711 00:12:34.458681 2631 scope.go:117] "RemoveContainer" containerID="c7f22f2e048c17110319fc29e27824ec5d39144197fd0fee1a749260ae3e0318" Jul 11 00:12:34.458871 containerd[1534]: time="2025-07-11T00:12:34.458839624Z" level=error msg="ContainerStatus for \"c7f22f2e048c17110319fc29e27824ec5d39144197fd0fee1a749260ae3e0318\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"c7f22f2e048c17110319fc29e27824ec5d39144197fd0fee1a749260ae3e0318\": not found" Jul 11 00:12:34.458937 kubelet[2631]: E0711 00:12:34.458928 2631 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc 
= an error occurred when try to find container \"c7f22f2e048c17110319fc29e27824ec5d39144197fd0fee1a749260ae3e0318\": not found" containerID="c7f22f2e048c17110319fc29e27824ec5d39144197fd0fee1a749260ae3e0318" Jul 11 00:12:34.458967 kubelet[2631]: I0711 00:12:34.458943 2631 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"c7f22f2e048c17110319fc29e27824ec5d39144197fd0fee1a749260ae3e0318"} err="failed to get container status \"c7f22f2e048c17110319fc29e27824ec5d39144197fd0fee1a749260ae3e0318\": rpc error: code = NotFound desc = an error occurred when try to find container \"c7f22f2e048c17110319fc29e27824ec5d39144197fd0fee1a749260ae3e0318\": not found" Jul 11 00:12:34.458967 kubelet[2631]: I0711 00:12:34.458953 2631 scope.go:117] "RemoveContainer" containerID="913c861cb1678f94fc1a1b2d7c5a72f8376db4f71a5d2cae8e234c89f65d1c44" Jul 11 00:12:34.459767 containerd[1534]: time="2025-07-11T00:12:34.459727392Z" level=info msg="RemoveContainer for \"913c861cb1678f94fc1a1b2d7c5a72f8376db4f71a5d2cae8e234c89f65d1c44\"" Jul 11 00:12:34.461905 containerd[1534]: time="2025-07-11T00:12:34.461879953Z" level=info msg="RemoveContainer for \"913c861cb1678f94fc1a1b2d7c5a72f8376db4f71a5d2cae8e234c89f65d1c44\" returns successfully" Jul 11 00:12:34.462037 kubelet[2631]: I0711 00:12:34.462016 2631 scope.go:117] "RemoveContainer" containerID="913c861cb1678f94fc1a1b2d7c5a72f8376db4f71a5d2cae8e234c89f65d1c44" Jul 11 00:12:34.462337 containerd[1534]: time="2025-07-11T00:12:34.462148144Z" level=error msg="ContainerStatus for \"913c861cb1678f94fc1a1b2d7c5a72f8376db4f71a5d2cae8e234c89f65d1c44\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"913c861cb1678f94fc1a1b2d7c5a72f8376db4f71a5d2cae8e234c89f65d1c44\": not found" Jul 11 00:12:34.462425 kubelet[2631]: E0711 00:12:34.462263 2631 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to 
find container \"913c861cb1678f94fc1a1b2d7c5a72f8376db4f71a5d2cae8e234c89f65d1c44\": not found" containerID="913c861cb1678f94fc1a1b2d7c5a72f8376db4f71a5d2cae8e234c89f65d1c44" Jul 11 00:12:34.462425 kubelet[2631]: I0711 00:12:34.462288 2631 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"913c861cb1678f94fc1a1b2d7c5a72f8376db4f71a5d2cae8e234c89f65d1c44"} err="failed to get container status \"913c861cb1678f94fc1a1b2d7c5a72f8376db4f71a5d2cae8e234c89f65d1c44\": rpc error: code = NotFound desc = an error occurred when try to find container \"913c861cb1678f94fc1a1b2d7c5a72f8376db4f71a5d2cae8e234c89f65d1c44\": not found" Jul 11 00:12:34.582948 systemd[1]: var-lib-kubelet-pods-a16dffc6\x2dd0f3\x2d43c2\x2dbddb\x2d2878eb043b7c-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dvctr9.mount: Deactivated successfully. Jul 11 00:12:34.583092 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a6e9a7a915de369f086fa68e8acf2e1e6a82c3186dc40d5153cc166570164a2e-rootfs.mount: Deactivated successfully. Jul 11 00:12:34.583175 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-a6e9a7a915de369f086fa68e8acf2e1e6a82c3186dc40d5153cc166570164a2e-shm.mount: Deactivated successfully. Jul 11 00:12:34.583254 systemd[1]: var-lib-kubelet-pods-98d7c007\x2df381\x2d4c30\x2dbc42\x2d3c41fe72d679-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dt8lnq.mount: Deactivated successfully. Jul 11 00:12:34.583339 systemd[1]: var-lib-kubelet-pods-98d7c007\x2df381\x2d4c30\x2dbc42\x2d3c41fe72d679-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Jul 11 00:12:34.583438 systemd[1]: var-lib-kubelet-pods-98d7c007\x2df381\x2d4c30\x2dbc42\x2d3c41fe72d679-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. 
Jul 11 00:12:35.533041 sshd[4253]: pam_unix(sshd:session): session closed for user core Jul 11 00:12:35.543634 systemd[1]: Started sshd@22-10.0.0.50:22-10.0.0.1:55098.service - OpenSSH per-connection server daemon (10.0.0.1:55098). Jul 11 00:12:35.544023 systemd[1]: sshd@21-10.0.0.50:22-10.0.0.1:54004.service: Deactivated successfully. Jul 11 00:12:35.546701 systemd[1]: session-22.scope: Deactivated successfully. Jul 11 00:12:35.547290 systemd-logind[1517]: Session 22 logged out. Waiting for processes to exit. Jul 11 00:12:35.548156 systemd-logind[1517]: Removed session 22. Jul 11 00:12:35.576269 sshd[4423]: Accepted publickey for core from 10.0.0.1 port 55098 ssh2: RSA SHA256:GK2LEBRiSxxQSb7NJczWsRz9vp5Z0addujXbSKx/c/M Jul 11 00:12:35.577598 sshd[4423]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 00:12:35.581417 systemd-logind[1517]: New session 23 of user core. Jul 11 00:12:35.589681 systemd[1]: Started session-23.scope - Session 23 of User core. Jul 11 00:12:36.236491 kubelet[2631]: I0711 00:12:36.235736 2631 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="98d7c007-f381-4c30-bc42-3c41fe72d679" path="/var/lib/kubelet/pods/98d7c007-f381-4c30-bc42-3c41fe72d679/volumes" Jul 11 00:12:36.236491 kubelet[2631]: I0711 00:12:36.236238 2631 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a16dffc6-d0f3-43c2-bddb-2878eb043b7c" path="/var/lib/kubelet/pods/a16dffc6-d0f3-43c2-bddb-2878eb043b7c/volumes" Jul 11 00:12:36.296434 kubelet[2631]: E0711 00:12:36.296407 2631 kubelet.go:2902] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jul 11 00:12:36.761782 sshd[4423]: pam_unix(sshd:session): session closed for user core Jul 11 00:12:36.769757 systemd[1]: Started sshd@23-10.0.0.50:22-10.0.0.1:55114.service - OpenSSH per-connection server daemon (10.0.0.1:55114). 
Jul 11 00:12:36.774212 systemd[1]: sshd@22-10.0.0.50:22-10.0.0.1:55098.service: Deactivated successfully. Jul 11 00:12:36.783716 kubelet[2631]: E0711 00:12:36.780540 2631 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="98d7c007-f381-4c30-bc42-3c41fe72d679" containerName="apply-sysctl-overwrites" Jul 11 00:12:36.783716 kubelet[2631]: E0711 00:12:36.780572 2631 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="a16dffc6-d0f3-43c2-bddb-2878eb043b7c" containerName="cilium-operator" Jul 11 00:12:36.783716 kubelet[2631]: E0711 00:12:36.780580 2631 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="98d7c007-f381-4c30-bc42-3c41fe72d679" containerName="clean-cilium-state" Jul 11 00:12:36.783716 kubelet[2631]: E0711 00:12:36.780585 2631 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="98d7c007-f381-4c30-bc42-3c41fe72d679" containerName="cilium-agent" Jul 11 00:12:36.783716 kubelet[2631]: E0711 00:12:36.780590 2631 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="98d7c007-f381-4c30-bc42-3c41fe72d679" containerName="mount-cgroup" Jul 11 00:12:36.783716 kubelet[2631]: E0711 00:12:36.780597 2631 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="98d7c007-f381-4c30-bc42-3c41fe72d679" containerName="mount-bpf-fs" Jul 11 00:12:36.783716 kubelet[2631]: I0711 00:12:36.780627 2631 memory_manager.go:354] "RemoveStaleState removing state" podUID="98d7c007-f381-4c30-bc42-3c41fe72d679" containerName="cilium-agent" Jul 11 00:12:36.783716 kubelet[2631]: I0711 00:12:36.780636 2631 memory_manager.go:354] "RemoveStaleState removing state" podUID="a16dffc6-d0f3-43c2-bddb-2878eb043b7c" containerName="cilium-operator" Jul 11 00:12:36.781754 systemd[1]: session-23.scope: Deactivated successfully. Jul 11 00:12:36.788766 systemd-logind[1517]: Session 23 logged out. Waiting for processes to exit. Jul 11 00:12:36.794444 systemd-logind[1517]: Removed session 23. 
Jul 11 00:12:36.823803 sshd[4437]: Accepted publickey for core from 10.0.0.1 port 55114 ssh2: RSA SHA256:GK2LEBRiSxxQSb7NJczWsRz9vp5Z0addujXbSKx/c/M Jul 11 00:12:36.825116 sshd[4437]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 00:12:36.829453 systemd-logind[1517]: New session 24 of user core. Jul 11 00:12:36.839665 systemd[1]: Started session-24.scope - Session 24 of User core. Jul 11 00:12:36.854005 kubelet[2631]: I0711 00:12:36.853928 2631 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/6798440c-0268-435c-a0b3-0be9af7f670a-host-proc-sys-net\") pod \"cilium-hpt8f\" (UID: \"6798440c-0268-435c-a0b3-0be9af7f670a\") " pod="kube-system/cilium-hpt8f" Jul 11 00:12:36.854286 kubelet[2631]: I0711 00:12:36.854147 2631 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/6798440c-0268-435c-a0b3-0be9af7f670a-cni-path\") pod \"cilium-hpt8f\" (UID: \"6798440c-0268-435c-a0b3-0be9af7f670a\") " pod="kube-system/cilium-hpt8f" Jul 11 00:12:36.854286 kubelet[2631]: I0711 00:12:36.854176 2631 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/6798440c-0268-435c-a0b3-0be9af7f670a-hubble-tls\") pod \"cilium-hpt8f\" (UID: \"6798440c-0268-435c-a0b3-0be9af7f670a\") " pod="kube-system/cilium-hpt8f" Jul 11 00:12:36.854286 kubelet[2631]: I0711 00:12:36.854247 2631 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6798440c-0268-435c-a0b3-0be9af7f670a-xtables-lock\") pod \"cilium-hpt8f\" (UID: \"6798440c-0268-435c-a0b3-0be9af7f670a\") " pod="kube-system/cilium-hpt8f" Jul 11 00:12:36.854286 kubelet[2631]: I0711 00:12:36.854267 2631 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/6798440c-0268-435c-a0b3-0be9af7f670a-clustermesh-secrets\") pod \"cilium-hpt8f\" (UID: \"6798440c-0268-435c-a0b3-0be9af7f670a\") " pod="kube-system/cilium-hpt8f" Jul 11 00:12:36.854596 kubelet[2631]: I0711 00:12:36.854463 2631 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/6798440c-0268-435c-a0b3-0be9af7f670a-cilium-ipsec-secrets\") pod \"cilium-hpt8f\" (UID: \"6798440c-0268-435c-a0b3-0be9af7f670a\") " pod="kube-system/cilium-hpt8f" Jul 11 00:12:36.854596 kubelet[2631]: I0711 00:12:36.854504 2631 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6798440c-0268-435c-a0b3-0be9af7f670a-etc-cni-netd\") pod \"cilium-hpt8f\" (UID: \"6798440c-0268-435c-a0b3-0be9af7f670a\") " pod="kube-system/cilium-hpt8f" Jul 11 00:12:36.854596 kubelet[2631]: I0711 00:12:36.854556 2631 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6798440c-0268-435c-a0b3-0be9af7f670a-lib-modules\") pod \"cilium-hpt8f\" (UID: \"6798440c-0268-435c-a0b3-0be9af7f670a\") " pod="kube-system/cilium-hpt8f" Jul 11 00:12:36.854596 kubelet[2631]: I0711 00:12:36.854574 2631 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/6798440c-0268-435c-a0b3-0be9af7f670a-host-proc-sys-kernel\") pod \"cilium-hpt8f\" (UID: \"6798440c-0268-435c-a0b3-0be9af7f670a\") " pod="kube-system/cilium-hpt8f" Jul 11 00:12:36.854884 kubelet[2631]: I0711 00:12:36.854746 2631 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7kphj\" 
(UniqueName: \"kubernetes.io/projected/6798440c-0268-435c-a0b3-0be9af7f670a-kube-api-access-7kphj\") pod \"cilium-hpt8f\" (UID: \"6798440c-0268-435c-a0b3-0be9af7f670a\") " pod="kube-system/cilium-hpt8f" Jul 11 00:12:36.854884 kubelet[2631]: I0711 00:12:36.854780 2631 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/6798440c-0268-435c-a0b3-0be9af7f670a-bpf-maps\") pod \"cilium-hpt8f\" (UID: \"6798440c-0268-435c-a0b3-0be9af7f670a\") " pod="kube-system/cilium-hpt8f" Jul 11 00:12:36.854884 kubelet[2631]: I0711 00:12:36.854795 2631 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/6798440c-0268-435c-a0b3-0be9af7f670a-hostproc\") pod \"cilium-hpt8f\" (UID: \"6798440c-0268-435c-a0b3-0be9af7f670a\") " pod="kube-system/cilium-hpt8f" Jul 11 00:12:36.854884 kubelet[2631]: I0711 00:12:36.854845 2631 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/6798440c-0268-435c-a0b3-0be9af7f670a-cilium-cgroup\") pod \"cilium-hpt8f\" (UID: \"6798440c-0268-435c-a0b3-0be9af7f670a\") " pod="kube-system/cilium-hpt8f" Jul 11 00:12:36.854884 kubelet[2631]: I0711 00:12:36.854861 2631 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/6798440c-0268-435c-a0b3-0be9af7f670a-cilium-run\") pod \"cilium-hpt8f\" (UID: \"6798440c-0268-435c-a0b3-0be9af7f670a\") " pod="kube-system/cilium-hpt8f" Jul 11 00:12:36.855066 kubelet[2631]: I0711 00:12:36.855033 2631 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6798440c-0268-435c-a0b3-0be9af7f670a-cilium-config-path\") pod \"cilium-hpt8f\" (UID: 
\"6798440c-0268-435c-a0b3-0be9af7f670a\") " pod="kube-system/cilium-hpt8f" Jul 11 00:12:36.893105 sshd[4437]: pam_unix(sshd:session): session closed for user core Jul 11 00:12:36.900702 systemd[1]: Started sshd@24-10.0.0.50:22-10.0.0.1:55128.service - OpenSSH per-connection server daemon (10.0.0.1:55128). Jul 11 00:12:36.901081 systemd[1]: sshd@23-10.0.0.50:22-10.0.0.1:55114.service: Deactivated successfully. Jul 11 00:12:36.903799 systemd[1]: session-24.scope: Deactivated successfully. Jul 11 00:12:36.904591 systemd-logind[1517]: Session 24 logged out. Waiting for processes to exit. Jul 11 00:12:36.906064 systemd-logind[1517]: Removed session 24. Jul 11 00:12:36.933777 sshd[4446]: Accepted publickey for core from 10.0.0.1 port 55128 ssh2: RSA SHA256:GK2LEBRiSxxQSb7NJczWsRz9vp5Z0addujXbSKx/c/M Jul 11 00:12:36.935497 sshd[4446]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 00:12:36.940611 systemd-logind[1517]: New session 25 of user core. Jul 11 00:12:36.949626 systemd[1]: Started session-25.scope - Session 25 of User core. Jul 11 00:12:37.085041 kubelet[2631]: E0711 00:12:37.084906 2631 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:12:37.086138 containerd[1534]: time="2025-07-11T00:12:37.085635695Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-hpt8f,Uid:6798440c-0268-435c-a0b3-0be9af7f670a,Namespace:kube-system,Attempt:0,}" Jul 11 00:12:37.104947 containerd[1534]: time="2025-07-11T00:12:37.104846109Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 11 00:12:37.104947 containerd[1534]: time="2025-07-11T00:12:37.104908987Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 11 00:12:37.104947 containerd[1534]: time="2025-07-11T00:12:37.104921147Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 11 00:12:37.105217 containerd[1534]: time="2025-07-11T00:12:37.105037543Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 11 00:12:37.136009 containerd[1534]: time="2025-07-11T00:12:37.135959640Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-hpt8f,Uid:6798440c-0268-435c-a0b3-0be9af7f670a,Namespace:kube-system,Attempt:0,} returns sandbox id \"16ef81e9ddd73e6b001c9b38c7567c7aa2011aa45759ea2336ff468e6ccbc6e7\"" Jul 11 00:12:37.136873 kubelet[2631]: E0711 00:12:37.136841 2631 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:12:37.139085 containerd[1534]: time="2025-07-11T00:12:37.139047106Z" level=info msg="CreateContainer within sandbox \"16ef81e9ddd73e6b001c9b38c7567c7aa2011aa45759ea2336ff468e6ccbc6e7\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jul 11 00:12:37.150150 containerd[1534]: time="2025-07-11T00:12:37.150078449Z" level=info msg="CreateContainer within sandbox \"16ef81e9ddd73e6b001c9b38c7567c7aa2011aa45759ea2336ff468e6ccbc6e7\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"cb3cc59030796ec69212426c52c5dd1e5cad3d0a1ce58f1903efce547b0f1369\"" Jul 11 00:12:37.150907 containerd[1534]: time="2025-07-11T00:12:37.150763109Z" level=info msg="StartContainer for \"cb3cc59030796ec69212426c52c5dd1e5cad3d0a1ce58f1903efce547b0f1369\"" Jul 11 00:12:37.197637 containerd[1534]: time="2025-07-11T00:12:37.197583280Z" level=info msg="StartContainer for \"cb3cc59030796ec69212426c52c5dd1e5cad3d0a1ce58f1903efce547b0f1369\" returns 
successfully" Jul 11 00:12:37.233852 kubelet[2631]: E0711 00:12:37.233811 2631 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:12:37.249261 containerd[1534]: time="2025-07-11T00:12:37.249204626Z" level=info msg="shim disconnected" id=cb3cc59030796ec69212426c52c5dd1e5cad3d0a1ce58f1903efce547b0f1369 namespace=k8s.io Jul 11 00:12:37.249261 containerd[1534]: time="2025-07-11T00:12:37.249259784Z" level=warning msg="cleaning up after shim disconnected" id=cb3cc59030796ec69212426c52c5dd1e5cad3d0a1ce58f1903efce547b0f1369 namespace=k8s.io Jul 11 00:12:37.249261 containerd[1534]: time="2025-07-11T00:12:37.249271264Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 11 00:12:37.440756 kubelet[2631]: E0711 00:12:37.440726 2631 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:12:37.444426 containerd[1534]: time="2025-07-11T00:12:37.444391272Z" level=info msg="CreateContainer within sandbox \"16ef81e9ddd73e6b001c9b38c7567c7aa2011aa45759ea2336ff468e6ccbc6e7\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jul 11 00:12:37.453151 containerd[1534]: time="2025-07-11T00:12:37.453099287Z" level=info msg="CreateContainer within sandbox \"16ef81e9ddd73e6b001c9b38c7567c7aa2011aa45759ea2336ff468e6ccbc6e7\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"0d700d945e6e9e4e53319e321ab7a5fa4295a58310e2d8cc871b779debd146f1\"" Jul 11 00:12:37.454919 containerd[1534]: time="2025-07-11T00:12:37.454890952Z" level=info msg="StartContainer for \"0d700d945e6e9e4e53319e321ab7a5fa4295a58310e2d8cc871b779debd146f1\"" Jul 11 00:12:37.501630 containerd[1534]: time="2025-07-11T00:12:37.501592447Z" level=info msg="StartContainer for 
\"0d700d945e6e9e4e53319e321ab7a5fa4295a58310e2d8cc871b779debd146f1\" returns successfully" Jul 11 00:12:37.523792 containerd[1534]: time="2025-07-11T00:12:37.523595976Z" level=info msg="shim disconnected" id=0d700d945e6e9e4e53319e321ab7a5fa4295a58310e2d8cc871b779debd146f1 namespace=k8s.io Jul 11 00:12:37.523792 containerd[1534]: time="2025-07-11T00:12:37.523649695Z" level=warning msg="cleaning up after shim disconnected" id=0d700d945e6e9e4e53319e321ab7a5fa4295a58310e2d8cc871b779debd146f1 namespace=k8s.io Jul 11 00:12:37.523792 containerd[1534]: time="2025-07-11T00:12:37.523658294Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 11 00:12:37.884073 kubelet[2631]: I0711 00:12:37.884014 2631 setters.go:600] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-07-11T00:12:37Z","lastTransitionTime":"2025-07-11T00:12:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Jul 11 00:12:38.443994 kubelet[2631]: E0711 00:12:38.443939 2631 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:12:38.446922 containerd[1534]: time="2025-07-11T00:12:38.446882483Z" level=info msg="CreateContainer within sandbox \"16ef81e9ddd73e6b001c9b38c7567c7aa2011aa45759ea2336ff468e6ccbc6e7\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jul 11 00:12:38.465921 containerd[1534]: time="2025-07-11T00:12:38.465867859Z" level=info msg="CreateContainer within sandbox \"16ef81e9ddd73e6b001c9b38c7567c7aa2011aa45759ea2336ff468e6ccbc6e7\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"999be4d39425bf0ceb6e9fc412dc424e5dffed92c21717f793b99342928ef962\"" Jul 11 00:12:38.466570 containerd[1534]: 
time="2025-07-11T00:12:38.466538400Z" level=info msg="StartContainer for \"999be4d39425bf0ceb6e9fc412dc424e5dffed92c21717f793b99342928ef962\"" Jul 11 00:12:38.511281 containerd[1534]: time="2025-07-11T00:12:38.511222080Z" level=info msg="StartContainer for \"999be4d39425bf0ceb6e9fc412dc424e5dffed92c21717f793b99342928ef962\" returns successfully" Jul 11 00:12:38.537311 containerd[1534]: time="2025-07-11T00:12:38.537082980Z" level=info msg="shim disconnected" id=999be4d39425bf0ceb6e9fc412dc424e5dffed92c21717f793b99342928ef962 namespace=k8s.io Jul 11 00:12:38.537311 containerd[1534]: time="2025-07-11T00:12:38.537134818Z" level=warning msg="cleaning up after shim disconnected" id=999be4d39425bf0ceb6e9fc412dc424e5dffed92c21717f793b99342928ef962 namespace=k8s.io Jul 11 00:12:38.537311 containerd[1534]: time="2025-07-11T00:12:38.537144178Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 11 00:12:38.961859 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-999be4d39425bf0ceb6e9fc412dc424e5dffed92c21717f793b99342928ef962-rootfs.mount: Deactivated successfully. Jul 11 00:12:39.451221 kubelet[2631]: E0711 00:12:39.450375 2631 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:12:39.454399 containerd[1534]: time="2025-07-11T00:12:39.454333604Z" level=info msg="CreateContainer within sandbox \"16ef81e9ddd73e6b001c9b38c7567c7aa2011aa45759ea2336ff468e6ccbc6e7\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jul 11 00:12:39.464824 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount429756150.mount: Deactivated successfully. 
Jul 11 00:12:39.467516 containerd[1534]: time="2025-07-11T00:12:39.467476611Z" level=info msg="CreateContainer within sandbox \"16ef81e9ddd73e6b001c9b38c7567c7aa2011aa45759ea2336ff468e6ccbc6e7\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"a2b2a43310df9740111f7e48a80a68fe173fff370e8b29dd160ac66462c26a73\""
Jul 11 00:12:39.469427 containerd[1534]: time="2025-07-11T00:12:39.469358840Z" level=info msg="StartContainer for \"a2b2a43310df9740111f7e48a80a68fe173fff370e8b29dd160ac66462c26a73\""
Jul 11 00:12:39.509220 containerd[1534]: time="2025-07-11T00:12:39.509161132Z" level=info msg="StartContainer for \"a2b2a43310df9740111f7e48a80a68fe173fff370e8b29dd160ac66462c26a73\" returns successfully"
Jul 11 00:12:39.524623 containerd[1534]: time="2025-07-11T00:12:39.524564799Z" level=info msg="shim disconnected" id=a2b2a43310df9740111f7e48a80a68fe173fff370e8b29dd160ac66462c26a73 namespace=k8s.io
Jul 11 00:12:39.524623 containerd[1534]: time="2025-07-11T00:12:39.524612957Z" level=warning msg="cleaning up after shim disconnected" id=a2b2a43310df9740111f7e48a80a68fe173fff370e8b29dd160ac66462c26a73 namespace=k8s.io
Jul 11 00:12:39.524623 containerd[1534]: time="2025-07-11T00:12:39.524623597Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 11 00:12:39.961915 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a2b2a43310df9740111f7e48a80a68fe173fff370e8b29dd160ac66462c26a73-rootfs.mount: Deactivated successfully.
Jul 11 00:12:40.454057 kubelet[2631]: E0711 00:12:40.454019 2631 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 11 00:12:40.457114 containerd[1534]: time="2025-07-11T00:12:40.457059726Z" level=info msg="CreateContainer within sandbox \"16ef81e9ddd73e6b001c9b38c7567c7aa2011aa45759ea2336ff468e6ccbc6e7\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Jul 11 00:12:40.471604 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount588822911.mount: Deactivated successfully.
Jul 11 00:12:40.474693 containerd[1534]: time="2025-07-11T00:12:40.474644045Z" level=info msg="CreateContainer within sandbox \"16ef81e9ddd73e6b001c9b38c7567c7aa2011aa45759ea2336ff468e6ccbc6e7\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"cef45187ce5a33e44b69ab5a743f39a6ed7c042b88dc97473aee7f695d4ec058\""
Jul 11 00:12:40.475367 containerd[1534]: time="2025-07-11T00:12:40.475311908Z" level=info msg="StartContainer for \"cef45187ce5a33e44b69ab5a743f39a6ed7c042b88dc97473aee7f695d4ec058\""
Jul 11 00:12:40.523809 containerd[1534]: time="2025-07-11T00:12:40.523762372Z" level=info msg="StartContainer for \"cef45187ce5a33e44b69ab5a743f39a6ed7c042b88dc97473aee7f695d4ec058\" returns successfully"
Jul 11 00:12:40.782395 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce))
Jul 11 00:12:41.458509 kubelet[2631]: E0711 00:12:41.458470 2631 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 11 00:12:41.473479 kubelet[2631]: I0711 00:12:41.473428 2631 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-hpt8f" podStartSLOduration=5.473413301 podStartE2EDuration="5.473413301s" podCreationTimestamp="2025-07-11 00:12:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-11 00:12:41.472553281 +0000 UTC m=+75.307351721" watchObservedRunningTime="2025-07-11 00:12:41.473413301 +0000 UTC m=+75.308211741"
Jul 11 00:12:43.087777 kubelet[2631]: E0711 00:12:43.087739 2631 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 11 00:12:43.316403 kubelet[2631]: E0711 00:12:43.316350 2631 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:46958->127.0.0.1:44615: write tcp 127.0.0.1:46958->127.0.0.1:44615: write: broken pipe
Jul 11 00:12:43.524646 systemd-networkd[1227]: lxc_health: Link UP
Jul 11 00:12:43.536514 systemd-networkd[1227]: lxc_health: Gained carrier
Jul 11 00:12:44.235136 kubelet[2631]: E0711 00:12:44.234779 2631 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 11 00:12:45.087382 kubelet[2631]: E0711 00:12:45.087325 2631 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 11 00:12:45.467013 kubelet[2631]: E0711 00:12:45.466985 2631 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 11 00:12:45.543607 systemd-networkd[1227]: lxc_health: Gained IPv6LL
Jul 11 00:12:46.234761 kubelet[2631]: E0711 00:12:46.234717 2631 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 11 00:12:46.468209 kubelet[2631]: E0711 00:12:46.468168 2631 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 11 00:12:49.234111 kubelet[2631]: E0711 00:12:49.234071 2631 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 11 00:12:49.661573 sshd[4446]: pam_unix(sshd:session): session closed for user core
Jul 11 00:12:49.665095 systemd[1]: sshd@24-10.0.0.50:22-10.0.0.1:55128.service: Deactivated successfully.
Jul 11 00:12:49.667880 systemd-logind[1517]: Session 25 logged out. Waiting for processes to exit.
Jul 11 00:12:49.667981 systemd[1]: session-25.scope: Deactivated successfully.
Jul 11 00:12:49.670022 systemd-logind[1517]: Removed session 25.