Jul 11 00:21:37.897654 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Jul 11 00:21:37.897675 kernel: Linux version 6.6.96-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Thu Jul 10 22:41:52 -00 2025
Jul 11 00:21:37.897685 kernel: KASLR enabled
Jul 11 00:21:37.897691 kernel: efi: EFI v2.7 by EDK II
Jul 11 00:21:37.897697 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdba86018 ACPI 2.0=0xd9710018 RNG=0xd971e498 MEMRESERVE=0xd9b43d18
Jul 11 00:21:37.897703 kernel: random: crng init done
Jul 11 00:21:37.897710 kernel: ACPI: Early table checksum verification disabled
Jul 11 00:21:37.897716 kernel: ACPI: RSDP 0x00000000D9710018 000024 (v02 BOCHS )
Jul 11 00:21:37.897722 kernel: ACPI: XSDT 0x00000000D971FE98 000064 (v01 BOCHS BXPC 00000001 01000013)
Jul 11 00:21:37.897730 kernel: ACPI: FACP 0x00000000D971FA98 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Jul 11 00:21:37.897736 kernel: ACPI: DSDT 0x00000000D9717518 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jul 11 00:21:37.897742 kernel: ACPI: APIC 0x00000000D971FC18 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Jul 11 00:21:37.897748 kernel: ACPI: PPTT 0x00000000D971D898 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jul 11 00:21:37.897755 kernel: ACPI: GTDT 0x00000000D971E818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jul 11 00:21:37.897762 kernel: ACPI: MCFG 0x00000000D971E918 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 11 00:21:37.897769 kernel: ACPI: SPCR 0x00000000D971FF98 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jul 11 00:21:37.897776 kernel: ACPI: DBG2 0x00000000D971E418 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Jul 11 00:21:37.897782 kernel: ACPI: IORT 0x00000000D971E718 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jul 11 00:21:37.897789 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
Jul 11 00:21:37.897795 kernel: NUMA: Failed to initialise from firmware
Jul 11 00:21:37.897801 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
Jul 11 00:21:37.897807 kernel: NUMA: NODE_DATA [mem 0xdc958800-0xdc95dfff]
Jul 11 00:21:37.897814 kernel: Zone ranges:
Jul 11 00:21:37.897820 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff]
Jul 11 00:21:37.897826 kernel: DMA32 empty
Jul 11 00:21:37.897833 kernel: Normal empty
Jul 11 00:21:37.897840 kernel: Movable zone start for each node
Jul 11 00:21:37.897846 kernel: Early memory node ranges
Jul 11 00:21:37.897852 kernel: node 0: [mem 0x0000000040000000-0x00000000d976ffff]
Jul 11 00:21:37.897865 kernel: node 0: [mem 0x00000000d9770000-0x00000000d9b3ffff]
Jul 11 00:21:37.897872 kernel: node 0: [mem 0x00000000d9b40000-0x00000000dce1ffff]
Jul 11 00:21:37.897878 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff]
Jul 11 00:21:37.897884 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff]
Jul 11 00:21:37.897891 kernel: node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff]
Jul 11 00:21:37.897897 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff]
Jul 11 00:21:37.897903 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
Jul 11 00:21:37.897910 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
Jul 11 00:21:37.897918 kernel: psci: probing for conduit method from ACPI.
Jul 11 00:21:37.897924 kernel: psci: PSCIv1.1 detected in firmware.
Jul 11 00:21:37.897931 kernel: psci: Using standard PSCI v0.2 function IDs
Jul 11 00:21:37.897940 kernel: psci: Trusted OS migration not required
Jul 11 00:21:37.897947 kernel: psci: SMC Calling Convention v1.1
Jul 11 00:21:37.897954 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Jul 11 00:21:37.897962 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976
Jul 11 00:21:37.897968 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096
Jul 11 00:21:37.897990 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
Jul 11 00:21:37.897997 kernel: Detected PIPT I-cache on CPU0
Jul 11 00:21:37.898004 kernel: CPU features: detected: GIC system register CPU interface
Jul 11 00:21:37.898012 kernel: CPU features: detected: Hardware dirty bit management
Jul 11 00:21:37.898018 kernel: CPU features: detected: Spectre-v4
Jul 11 00:21:37.898025 kernel: CPU features: detected: Spectre-BHB
Jul 11 00:21:37.898032 kernel: CPU features: kernel page table isolation forced ON by KASLR
Jul 11 00:21:37.898039 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Jul 11 00:21:37.898046 kernel: CPU features: detected: ARM erratum 1418040
Jul 11 00:21:37.898065 kernel: CPU features: detected: SSBS not fully self-synchronizing
Jul 11 00:21:37.898072 kernel: alternatives: applying boot alternatives
Jul 11 00:21:37.898079 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=1479f76954ab5eb3c0ce800eb2a80ad04b273ff773a5af5c1fe82fb8feef2990
Jul 11 00:21:37.898087 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jul 11 00:21:37.898094 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jul 11 00:21:37.898101 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jul 11 00:21:37.898108 kernel: Fallback order for Node 0: 0
Jul 11 00:21:37.898116 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024
Jul 11 00:21:37.898123 kernel: Policy zone: DMA
Jul 11 00:21:37.898130 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jul 11 00:21:37.898139 kernel: software IO TLB: area num 4.
Jul 11 00:21:37.898146 kernel: software IO TLB: mapped [mem 0x00000000d2e00000-0x00000000d6e00000] (64MB)
Jul 11 00:21:37.898153 kernel: Memory: 2386404K/2572288K available (10304K kernel code, 2186K rwdata, 8108K rodata, 39424K init, 897K bss, 185884K reserved, 0K cma-reserved)
Jul 11 00:21:37.898161 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Jul 11 00:21:37.898168 kernel: rcu: Preemptible hierarchical RCU implementation.
Jul 11 00:21:37.898175 kernel: rcu: RCU event tracing is enabled.
Jul 11 00:21:37.898182 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Jul 11 00:21:37.898189 kernel: Trampoline variant of Tasks RCU enabled.
Jul 11 00:21:37.898196 kernel: Tracing variant of Tasks RCU enabled.
Jul 11 00:21:37.898203 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jul 11 00:21:37.898210 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Jul 11 00:21:37.898216 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Jul 11 00:21:37.898225 kernel: GICv3: 256 SPIs implemented
Jul 11 00:21:37.898231 kernel: GICv3: 0 Extended SPIs implemented
Jul 11 00:21:37.898238 kernel: Root IRQ handler: gic_handle_irq
Jul 11 00:21:37.898244 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Jul 11 00:21:37.898251 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Jul 11 00:21:37.898258 kernel: ITS [mem 0x08080000-0x0809ffff]
Jul 11 00:21:37.898264 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400c0000 (indirect, esz 8, psz 64K, shr 1)
Jul 11 00:21:37.898271 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400d0000 (flat, esz 8, psz 64K, shr 1)
Jul 11 00:21:37.898278 kernel: GICv3: using LPI property table @0x00000000400f0000
Jul 11 00:21:37.898285 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040100000
Jul 11 00:21:37.898291 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jul 11 00:21:37.898299 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 11 00:21:37.898306 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Jul 11 00:21:37.898313 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Jul 11 00:21:37.898320 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Jul 11 00:21:37.898326 kernel: arm-pv: using stolen time PV
Jul 11 00:21:37.898333 kernel: Console: colour dummy device 80x25
Jul 11 00:21:37.898340 kernel: ACPI: Core revision 20230628
Jul 11 00:21:37.898347 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Jul 11 00:21:37.898354 kernel: pid_max: default: 32768 minimum: 301
Jul 11 00:21:37.898361 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jul 11 00:21:37.898369 kernel: landlock: Up and running.
Jul 11 00:21:37.898376 kernel: SELinux: Initializing.
Jul 11 00:21:37.898383 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jul 11 00:21:37.898390 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jul 11 00:21:37.898397 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jul 11 00:21:37.898404 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jul 11 00:21:37.898411 kernel: rcu: Hierarchical SRCU implementation.
Jul 11 00:21:37.898418 kernel: rcu: Max phase no-delay instances is 400.
Jul 11 00:21:37.898425 kernel: Platform MSI: ITS@0x8080000 domain created
Jul 11 00:21:37.898433 kernel: PCI/MSI: ITS@0x8080000 domain created
Jul 11 00:21:37.898440 kernel: Remapping and enabling EFI services.
Jul 11 00:21:37.898447 kernel: smp: Bringing up secondary CPUs ...
Jul 11 00:21:37.898454 kernel: Detected PIPT I-cache on CPU1
Jul 11 00:21:37.898461 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Jul 11 00:21:37.898468 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040110000
Jul 11 00:21:37.898476 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 11 00:21:37.898483 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Jul 11 00:21:37.898489 kernel: Detected PIPT I-cache on CPU2
Jul 11 00:21:37.898496 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
Jul 11 00:21:37.898505 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040120000
Jul 11 00:21:37.898512 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 11 00:21:37.898523 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
Jul 11 00:21:37.898532 kernel: Detected PIPT I-cache on CPU3
Jul 11 00:21:37.898539 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
Jul 11 00:21:37.898547 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040130000
Jul 11 00:21:37.898554 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 11 00:21:37.898561 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
Jul 11 00:21:37.898569 kernel: smp: Brought up 1 node, 4 CPUs
Jul 11 00:21:37.898577 kernel: SMP: Total of 4 processors activated.
Jul 11 00:21:37.898585 kernel: CPU features: detected: 32-bit EL0 Support
Jul 11 00:21:37.898592 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Jul 11 00:21:37.898600 kernel: CPU features: detected: Common not Private translations
Jul 11 00:21:37.898607 kernel: CPU features: detected: CRC32 instructions
Jul 11 00:21:37.898614 kernel: CPU features: detected: Enhanced Virtualization Traps
Jul 11 00:21:37.898622 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Jul 11 00:21:37.898629 kernel: CPU features: detected: LSE atomic instructions
Jul 11 00:21:37.898637 kernel: CPU features: detected: Privileged Access Never
Jul 11 00:21:37.898645 kernel: CPU features: detected: RAS Extension Support
Jul 11 00:21:37.898652 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Jul 11 00:21:37.898659 kernel: CPU: All CPU(s) started at EL1
Jul 11 00:21:37.898666 kernel: alternatives: applying system-wide alternatives
Jul 11 00:21:37.898673 kernel: devtmpfs: initialized
Jul 11 00:21:37.898681 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jul 11 00:21:37.898688 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Jul 11 00:21:37.898695 kernel: pinctrl core: initialized pinctrl subsystem
Jul 11 00:21:37.898703 kernel: SMBIOS 3.0.0 present.
Jul 11 00:21:37.898710 kernel: DMI: QEMU KVM Virtual Machine, BIOS edk2-20230524-3.fc38 05/24/2023
Jul 11 00:21:37.898717 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jul 11 00:21:37.898725 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Jul 11 00:21:37.898732 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Jul 11 00:21:37.898740 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Jul 11 00:21:37.898747 kernel: audit: initializing netlink subsys (disabled)
Jul 11 00:21:37.898754 kernel: audit: type=2000 audit(0.022:1): state=initialized audit_enabled=0 res=1
Jul 11 00:21:37.898766 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jul 11 00:21:37.898774 kernel: cpuidle: using governor menu
Jul 11 00:21:37.898781 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Jul 11 00:21:37.898788 kernel: ASID allocator initialised with 32768 entries
Jul 11 00:21:37.898796 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jul 11 00:21:37.898803 kernel: Serial: AMBA PL011 UART driver
Jul 11 00:21:37.898810 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Jul 11 00:21:37.898817 kernel: Modules: 0 pages in range for non-PLT usage
Jul 11 00:21:37.898824 kernel: Modules: 509008 pages in range for PLT usage
Jul 11 00:21:37.898832 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jul 11 00:21:37.898840 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Jul 11 00:21:37.898847 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Jul 11 00:21:37.898859 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Jul 11 00:21:37.898866 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jul 11 00:21:37.898873 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Jul 11 00:21:37.898881 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Jul 11 00:21:37.898888 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Jul 11 00:21:37.898895 kernel: ACPI: Added _OSI(Module Device)
Jul 11 00:21:37.898902 kernel: ACPI: Added _OSI(Processor Device)
Jul 11 00:21:37.898911 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jul 11 00:21:37.898918 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jul 11 00:21:37.898926 kernel: ACPI: Interpreter enabled
Jul 11 00:21:37.898934 kernel: ACPI: Using GIC for interrupt routing
Jul 11 00:21:37.898941 kernel: ACPI: MCFG table detected, 1 entries
Jul 11 00:21:37.898949 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Jul 11 00:21:37.898956 kernel: printk: console [ttyAMA0] enabled
Jul 11 00:21:37.898964 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jul 11 00:21:37.899116 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jul 11 00:21:37.899200 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Jul 11 00:21:37.899269 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Jul 11 00:21:37.899335 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Jul 11 00:21:37.899401 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Jul 11 00:21:37.899411 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Jul 11 00:21:37.899419 kernel: PCI host bridge to bus 0000:00
Jul 11 00:21:37.899489 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Jul 11 00:21:37.899551 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Jul 11 00:21:37.899618 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Jul 11 00:21:37.899677 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jul 11 00:21:37.899763 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000
Jul 11 00:21:37.899837 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00
Jul 11 00:21:37.899911 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f]
Jul 11 00:21:37.899983 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff]
Jul 11 00:21:37.900106 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref]
Jul 11 00:21:37.900193 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref]
Jul 11 00:21:37.900270 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff]
Jul 11 00:21:37.900355 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f]
Jul 11 00:21:37.900415 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Jul 11 00:21:37.900476 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Jul 11 00:21:37.900537 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Jul 11 00:21:37.900546 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Jul 11 00:21:37.900554 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Jul 11 00:21:37.900561 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Jul 11 00:21:37.900569 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Jul 11 00:21:37.900576 kernel: iommu: Default domain type: Translated
Jul 11 00:21:37.900584 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Jul 11 00:21:37.900591 kernel: efivars: Registered efivars operations
Jul 11 00:21:37.900598 kernel: vgaarb: loaded
Jul 11 00:21:37.900607 kernel: clocksource: Switched to clocksource arch_sys_counter
Jul 11 00:21:37.900614 kernel: VFS: Disk quotas dquot_6.6.0
Jul 11 00:21:37.900622 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jul 11 00:21:37.900629 kernel: pnp: PnP ACPI init
Jul 11 00:21:37.900707 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Jul 11 00:21:37.900718 kernel: pnp: PnP ACPI: found 1 devices
Jul 11 00:21:37.900726 kernel: NET: Registered PF_INET protocol family
Jul 11 00:21:37.900733 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jul 11 00:21:37.900744 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jul 11 00:21:37.900751 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jul 11 00:21:37.900759 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jul 11 00:21:37.900767 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jul 11 00:21:37.900774 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jul 11 00:21:37.900781 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jul 11 00:21:37.900789 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jul 11 00:21:37.900797 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jul 11 00:21:37.900804 kernel: PCI: CLS 0 bytes, default 64
Jul 11 00:21:37.900813 kernel: kvm [1]: HYP mode not available
Jul 11 00:21:37.900820 kernel: Initialise system trusted keyrings
Jul 11 00:21:37.900827 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jul 11 00:21:37.900834 kernel: Key type asymmetric registered
Jul 11 00:21:37.900842 kernel: Asymmetric key parser 'x509' registered
Jul 11 00:21:37.900849 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Jul 11 00:21:37.900864 kernel: io scheduler mq-deadline registered
Jul 11 00:21:37.900872 kernel: io scheduler kyber registered
Jul 11 00:21:37.900879 kernel: io scheduler bfq registered
Jul 11 00:21:37.900888 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Jul 11 00:21:37.900895 kernel: ACPI: button: Power Button [PWRB]
Jul 11 00:21:37.900903 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Jul 11 00:21:37.900973 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
Jul 11 00:21:37.900983 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jul 11 00:21:37.900990 kernel: thunder_xcv, ver 1.0
Jul 11 00:21:37.900997 kernel: thunder_bgx, ver 1.0
Jul 11 00:21:37.901004 kernel: nicpf, ver 1.0
Jul 11 00:21:37.901011 kernel: nicvf, ver 1.0
Jul 11 00:21:37.901101 kernel: rtc-efi rtc-efi.0: registered as rtc0
Jul 11 00:21:37.901166 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-07-11T00:21:37 UTC (1752193297)
Jul 11 00:21:37.901176 kernel: hid: raw HID events driver (C) Jiri Kosina
Jul 11 00:21:37.901184 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available
Jul 11 00:21:37.901191 kernel: watchdog: Delayed init of the lockup detector failed: -19
Jul 11 00:21:37.901198 kernel: watchdog: Hard watchdog permanently disabled
Jul 11 00:21:37.901206 kernel: NET: Registered PF_INET6 protocol family
Jul 11 00:21:37.901213 kernel: Segment Routing with IPv6
Jul 11 00:21:37.901222 kernel: In-situ OAM (IOAM) with IPv6
Jul 11 00:21:37.901229 kernel: NET: Registered PF_PACKET protocol family
Jul 11 00:21:37.901237 kernel: Key type dns_resolver registered
Jul 11 00:21:37.901244 kernel: registered taskstats version 1
Jul 11 00:21:37.901251 kernel: Loading compiled-in X.509 certificates
Jul 11 00:21:37.901258 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.96-flatcar: 9d58afa0c1753353480d5539f26f662c9ce000cb'
Jul 11 00:21:37.901265 kernel: Key type .fscrypt registered
Jul 11 00:21:37.901273 kernel: Key type fscrypt-provisioning registered
Jul 11 00:21:37.901280 kernel: ima: No TPM chip found, activating TPM-bypass!
Jul 11 00:21:37.901289 kernel: ima: Allocated hash algorithm: sha1
Jul 11 00:21:37.901296 kernel: ima: No architecture policies found
Jul 11 00:21:37.901303 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Jul 11 00:21:37.901311 kernel: clk: Disabling unused clocks
Jul 11 00:21:37.901318 kernel: Freeing unused kernel memory: 39424K
Jul 11 00:21:37.901325 kernel: Run /init as init process
Jul 11 00:21:37.901332 kernel: with arguments:
Jul 11 00:21:37.901344 kernel: /init
Jul 11 00:21:37.901352 kernel: with environment:
Jul 11 00:21:37.901360 kernel: HOME=/
Jul 11 00:21:37.901367 kernel: TERM=linux
Jul 11 00:21:37.901374 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jul 11 00:21:37.901383 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jul 11 00:21:37.901393 systemd[1]: Detected virtualization kvm.
Jul 11 00:21:37.901401 systemd[1]: Detected architecture arm64.
Jul 11 00:21:37.901408 systemd[1]: Running in initrd.
Jul 11 00:21:37.901416 systemd[1]: No hostname configured, using default hostname.
Jul 11 00:21:37.901425 systemd[1]: Hostname set to <localhost>.
Jul 11 00:21:37.901434 systemd[1]: Initializing machine ID from VM UUID.
Jul 11 00:21:37.901442 systemd[1]: Queued start job for default target initrd.target.
Jul 11 00:21:37.901450 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 11 00:21:37.901458 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 11 00:21:37.901466 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jul 11 00:21:37.901474 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jul 11 00:21:37.901482 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jul 11 00:21:37.901492 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jul 11 00:21:37.901501 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jul 11 00:21:37.901510 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jul 11 00:21:37.901518 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 11 00:21:37.901526 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jul 11 00:21:37.901534 systemd[1]: Reached target paths.target - Path Units.
Jul 11 00:21:37.901544 systemd[1]: Reached target slices.target - Slice Units.
Jul 11 00:21:37.901552 systemd[1]: Reached target swap.target - Swaps.
Jul 11 00:21:37.901560 systemd[1]: Reached target timers.target - Timer Units.
Jul 11 00:21:37.901568 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jul 11 00:21:37.901576 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jul 11 00:21:37.901584 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jul 11 00:21:37.901592 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jul 11 00:21:37.901603 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jul 11 00:21:37.901611 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jul 11 00:21:37.901632 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 11 00:21:37.901640 systemd[1]: Reached target sockets.target - Socket Units.
Jul 11 00:21:37.901648 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jul 11 00:21:37.901656 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jul 11 00:21:37.901664 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jul 11 00:21:37.901672 systemd[1]: Starting systemd-fsck-usr.service...
Jul 11 00:21:37.901679 systemd[1]: Starting systemd-journald.service - Journal Service...
Jul 11 00:21:37.901687 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jul 11 00:21:37.901696 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 11 00:21:37.901705 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jul 11 00:21:37.901728 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 11 00:21:37.901736 systemd[1]: Finished systemd-fsck-usr.service.
Jul 11 00:21:37.901761 systemd-journald[239]: Collecting audit messages is disabled.
Jul 11 00:21:37.901781 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jul 11 00:21:37.901790 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 11 00:21:37.901798 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jul 11 00:21:37.901806 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jul 11 00:21:37.901817 systemd-journald[239]: Journal started
Jul 11 00:21:37.901835 systemd-journald[239]: Runtime Journal (/run/log/journal/ce98fd6c6d1144d19ebf0cab1a78ef2a) is 5.9M, max 47.3M, 41.4M free.
Jul 11 00:21:37.890710 systemd-modules-load[240]: Inserted module 'overlay'
Jul 11 00:21:37.904244 systemd-modules-load[240]: Inserted module 'br_netfilter'
Jul 11 00:21:37.905508 kernel: Bridge firewalling registered
Jul 11 00:21:37.905526 systemd[1]: Started systemd-journald.service - Journal Service.
Jul 11 00:21:37.906521 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jul 11 00:21:37.907467 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jul 11 00:21:37.911455 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jul 11 00:21:37.913454 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jul 11 00:21:37.915748 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jul 11 00:21:37.922371 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 11 00:21:37.924434 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jul 11 00:21:37.926943 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 11 00:21:37.928151 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jul 11 00:21:37.943242 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jul 11 00:21:37.945100 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jul 11 00:21:37.955366 dracut-cmdline[277]: dracut-dracut-053
Jul 11 00:21:37.957803 dracut-cmdline[277]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=1479f76954ab5eb3c0ce800eb2a80ad04b273ff773a5af5c1fe82fb8feef2990
Jul 11 00:21:37.975757 systemd-resolved[280]: Positive Trust Anchors:
Jul 11 00:21:37.975778 systemd-resolved[280]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jul 11 00:21:37.975813 systemd-resolved[280]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jul 11 00:21:37.980557 systemd-resolved[280]: Defaulting to hostname 'linux'.
Jul 11 00:21:37.981494 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jul 11 00:21:37.982792 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jul 11 00:21:38.023081 kernel: SCSI subsystem initialized
Jul 11 00:21:38.027062 kernel: Loading iSCSI transport class v2.0-870.
Jul 11 00:21:38.035088 kernel: iscsi: registered transport (tcp)
Jul 11 00:21:38.048107 kernel: iscsi: registered transport (qla4xxx)
Jul 11 00:21:38.048148 kernel: QLogic iSCSI HBA Driver
Jul 11 00:21:38.088768 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jul 11 00:21:38.101206 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jul 11 00:21:38.118379 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jul 11 00:21:38.118431 kernel: device-mapper: uevent: version 1.0.3
Jul 11 00:21:38.118443 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Jul 11 00:21:38.165076 kernel: raid6: neonx8 gen() 15777 MB/s
Jul 11 00:21:38.182068 kernel: raid6: neonx4 gen() 15656 MB/s
Jul 11 00:21:38.199063 kernel: raid6: neonx2 gen() 13246 MB/s
Jul 11 00:21:38.216065 kernel: raid6: neonx1 gen() 10489 MB/s
Jul 11 00:21:38.233065 kernel: raid6: int64x8 gen() 6959 MB/s
Jul 11 00:21:38.250065 kernel: raid6: int64x4 gen() 7349 MB/s
Jul 11 00:21:38.267065 kernel: raid6: int64x2 gen() 6128 MB/s
Jul 11 00:21:38.284064 kernel: raid6: int64x1 gen() 5052 MB/s
Jul 11 00:21:38.284080 kernel: raid6: using algorithm neonx8 gen() 15777 MB/s
Jul 11 00:21:38.301071 kernel: raid6: .... xor() 11938 MB/s, rmw enabled
Jul 11 00:21:38.301088 kernel: raid6: using neon recovery algorithm
Jul 11 00:21:38.306069 kernel: xor: measuring software checksum speed
Jul 11 00:21:38.306087 kernel: 8regs : 19778 MB/sec
Jul 11 00:21:38.307527 kernel: 32regs : 17869 MB/sec
Jul 11 00:21:38.307540 kernel: arm64_neon : 26910 MB/sec
Jul 11 00:21:38.307549 kernel: xor: using function: arm64_neon (26910 MB/sec)
Jul 11 00:21:38.357081 kernel: Btrfs loaded, zoned=no, fsverity=no
Jul 11 00:21:38.368119 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jul 11 00:21:38.382203 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 11 00:21:38.394782 systemd-udevd[462]: Using default interface naming scheme 'v255'.
Jul 11 00:21:38.397895 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 11 00:21:38.400123 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jul 11 00:21:38.414594 dracut-pre-trigger[469]: rd.md=0: removing MD RAID activation
Jul 11 00:21:38.440003 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jul 11 00:21:38.447220 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jul 11 00:21:38.491102 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 11 00:21:38.515250 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jul 11 00:21:38.526606 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jul 11 00:21:38.527891 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jul 11 00:21:38.528925 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 11 00:21:38.531027 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jul 11 00:21:38.536325 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues
Jul 11 00:21:38.536490 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Jul 11 00:21:38.542136 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jul 11 00:21:38.542168 kernel: GPT:9289727 != 19775487
Jul 11 00:21:38.542185 kernel: GPT:Alternate GPT header not at the end of the disk.
Jul 11 00:21:38.541809 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jul 11 00:21:38.544432 kernel: GPT:9289727 != 19775487
Jul 11 00:21:38.544455 kernel: GPT: Use GNU Parted to correct GPT errors.
Jul 11 00:21:38.554347 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jul 11 00:21:38.545020 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jul 11 00:21:38.545142 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 11 00:21:38.556937 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jul 11 00:21:38.557756 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jul 11 00:21:38.557893 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jul 11 00:21:38.559661 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jul 11 00:21:38.570350 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 11 00:21:38.572833 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jul 11 00:21:38.578073 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by (udev-worker) (521)
Jul 11 00:21:38.578106 kernel: BTRFS: device fsid f5d5cad7-cb7a-4b07-bec7-847b84711ad7 devid 1 transid 39 /dev/vda3 scanned by (udev-worker) (506)
Jul 11 00:21:38.589308 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Jul 11 00:21:38.590445 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 11 00:21:38.598173 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Jul 11 00:21:38.605442 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jul 11 00:21:38.608971 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Jul 11 00:21:38.609881 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Jul 11 00:21:38.626253 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jul 11 00:21:38.628371 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jul 11 00:21:38.632683 disk-uuid[550]: Primary Header is updated.
Jul 11 00:21:38.632683 disk-uuid[550]: Secondary Entries is updated.
Jul 11 00:21:38.632683 disk-uuid[550]: Secondary Header is updated.
Jul 11 00:21:38.638300 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jul 11 00:21:38.646865 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 11 00:21:39.654706 disk-uuid[551]: The operation has completed successfully.
Jul 11 00:21:39.656285 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jul 11 00:21:39.673581 systemd[1]: disk-uuid.service: Deactivated successfully.
Jul 11 00:21:39.673674 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jul 11 00:21:39.693225 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jul 11 00:21:39.695930 sh[573]: Success
Jul 11 00:21:39.709097 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Jul 11 00:21:39.736746 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jul 11 00:21:39.744352 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jul 11 00:21:39.747967 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jul 11 00:21:39.756553 kernel: BTRFS info (device dm-0): first mount of filesystem f5d5cad7-cb7a-4b07-bec7-847b84711ad7
Jul 11 00:21:39.756589 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Jul 11 00:21:39.756600 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Jul 11 00:21:39.758380 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jul 11 00:21:39.758397 kernel: BTRFS info (device dm-0): using free space tree
Jul 11 00:21:39.761983 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jul 11 00:21:39.763172 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jul 11 00:21:39.772222 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jul 11 00:21:39.773579 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jul 11 00:21:39.783089 kernel: BTRFS info (device vda6): first mount of filesystem 183e1727-cabf-4be9-ba6e-b2af88e10184
Jul 11 00:21:39.783131 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Jul 11 00:21:39.783150 kernel: BTRFS info (device vda6): using free space tree
Jul 11 00:21:39.785069 kernel: BTRFS info (device vda6): auto enabling async discard
Jul 11 00:21:39.792841 systemd[1]: mnt-oem.mount: Deactivated successfully.
Jul 11 00:21:39.794248 kernel: BTRFS info (device vda6): last unmount of filesystem 183e1727-cabf-4be9-ba6e-b2af88e10184
Jul 11 00:21:39.802098 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jul 11 00:21:39.809262 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jul 11 00:21:39.874808 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jul 11 00:21:39.886261 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jul 11 00:21:39.910443 ignition[666]: Ignition 2.19.0
Jul 11 00:21:39.910453 ignition[666]: Stage: fetch-offline
Jul 11 00:21:39.910490 ignition[666]: no configs at "/usr/lib/ignition/base.d"
Jul 11 00:21:39.910498 ignition[666]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 11 00:21:39.910650 ignition[666]: parsed url from cmdline: ""
Jul 11 00:21:39.910653 ignition[666]: no config URL provided
Jul 11 00:21:39.910658 ignition[666]: reading system config file "/usr/lib/ignition/user.ign"
Jul 11 00:21:39.910665 ignition[666]: no config at "/usr/lib/ignition/user.ign"
Jul 11 00:21:39.910687 ignition[666]: op(1): [started] loading QEMU firmware config module
Jul 11 00:21:39.910692 ignition[666]: op(1): executing: "modprobe" "qemu_fw_cfg"
Jul 11 00:21:39.916518 systemd-networkd[764]: lo: Link UP
Jul 11 00:21:39.916531 systemd-networkd[764]: lo: Gained carrier
Jul 11 00:21:39.917214 systemd-networkd[764]: Enumeration completed
Jul 11 00:21:39.917437 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jul 11 00:21:39.917609 systemd-networkd[764]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 11 00:21:39.917612 systemd-networkd[764]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jul 11 00:21:39.918521 systemd-networkd[764]: eth0: Link UP
Jul 11 00:21:39.918525 systemd-networkd[764]: eth0: Gained carrier
Jul 11 00:21:39.918531 systemd-networkd[764]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 11 00:21:39.919209 systemd[1]: Reached target network.target - Network.
Jul 11 00:21:39.931481 ignition[666]: op(1): [finished] loading QEMU firmware config module
Jul 11 00:21:39.933107 systemd-networkd[764]: eth0: DHCPv4 address 10.0.0.102/16, gateway 10.0.0.1 acquired from 10.0.0.1
Jul 11 00:21:39.968227 ignition[666]: parsing config with SHA512: 70b6b66d6eabb7006e81aa4575982d39d417f819abc71abb61189dc4c6f15e29e701cf4e33c91e9cc9cc54b2c3c1d66c893a2ebdb2c3f1dd9a023d309588c972
Jul 11 00:21:39.973829 unknown[666]: fetched base config from "system"
Jul 11 00:21:39.973838 unknown[666]: fetched user config from "qemu"
Jul 11 00:21:39.974349 ignition[666]: fetch-offline: fetch-offline passed
Jul 11 00:21:39.974415 ignition[666]: Ignition finished successfully
Jul 11 00:21:39.976492 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jul 11 00:21:39.977528 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Jul 11 00:21:39.989173 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jul 11 00:21:39.998760 ignition[771]: Ignition 2.19.0
Jul 11 00:21:39.998770 ignition[771]: Stage: kargs
Jul 11 00:21:39.998922 ignition[771]: no configs at "/usr/lib/ignition/base.d"
Jul 11 00:21:39.998932 ignition[771]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 11 00:21:39.999801 ignition[771]: kargs: kargs passed
Jul 11 00:21:39.999841 ignition[771]: Ignition finished successfully
Jul 11 00:21:40.001633 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jul 11 00:21:40.003668 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jul 11 00:21:40.015184 ignition[779]: Ignition 2.19.0
Jul 11 00:21:40.015193 ignition[779]: Stage: disks
Jul 11 00:21:40.015338 ignition[779]: no configs at "/usr/lib/ignition/base.d"
Jul 11 00:21:40.015348 ignition[779]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 11 00:21:40.018120 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jul 11 00:21:40.016221 ignition[779]: disks: disks passed
Jul 11 00:21:40.019222 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jul 11 00:21:40.016262 ignition[779]: Ignition finished successfully
Jul 11 00:21:40.020502 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jul 11 00:21:40.021674 systemd[1]: Reached target local-fs.target - Local File Systems.
Jul 11 00:21:40.023240 systemd[1]: Reached target sysinit.target - System Initialization.
Jul 11 00:21:40.024539 systemd[1]: Reached target basic.target - Basic System.
Jul 11 00:21:40.035192 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jul 11 00:21:40.044568 systemd-fsck[789]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Jul 11 00:21:40.048480 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jul 11 00:21:40.050227 systemd[1]: Mounting sysroot.mount - /sysroot...
Jul 11 00:21:40.095079 kernel: EXT4-fs (vda9): mounted filesystem a2a437d1-0a8e-46b9-88bf-4a47ff29fe90 r/w with ordered data mode. Quota mode: none.
Jul 11 00:21:40.095077 systemd[1]: Mounted sysroot.mount - /sysroot.
Jul 11 00:21:40.096079 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jul 11 00:21:40.107147 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jul 11 00:21:40.108557 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jul 11 00:21:40.109392 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Jul 11 00:21:40.109427 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jul 11 00:21:40.109447 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jul 11 00:21:40.115723 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jul 11 00:21:40.120055 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by mount (797)
Jul 11 00:21:40.120077 kernel: BTRFS info (device vda6): first mount of filesystem 183e1727-cabf-4be9-ba6e-b2af88e10184
Jul 11 00:21:40.120088 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Jul 11 00:21:40.120098 kernel: BTRFS info (device vda6): using free space tree
Jul 11 00:21:40.118016 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jul 11 00:21:40.124080 kernel: BTRFS info (device vda6): auto enabling async discard
Jul 11 00:21:40.124363 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jul 11 00:21:40.162870 initrd-setup-root[821]: cut: /sysroot/etc/passwd: No such file or directory
Jul 11 00:21:40.165923 initrd-setup-root[828]: cut: /sysroot/etc/group: No such file or directory
Jul 11 00:21:40.170111 initrd-setup-root[835]: cut: /sysroot/etc/shadow: No such file or directory
Jul 11 00:21:40.173539 initrd-setup-root[842]: cut: /sysroot/etc/gshadow: No such file or directory
Jul 11 00:21:40.237025 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jul 11 00:21:40.247174 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jul 11 00:21:40.248419 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jul 11 00:21:40.253082 kernel: BTRFS info (device vda6): last unmount of filesystem 183e1727-cabf-4be9-ba6e-b2af88e10184
Jul 11 00:21:40.268431 ignition[910]: INFO : Ignition 2.19.0
Jul 11 00:21:40.268431 ignition[910]: INFO : Stage: mount
Jul 11 00:21:40.269934 ignition[910]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 11 00:21:40.269934 ignition[910]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 11 00:21:40.269934 ignition[910]: INFO : mount: mount passed
Jul 11 00:21:40.269934 ignition[910]: INFO : Ignition finished successfully
Jul 11 00:21:40.269589 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jul 11 00:21:40.271136 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jul 11 00:21:40.287211 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jul 11 00:21:40.755659 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jul 11 00:21:40.769265 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jul 11 00:21:40.779321 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by mount (923)
Jul 11 00:21:40.779356 kernel: BTRFS info (device vda6): first mount of filesystem 183e1727-cabf-4be9-ba6e-b2af88e10184
Jul 11 00:21:40.779366 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Jul 11 00:21:40.780433 kernel: BTRFS info (device vda6): using free space tree
Jul 11 00:21:40.784075 kernel: BTRFS info (device vda6): auto enabling async discard
Jul 11 00:21:40.784626 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jul 11 00:21:40.800860 ignition[940]: INFO : Ignition 2.19.0
Jul 11 00:21:40.800860 ignition[940]: INFO : Stage: files
Jul 11 00:21:40.802108 ignition[940]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 11 00:21:40.802108 ignition[940]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 11 00:21:40.802108 ignition[940]: DEBUG : files: compiled without relabeling support, skipping
Jul 11 00:21:40.805393 ignition[940]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jul 11 00:21:40.805393 ignition[940]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jul 11 00:21:40.807465 ignition[940]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jul 11 00:21:40.807465 ignition[940]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jul 11 00:21:40.809348 ignition[940]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jul 11 00:21:40.809348 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz"
Jul 11 00:21:40.809348 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-arm64.tar.gz: attempt #1
Jul 11 00:21:40.807682 unknown[940]: wrote ssh authorized keys file for user: core
Jul 11 00:21:40.874811 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Jul 11 00:21:41.063887 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz"
Jul 11 00:21:41.063887 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Jul 11 00:21:41.066574 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1
Jul 11 00:21:41.170167 systemd-networkd[764]: eth0: Gained IPv6LL
Jul 11 00:21:41.415979 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Jul 11 00:21:41.558603 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Jul 11 00:21:41.559862 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Jul 11 00:21:41.559862 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Jul 11 00:21:41.559862 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Jul 11 00:21:41.559862 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jul 11 00:21:41.559862 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jul 11 00:21:41.559862 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jul 11 00:21:41.559862 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jul 11 00:21:41.559862 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jul 11 00:21:41.559862 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jul 11 00:21:41.559862 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jul 11 00:21:41.559862 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
Jul 11 00:21:41.559862 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
Jul 11 00:21:41.559862 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
Jul 11 00:21:41.559862 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-arm64.raw: attempt #1
Jul 11 00:21:41.941513 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Jul 11 00:21:42.318315 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
Jul 11 00:21:42.318315 ignition[940]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Jul 11 00:21:42.321110 ignition[940]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jul 11 00:21:42.321110 ignition[940]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jul 11 00:21:42.321110 ignition[940]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Jul 11 00:21:42.321110 ignition[940]: INFO : files: op(e): [started] processing unit "coreos-metadata.service"
Jul 11 00:21:42.321110 ignition[940]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Jul 11 00:21:42.321110 ignition[940]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Jul 11 00:21:42.321110 ignition[940]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service"
Jul 11 00:21:42.321110 ignition[940]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service"
Jul 11 00:21:42.348412 ignition[940]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service"
Jul 11 00:21:42.351623 ignition[940]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Jul 11 00:21:42.352720 ignition[940]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service"
Jul 11 00:21:42.352720 ignition[940]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service"
Jul 11 00:21:42.352720 ignition[940]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service"
Jul 11 00:21:42.352720 ignition[940]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json"
Jul 11 00:21:42.352720 ignition[940]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jul 11 00:21:42.352720 ignition[940]: INFO : files: files passed
Jul 11 00:21:42.352720 ignition[940]: INFO : Ignition finished successfully
Jul 11 00:21:42.353758 systemd[1]: Finished ignition-files.service - Ignition (files).
Jul 11 00:21:42.366200 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jul 11 00:21:42.368693 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jul 11 00:21:42.370009 systemd[1]: ignition-quench.service: Deactivated successfully.
Jul 11 00:21:42.370100 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jul 11 00:21:42.375503 initrd-setup-root-after-ignition[969]: grep: /sysroot/oem/oem-release: No such file or directory
Jul 11 00:21:42.378176 initrd-setup-root-after-ignition[971]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jul 11 00:21:42.378176 initrd-setup-root-after-ignition[971]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jul 11 00:21:42.380529 initrd-setup-root-after-ignition[975]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jul 11 00:21:42.381762 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jul 11 00:21:42.383017 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jul 11 00:21:42.388211 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jul 11 00:21:42.406190 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jul 11 00:21:42.406299 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jul 11 00:21:42.407895 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jul 11 00:21:42.409372 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jul 11 00:21:42.410797 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jul 11 00:21:42.411547 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jul 11 00:21:42.426469 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jul 11 00:21:42.428507 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jul 11 00:21:42.440447 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jul 11 00:21:42.441394 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 11 00:21:42.442964 systemd[1]: Stopped target timers.target - Timer Units.
Jul 11 00:21:42.444259 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jul 11 00:21:42.444371 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jul 11 00:21:42.446268 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jul 11 00:21:42.447600 systemd[1]: Stopped target basic.target - Basic System.
Jul 11 00:21:42.448757 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jul 11 00:21:42.449964 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jul 11 00:21:42.451412 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jul 11 00:21:42.452814 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jul 11 00:21:42.454141 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jul 11 00:21:42.455605 systemd[1]: Stopped target sysinit.target - System Initialization. Jul 11 00:21:42.457094 systemd[1]: Stopped target local-fs.target - Local File Systems. Jul 11 00:21:42.458695 systemd[1]: Stopped target swap.target - Swaps. Jul 11 00:21:42.459775 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jul 11 00:21:42.459905 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jul 11 00:21:42.461612 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jul 11 00:21:42.463010 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jul 11 00:21:42.464426 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jul 11 00:21:42.465847 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jul 11 00:21:42.466773 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jul 11 00:21:42.466892 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jul 11 00:21:42.468932 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jul 11 00:21:42.469048 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jul 11 00:21:42.470471 systemd[1]: Stopped target paths.target - Path Units. Jul 11 00:21:42.471595 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jul 11 00:21:42.476106 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 11 00:21:42.477025 systemd[1]: Stopped target slices.target - Slice Units. Jul 11 00:21:42.478556 systemd[1]: Stopped target sockets.target - Socket Units. Jul 11 00:21:42.479679 systemd[1]: iscsid.socket: Deactivated successfully. Jul 11 00:21:42.479769 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jul 11 00:21:42.480836 systemd[1]: iscsiuio.socket: Deactivated successfully. Jul 11 00:21:42.480922 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jul 11 00:21:42.481999 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jul 11 00:21:42.482121 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jul 11 00:21:42.483358 systemd[1]: ignition-files.service: Deactivated successfully. Jul 11 00:21:42.483458 systemd[1]: Stopped ignition-files.service - Ignition (files). Jul 11 00:21:42.494231 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jul 11 00:21:42.494903 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jul 11 00:21:42.495025 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jul 11 00:21:42.497203 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jul 11 00:21:42.498471 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jul 11 00:21:42.498593 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jul 11 00:21:42.500045 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jul 11 00:21:42.500159 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jul 11 00:21:42.505357 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jul 11 00:21:42.507110 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. 
Jul 11 00:21:42.511019 ignition[995]: INFO : Ignition 2.19.0 Jul 11 00:21:42.511019 ignition[995]: INFO : Stage: umount Jul 11 00:21:42.512636 ignition[995]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 11 00:21:42.512636 ignition[995]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 11 00:21:42.512636 ignition[995]: INFO : umount: umount passed Jul 11 00:21:42.512636 ignition[995]: INFO : Ignition finished successfully Jul 11 00:21:42.513601 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jul 11 00:21:42.515071 systemd[1]: ignition-mount.service: Deactivated successfully. Jul 11 00:21:42.515200 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jul 11 00:21:42.516999 systemd[1]: Stopped target network.target - Network. Jul 11 00:21:42.517984 systemd[1]: ignition-disks.service: Deactivated successfully. Jul 11 00:21:42.518044 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jul 11 00:21:42.519297 systemd[1]: ignition-kargs.service: Deactivated successfully. Jul 11 00:21:42.519339 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jul 11 00:21:42.520498 systemd[1]: ignition-setup.service: Deactivated successfully. Jul 11 00:21:42.520537 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jul 11 00:21:42.522016 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jul 11 00:21:42.522093 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jul 11 00:21:42.523479 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jul 11 00:21:42.524822 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jul 11 00:21:42.526309 systemd[1]: sysroot-boot.service: Deactivated successfully. Jul 11 00:21:42.526395 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jul 11 00:21:42.527709 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jul 11 00:21:42.527789 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jul 11 00:21:42.528861 systemd-networkd[764]: eth0: DHCPv6 lease lost Jul 11 00:21:42.530691 systemd[1]: systemd-networkd.service: Deactivated successfully. Jul 11 00:21:42.530791 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jul 11 00:21:42.531806 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jul 11 00:21:42.531836 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jul 11 00:21:42.538456 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jul 11 00:21:42.539604 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jul 11 00:21:42.539669 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jul 11 00:21:42.541567 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 11 00:21:42.543606 systemd[1]: systemd-resolved.service: Deactivated successfully. Jul 11 00:21:42.543698 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jul 11 00:21:42.547439 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jul 11 00:21:42.547527 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jul 11 00:21:42.548966 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jul 11 00:21:42.549012 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jul 11 00:21:42.550544 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. 
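The "no configs"/"no config dir" lines in the umount stage are routine: at every stage Ignition first looks for base configs in /usr/lib/ignition/base.d and platform-specific drop-ins in /usr/lib/ignition/base.platform.d/<platform> (here qemu), and these INFO lines simply record that this image ships neither, so only the provider-supplied config applies.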
Jul 11 00:21:42.550589 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jul 11 00:21:42.553523 systemd[1]: network-cleanup.service: Deactivated successfully. Jul 11 00:21:42.553639 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jul 11 00:21:42.556638 systemd[1]: systemd-udevd.service: Deactivated successfully. Jul 11 00:21:42.556764 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 11 00:21:42.558378 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jul 11 00:21:42.558426 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jul 11 00:21:42.559837 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jul 11 00:21:42.559883 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jul 11 00:21:42.560898 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jul 11 00:21:42.560942 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jul 11 00:21:42.563145 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jul 11 00:21:42.563188 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jul 11 00:21:42.565619 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jul 11 00:21:42.565674 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jul 11 00:21:42.576222 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jul 11 00:21:42.577751 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jul 11 00:21:42.577802 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 11 00:21:42.579389 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 11 00:21:42.579427 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jul 11 00:21:42.582015 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jul 11 00:21:42.583566 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jul 11 00:21:42.584703 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jul 11 00:21:42.586743 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jul 11 00:21:42.597616 systemd[1]: Switching root. Jul 11 00:21:42.633100 systemd-journald[239]: Journal stopped Jul 11 00:21:43.350019 systemd-journald[239]: Received SIGTERM from PID 1 (systemd). Jul 11 00:21:43.350083 kernel: SELinux: policy capability network_peer_controls=1 Jul 11 00:21:43.350100 kernel: SELinux: policy capability open_perms=1 Jul 11 00:21:43.350112 kernel: SELinux: policy capability extended_socket_class=1 Jul 11 00:21:43.350125 kernel: SELinux: policy capability always_check_network=0 Jul 11 00:21:43.350135 kernel: SELinux: policy capability cgroup_seclabel=1 Jul 11 00:21:43.350149 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jul 11 00:21:43.350162 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jul 11 00:21:43.350176 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jul 11 00:21:43.350186 kernel: audit: type=1403 audit(1752193302.814:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jul 11 00:21:43.350197 systemd[1]: Successfully loaded SELinux policy in 30.660ms. Jul 11 00:21:43.350214 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 9.272ms. 
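A quick cross-check on the audit record above: audit timestamps are seconds since the Unix epoch, and 1752193302 s works out to 2025-07-11 00:21:42 UTC (1735689600 s at 2025-01-01 00:00 UTC, plus 191 days = 16502400 s to reach July 11, plus 1302 s for 00:21:42), so audit(1752193302.814:2) lines up with the journal timestamps around the SELinux policy load.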
Jul 11 00:21:43.350225 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jul 11 00:21:43.350237 systemd[1]: Detected virtualization kvm. Jul 11 00:21:43.350250 systemd[1]: Detected architecture arm64. Jul 11 00:21:43.350262 systemd[1]: Detected first boot. Jul 11 00:21:43.350272 systemd[1]: Initializing machine ID from VM UUID. Jul 11 00:21:43.350284 zram_generator::config[1040]: No configuration found. Jul 11 00:21:43.350296 systemd[1]: Populated /etc with preset unit settings. Jul 11 00:21:43.350306 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jul 11 00:21:43.350334 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jul 11 00:21:43.350345 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jul 11 00:21:43.350357 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jul 11 00:21:43.350368 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jul 11 00:21:43.350380 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jul 11 00:21:43.350392 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jul 11 00:21:43.350402 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jul 11 00:21:43.350414 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jul 11 00:21:43.350426 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jul 11 00:21:43.350437 systemd[1]: Created slice user.slice - User and Session Slice. Jul 11 00:21:43.350447 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jul 11 00:21:43.350459 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 11 00:21:43.350471 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jul 11 00:21:43.350482 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jul 11 00:21:43.350493 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jul 11 00:21:43.350505 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jul 11 00:21:43.350516 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Jul 11 00:21:43.350527 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jul 11 00:21:43.350537 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jul 11 00:21:43.350548 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jul 11 00:21:43.350559 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jul 11 00:21:43.350571 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jul 11 00:21:43.350582 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 11 00:21:43.350592 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jul 11 00:21:43.350602 systemd[1]: Reached target slices.target - Slice Units. Jul 11 00:21:43.350613 systemd[1]: Reached target swap.target - Swaps. 
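The zram_generator "No configuration found" line above is expected: the generator only creates zram swap devices when a zram-generator.conf exists, and Flatcar ships none. If swap-on-zram were wanted, a minimal hypothetical /etc/systemd/zram-generator.conf would be enough:

    [zram0]
    zram-size = ram / 2
    compression-algorithm = zstd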
Jul 11 00:21:43.350623 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jul 11 00:21:43.350633 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jul 11 00:21:43.350643 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jul 11 00:21:43.350656 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jul 11 00:21:43.350666 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jul 11 00:21:43.350676 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jul 11 00:21:43.350687 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jul 11 00:21:43.350697 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jul 11 00:21:43.350707 systemd[1]: Mounting media.mount - External Media Directory... Jul 11 00:21:43.350717 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jul 11 00:21:43.350729 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jul 11 00:21:43.350740 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jul 11 00:21:43.350753 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jul 11 00:21:43.350764 systemd[1]: Reached target machines.target - Containers. Jul 11 00:21:43.350775 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jul 11 00:21:43.350786 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 11 00:21:43.350799 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jul 11 00:21:43.350810 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jul 11 00:21:43.350820 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 11 00:21:43.350831 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jul 11 00:21:43.350852 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 11 00:21:43.350865 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jul 11 00:21:43.350877 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 11 00:21:43.350888 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jul 11 00:21:43.350898 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jul 11 00:21:43.350909 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jul 11 00:21:43.350920 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jul 11 00:21:43.350930 systemd[1]: Stopped systemd-fsck-usr.service. Jul 11 00:21:43.350940 kernel: fuse: init (API version 7.39) Jul 11 00:21:43.350952 kernel: loop: module loaded Jul 11 00:21:43.350961 systemd[1]: Starting systemd-journald.service - Journal Service... Jul 11 00:21:43.350973 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jul 11 00:21:43.350983 kernel: ACPI: bus type drm_connector registered Jul 11 00:21:43.350993 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jul 11 00:21:43.351003 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... 
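The modprobe@configfs/dm_mod/drm/efi_pstore/fuse/loop jobs above are all instances of the same systemd template unit; each instance simply runs modprobe on its instance name, which is why the kernel's "fuse: init" and "loop: module loaded" lines appear interleaved with them. Abridged from systemd's shipped template:

    # modprobe@.service (abridged)
    [Unit]
    Description=Load Kernel Module %i
    DefaultDependencies=no

    [Service]
    Type=oneshot
    RemainAfterExit=yes
    ExecStart=-/sbin/modprobe -abq %i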
Jul 11 00:21:43.351034 systemd-journald[1107]: Collecting audit messages is disabled. Jul 11 00:21:43.351065 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jul 11 00:21:43.351080 systemd-journald[1107]: Journal started Jul 11 00:21:43.351130 systemd-journald[1107]: Runtime Journal (/run/log/journal/ce98fd6c6d1144d19ebf0cab1a78ef2a) is 5.9M, max 47.3M, 41.4M free. Jul 11 00:21:43.171139 systemd[1]: Queued start job for default target multi-user.target. Jul 11 00:21:43.186002 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Jul 11 00:21:43.186346 systemd[1]: systemd-journald.service: Deactivated successfully. Jul 11 00:21:43.352255 systemd[1]: verity-setup.service: Deactivated successfully. Jul 11 00:21:43.352281 systemd[1]: Stopped verity-setup.service. Jul 11 00:21:43.356168 systemd[1]: Started systemd-journald.service - Journal Service. Jul 11 00:21:43.356781 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jul 11 00:21:43.358023 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jul 11 00:21:43.359210 systemd[1]: Mounted media.mount - External Media Directory. Jul 11 00:21:43.360288 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jul 11 00:21:43.361465 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jul 11 00:21:43.362769 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jul 11 00:21:43.363980 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jul 11 00:21:43.365448 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jul 11 00:21:43.366949 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jul 11 00:21:43.367123 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jul 11 00:21:43.368501 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 11 00:21:43.368637 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 11 00:21:43.370132 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 11 00:21:43.370279 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jul 11 00:21:43.371566 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 11 00:21:43.371696 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 11 00:21:43.373194 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jul 11 00:21:43.373329 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jul 11 00:21:43.374815 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 11 00:21:43.374969 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 11 00:21:43.376359 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jul 11 00:21:43.377899 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jul 11 00:21:43.379434 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jul 11 00:21:43.392025 systemd[1]: Reached target network-pre.target - Preparation for Network. Jul 11 00:21:43.400185 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jul 11 00:21:43.402006 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jul 11 00:21:43.402949 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). 
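The "Runtime Journal ... is 5.9M, max 47.3M, 41.4M free" line is journald's size accounting for the volatile journal on /run; the cap defaults to a fraction of the backing filesystem and can be pinned explicitly. An illustrative override (the option names are real journald.conf settings, the values here are arbitrary):

    # /etc/systemd/journald.conf.d/size.conf
    [Journal]
    RuntimeMaxUse=64M
    SystemMaxUse=200M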
Jul 11 00:21:43.402987 systemd[1]: Reached target local-fs.target - Local File Systems. Jul 11 00:21:43.404726 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Jul 11 00:21:43.406657 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jul 11 00:21:43.408578 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jul 11 00:21:43.409514 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 11 00:21:43.411300 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jul 11 00:21:43.416253 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jul 11 00:21:43.417296 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 11 00:21:43.421249 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jul 11 00:21:43.422611 systemd-journald[1107]: Time spent on flushing to /var/log/journal/ce98fd6c6d1144d19ebf0cab1a78ef2a is 18.446ms for 853 entries. Jul 11 00:21:43.422611 systemd-journald[1107]: System Journal (/var/log/journal/ce98fd6c6d1144d19ebf0cab1a78ef2a) is 8.0M, max 195.6M, 187.6M free. Jul 11 00:21:43.454930 systemd-journald[1107]: Received client request to flush runtime journal. Jul 11 00:21:43.454980 kernel: loop0: detected capacity change from 0 to 114432 Jul 11 00:21:43.422832 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jul 11 00:21:43.424188 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jul 11 00:21:43.426731 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jul 11 00:21:43.434045 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jul 11 00:21:43.436295 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jul 11 00:21:43.437564 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jul 11 00:21:43.438645 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jul 11 00:21:43.439828 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jul 11 00:21:43.453047 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Jul 11 00:21:43.458484 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jul 11 00:21:43.462060 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jul 11 00:21:43.463332 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jul 11 00:21:43.466156 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jul 11 00:21:43.464850 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jul 11 00:21:43.471182 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Jul 11 00:21:43.474417 udevadm[1160]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Jul 11 00:21:43.492987 systemd[1]: Finished systemd-sysusers.service - Create System Users. 
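systemd-machine-id-commit, started above, is the first-boot counterpart to the earlier "Initializing machine ID from VM UUID" line: until the root filesystem is writable, /etc/machine-id is satisfied from a transient bind mount, and the commit service (effectively systemd-machine-id-setup --commit) writes the ID to disk and drops that mount, which is why etc-machine\x2did.mount is deactivated just before the service reports finished below.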
Jul 11 00:21:43.500076 kernel: loop1: detected capacity change from 0 to 114328 Jul 11 00:21:43.503968 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jul 11 00:21:43.506465 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jul 11 00:21:43.507155 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Jul 11 00:21:43.538082 kernel: loop2: detected capacity change from 0 to 207008 Jul 11 00:21:43.540798 systemd-tmpfiles[1170]: ACLs are not supported, ignoring. Jul 11 00:21:43.540815 systemd-tmpfiles[1170]: ACLs are not supported, ignoring. Jul 11 00:21:43.546172 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 11 00:21:43.582107 kernel: loop3: detected capacity change from 0 to 114432 Jul 11 00:21:43.587098 kernel: loop4: detected capacity change from 0 to 114328 Jul 11 00:21:43.591070 kernel: loop5: detected capacity change from 0 to 207008 Jul 11 00:21:43.595434 (sd-merge)[1176]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Jul 11 00:21:43.595815 (sd-merge)[1176]: Merged extensions into '/usr'. Jul 11 00:21:43.599731 systemd[1]: Reloading requested from client PID 1151 ('systemd-sysext') (unit systemd-sysext.service)... Jul 11 00:21:43.599860 systemd[1]: Reloading... Jul 11 00:21:43.647086 zram_generator::config[1203]: No configuration found. Jul 11 00:21:43.719806 ldconfig[1146]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jul 11 00:21:43.740964 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 11 00:21:43.776867 systemd[1]: Reloading finished in 176 ms. Jul 11 00:21:43.807500 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jul 11 00:21:43.809149 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jul 11 00:21:43.821265 systemd[1]: Starting ensure-sysext.service... Jul 11 00:21:43.822949 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jul 11 00:21:43.834892 systemd[1]: Reloading requested from client PID 1238 ('systemctl') (unit ensure-sysext.service)... Jul 11 00:21:43.834906 systemd[1]: Reloading... Jul 11 00:21:43.842322 systemd-tmpfiles[1239]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jul 11 00:21:43.842577 systemd-tmpfiles[1239]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jul 11 00:21:43.843237 systemd-tmpfiles[1239]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jul 11 00:21:43.843450 systemd-tmpfiles[1239]: ACLs are not supported, ignoring. Jul 11 00:21:43.843504 systemd-tmpfiles[1239]: ACLs are not supported, ignoring. Jul 11 00:21:43.845649 systemd-tmpfiles[1239]: Detected autofs mount point /boot during canonicalization of boot. Jul 11 00:21:43.845663 systemd-tmpfiles[1239]: Skipping /boot Jul 11 00:21:43.852474 systemd-tmpfiles[1239]: Detected autofs mount point /boot during canonicalization of boot. Jul 11 00:21:43.852490 systemd-tmpfiles[1239]: Skipping /boot Jul 11 00:21:43.886074 zram_generator::config[1266]: No configuration found. 
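The (sd-merge) lines show systemd-sysext activating the extension images found under /etc/extensions: the kubernetes one linked by Ignition earlier, plus the containerd and docker extensions the image ships; the loopN capacity changes around them are those .raw images being attached. For an image to be merged into /usr it must carry an extension-release file matching the host's os-release ID. A hypothetical layout inside kubernetes.raw:

    usr/lib/extension-release.d/extension-release.kubernetes
        ID=flatcar
        SYSEXT_LEVEL=1.0
    usr/bin/kubelet, usr/bin/kubeadm, ...

The two "Reloading" passes that follow are systemd-sysext and ensure-sysext asking PID 1 to re-read unit files after /usr changed underneath it.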
Jul 11 00:21:43.965934 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 11 00:21:44.001867 systemd[1]: Reloading finished in 166 ms. Jul 11 00:21:44.018399 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jul 11 00:21:44.032573 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jul 11 00:21:44.040282 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jul 11 00:21:44.042569 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jul 11 00:21:44.044686 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jul 11 00:21:44.049375 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jul 11 00:21:44.052450 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 11 00:21:44.057217 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jul 11 00:21:44.060555 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 11 00:21:44.064359 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 11 00:21:44.067882 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 11 00:21:44.070462 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 11 00:21:44.071379 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 11 00:21:44.073222 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jul 11 00:21:44.075687 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 11 00:21:44.075851 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 11 00:21:44.078133 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 11 00:21:44.083346 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jul 11 00:21:44.084292 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 11 00:21:44.085021 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jul 11 00:21:44.087661 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 11 00:21:44.087818 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 11 00:21:44.091113 systemd-udevd[1310]: Using default interface naming scheme 'v255'. Jul 11 00:21:44.091197 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 11 00:21:44.091342 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 11 00:21:44.094044 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 11 00:21:44.094248 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jul 11 00:21:44.102732 systemd[1]: Finished ensure-sysext.service. Jul 11 00:21:44.106926 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. 
Jul 11 00:21:44.108545 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 11 00:21:44.108691 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 11 00:21:44.111945 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 11 00:21:44.112012 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jul 11 00:21:44.129362 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jul 11 00:21:44.131636 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jul 11 00:21:44.133180 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 11 00:21:44.134544 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jul 11 00:21:44.138260 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jul 11 00:21:44.153635 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jul 11 00:21:44.161632 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jul 11 00:21:44.162467 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jul 11 00:21:44.166812 augenrules[1365]: No rules Jul 11 00:21:44.170096 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jul 11 00:21:44.174756 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. Jul 11 00:21:44.190105 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (1347) Jul 11 00:21:44.197588 systemd-resolved[1307]: Positive Trust Anchors: Jul 11 00:21:44.197603 systemd-resolved[1307]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jul 11 00:21:44.197642 systemd-resolved[1307]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jul 11 00:21:44.205452 systemd-resolved[1307]: Defaulting to hostname 'linux'. Jul 11 00:21:44.214029 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jul 11 00:21:44.215999 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jul 11 00:21:44.232414 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jul 11 00:21:44.236787 systemd-networkd[1363]: lo: Link UP Jul 11 00:21:44.239105 systemd-networkd[1363]: lo: Gained carrier Jul 11 00:21:44.240034 systemd-networkd[1363]: Enumeration completed Jul 11 00:21:44.240241 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jul 11 00:21:44.241275 systemd[1]: Started systemd-networkd.service - Network Configuration. Jul 11 00:21:44.242830 systemd[1]: Reached target network.target - Network. 
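augenrules' "No rules" above means /etc/audit/rules.d is empty, so audit-rules.service loads nothing; augenrules works by concatenating /etc/audit/rules.d/*.rules into the active kernel ruleset. A hypothetical drop-in that would change that:

    # /etc/audit/rules.d/10-passwd.rules
    -w /etc/passwd -p wa -k passwd-changes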
Jul 11 00:21:44.244137 systemd-networkd[1363]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 11 00:21:44.244148 systemd-networkd[1363]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 11 00:21:44.244819 systemd-networkd[1363]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 11 00:21:44.244861 systemd-networkd[1363]: eth0: Link UP Jul 11 00:21:44.244864 systemd-networkd[1363]: eth0: Gained carrier Jul 11 00:21:44.244872 systemd-networkd[1363]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 11 00:21:44.246229 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jul 11 00:21:44.256018 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jul 11 00:21:44.257247 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jul 11 00:21:44.258241 systemd-networkd[1363]: eth0: DHCPv4 address 10.0.0.102/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jul 11 00:21:44.258292 systemd[1]: Reached target time-set.target - System Time Set. Jul 11 00:21:44.259697 systemd-timesyncd[1333]: Network configuration changed, trying to establish connection. Jul 11 00:21:44.260883 systemd-timesyncd[1333]: Contacted time server 10.0.0.1:123 (10.0.0.1). Jul 11 00:21:44.260934 systemd-timesyncd[1333]: Initial clock synchronization to Fri 2025-07-11 00:21:44.206171 UTC. Jul 11 00:21:44.305319 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 11 00:21:44.310722 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jul 11 00:21:44.313129 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jul 11 00:21:44.335134 lvm[1388]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jul 11 00:21:44.350095 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jul 11 00:21:44.371317 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jul 11 00:21:44.372425 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jul 11 00:21:44.373267 systemd[1]: Reached target sysinit.target - System Initialization. Jul 11 00:21:44.374097 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jul 11 00:21:44.374950 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jul 11 00:21:44.376016 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jul 11 00:21:44.376930 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jul 11 00:21:44.377849 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jul 11 00:21:44.378755 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jul 11 00:21:44.378786 systemd[1]: Reached target paths.target - Path Units. Jul 11 00:21:44.379441 systemd[1]: Reached target timers.target - Timer Units. Jul 11 00:21:44.380825 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. 
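eth0 matched /usr/lib/systemd/network/zz-default.network, Flatcar's lowest-priority catch-all; the "potentially unpredictable interface name" warning is networkd noting that the match is by name glob rather than by stable hardware attributes. Its effect is approximately (an abridged sketch, not the verbatim file):

    [Match]
    Name=*

    [Network]
    DHCP=yes

With that in place, the DHCPv4 lease (10.0.0.102/16 via gateway 10.0.0.1) and the 10.0.0.1 time server that systemd-timesyncd contacts both come from the emulated network's DHCP service.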
Jul 11 00:21:44.383139 systemd[1]: Starting docker.socket - Docker Socket for the API... Jul 11 00:21:44.390901 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jul 11 00:21:44.392832 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jul 11 00:21:44.394147 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jul 11 00:21:44.394986 systemd[1]: Reached target sockets.target - Socket Units. Jul 11 00:21:44.395736 systemd[1]: Reached target basic.target - Basic System. Jul 11 00:21:44.396442 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jul 11 00:21:44.396471 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jul 11 00:21:44.397344 systemd[1]: Starting containerd.service - containerd container runtime... Jul 11 00:21:44.398979 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jul 11 00:21:44.401203 lvm[1396]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jul 11 00:21:44.402222 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jul 11 00:21:44.404550 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jul 11 00:21:44.405426 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jul 11 00:21:44.407393 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jul 11 00:21:44.411328 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jul 11 00:21:44.413981 jq[1399]: false Jul 11 00:21:44.414901 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jul 11 00:21:44.416732 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jul 11 00:21:44.423156 systemd[1]: Starting systemd-logind.service - User Login Management... Jul 11 00:21:44.427823 extend-filesystems[1400]: Found loop3 Jul 11 00:21:44.428624 extend-filesystems[1400]: Found loop4 Jul 11 00:21:44.428624 extend-filesystems[1400]: Found loop5 Jul 11 00:21:44.428624 extend-filesystems[1400]: Found vda Jul 11 00:21:44.428624 extend-filesystems[1400]: Found vda1 Jul 11 00:21:44.428624 extend-filesystems[1400]: Found vda2 Jul 11 00:21:44.428624 extend-filesystems[1400]: Found vda3 Jul 11 00:21:44.428624 extend-filesystems[1400]: Found usr Jul 11 00:21:44.428624 extend-filesystems[1400]: Found vda4 Jul 11 00:21:44.428624 extend-filesystems[1400]: Found vda6 Jul 11 00:21:44.428624 extend-filesystems[1400]: Found vda7 Jul 11 00:21:44.428624 extend-filesystems[1400]: Found vda9 Jul 11 00:21:44.428624 extend-filesystems[1400]: Checking size of /dev/vda9 Jul 11 00:21:44.429216 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jul 11 00:21:44.433232 dbus-daemon[1398]: [system] SELinux support is enabled Jul 11 00:21:44.429633 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jul 11 00:21:44.439210 systemd[1]: Starting update-engine.service - Update Engine... Jul 11 00:21:44.441361 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jul 11 00:21:44.443646 systemd[1]: Started dbus.service - D-Bus System Message Bus. 
Jul 11 00:21:44.448217 jq[1418]: true Jul 11 00:21:44.450369 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jul 11 00:21:44.455195 extend-filesystems[1400]: Resized partition /dev/vda9 Jul 11 00:21:44.456709 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jul 11 00:21:44.456950 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jul 11 00:21:44.457271 systemd[1]: motdgen.service: Deactivated successfully. Jul 11 00:21:44.457404 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jul 11 00:21:44.462712 extend-filesystems[1422]: resize2fs 1.47.1 (20-May-2024) Jul 11 00:21:44.467265 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (1335) Jul 11 00:21:44.467292 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Jul 11 00:21:44.464542 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jul 11 00:21:44.464774 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jul 11 00:21:44.486273 (ntainerd)[1425]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jul 11 00:21:44.496443 jq[1424]: true Jul 11 00:21:44.507002 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jul 11 00:21:44.508213 update_engine[1412]: I20250711 00:21:44.506903 1412 main.cc:92] Flatcar Update Engine starting Jul 11 00:21:44.507033 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jul 11 00:21:44.508020 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jul 11 00:21:44.508035 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jul 11 00:21:44.513129 systemd[1]: Started update-engine.service - Update Engine. Jul 11 00:21:44.513552 update_engine[1412]: I20250711 00:21:44.513227 1412 update_check_scheduler.cc:74] Next update check in 4m36s Jul 11 00:21:44.513386 systemd-logind[1406]: Watching system buttons on /dev/input/event0 (Power Button) Jul 11 00:21:44.514590 systemd-logind[1406]: New seat seat0. Jul 11 00:21:44.515080 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Jul 11 00:21:44.523477 tar[1423]: linux-arm64/LICENSE Jul 11 00:21:44.546511 tar[1423]: linux-arm64/helm Jul 11 00:21:44.546414 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jul 11 00:21:44.546592 extend-filesystems[1422]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jul 11 00:21:44.546592 extend-filesystems[1422]: old_desc_blocks = 1, new_desc_blocks = 1 Jul 11 00:21:44.546592 extend-filesystems[1422]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Jul 11 00:21:44.552311 extend-filesystems[1400]: Resized filesystem in /dev/vda9 Jul 11 00:21:44.547358 systemd[1]: Started systemd-logind.service - User Login Management. Jul 11 00:21:44.549586 systemd[1]: extend-filesystems.service: Deactivated successfully. Jul 11 00:21:44.549742 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. 
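The resize numbers above are 4 KiB blocks: 553472 x 4 KiB is about 2.1 GiB, grown online to 1864699 x 4 KiB, about 7.1 GiB. In other words, extend-filesystems expands the ROOT ext4 filesystem on /dev/vda9 to fill the space the partition gained on first boot, while the filesystem stays mounted at /.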
Jul 11 00:21:44.566187 bash[1452]: Updated "/home/core/.ssh/authorized_keys" Jul 11 00:21:44.567526 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jul 11 00:21:44.569235 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Jul 11 00:21:44.606723 locksmithd[1442]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jul 11 00:21:44.729061 containerd[1425]: time="2025-07-11T00:21:44.726609400Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Jul 11 00:21:44.752966 containerd[1425]: time="2025-07-11T00:21:44.752928440Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jul 11 00:21:44.754328 containerd[1425]: time="2025-07-11T00:21:44.754294960Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.96-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jul 11 00:21:44.754360 containerd[1425]: time="2025-07-11T00:21:44.754326800Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jul 11 00:21:44.754360 containerd[1425]: time="2025-07-11T00:21:44.754343600Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jul 11 00:21:44.754495 containerd[1425]: time="2025-07-11T00:21:44.754475200Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jul 11 00:21:44.754527 containerd[1425]: time="2025-07-11T00:21:44.754498080Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jul 11 00:21:44.754566 containerd[1425]: time="2025-07-11T00:21:44.754549200Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jul 11 00:21:44.754593 containerd[1425]: time="2025-07-11T00:21:44.754565600Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jul 11 00:21:44.754730 containerd[1425]: time="2025-07-11T00:21:44.754710120Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jul 11 00:21:44.754758 containerd[1425]: time="2025-07-11T00:21:44.754729800Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jul 11 00:21:44.754758 containerd[1425]: time="2025-07-11T00:21:44.754744280Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jul 11 00:21:44.754758 containerd[1425]: time="2025-07-11T00:21:44.754754400Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jul 11 00:21:44.754843 containerd[1425]: time="2025-07-11T00:21:44.754821000Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." 
type=io.containerd.snapshotter.v1 Jul 11 00:21:44.755040 containerd[1425]: time="2025-07-11T00:21:44.755019720Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jul 11 00:21:44.755162 containerd[1425]: time="2025-07-11T00:21:44.755141280Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jul 11 00:21:44.755162 containerd[1425]: time="2025-07-11T00:21:44.755160680Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jul 11 00:21:44.755267 containerd[1425]: time="2025-07-11T00:21:44.755249120Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jul 11 00:21:44.755310 containerd[1425]: time="2025-07-11T00:21:44.755295960Z" level=info msg="metadata content store policy set" policy=shared Jul 11 00:21:44.758845 containerd[1425]: time="2025-07-11T00:21:44.758783880Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jul 11 00:21:44.758880 containerd[1425]: time="2025-07-11T00:21:44.758843400Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jul 11 00:21:44.758880 containerd[1425]: time="2025-07-11T00:21:44.758860800Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jul 11 00:21:44.758880 containerd[1425]: time="2025-07-11T00:21:44.758875040Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jul 11 00:21:44.758939 containerd[1425]: time="2025-07-11T00:21:44.758887640Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jul 11 00:21:44.759117 containerd[1425]: time="2025-07-11T00:21:44.759094560Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jul 11 00:21:44.759464 containerd[1425]: time="2025-07-11T00:21:44.759441360Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jul 11 00:21:44.759573 containerd[1425]: time="2025-07-11T00:21:44.759555080Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jul 11 00:21:44.759597 containerd[1425]: time="2025-07-11T00:21:44.759576480Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jul 11 00:21:44.759632 containerd[1425]: time="2025-07-11T00:21:44.759598680Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jul 11 00:21:44.759632 containerd[1425]: time="2025-07-11T00:21:44.759613880Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jul 11 00:21:44.759632 containerd[1425]: time="2025-07-11T00:21:44.759627400Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jul 11 00:21:44.759681 containerd[1425]: time="2025-07-11T00:21:44.759639840Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." 
type=io.containerd.service.v1 Jul 11 00:21:44.759681 containerd[1425]: time="2025-07-11T00:21:44.759653040Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jul 11 00:21:44.759681 containerd[1425]: time="2025-07-11T00:21:44.759666880Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jul 11 00:21:44.759681 containerd[1425]: time="2025-07-11T00:21:44.759678640Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jul 11 00:21:44.759747 containerd[1425]: time="2025-07-11T00:21:44.759690840Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jul 11 00:21:44.759747 containerd[1425]: time="2025-07-11T00:21:44.759707040Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jul 11 00:21:44.759747 containerd[1425]: time="2025-07-11T00:21:44.759729760Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jul 11 00:21:44.759747 containerd[1425]: time="2025-07-11T00:21:44.759742360Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jul 11 00:21:44.759815 containerd[1425]: time="2025-07-11T00:21:44.759753920Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jul 11 00:21:44.759815 containerd[1425]: time="2025-07-11T00:21:44.759766040Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jul 11 00:21:44.759815 containerd[1425]: time="2025-07-11T00:21:44.759777680Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jul 11 00:21:44.759815 containerd[1425]: time="2025-07-11T00:21:44.759791080Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jul 11 00:21:44.759815 containerd[1425]: time="2025-07-11T00:21:44.759802840Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jul 11 00:21:44.759926 containerd[1425]: time="2025-07-11T00:21:44.759815880Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jul 11 00:21:44.759926 containerd[1425]: time="2025-07-11T00:21:44.759829440Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jul 11 00:21:44.759926 containerd[1425]: time="2025-07-11T00:21:44.759855880Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jul 11 00:21:44.759926 containerd[1425]: time="2025-07-11T00:21:44.759868920Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jul 11 00:21:44.759926 containerd[1425]: time="2025-07-11T00:21:44.759880840Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jul 11 00:21:44.759926 containerd[1425]: time="2025-07-11T00:21:44.759893320Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jul 11 00:21:44.759926 containerd[1425]: time="2025-07-11T00:21:44.759912560Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." 
type=io.containerd.transfer.v1 Jul 11 00:21:44.760071 containerd[1425]: time="2025-07-11T00:21:44.759943840Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jul 11 00:21:44.760071 containerd[1425]: time="2025-07-11T00:21:44.759957280Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jul 11 00:21:44.760071 containerd[1425]: time="2025-07-11T00:21:44.759968320Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jul 11 00:21:44.760272 containerd[1425]: time="2025-07-11T00:21:44.760246880Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jul 11 00:21:44.760298 containerd[1425]: time="2025-07-11T00:21:44.760276400Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jul 11 00:21:44.760478 containerd[1425]: time="2025-07-11T00:21:44.760455720Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jul 11 00:21:44.760517 containerd[1425]: time="2025-07-11T00:21:44.760482080Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jul 11 00:21:44.760517 containerd[1425]: time="2025-07-11T00:21:44.760493360Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jul 11 00:21:44.760517 containerd[1425]: time="2025-07-11T00:21:44.760506240Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jul 11 00:21:44.760517 containerd[1425]: time="2025-07-11T00:21:44.760516080Z" level=info msg="NRI interface is disabled by configuration." Jul 11 00:21:44.760588 containerd[1425]: time="2025-07-11T00:21:44.760528120Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Jul 11 00:21:44.761270 containerd[1425]: time="2025-07-11T00:21:44.761154640Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jul 11 00:21:44.761391 containerd[1425]: time="2025-07-11T00:21:44.761277400Z" level=info msg="Connect containerd service" Jul 11 00:21:44.761391 containerd[1425]: time="2025-07-11T00:21:44.761309440Z" level=info msg="using legacy CRI server" Jul 11 00:21:44.761391 containerd[1425]: time="2025-07-11T00:21:44.761317200Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jul 11 00:21:44.761473 containerd[1425]: time="2025-07-11T00:21:44.761448280Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jul 11 00:21:44.762588 containerd[1425]: time="2025-07-11T00:21:44.762482920Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 11 00:21:44.762779 
containerd[1425]: time="2025-07-11T00:21:44.762745920Z" level=info msg="Start subscribing containerd event" Jul 11 00:21:44.762814 containerd[1425]: time="2025-07-11T00:21:44.762798440Z" level=info msg="Start recovering state" Jul 11 00:21:44.763171 containerd[1425]: time="2025-07-11T00:21:44.762871880Z" level=info msg="Start event monitor" Jul 11 00:21:44.763171 containerd[1425]: time="2025-07-11T00:21:44.762883760Z" level=info msg="Start snapshots syncer" Jul 11 00:21:44.763171 containerd[1425]: time="2025-07-11T00:21:44.762892320Z" level=info msg="Start cni network conf syncer for default" Jul 11 00:21:44.763171 containerd[1425]: time="2025-07-11T00:21:44.762900240Z" level=info msg="Start streaming server" Jul 11 00:21:44.763662 containerd[1425]: time="2025-07-11T00:21:44.763634720Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jul 11 00:21:44.763780 containerd[1425]: time="2025-07-11T00:21:44.763761680Z" level=info msg=serving... address=/run/containerd/containerd.sock Jul 11 00:21:44.763929 systemd[1]: Started containerd.service - containerd container runtime. Jul 11 00:21:44.765539 containerd[1425]: time="2025-07-11T00:21:44.765497080Z" level=info msg="containerd successfully booted in 0.040160s" Jul 11 00:21:44.938984 tar[1423]: linux-arm64/README.md Jul 11 00:21:44.943173 sshd_keygen[1414]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jul 11 00:21:44.950536 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jul 11 00:21:44.965233 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jul 11 00:21:44.974332 systemd[1]: Starting issuegen.service - Generate /run/issue... Jul 11 00:21:44.979446 systemd[1]: issuegen.service: Deactivated successfully. Jul 11 00:21:44.979632 systemd[1]: Finished issuegen.service - Generate /run/issue. Jul 11 00:21:44.981971 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jul 11 00:21:44.993276 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jul 11 00:21:45.003434 systemd[1]: Started getty@tty1.service - Getty on tty1. Jul 11 00:21:45.005376 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Jul 11 00:21:45.006421 systemd[1]: Reached target getty.target - Login Prompts. Jul 11 00:21:45.522323 systemd-networkd[1363]: eth0: Gained IPv6LL Jul 11 00:21:45.525374 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jul 11 00:21:45.527276 systemd[1]: Reached target network-online.target - Network is Online. Jul 11 00:21:45.539302 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Jul 11 00:21:45.541660 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 11 00:21:45.543577 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jul 11 00:21:45.559914 systemd[1]: coreos-metadata.service: Deactivated successfully. Jul 11 00:21:45.560172 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Jul 11 00:21:45.562125 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jul 11 00:21:45.563899 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jul 11 00:21:46.102452 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 11 00:21:46.103731 systemd[1]: Reached target multi-user.target - Multi-User System. 
Jul 11 00:21:46.104762 systemd[1]: Startup finished in 562ms (kernel) + 5.095s (initrd) + 3.340s (userspace) = 8.998s. Jul 11 00:21:46.106123 (kubelet)[1512]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 11 00:21:46.543142 kubelet[1512]: E0711 00:21:46.542975 1512 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 11 00:21:46.545354 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 11 00:21:46.545504 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 11 00:21:50.011751 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jul 11 00:21:50.012871 systemd[1]: Started sshd@0-10.0.0.102:22-10.0.0.1:54418.service - OpenSSH per-connection server daemon (10.0.0.1:54418). Jul 11 00:21:50.061403 sshd[1525]: Accepted publickey for core from 10.0.0.1 port 54418 ssh2: RSA SHA256:GK2LEBRiSxxQSb7NJczWsRz9vp5Z0addujXbSKx/c/M Jul 11 00:21:50.063336 sshd[1525]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 00:21:50.076906 systemd-logind[1406]: New session 1 of user core. Jul 11 00:21:50.077924 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jul 11 00:21:50.086291 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jul 11 00:21:50.095431 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jul 11 00:21:50.097523 systemd[1]: Starting user@500.service - User Manager for UID 500... Jul 11 00:21:50.103793 (systemd)[1529]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jul 11 00:21:50.178119 systemd[1529]: Queued start job for default target default.target. Jul 11 00:21:50.193040 systemd[1529]: Created slice app.slice - User Application Slice. Jul 11 00:21:50.193089 systemd[1529]: Reached target paths.target - Paths. Jul 11 00:21:50.193102 systemd[1529]: Reached target timers.target - Timers. Jul 11 00:21:50.194364 systemd[1529]: Starting dbus.socket - D-Bus User Message Bus Socket... Jul 11 00:21:50.204713 systemd[1529]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jul 11 00:21:50.204785 systemd[1529]: Reached target sockets.target - Sockets. Jul 11 00:21:50.204798 systemd[1529]: Reached target basic.target - Basic System. Jul 11 00:21:50.204837 systemd[1529]: Reached target default.target - Main User Target. Jul 11 00:21:50.204867 systemd[1529]: Startup finished in 95ms. Jul 11 00:21:50.205158 systemd[1]: Started user@500.service - User Manager for UID 500. Jul 11 00:21:50.206470 systemd[1]: Started session-1.scope - Session 1 of User core. Jul 11 00:21:50.266782 systemd[1]: Started sshd@1-10.0.0.102:22-10.0.0.1:54426.service - OpenSSH per-connection server daemon (10.0.0.1:54426). Jul 11 00:21:50.306322 sshd[1540]: Accepted publickey for core from 10.0.0.1 port 54426 ssh2: RSA SHA256:GK2LEBRiSxxQSb7NJczWsRz9vp5Z0addujXbSKx/c/M Jul 11 00:21:50.307676 sshd[1540]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 00:21:50.311985 systemd-logind[1406]: New session 2 of user core. Jul 11 00:21:50.329247 systemd[1]: Started session-2.scope - Session 2 of User core. 
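The kubelet exits with status 1 because /var/lib/kubelet/config.yaml does not exist yet. On a kubeadm-provisioned node that file is written by kubeadm init or kubeadm join, so this failure (and the restart loop that follows later in the log) is expected until one of those runs. For bootstrapping without kubeadm, a minimal hand-written sketch follows; every value in it is an illustrative assumption.

```sh
cat <<'EOF' > /var/lib/kubelet/config.yaml
# Hedged sketch of a minimal KubeletConfiguration; kubeadm normally
# generates this file, and the values below are assumptions.
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd            # matches SystemdCgroup:true in the runc options above
staticPodPath: /etc/kubernetes/manifests
authentication:
  anonymous:
    enabled: false
EOF
systemctl restart kubelet
```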
Jul 11 00:21:50.382013 sshd[1540]: pam_unix(sshd:session): session closed for user core Jul 11 00:21:50.391547 systemd[1]: sshd@1-10.0.0.102:22-10.0.0.1:54426.service: Deactivated successfully. Jul 11 00:21:50.393104 systemd[1]: session-2.scope: Deactivated successfully. Jul 11 00:21:50.395103 systemd-logind[1406]: Session 2 logged out. Waiting for processes to exit. Jul 11 00:21:50.408832 systemd[1]: Started sshd@2-10.0.0.102:22-10.0.0.1:54442.service - OpenSSH per-connection server daemon (10.0.0.1:54442). Jul 11 00:21:50.409795 systemd-logind[1406]: Removed session 2. Jul 11 00:21:50.440711 sshd[1547]: Accepted publickey for core from 10.0.0.1 port 54442 ssh2: RSA SHA256:GK2LEBRiSxxQSb7NJczWsRz9vp5Z0addujXbSKx/c/M Jul 11 00:21:50.441877 sshd[1547]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 00:21:50.445897 systemd-logind[1406]: New session 3 of user core. Jul 11 00:21:50.461215 systemd[1]: Started session-3.scope - Session 3 of User core. Jul 11 00:21:50.509426 sshd[1547]: pam_unix(sshd:session): session closed for user core Jul 11 00:21:50.521535 systemd[1]: sshd@2-10.0.0.102:22-10.0.0.1:54442.service: Deactivated successfully. Jul 11 00:21:50.524079 systemd[1]: session-3.scope: Deactivated successfully. Jul 11 00:21:50.525239 systemd-logind[1406]: Session 3 logged out. Waiting for processes to exit. Jul 11 00:21:50.526258 systemd[1]: Started sshd@3-10.0.0.102:22-10.0.0.1:54458.service - OpenSSH per-connection server daemon (10.0.0.1:54458). Jul 11 00:21:50.527042 systemd-logind[1406]: Removed session 3. Jul 11 00:21:50.561847 sshd[1554]: Accepted publickey for core from 10.0.0.1 port 54458 ssh2: RSA SHA256:GK2LEBRiSxxQSb7NJczWsRz9vp5Z0addujXbSKx/c/M Jul 11 00:21:50.563204 sshd[1554]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 00:21:50.567111 systemd-logind[1406]: New session 4 of user core. Jul 11 00:21:50.575214 systemd[1]: Started session-4.scope - Session 4 of User core. Jul 11 00:21:50.626856 sshd[1554]: pam_unix(sshd:session): session closed for user core Jul 11 00:21:50.635649 systemd[1]: sshd@3-10.0.0.102:22-10.0.0.1:54458.service: Deactivated successfully. Jul 11 00:21:50.637119 systemd[1]: session-4.scope: Deactivated successfully. Jul 11 00:21:50.640407 systemd-logind[1406]: Session 4 logged out. Waiting for processes to exit. Jul 11 00:21:50.649317 systemd[1]: Started sshd@4-10.0.0.102:22-10.0.0.1:54474.service - OpenSSH per-connection server daemon (10.0.0.1:54474). Jul 11 00:21:50.650448 systemd-logind[1406]: Removed session 4. Jul 11 00:21:50.682363 sshd[1561]: Accepted publickey for core from 10.0.0.1 port 54474 ssh2: RSA SHA256:GK2LEBRiSxxQSb7NJczWsRz9vp5Z0addujXbSKx/c/M Jul 11 00:21:50.683568 sshd[1561]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 00:21:50.687192 systemd-logind[1406]: New session 5 of user core. Jul 11 00:21:50.699213 systemd[1]: Started session-5.scope - Session 5 of User core. Jul 11 00:21:50.755000 sudo[1564]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jul 11 00:21:50.755314 sudo[1564]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 11 00:21:50.768951 sudo[1564]: pam_unix(sudo:session): session closed for user root Jul 11 00:21:50.770914 sshd[1561]: pam_unix(sshd:session): session closed for user core Jul 11 00:21:50.781643 systemd[1]: sshd@4-10.0.0.102:22-10.0.0.1:54474.service: Deactivated successfully. 
Jul 11 00:21:50.783974 systemd[1]: session-5.scope: Deactivated successfully. Jul 11 00:21:50.786892 systemd-logind[1406]: Session 5 logged out. Waiting for processes to exit. Jul 11 00:21:50.787210 systemd[1]: Started sshd@5-10.0.0.102:22-10.0.0.1:54480.service - OpenSSH per-connection server daemon (10.0.0.1:54480). Jul 11 00:21:50.788376 systemd-logind[1406]: Removed session 5. Jul 11 00:21:50.823560 sshd[1569]: Accepted publickey for core from 10.0.0.1 port 54480 ssh2: RSA SHA256:GK2LEBRiSxxQSb7NJczWsRz9vp5Z0addujXbSKx/c/M Jul 11 00:21:50.824952 sshd[1569]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 00:21:50.828883 systemd-logind[1406]: New session 6 of user core. Jul 11 00:21:50.838196 systemd[1]: Started session-6.scope - Session 6 of User core. Jul 11 00:21:50.891664 sudo[1573]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jul 11 00:21:50.891935 sudo[1573]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 11 00:21:50.894857 sudo[1573]: pam_unix(sudo:session): session closed for user root Jul 11 00:21:50.899805 sudo[1572]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Jul 11 00:21:50.900137 sudo[1572]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 11 00:21:50.921350 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Jul 11 00:21:50.922788 auditctl[1576]: No rules Jul 11 00:21:50.923765 systemd[1]: audit-rules.service: Deactivated successfully. Jul 11 00:21:50.923985 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Jul 11 00:21:50.925727 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jul 11 00:21:50.951299 augenrules[1595]: No rules Jul 11 00:21:50.952747 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jul 11 00:21:50.953930 sudo[1572]: pam_unix(sudo:session): session closed for user root Jul 11 00:21:50.956791 sshd[1569]: pam_unix(sshd:session): session closed for user core Jul 11 00:21:50.966627 systemd[1]: sshd@5-10.0.0.102:22-10.0.0.1:54480.service: Deactivated successfully. Jul 11 00:21:50.969348 systemd[1]: session-6.scope: Deactivated successfully. Jul 11 00:21:50.970830 systemd-logind[1406]: Session 6 logged out. Waiting for processes to exit. Jul 11 00:21:50.975420 systemd[1]: Started sshd@6-10.0.0.102:22-10.0.0.1:54488.service - OpenSSH per-connection server daemon (10.0.0.1:54488). Jul 11 00:21:50.976226 systemd-logind[1406]: Removed session 6. Jul 11 00:21:51.008509 sshd[1603]: Accepted publickey for core from 10.0.0.1 port 54488 ssh2: RSA SHA256:GK2LEBRiSxxQSb7NJczWsRz9vp5Z0addujXbSKx/c/M Jul 11 00:21:51.010077 sshd[1603]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 00:21:51.013844 systemd-logind[1406]: New session 7 of user core. Jul 11 00:21:51.021241 systemd[1]: Started session-7.scope - Session 7 of User core. Jul 11 00:21:51.071935 sudo[1606]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jul 11 00:21:51.072844 sudo[1606]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 11 00:21:51.388318 systemd[1]: Starting docker.service - Docker Application Container Engine... 
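The sudo session above deletes /etc/audit/rules.d/80-selinux.rules and 99-default.rules and restarts audit-rules, so both auditctl and augenrules report "No rules". For reference, a rule set would be reinstated as sketched below; the watch rule itself is an illustrative assumption.

```sh
# augenrules concatenates /etc/audit/rules.d/*.rules and loads the result,
# which is exactly what the audit-rules restart above triggers.
cat <<'EOF' > /etc/audit/rules.d/10-example.rules
-w /etc/kubernetes/ -p wa -k kube-config
EOF
augenrules --load
auditctl -l    # now lists the rule instead of "No rules"
```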
Jul 11 00:21:51.388409 (dockerd)[1624]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jul 11 00:21:51.665356 dockerd[1624]: time="2025-07-11T00:21:51.665219807Z" level=info msg="Starting up" Jul 11 00:21:51.803795 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport3908268292-merged.mount: Deactivated successfully. Jul 11 00:21:51.825480 dockerd[1624]: time="2025-07-11T00:21:51.825434364Z" level=info msg="Loading containers: start." Jul 11 00:21:51.913120 kernel: Initializing XFRM netlink socket Jul 11 00:21:51.975075 systemd-networkd[1363]: docker0: Link UP Jul 11 00:21:51.992472 dockerd[1624]: time="2025-07-11T00:21:51.992374993Z" level=info msg="Loading containers: done." Jul 11 00:21:52.010094 dockerd[1624]: time="2025-07-11T00:21:52.009975431Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jul 11 00:21:52.010238 dockerd[1624]: time="2025-07-11T00:21:52.010103648Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Jul 11 00:21:52.010238 dockerd[1624]: time="2025-07-11T00:21:52.010207690Z" level=info msg="Daemon has completed initialization" Jul 11 00:21:52.041424 dockerd[1624]: time="2025-07-11T00:21:52.041282136Z" level=info msg="API listen on /run/docker.sock" Jul 11 00:21:52.041676 systemd[1]: Started docker.service - Docker Application Container Engine. Jul 11 00:21:52.577562 containerd[1425]: time="2025-07-11T00:21:52.577513023Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.6\"" Jul 11 00:21:52.801732 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3241057356-merged.mount: Deactivated successfully. Jul 11 00:21:53.160503 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount588640503.mount: Deactivated successfully. 
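dockerd starts with a note that DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ and DOCKER_OPT_MTU are referenced but unset: the unit expands those variables into its ExecStart line, and undefined names collapse to empty strings, which is harmless. If daemon flags were wanted through that mechanism, a systemd drop-in is the usual route; the flag value below is an assumption for illustration.

```sh
mkdir -p /etc/systemd/system/docker.service.d
cat <<'EOF' > /etc/systemd/system/docker.service.d/10-opts.conf
[Service]
# DOCKER_OPTS is one of the names the shipped unit already references;
# the value here is purely illustrative.
Environment="DOCKER_OPTS=--log-level=warn"
EOF
systemctl daemon-reload
systemctl restart docker
```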
Jul 11 00:21:53.897930 containerd[1425]: time="2025-07-11T00:21:53.897881030Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:21:53.898850 containerd[1425]: time="2025-07-11T00:21:53.898426913Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.6: active requests=0, bytes read=26328196" Jul 11 00:21:53.899701 containerd[1425]: time="2025-07-11T00:21:53.899666293Z" level=info msg="ImageCreate event name:\"sha256:4ee56e04a4dd8fbc5a022e324327ae1f9b19bdaab8a79644d85d29b70d28e87a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:21:53.902469 containerd[1425]: time="2025-07-11T00:21:53.902434377Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:0f5764551d7de4ef70489ff8a70f32df7dea00701f5545af089b60bc5ede4f6f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:21:53.903776 containerd[1425]: time="2025-07-11T00:21:53.903745909Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.6\" with image id \"sha256:4ee56e04a4dd8fbc5a022e324327ae1f9b19bdaab8a79644d85d29b70d28e87a\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.6\", repo digest \"registry.k8s.io/kube-apiserver@sha256:0f5764551d7de4ef70489ff8a70f32df7dea00701f5545af089b60bc5ede4f6f\", size \"26324994\" in 1.326186407s" Jul 11 00:21:53.903776 containerd[1425]: time="2025-07-11T00:21:53.903776916Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.6\" returns image reference \"sha256:4ee56e04a4dd8fbc5a022e324327ae1f9b19bdaab8a79644d85d29b70d28e87a\"" Jul 11 00:21:53.904433 containerd[1425]: time="2025-07-11T00:21:53.904395310Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.6\"" Jul 11 00:21:54.861242 containerd[1425]: time="2025-07-11T00:21:54.861191955Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:21:54.861767 containerd[1425]: time="2025-07-11T00:21:54.861735203Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.6: active requests=0, bytes read=22529230" Jul 11 00:21:54.862657 containerd[1425]: time="2025-07-11T00:21:54.862634283Z" level=info msg="ImageCreate event name:\"sha256:3451c4b5bd601398c65e0579f1b720df4e0edde78f7f38e142f2b0be5e9bd038\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:21:54.865415 containerd[1425]: time="2025-07-11T00:21:54.865383417Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:3425f29c94a77d74cb89f38413e6274277dcf5e2bc7ab6ae953578a91e9e8356\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:21:54.866588 containerd[1425]: time="2025-07-11T00:21:54.866559091Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.6\" with image id \"sha256:3451c4b5bd601398c65e0579f1b720df4e0edde78f7f38e142f2b0be5e9bd038\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.6\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:3425f29c94a77d74cb89f38413e6274277dcf5e2bc7ab6ae953578a91e9e8356\", size \"24065018\" in 962.131335ms" Jul 11 00:21:54.866671 containerd[1425]: time="2025-07-11T00:21:54.866589229Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.6\" returns image reference \"sha256:3451c4b5bd601398c65e0579f1b720df4e0edde78f7f38e142f2b0be5e9bd038\"" Jul 11 00:21:54.867286 
containerd[1425]: time="2025-07-11T00:21:54.867199460Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.6\"" Jul 11 00:21:55.794726 containerd[1425]: time="2025-07-11T00:21:55.794476315Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:21:55.795647 containerd[1425]: time="2025-07-11T00:21:55.795408007Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.6: active requests=0, bytes read=17484143" Jul 11 00:21:55.796498 containerd[1425]: time="2025-07-11T00:21:55.796435169Z" level=info msg="ImageCreate event name:\"sha256:3d72026a3748f31411df93e4aaa9c67944b7e0cc311c11eba2aae5e615213d5f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:21:55.799430 containerd[1425]: time="2025-07-11T00:21:55.799377182Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:130f633cbd1d70e2f4655350153cb3fc469f4d5a6310b4f0b49d93fb2ba2132b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:21:55.800559 containerd[1425]: time="2025-07-11T00:21:55.800531715Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.6\" with image id \"sha256:3d72026a3748f31411df93e4aaa9c67944b7e0cc311c11eba2aae5e615213d5f\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.6\", repo digest \"registry.k8s.io/kube-scheduler@sha256:130f633cbd1d70e2f4655350153cb3fc469f4d5a6310b4f0b49d93fb2ba2132b\", size \"19019949\" in 933.278761ms" Jul 11 00:21:55.800938 containerd[1425]: time="2025-07-11T00:21:55.800637485Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.6\" returns image reference \"sha256:3d72026a3748f31411df93e4aaa9c67944b7e0cc311c11eba2aae5e615213d5f\"" Jul 11 00:21:55.801140 containerd[1425]: time="2025-07-11T00:21:55.801084645Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.6\"" Jul 11 00:21:56.677298 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3691396542.mount: Deactivated successfully. Jul 11 00:21:56.678229 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jul 11 00:21:56.687282 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 11 00:21:56.789470 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 11 00:21:56.793679 (kubelet)[1854]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 11 00:21:56.852811 kubelet[1854]: E0711 00:21:56.852699 1854 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 11 00:21:56.856032 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 11 00:21:56.856208 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Jul 11 00:21:57.138196 containerd[1425]: time="2025-07-11T00:21:57.137814707Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:21:57.139412 containerd[1425]: time="2025-07-11T00:21:57.139368898Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.6: active requests=0, bytes read=27378408" Jul 11 00:21:57.140404 containerd[1425]: time="2025-07-11T00:21:57.140351312Z" level=info msg="ImageCreate event name:\"sha256:e29293ef7b817bb7b03ce7484edafe6ca0a7087e54074e7d7dcd3bd3c762eee9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:21:57.142505 containerd[1425]: time="2025-07-11T00:21:57.142453512Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:b13d9da413b983d130bf090b83fce12e1ccc704e95f366da743c18e964d9d7e9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:21:57.143330 containerd[1425]: time="2025-07-11T00:21:57.143298993Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.6\" with image id \"sha256:e29293ef7b817bb7b03ce7484edafe6ca0a7087e54074e7d7dcd3bd3c762eee9\", repo tag \"registry.k8s.io/kube-proxy:v1.32.6\", repo digest \"registry.k8s.io/kube-proxy@sha256:b13d9da413b983d130bf090b83fce12e1ccc704e95f366da743c18e964d9d7e9\", size \"27377425\" in 1.34217701s" Jul 11 00:21:57.143402 containerd[1425]: time="2025-07-11T00:21:57.143333865Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.6\" returns image reference \"sha256:e29293ef7b817bb7b03ce7484edafe6ca0a7087e54074e7d7dcd3bd3c762eee9\"" Jul 11 00:21:57.143955 containerd[1425]: time="2025-07-11T00:21:57.143923218Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Jul 11 00:21:57.656498 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2708407494.mount: Deactivated successfully. 
Jul 11 00:21:58.276513 containerd[1425]: time="2025-07-11T00:21:58.276456471Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:21:58.277473 containerd[1425]: time="2025-07-11T00:21:58.277227827Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=16951624" Jul 11 00:21:58.278228 containerd[1425]: time="2025-07-11T00:21:58.278194788Z" level=info msg="ImageCreate event name:\"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:21:58.281706 containerd[1425]: time="2025-07-11T00:21:58.281652084Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:21:58.283030 containerd[1425]: time="2025-07-11T00:21:58.282882329Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"16948420\" in 1.138917886s" Jul 11 00:21:58.283030 containerd[1425]: time="2025-07-11T00:21:58.282916448Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\"" Jul 11 00:21:58.283517 containerd[1425]: time="2025-07-11T00:21:58.283338622Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jul 11 00:21:58.718647 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3236795746.mount: Deactivated successfully. 
Jul 11 00:21:58.721779 containerd[1425]: time="2025-07-11T00:21:58.721739193Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:21:58.722445 containerd[1425]: time="2025-07-11T00:21:58.722402637Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268705" Jul 11 00:21:58.723074 containerd[1425]: time="2025-07-11T00:21:58.723037516Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:21:58.725843 containerd[1425]: time="2025-07-11T00:21:58.725786142Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:21:58.726482 containerd[1425]: time="2025-07-11T00:21:58.726402883Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 443.033417ms" Jul 11 00:21:58.726482 containerd[1425]: time="2025-07-11T00:21:58.726432647Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Jul 11 00:21:58.726873 containerd[1425]: time="2025-07-11T00:21:58.726846231Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Jul 11 00:21:59.206221 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4051214469.mount: Deactivated successfully. Jul 11 00:22:00.501117 containerd[1425]: time="2025-07-11T00:22:00.500874900Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:22:00.501996 containerd[1425]: time="2025-07-11T00:22:00.501732114Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=67812471" Jul 11 00:22:00.502692 containerd[1425]: time="2025-07-11T00:22:00.502661221Z" level=info msg="ImageCreate event name:\"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:22:00.506001 containerd[1425]: time="2025-07-11T00:22:00.505939973Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:22:00.507369 containerd[1425]: time="2025-07-11T00:22:00.507323464Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"67941650\" in 1.78044311s" Jul 11 00:22:00.507369 containerd[1425]: time="2025-07-11T00:22:00.507364786Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\"" Jul 11 00:22:06.091971 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. 
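The pulls above (kube-apiserver v1.32.6 through etcd 3.5.16-0, roughly 26 MB to 68 MB each) all go through containerd's CRI image service. The same service can be driven from a shell to pre-warm or verify images, assuming crictl is installed; the endpoint matches the containerd socket shown earlier in this log.

```sh
# Hedged sketch: exercise the same CRI image service the log shows pulling.
export CONTAINER_RUNTIME_ENDPOINT=unix:///run/containerd/containerd.sock
crictl pull registry.k8s.io/kube-apiserver:v1.32.6   # same image the log pulls
crictl images | grep registry.k8s.io                 # lists what was cached
```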
Jul 11 00:22:06.103263 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 11 00:22:06.124243 systemd[1]: Reloading requested from client PID 2003 ('systemctl') (unit session-7.scope)... Jul 11 00:22:06.124390 systemd[1]: Reloading... Jul 11 00:22:06.189259 zram_generator::config[2042]: No configuration found. Jul 11 00:22:06.311632 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 11 00:22:06.365209 systemd[1]: Reloading finished in 240 ms. Jul 11 00:22:06.404046 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jul 11 00:22:06.407557 systemd[1]: kubelet.service: Deactivated successfully. Jul 11 00:22:06.407739 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 11 00:22:06.410234 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 11 00:22:06.513225 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 11 00:22:06.517919 (kubelet)[2089]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 11 00:22:06.550950 kubelet[2089]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 11 00:22:06.550950 kubelet[2089]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jul 11 00:22:06.550950 kubelet[2089]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
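The restarted kubelet warns that --container-runtime-endpoint and --volume-plugin-dir are deprecated and belong in the file named by --config, and that --pod-infra-container-image will be removed in 1.35 (per the same warning, the sandbox image will come from the CRI runtime instead, so it has no config-file equivalent). A hedged sketch of the config-file form of the first two follows; the socket and plugin directory are taken from elsewhere in this log, so verify them before use.

```sh
cat <<'EOF' >> /var/lib/kubelet/config.yaml
# Hedged sketch: KubeletConfiguration equivalents of two deprecated flags.
containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
volumePluginDir: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/
EOF
```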
Jul 11 00:22:06.551345 kubelet[2089]: I0711 00:22:06.551037 2089 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 11 00:22:07.165313 kubelet[2089]: I0711 00:22:07.165267 2089 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Jul 11 00:22:07.165313 kubelet[2089]: I0711 00:22:07.165303 2089 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 11 00:22:07.166612 kubelet[2089]: I0711 00:22:07.165999 2089 server.go:954] "Client rotation is on, will bootstrap in background" Jul 11 00:22:07.206546 kubelet[2089]: E0711 00:22:07.206499 2089 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.102:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.102:6443: connect: connection refused" logger="UnhandledError" Jul 11 00:22:07.210098 kubelet[2089]: I0711 00:22:07.208317 2089 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 11 00:22:07.212683 kubelet[2089]: E0711 00:22:07.212650 2089 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jul 11 00:22:07.212683 kubelet[2089]: I0711 00:22:07.212681 2089 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jul 11 00:22:07.215388 kubelet[2089]: I0711 00:22:07.215344 2089 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jul 11 00:22:07.216016 kubelet[2089]: I0711 00:22:07.215952 2089 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 11 00:22:07.216240 kubelet[2089]: I0711 00:22:07.216011 2089 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jul 11 00:22:07.216327 kubelet[2089]: I0711 00:22:07.216301 2089 topology_manager.go:138] "Creating topology manager with none policy" Jul 11 00:22:07.216327 kubelet[2089]: I0711 00:22:07.216312 2089 container_manager_linux.go:304] "Creating device plugin manager" Jul 11 00:22:07.216520 kubelet[2089]: I0711 00:22:07.216504 2089 state_mem.go:36] "Initialized new in-memory state store" Jul 11 00:22:07.220750 kubelet[2089]: I0711 00:22:07.220717 2089 kubelet.go:446] "Attempting to sync node with API server" Jul 11 00:22:07.220750 kubelet[2089]: I0711 00:22:07.220742 2089 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 11 00:22:07.220857 kubelet[2089]: I0711 00:22:07.220768 2089 kubelet.go:352] "Adding apiserver pod source" Jul 11 00:22:07.220857 kubelet[2089]: I0711 00:22:07.220778 2089 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 11 00:22:07.228047 kubelet[2089]: W0711 00:22:07.227918 2089 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.102:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.102:6443: connect: connection refused Jul 11 00:22:07.228047 kubelet[2089]: E0711 00:22:07.227993 2089 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.102:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.102:6443: connect: connection refused" logger="UnhandledError" Jul 11 00:22:07.228236 kubelet[2089]: I0711 00:22:07.228176 2089 
kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jul 11 00:22:07.228562 kubelet[2089]: W0711 00:22:07.228503 2089 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.102:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.102:6443: connect: connection refused Jul 11 00:22:07.228590 kubelet[2089]: E0711 00:22:07.228557 2089 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.102:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.102:6443: connect: connection refused" logger="UnhandledError" Jul 11 00:22:07.229107 kubelet[2089]: I0711 00:22:07.229063 2089 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jul 11 00:22:07.229288 kubelet[2089]: W0711 00:22:07.229256 2089 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jul 11 00:22:07.230689 kubelet[2089]: I0711 00:22:07.230657 2089 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jul 11 00:22:07.230759 kubelet[2089]: I0711 00:22:07.230734 2089 server.go:1287] "Started kubelet" Jul 11 00:22:07.233575 kubelet[2089]: I0711 00:22:07.232442 2089 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 11 00:22:07.233575 kubelet[2089]: I0711 00:22:07.232761 2089 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 11 00:22:07.233575 kubelet[2089]: I0711 00:22:07.232829 2089 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jul 11 00:22:07.233575 kubelet[2089]: I0711 00:22:07.232964 2089 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 11 00:22:07.234105 kubelet[2089]: I0711 00:22:07.234085 2089 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 11 00:22:07.234278 kubelet[2089]: I0711 00:22:07.234266 2089 volume_manager.go:297] "Starting Kubelet Volume Manager" Jul 11 00:22:07.235818 kubelet[2089]: E0711 00:22:07.235793 2089 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 11 00:22:07.235891 kubelet[2089]: I0711 00:22:07.235878 2089 server.go:479] "Adding debug handlers to kubelet server" Jul 11 00:22:07.236435 kubelet[2089]: I0711 00:22:07.236420 2089 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jul 11 00:22:07.236481 kubelet[2089]: I0711 00:22:07.236468 2089 reconciler.go:26] "Reconciler: start to sync state" Jul 11 00:22:07.240360 kubelet[2089]: W0711 00:22:07.239359 2089 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.102:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.102:6443: connect: connection refused Jul 11 00:22:07.240360 kubelet[2089]: E0711 00:22:07.239418 2089 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.102:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.102:6443: connect: 
connection refused" logger="UnhandledError" Jul 11 00:22:07.240360 kubelet[2089]: E0711 00:22:07.239495 2089 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.102:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.102:6443: connect: connection refused" interval="200ms" Jul 11 00:22:07.241724 kubelet[2089]: I0711 00:22:07.241422 2089 factory.go:221] Registration of the systemd container factory successfully Jul 11 00:22:07.241724 kubelet[2089]: I0711 00:22:07.241540 2089 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 11 00:22:07.246707 kubelet[2089]: I0711 00:22:07.245785 2089 factory.go:221] Registration of the containerd container factory successfully Jul 11 00:22:07.247502 kubelet[2089]: E0711 00:22:07.247073 2089 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.102:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.102:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18510a8c2f64f372 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-07-11 00:22:07.230677874 +0000 UTC m=+0.709365054,LastTimestamp:2025-07-11 00:22:07.230677874 +0000 UTC m=+0.709365054,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jul 11 00:22:07.252513 kubelet[2089]: I0711 00:22:07.252455 2089 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jul 11 00:22:07.253884 kubelet[2089]: I0711 00:22:07.253854 2089 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jul 11 00:22:07.253884 kubelet[2089]: I0711 00:22:07.253875 2089 status_manager.go:227] "Starting to sync pod status with apiserver" Jul 11 00:22:07.253976 kubelet[2089]: I0711 00:22:07.253910 2089 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Jul 11 00:22:07.253976 kubelet[2089]: I0711 00:22:07.253918 2089 kubelet.go:2382] "Starting kubelet main sync loop" Jul 11 00:22:07.253976 kubelet[2089]: E0711 00:22:07.253963 2089 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 11 00:22:07.258927 kubelet[2089]: W0711 00:22:07.258873 2089 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.102:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.102:6443: connect: connection refused Jul 11 00:22:07.258993 kubelet[2089]: E0711 00:22:07.258938 2089 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.102:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.102:6443: connect: connection refused" logger="UnhandledError" Jul 11 00:22:07.259964 kubelet[2089]: I0711 00:22:07.259933 2089 cpu_manager.go:221] "Starting CPU manager" policy="none" Jul 11 00:22:07.259964 kubelet[2089]: I0711 00:22:07.259961 2089 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jul 11 00:22:07.260064 kubelet[2089]: I0711 00:22:07.259979 2089 state_mem.go:36] "Initialized new in-memory state store" Jul 11 00:22:07.337019 kubelet[2089]: E0711 00:22:07.336961 2089 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 11 00:22:07.354205 kubelet[2089]: E0711 00:22:07.354167 2089 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jul 11 00:22:07.382800 kubelet[2089]: E0711 00:22:07.382700 2089 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.102:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.102:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18510a8c2f64f372 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-07-11 00:22:07.230677874 +0000 UTC m=+0.709365054,LastTimestamp:2025-07-11 00:22:07.230677874 +0000 UTC m=+0.709365054,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jul 11 00:22:07.426935 kubelet[2089]: I0711 00:22:07.426827 2089 policy_none.go:49] "None policy: Start" Jul 11 00:22:07.426935 kubelet[2089]: I0711 00:22:07.426860 2089 memory_manager.go:186] "Starting memorymanager" policy="None" Jul 11 00:22:07.426935 kubelet[2089]: I0711 00:22:07.426895 2089 state_mem.go:35] "Initializing new in-memory state store" Jul 11 00:22:07.433324 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. 
Jul 11 00:22:07.437958 kubelet[2089]: E0711 00:22:07.437919 2089 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 11 00:22:07.440439 kubelet[2089]: E0711 00:22:07.440408 2089 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.102:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.102:6443: connect: connection refused" interval="400ms" Jul 11 00:22:07.447478 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jul 11 00:22:07.458607 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jul 11 00:22:07.459909 kubelet[2089]: I0711 00:22:07.459725 2089 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 11 00:22:07.459976 kubelet[2089]: I0711 00:22:07.459932 2089 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 11 00:22:07.459976 kubelet[2089]: I0711 00:22:07.459943 2089 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 11 00:22:07.460282 kubelet[2089]: I0711 00:22:07.460258 2089 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 11 00:22:07.461219 kubelet[2089]: E0711 00:22:07.461197 2089 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jul 11 00:22:07.461273 kubelet[2089]: E0711 00:22:07.461259 2089 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jul 11 00:22:07.562922 kubelet[2089]: I0711 00:22:07.562643 2089 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jul 11 00:22:07.563207 kubelet[2089]: E0711 00:22:07.563082 2089 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.102:6443/api/v1/nodes\": dial tcp 10.0.0.102:6443: connect: connection refused" node="localhost" Jul 11 00:22:07.563267 systemd[1]: Created slice kubepods-burstable-pod3750a6888001a00d5c4aa8e9004940f7.slice - libcontainer container kubepods-burstable-pod3750a6888001a00d5c4aa8e9004940f7.slice. Jul 11 00:22:07.580509 kubelet[2089]: E0711 00:22:07.580402 2089 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 11 00:22:07.583446 systemd[1]: Created slice kubepods-burstable-podd1af03769b64da1b1e8089a7035018fc.slice - libcontainer container kubepods-burstable-podd1af03769b64da1b1e8089a7035018fc.slice. Jul 11 00:22:07.584952 kubelet[2089]: E0711 00:22:07.584929 2089 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 11 00:22:07.587115 systemd[1]: Created slice kubepods-burstable-pod8a75e163f27396b2168da0f88f85f8a5.slice - libcontainer container kubepods-burstable-pod8a75e163f27396b2168da0f88f85f8a5.slice. 
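Every client-go reflector, the lease controller, the event writer, and the node-registration attempts above fail with dial tcp 10.0.0.102:6443: connect: connection refused, and the "node \"localhost\" not found" and mirror-pod errors follow from the same cause: the API server is not up yet, because this kubelet has to start it itself from the static-pod manifests (hence the kubepods-burstable-pod*.slice units being created). During that bootstrap window the errors are expected and retried. A few checks that separate "still bootstrapping" from "actually broken", with the address and paths taken from the log:

```sh
ss -tlnp | grep 6443 || echo "apiserver not listening yet"   # port from the log
curl -k https://10.0.0.102:6443/healthz; echo                # refused until it starts
ls /etc/kubernetes/manifests/                                # static pods the kubelet runs
crictl ps -a | grep -E 'kube-apiserver|etcd'                 # assumes crictl is configured
```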
Jul 11 00:22:07.588812 kubelet[2089]: E0711 00:22:07.588661 2089 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 11 00:22:07.639860 kubelet[2089]: I0711 00:22:07.639799 2089 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost" Jul 11 00:22:07.639860 kubelet[2089]: I0711 00:22:07.639847 2089 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8a75e163f27396b2168da0f88f85f8a5-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"8a75e163f27396b2168da0f88f85f8a5\") " pod="kube-system/kube-scheduler-localhost" Jul 11 00:22:07.640218 kubelet[2089]: I0711 00:22:07.639872 2089 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3750a6888001a00d5c4aa8e9004940f7-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"3750a6888001a00d5c4aa8e9004940f7\") " pod="kube-system/kube-apiserver-localhost" Jul 11 00:22:07.640218 kubelet[2089]: I0711 00:22:07.639911 2089 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3750a6888001a00d5c4aa8e9004940f7-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"3750a6888001a00d5c4aa8e9004940f7\") " pod="kube-system/kube-apiserver-localhost" Jul 11 00:22:07.640218 kubelet[2089]: I0711 00:22:07.639961 2089 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3750a6888001a00d5c4aa8e9004940f7-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"3750a6888001a00d5c4aa8e9004940f7\") " pod="kube-system/kube-apiserver-localhost" Jul 11 00:22:07.640218 kubelet[2089]: I0711 00:22:07.639997 2089 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost" Jul 11 00:22:07.640218 kubelet[2089]: I0711 00:22:07.640016 2089 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost" Jul 11 00:22:07.640325 kubelet[2089]: I0711 00:22:07.640031 2089 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost" Jul 11 00:22:07.640325 kubelet[2089]: I0711 00:22:07.640046 2089 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost" Jul 11 00:22:07.764669 kubelet[2089]: I0711 00:22:07.764556 2089 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jul 11 00:22:07.764986 kubelet[2089]: E0711 00:22:07.764927 2089 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.102:6443/api/v1/nodes\": dial tcp 10.0.0.102:6443: connect: connection refused" node="localhost" Jul 11 00:22:07.841704 kubelet[2089]: E0711 00:22:07.841663 2089 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.102:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.102:6443: connect: connection refused" interval="800ms" Jul 11 00:22:07.880885 kubelet[2089]: E0711 00:22:07.880851 2089 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:22:07.881506 containerd[1425]: time="2025-07-11T00:22:07.881457109Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:3750a6888001a00d5c4aa8e9004940f7,Namespace:kube-system,Attempt:0,}" Jul 11 00:22:07.885813 kubelet[2089]: E0711 00:22:07.885728 2089 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:22:07.886178 containerd[1425]: time="2025-07-11T00:22:07.886137984Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:d1af03769b64da1b1e8089a7035018fc,Namespace:kube-system,Attempt:0,}" Jul 11 00:22:07.889457 kubelet[2089]: E0711 00:22:07.889427 2089 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:22:07.890112 containerd[1425]: time="2025-07-11T00:22:07.889770755Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:8a75e163f27396b2168da0f88f85f8a5,Namespace:kube-system,Attempt:0,}" Jul 11 00:22:08.113687 kubelet[2089]: W0711 00:22:08.113569 2089 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.102:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.102:6443: connect: connection refused Jul 11 00:22:08.113687 kubelet[2089]: E0711 00:22:08.113651 2089 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.102:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.102:6443: connect: connection refused" logger="UnhandledError" Jul 11 00:22:08.150892 kubelet[2089]: W0711 00:22:08.150784 2089 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.102:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.102:6443: connect: connection refused Jul 11 00:22:08.150892 kubelet[2089]: E0711 00:22:08.150851 2089 reflector.go:166] "Unhandled 
Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.102:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.102:6443: connect: connection refused" logger="UnhandledError" Jul 11 00:22:08.166380 kubelet[2089]: I0711 00:22:08.166115 2089 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jul 11 00:22:08.166503 kubelet[2089]: E0711 00:22:08.166432 2089 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.102:6443/api/v1/nodes\": dial tcp 10.0.0.102:6443: connect: connection refused" node="localhost" Jul 11 00:22:08.361889 kubelet[2089]: W0711 00:22:08.361847 2089 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.102:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.102:6443: connect: connection refused Jul 11 00:22:08.361889 kubelet[2089]: E0711 00:22:08.361893 2089 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.102:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.102:6443: connect: connection refused" logger="UnhandledError" Jul 11 00:22:08.386362 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1383198251.mount: Deactivated successfully. Jul 11 00:22:08.391402 containerd[1425]: time="2025-07-11T00:22:08.391357422Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 11 00:22:08.392165 containerd[1425]: time="2025-07-11T00:22:08.392121861Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jul 11 00:22:08.392852 containerd[1425]: time="2025-07-11T00:22:08.392821400Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 11 00:22:08.394124 containerd[1425]: time="2025-07-11T00:22:08.394098358Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jul 11 00:22:08.394601 containerd[1425]: time="2025-07-11T00:22:08.394569730Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 11 00:22:08.395136 containerd[1425]: time="2025-07-11T00:22:08.395071212Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269175" Jul 11 00:22:08.395293 containerd[1425]: time="2025-07-11T00:22:08.395271669Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 11 00:22:08.395982 containerd[1425]: time="2025-07-11T00:22:08.395955373Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 11 00:22:08.397123 containerd[1425]: 
time="2025-07-11T00:22:08.397086577Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 507.250844ms" Jul 11 00:22:08.400515 containerd[1425]: time="2025-07-11T00:22:08.400484506Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 514.27003ms" Jul 11 00:22:08.403048 containerd[1425]: time="2025-07-11T00:22:08.402906663Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 521.367624ms" Jul 11 00:22:08.526258 containerd[1425]: time="2025-07-11T00:22:08.526147837Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 11 00:22:08.526258 containerd[1425]: time="2025-07-11T00:22:08.526196501Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 11 00:22:08.526258 containerd[1425]: time="2025-07-11T00:22:08.526207138Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 11 00:22:08.526441 containerd[1425]: time="2025-07-11T00:22:08.526277636Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 11 00:22:08.526689 containerd[1425]: time="2025-07-11T00:22:08.526623527Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 11 00:22:08.526772 containerd[1425]: time="2025-07-11T00:22:08.526698743Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 11 00:22:08.526772 containerd[1425]: time="2025-07-11T00:22:08.526711579Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 11 00:22:08.526870 containerd[1425]: time="2025-07-11T00:22:08.526818265Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 11 00:22:08.529687 containerd[1425]: time="2025-07-11T00:22:08.529028049Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 11 00:22:08.529687 containerd[1425]: time="2025-07-11T00:22:08.529097587Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 11 00:22:08.529687 containerd[1425]: time="2025-07-11T00:22:08.529117821Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 11 00:22:08.529687 containerd[1425]: time="2025-07-11T00:22:08.529193997Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 11 00:22:08.546235 systemd[1]: Started cri-containerd-0eb996550223387ca4ff32cca463a4ded5cc27ddebdb1bd30255c8f5f74026d3.scope - libcontainer container 0eb996550223387ca4ff32cca463a4ded5cc27ddebdb1bd30255c8f5f74026d3. Jul 11 00:22:08.547710 systemd[1]: Started cri-containerd-490eabd78cfa5f2b1ac67068107b6311eb27dd604a0cce2fa36dca3db3e1af22.scope - libcontainer container 490eabd78cfa5f2b1ac67068107b6311eb27dd604a0cce2fa36dca3db3e1af22. Jul 11 00:22:08.551537 systemd[1]: Started cri-containerd-f8636e1e0580f950be4a620ebdc6db6a49bfa73bcc8605508e3848ed2893676f.scope - libcontainer container f8636e1e0580f950be4a620ebdc6db6a49bfa73bcc8605508e3848ed2893676f. Jul 11 00:22:08.581069 containerd[1425]: time="2025-07-11T00:22:08.580975843Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:8a75e163f27396b2168da0f88f85f8a5,Namespace:kube-system,Attempt:0,} returns sandbox id \"0eb996550223387ca4ff32cca463a4ded5cc27ddebdb1bd30255c8f5f74026d3\"" Jul 11 00:22:08.582077 containerd[1425]: time="2025-07-11T00:22:08.581830414Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:d1af03769b64da1b1e8089a7035018fc,Namespace:kube-system,Attempt:0,} returns sandbox id \"490eabd78cfa5f2b1ac67068107b6311eb27dd604a0cce2fa36dca3db3e1af22\"" Jul 11 00:22:08.582135 kubelet[2089]: E0711 00:22:08.582038 2089 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:22:08.582382 kubelet[2089]: E0711 00:22:08.582323 2089 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:22:08.584614 containerd[1425]: time="2025-07-11T00:22:08.584294398Z" level=info msg="CreateContainer within sandbox \"0eb996550223387ca4ff32cca463a4ded5cc27ddebdb1bd30255c8f5f74026d3\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jul 11 00:22:08.585311 containerd[1425]: time="2025-07-11T00:22:08.585277808Z" level=info msg="CreateContainer within sandbox \"490eabd78cfa5f2b1ac67068107b6311eb27dd604a0cce2fa36dca3db3e1af22\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jul 11 00:22:08.587314 containerd[1425]: time="2025-07-11T00:22:08.587279377Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:3750a6888001a00d5c4aa8e9004940f7,Namespace:kube-system,Attempt:0,} returns sandbox id \"f8636e1e0580f950be4a620ebdc6db6a49bfa73bcc8605508e3848ed2893676f\"" Jul 11 00:22:08.587879 kubelet[2089]: E0711 00:22:08.587851 2089 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:22:08.591556 containerd[1425]: time="2025-07-11T00:22:08.591434148Z" level=info msg="CreateContainer within sandbox \"f8636e1e0580f950be4a620ebdc6db6a49bfa73bcc8605508e3848ed2893676f\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jul 11 00:22:08.603950 containerd[1425]: time="2025-07-11T00:22:08.603900101Z" level=info msg="CreateContainer within sandbox 
\"490eabd78cfa5f2b1ac67068107b6311eb27dd604a0cce2fa36dca3db3e1af22\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"84c784cac07dedc32ef76250a6885efd3d52ae331d1514fa53f1a11f66b555a9\"" Jul 11 00:22:08.604585 containerd[1425]: time="2025-07-11T00:22:08.604544698Z" level=info msg="StartContainer for \"84c784cac07dedc32ef76250a6885efd3d52ae331d1514fa53f1a11f66b555a9\"" Jul 11 00:22:08.607236 containerd[1425]: time="2025-07-11T00:22:08.607106211Z" level=info msg="CreateContainer within sandbox \"0eb996550223387ca4ff32cca463a4ded5cc27ddebdb1bd30255c8f5f74026d3\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"be9a19341c5c91ed492fe96436f60b79f3fc310f16ba58a567b371592bf7597b\"" Jul 11 00:22:08.607607 containerd[1425]: time="2025-07-11T00:22:08.607582021Z" level=info msg="StartContainer for \"be9a19341c5c91ed492fe96436f60b79f3fc310f16ba58a567b371592bf7597b\"" Jul 11 00:22:08.634239 systemd[1]: Started cri-containerd-84c784cac07dedc32ef76250a6885efd3d52ae331d1514fa53f1a11f66b555a9.scope - libcontainer container 84c784cac07dedc32ef76250a6885efd3d52ae331d1514fa53f1a11f66b555a9. Jul 11 00:22:08.636576 systemd[1]: Started cri-containerd-be9a19341c5c91ed492fe96436f60b79f3fc310f16ba58a567b371592bf7597b.scope - libcontainer container be9a19341c5c91ed492fe96436f60b79f3fc310f16ba58a567b371592bf7597b. Jul 11 00:22:08.642529 kubelet[2089]: E0711 00:22:08.642487 2089 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.102:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.102:6443: connect: connection refused" interval="1.6s" Jul 11 00:22:08.661683 containerd[1425]: time="2025-07-11T00:22:08.661585727Z" level=info msg="CreateContainer within sandbox \"f8636e1e0580f950be4a620ebdc6db6a49bfa73bcc8605508e3848ed2893676f\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"15d230b0a852e8cc160a3341117512d39c44280eeb7f5091bf84167d416dec35\"" Jul 11 00:22:08.662213 containerd[1425]: time="2025-07-11T00:22:08.662175742Z" level=info msg="StartContainer for \"15d230b0a852e8cc160a3341117512d39c44280eeb7f5091bf84167d416dec35\"" Jul 11 00:22:08.686533 containerd[1425]: time="2025-07-11T00:22:08.685392187Z" level=info msg="StartContainer for \"84c784cac07dedc32ef76250a6885efd3d52ae331d1514fa53f1a11f66b555a9\" returns successfully" Jul 11 00:22:08.686533 containerd[1425]: time="2025-07-11T00:22:08.685534063Z" level=info msg="StartContainer for \"be9a19341c5c91ed492fe96436f60b79f3fc310f16ba58a567b371592bf7597b\" returns successfully" Jul 11 00:22:08.699224 systemd[1]: Started cri-containerd-15d230b0a852e8cc160a3341117512d39c44280eeb7f5091bf84167d416dec35.scope - libcontainer container 15d230b0a852e8cc160a3341117512d39c44280eeb7f5091bf84167d416dec35. 
Jul 11 00:22:08.718244 kubelet[2089]: W0711 00:22:08.718123 2089 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.102:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.102:6443: connect: connection refused Jul 11 00:22:08.718244 kubelet[2089]: E0711 00:22:08.718194 2089 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.102:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.102:6443: connect: connection refused" logger="UnhandledError" Jul 11 00:22:08.745773 containerd[1425]: time="2025-07-11T00:22:08.745583064Z" level=info msg="StartContainer for \"15d230b0a852e8cc160a3341117512d39c44280eeb7f5091bf84167d416dec35\" returns successfully" Jul 11 00:22:08.969251 kubelet[2089]: I0711 00:22:08.968467 2089 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jul 11 00:22:09.266897 kubelet[2089]: E0711 00:22:09.266517 2089 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 11 00:22:09.266897 kubelet[2089]: E0711 00:22:09.266638 2089 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:22:09.268232 kubelet[2089]: E0711 00:22:09.267958 2089 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 11 00:22:09.268232 kubelet[2089]: E0711 00:22:09.268073 2089 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:22:09.269627 kubelet[2089]: E0711 00:22:09.269607 2089 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 11 00:22:09.269873 kubelet[2089]: E0711 00:22:09.269857 2089 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:22:10.258460 kubelet[2089]: E0711 00:22:10.258407 2089 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Jul 11 00:22:10.272325 kubelet[2089]: E0711 00:22:10.272107 2089 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 11 00:22:10.272325 kubelet[2089]: E0711 00:22:10.272217 2089 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 11 00:22:10.272325 kubelet[2089]: E0711 00:22:10.272268 2089 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:22:10.272499 kubelet[2089]: E0711 00:22:10.272351 2089 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:22:10.363923 kubelet[2089]: I0711 00:22:10.363865 
2089 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Jul 11 00:22:10.363923 kubelet[2089]: E0711 00:22:10.363908 2089 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Jul 11 00:22:10.376183 kubelet[2089]: E0711 00:22:10.376118 2089 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 11 00:22:10.476607 kubelet[2089]: E0711 00:22:10.476558 2089 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 11 00:22:10.577332 kubelet[2089]: E0711 00:22:10.577195 2089 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 11 00:22:10.677610 kubelet[2089]: E0711 00:22:10.677545 2089 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 11 00:22:10.777857 kubelet[2089]: E0711 00:22:10.777809 2089 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 11 00:22:10.878696 kubelet[2089]: E0711 00:22:10.878652 2089 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 11 00:22:10.979234 kubelet[2089]: E0711 00:22:10.979196 2089 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 11 00:22:11.079723 kubelet[2089]: E0711 00:22:11.079684 2089 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 11 00:22:11.180254 kubelet[2089]: E0711 00:22:11.180105 2089 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 11 00:22:11.337840 kubelet[2089]: I0711 00:22:11.337780 2089 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jul 11 00:22:11.352092 kubelet[2089]: I0711 00:22:11.352015 2089 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Jul 11 00:22:11.356886 kubelet[2089]: I0711 00:22:11.356800 2089 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jul 11 00:22:12.233334 kubelet[2089]: I0711 00:22:12.233288 2089 apiserver.go:52] "Watching apiserver" Jul 11 00:22:12.236244 kubelet[2089]: E0711 00:22:12.235661 2089 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:22:12.236244 kubelet[2089]: E0711 00:22:12.235930 2089 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:22:12.236244 kubelet[2089]: E0711 00:22:12.236190 2089 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:22:12.237788 kubelet[2089]: I0711 00:22:12.237753 2089 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jul 11 00:22:12.468248 systemd[1]: Reloading requested from client PID 2368 ('systemctl') (unit session-7.scope)... Jul 11 00:22:12.468270 systemd[1]: Reloading... Jul 11 00:22:12.538125 zram_generator::config[2408]: No configuration found. 
Jul 11 00:22:12.708889 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 11 00:22:12.774288 systemd[1]: Reloading finished in 305 ms. Jul 11 00:22:12.806849 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jul 11 00:22:12.819950 systemd[1]: kubelet.service: Deactivated successfully. Jul 11 00:22:12.820198 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 11 00:22:12.820255 systemd[1]: kubelet.service: Consumed 1.088s CPU time, 127.1M memory peak, 0B memory swap peak. Jul 11 00:22:12.828364 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 11 00:22:12.929722 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 11 00:22:12.933911 (kubelet)[2449]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 11 00:22:12.974646 kubelet[2449]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 11 00:22:12.974646 kubelet[2449]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jul 11 00:22:12.974646 kubelet[2449]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 11 00:22:12.974932 kubelet[2449]: I0711 00:22:12.974636 2449 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 11 00:22:12.982165 kubelet[2449]: I0711 00:22:12.982121 2449 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Jul 11 00:22:12.982165 kubelet[2449]: I0711 00:22:12.982154 2449 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 11 00:22:12.982422 kubelet[2449]: I0711 00:22:12.982394 2449 server.go:954] "Client rotation is on, will bootstrap in background" Jul 11 00:22:12.983696 kubelet[2449]: I0711 00:22:12.983672 2449 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jul 11 00:22:12.986090 kubelet[2449]: I0711 00:22:12.985954 2449 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 11 00:22:12.988985 kubelet[2449]: E0711 00:22:12.988956 2449 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jul 11 00:22:12.988985 kubelet[2449]: I0711 00:22:12.988986 2449 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jul 11 00:22:12.991688 kubelet[2449]: I0711 00:22:12.991659 2449 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jul 11 00:22:12.991879 kubelet[2449]: I0711 00:22:12.991842 2449 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 11 00:22:12.992139 kubelet[2449]: I0711 00:22:12.991873 2449 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jul 11 00:22:12.992214 kubelet[2449]: I0711 00:22:12.992148 2449 topology_manager.go:138] "Creating topology manager with none policy" Jul 11 00:22:12.992214 kubelet[2449]: I0711 00:22:12.992159 2449 container_manager_linux.go:304] "Creating device plugin manager" Jul 11 00:22:12.992214 kubelet[2449]: I0711 00:22:12.992206 2449 state_mem.go:36] "Initialized new in-memory state store" Jul 11 00:22:12.992349 kubelet[2449]: I0711 00:22:12.992328 2449 kubelet.go:446] "Attempting to sync node with API server" Jul 11 00:22:12.992349 kubelet[2449]: I0711 00:22:12.992345 2449 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 11 00:22:12.992408 kubelet[2449]: I0711 00:22:12.992362 2449 kubelet.go:352] "Adding apiserver pod source" Jul 11 00:22:12.992408 kubelet[2449]: I0711 00:22:12.992371 2449 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 11 00:22:12.993717 kubelet[2449]: I0711 00:22:12.993671 2449 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jul 11 00:22:12.994245 kubelet[2449]: I0711 00:22:12.994219 2449 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jul 11 00:22:12.997075 kubelet[2449]: I0711 00:22:12.994729 2449 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jul 11 00:22:12.997075 kubelet[2449]: I0711 00:22:12.994767 2449 server.go:1287] "Started kubelet" Jul 11 00:22:12.997075 kubelet[2449]: I0711 00:22:12.995184 2449 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jul 11 00:22:12.997075 kubelet[2449]: I0711 00:22:12.995491 2449 ratelimit.go:55] "Setting 
rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 11 00:22:12.997075 kubelet[2449]: I0711 00:22:12.995711 2449 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 11 00:22:12.998224 kubelet[2449]: I0711 00:22:12.998206 2449 server.go:479] "Adding debug handlers to kubelet server" Jul 11 00:22:13.000508 kubelet[2449]: I0711 00:22:13.000488 2449 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 11 00:22:13.001140 kubelet[2449]: I0711 00:22:13.001118 2449 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 11 00:22:13.003928 kubelet[2449]: E0711 00:22:13.003908 2449 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 11 00:22:13.004250 kubelet[2449]: E0711 00:22:13.004236 2449 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 11 00:22:13.004353 kubelet[2449]: I0711 00:22:13.004342 2449 volume_manager.go:297] "Starting Kubelet Volume Manager" Jul 11 00:22:13.004824 kubelet[2449]: I0711 00:22:13.004806 2449 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jul 11 00:22:13.005011 kubelet[2449]: I0711 00:22:13.004998 2449 reconciler.go:26] "Reconciler: start to sync state" Jul 11 00:22:13.005388 kubelet[2449]: I0711 00:22:13.005358 2449 factory.go:221] Registration of the systemd container factory successfully Jul 11 00:22:13.005486 kubelet[2449]: I0711 00:22:13.005467 2449 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 11 00:22:13.015415 kubelet[2449]: I0711 00:22:13.014708 2449 factory.go:221] Registration of the containerd container factory successfully Jul 11 00:22:13.019089 kubelet[2449]: I0711 00:22:13.019019 2449 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jul 11 00:22:13.020273 kubelet[2449]: I0711 00:22:13.020246 2449 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jul 11 00:22:13.020273 kubelet[2449]: I0711 00:22:13.020272 2449 status_manager.go:227] "Starting to sync pod status with apiserver" Jul 11 00:22:13.020359 kubelet[2449]: I0711 00:22:13.020293 2449 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Jul 11 00:22:13.020359 kubelet[2449]: I0711 00:22:13.020301 2449 kubelet.go:2382] "Starting kubelet main sync loop" Jul 11 00:22:13.020359 kubelet[2449]: E0711 00:22:13.020340 2449 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 11 00:22:13.045718 kubelet[2449]: I0711 00:22:13.045692 2449 cpu_manager.go:221] "Starting CPU manager" policy="none" Jul 11 00:22:13.045718 kubelet[2449]: I0711 00:22:13.045712 2449 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jul 11 00:22:13.045848 kubelet[2449]: I0711 00:22:13.045735 2449 state_mem.go:36] "Initialized new in-memory state store" Jul 11 00:22:13.045920 kubelet[2449]: I0711 00:22:13.045904 2449 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jul 11 00:22:13.045964 kubelet[2449]: I0711 00:22:13.045921 2449 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jul 11 00:22:13.045964 kubelet[2449]: I0711 00:22:13.045941 2449 policy_none.go:49] "None policy: Start" Jul 11 00:22:13.045964 kubelet[2449]: I0711 00:22:13.045949 2449 memory_manager.go:186] "Starting memorymanager" policy="None" Jul 11 00:22:13.045964 kubelet[2449]: I0711 00:22:13.045958 2449 state_mem.go:35] "Initializing new in-memory state store" Jul 11 00:22:13.046087 kubelet[2449]: I0711 00:22:13.046073 2449 state_mem.go:75] "Updated machine memory state" Jul 11 00:22:13.050512 kubelet[2449]: I0711 00:22:13.050466 2449 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 11 00:22:13.050687 kubelet[2449]: I0711 00:22:13.050659 2449 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 11 00:22:13.050724 kubelet[2449]: I0711 00:22:13.050681 2449 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 11 00:22:13.050952 kubelet[2449]: I0711 00:22:13.050928 2449 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 11 00:22:13.051709 kubelet[2449]: E0711 00:22:13.051685 2449 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jul 11 00:22:13.121402 kubelet[2449]: I0711 00:22:13.121362 2449 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jul 11 00:22:13.121402 kubelet[2449]: I0711 00:22:13.121405 2449 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Jul 11 00:22:13.121547 kubelet[2449]: I0711 00:22:13.121516 2449 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jul 11 00:22:13.126813 kubelet[2449]: E0711 00:22:13.126775 2449 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Jul 11 00:22:13.126813 kubelet[2449]: E0711 00:22:13.126796 2449 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Jul 11 00:22:13.127339 kubelet[2449]: E0711 00:22:13.127320 2449 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Jul 11 00:22:13.154696 kubelet[2449]: I0711 00:22:13.154667 2449 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jul 11 00:22:13.161059 kubelet[2449]: I0711 00:22:13.161026 2449 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Jul 11 00:22:13.161259 kubelet[2449]: I0711 00:22:13.161242 2449 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Jul 11 00:22:13.206435 kubelet[2449]: I0711 00:22:13.206365 2449 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3750a6888001a00d5c4aa8e9004940f7-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"3750a6888001a00d5c4aa8e9004940f7\") " pod="kube-system/kube-apiserver-localhost" Jul 11 00:22:13.206435 kubelet[2449]: I0711 00:22:13.206426 2449 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost" Jul 11 00:22:13.206595 kubelet[2449]: I0711 00:22:13.206447 2449 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost" Jul 11 00:22:13.206595 kubelet[2449]: I0711 00:22:13.206465 2449 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost" Jul 11 00:22:13.206595 kubelet[2449]: I0711 00:22:13.206482 2449 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: 
\"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost" Jul 11 00:22:13.206595 kubelet[2449]: I0711 00:22:13.206498 2449 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost" Jul 11 00:22:13.206595 kubelet[2449]: I0711 00:22:13.206513 2449 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8a75e163f27396b2168da0f88f85f8a5-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"8a75e163f27396b2168da0f88f85f8a5\") " pod="kube-system/kube-scheduler-localhost" Jul 11 00:22:13.206710 kubelet[2449]: I0711 00:22:13.206528 2449 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3750a6888001a00d5c4aa8e9004940f7-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"3750a6888001a00d5c4aa8e9004940f7\") " pod="kube-system/kube-apiserver-localhost" Jul 11 00:22:13.206710 kubelet[2449]: I0711 00:22:13.206553 2449 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3750a6888001a00d5c4aa8e9004940f7-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"3750a6888001a00d5c4aa8e9004940f7\") " pod="kube-system/kube-apiserver-localhost" Jul 11 00:22:13.428002 kubelet[2449]: E0711 00:22:13.427950 2449 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:22:13.428002 kubelet[2449]: E0711 00:22:13.428000 2449 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:22:13.428187 kubelet[2449]: E0711 00:22:13.428105 2449 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:22:13.477291 sudo[2486]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Jul 11 00:22:13.477576 sudo[2486]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Jul 11 00:22:13.898422 sudo[2486]: pam_unix(sudo:session): session closed for user root Jul 11 00:22:13.994802 kubelet[2449]: I0711 00:22:13.993498 2449 apiserver.go:52] "Watching apiserver" Jul 11 00:22:14.005809 kubelet[2449]: I0711 00:22:14.005773 2449 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jul 11 00:22:14.032907 kubelet[2449]: E0711 00:22:14.032460 2449 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:22:14.032907 kubelet[2449]: I0711 00:22:14.032464 2449 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jul 11 00:22:14.032907 kubelet[2449]: E0711 00:22:14.032672 2449 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the 
applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:22:14.039880 kubelet[2449]: E0711 00:22:14.039117 2449 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Jul 11 00:22:14.039880 kubelet[2449]: E0711 00:22:14.039287 2449 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:22:14.054930 kubelet[2449]: I0711 00:22:14.054871 2449 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=3.054855908 podStartE2EDuration="3.054855908s" podCreationTimestamp="2025-07-11 00:22:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-11 00:22:14.05371459 +0000 UTC m=+1.116544522" watchObservedRunningTime="2025-07-11 00:22:14.054855908 +0000 UTC m=+1.117685840" Jul 11 00:22:14.062700 kubelet[2449]: I0711 00:22:14.062411 2449 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=3.062393403 podStartE2EDuration="3.062393403s" podCreationTimestamp="2025-07-11 00:22:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-11 00:22:14.061801727 +0000 UTC m=+1.124631619" watchObservedRunningTime="2025-07-11 00:22:14.062393403 +0000 UTC m=+1.125223335" Jul 11 00:22:15.033530 kubelet[2449]: E0711 00:22:15.033378 2449 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:22:15.033530 kubelet[2449]: E0711 00:22:15.033465 2449 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:22:16.025500 sudo[1606]: pam_unix(sudo:session): session closed for user root Jul 11 00:22:16.027493 sshd[1603]: pam_unix(sshd:session): session closed for user core Jul 11 00:22:16.032273 systemd[1]: sshd@6-10.0.0.102:22-10.0.0.1:54488.service: Deactivated successfully. Jul 11 00:22:16.035577 systemd[1]: session-7.scope: Deactivated successfully. Jul 11 00:22:16.035936 systemd[1]: session-7.scope: Consumed 8.329s CPU time, 151.0M memory peak, 0B memory swap peak. Jul 11 00:22:16.036615 systemd-logind[1406]: Session 7 logged out. Waiting for processes to exit. Jul 11 00:22:16.037527 systemd-logind[1406]: Removed session 7. 
Jul 11 00:22:17.227092 kubelet[2449]: E0711 00:22:17.227002 2449 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:22:17.236962 kubelet[2449]: E0711 00:22:17.236930 2449 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:22:18.181078 kubelet[2449]: E0711 00:22:18.180738 2449 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:22:18.203367 kubelet[2449]: I0711 00:22:18.203249 2449 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=7.203235459 podStartE2EDuration="7.203235459s" podCreationTimestamp="2025-07-11 00:22:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-11 00:22:14.071205157 +0000 UTC m=+1.134035089" watchObservedRunningTime="2025-07-11 00:22:18.203235459 +0000 UTC m=+5.266065391" Jul 11 00:22:18.996802 kubelet[2449]: I0711 00:22:18.996656 2449 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jul 11 00:22:18.997178 kubelet[2449]: I0711 00:22:18.997151 2449 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jul 11 00:22:18.997208 containerd[1425]: time="2025-07-11T00:22:18.996966131Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jul 11 00:22:19.040743 kubelet[2449]: E0711 00:22:19.040481 2449 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:22:19.695590 systemd[1]: Created slice kubepods-besteffort-pod9fc60cec_2652_4017_95d1_d2881971e305.slice - libcontainer container kubepods-besteffort-pod9fc60cec_2652_4017_95d1_d2881971e305.slice. Jul 11 00:22:19.709135 systemd[1]: Created slice kubepods-burstable-pod9e64dd31_5370_4fa7_b77b_3b48af1a6c68.slice - libcontainer container kubepods-burstable-pod9e64dd31_5370_4fa7_b77b_3b48af1a6c68.slice. 
Jul 11 00:22:19.747668 kubelet[2449]: I0711 00:22:19.747616 2449 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/9e64dd31-5370-4fa7-b77b-3b48af1a6c68-bpf-maps\") pod \"cilium-v2gjl\" (UID: \"9e64dd31-5370-4fa7-b77b-3b48af1a6c68\") " pod="kube-system/cilium-v2gjl" Jul 11 00:22:19.747668 kubelet[2449]: I0711 00:22:19.747662 2449 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/9e64dd31-5370-4fa7-b77b-3b48af1a6c68-clustermesh-secrets\") pod \"cilium-v2gjl\" (UID: \"9e64dd31-5370-4fa7-b77b-3b48af1a6c68\") " pod="kube-system/cilium-v2gjl" Jul 11 00:22:19.747827 kubelet[2449]: I0711 00:22:19.747684 2449 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/9e64dd31-5370-4fa7-b77b-3b48af1a6c68-host-proc-sys-kernel\") pod \"cilium-v2gjl\" (UID: \"9e64dd31-5370-4fa7-b77b-3b48af1a6c68\") " pod="kube-system/cilium-v2gjl" Jul 11 00:22:19.747827 kubelet[2449]: I0711 00:22:19.747700 2449 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bz5mb\" (UniqueName: \"kubernetes.io/projected/9e64dd31-5370-4fa7-b77b-3b48af1a6c68-kube-api-access-bz5mb\") pod \"cilium-v2gjl\" (UID: \"9e64dd31-5370-4fa7-b77b-3b48af1a6c68\") " pod="kube-system/cilium-v2gjl" Jul 11 00:22:19.747827 kubelet[2449]: I0711 00:22:19.747738 2449 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9fc60cec-2652-4017-95d1-d2881971e305-lib-modules\") pod \"kube-proxy-mxq6r\" (UID: \"9fc60cec-2652-4017-95d1-d2881971e305\") " pod="kube-system/kube-proxy-mxq6r" Jul 11 00:22:19.747827 kubelet[2449]: I0711 00:22:19.747756 2449 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9e64dd31-5370-4fa7-b77b-3b48af1a6c68-xtables-lock\") pod \"cilium-v2gjl\" (UID: \"9e64dd31-5370-4fa7-b77b-3b48af1a6c68\") " pod="kube-system/cilium-v2gjl" Jul 11 00:22:19.747827 kubelet[2449]: I0711 00:22:19.747770 2449 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/9e64dd31-5370-4fa7-b77b-3b48af1a6c68-cilium-run\") pod \"cilium-v2gjl\" (UID: \"9e64dd31-5370-4fa7-b77b-3b48af1a6c68\") " pod="kube-system/cilium-v2gjl" Jul 11 00:22:19.748000 kubelet[2449]: I0711 00:22:19.747784 2449 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/9fc60cec-2652-4017-95d1-d2881971e305-kube-proxy\") pod \"kube-proxy-mxq6r\" (UID: \"9fc60cec-2652-4017-95d1-d2881971e305\") " pod="kube-system/kube-proxy-mxq6r" Jul 11 00:22:19.748000 kubelet[2449]: I0711 00:22:19.747818 2449 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9fc60cec-2652-4017-95d1-d2881971e305-xtables-lock\") pod \"kube-proxy-mxq6r\" (UID: \"9fc60cec-2652-4017-95d1-d2881971e305\") " pod="kube-system/kube-proxy-mxq6r" Jul 11 00:22:19.748000 kubelet[2449]: I0711 00:22:19.747841 2449 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-8nmmt\" (UniqueName: \"kubernetes.io/projected/9fc60cec-2652-4017-95d1-d2881971e305-kube-api-access-8nmmt\") pod \"kube-proxy-mxq6r\" (UID: \"9fc60cec-2652-4017-95d1-d2881971e305\") " pod="kube-system/kube-proxy-mxq6r" Jul 11 00:22:19.748000 kubelet[2449]: I0711 00:22:19.747857 2449 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/9e64dd31-5370-4fa7-b77b-3b48af1a6c68-etc-cni-netd\") pod \"cilium-v2gjl\" (UID: \"9e64dd31-5370-4fa7-b77b-3b48af1a6c68\") " pod="kube-system/cilium-v2gjl" Jul 11 00:22:19.748000 kubelet[2449]: I0711 00:22:19.747876 2449 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/9e64dd31-5370-4fa7-b77b-3b48af1a6c68-hubble-tls\") pod \"cilium-v2gjl\" (UID: \"9e64dd31-5370-4fa7-b77b-3b48af1a6c68\") " pod="kube-system/cilium-v2gjl" Jul 11 00:22:19.748000 kubelet[2449]: I0711 00:22:19.747893 2449 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/9e64dd31-5370-4fa7-b77b-3b48af1a6c68-hostproc\") pod \"cilium-v2gjl\" (UID: \"9e64dd31-5370-4fa7-b77b-3b48af1a6c68\") " pod="kube-system/cilium-v2gjl" Jul 11 00:22:19.748155 kubelet[2449]: I0711 00:22:19.747912 2449 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/9e64dd31-5370-4fa7-b77b-3b48af1a6c68-cni-path\") pod \"cilium-v2gjl\" (UID: \"9e64dd31-5370-4fa7-b77b-3b48af1a6c68\") " pod="kube-system/cilium-v2gjl" Jul 11 00:22:19.748155 kubelet[2449]: I0711 00:22:19.747933 2449 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/9e64dd31-5370-4fa7-b77b-3b48af1a6c68-cilium-cgroup\") pod \"cilium-v2gjl\" (UID: \"9e64dd31-5370-4fa7-b77b-3b48af1a6c68\") " pod="kube-system/cilium-v2gjl" Jul 11 00:22:19.748155 kubelet[2449]: I0711 00:22:19.747949 2449 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/9e64dd31-5370-4fa7-b77b-3b48af1a6c68-host-proc-sys-net\") pod \"cilium-v2gjl\" (UID: \"9e64dd31-5370-4fa7-b77b-3b48af1a6c68\") " pod="kube-system/cilium-v2gjl" Jul 11 00:22:19.748155 kubelet[2449]: I0711 00:22:19.747963 2449 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/9e64dd31-5370-4fa7-b77b-3b48af1a6c68-cilium-config-path\") pod \"cilium-v2gjl\" (UID: \"9e64dd31-5370-4fa7-b77b-3b48af1a6c68\") " pod="kube-system/cilium-v2gjl" Jul 11 00:22:19.748155 kubelet[2449]: I0711 00:22:19.748006 2449 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9e64dd31-5370-4fa7-b77b-3b48af1a6c68-lib-modules\") pod \"cilium-v2gjl\" (UID: \"9e64dd31-5370-4fa7-b77b-3b48af1a6c68\") " pod="kube-system/cilium-v2gjl" Jul 11 00:22:20.006878 kubelet[2449]: E0711 00:22:20.006764 2449 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:22:20.008455 containerd[1425]: time="2025-07-11T00:22:20.008001299Z" level=info msg="RunPodSandbox 
for &PodSandboxMetadata{Name:kube-proxy-mxq6r,Uid:9fc60cec-2652-4017-95d1-d2881971e305,Namespace:kube-system,Attempt:0,}" Jul 11 00:22:20.011698 kubelet[2449]: E0711 00:22:20.011639 2449 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:22:20.012784 containerd[1425]: time="2025-07-11T00:22:20.012725571Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-v2gjl,Uid:9e64dd31-5370-4fa7-b77b-3b48af1a6c68,Namespace:kube-system,Attempt:0,}" Jul 11 00:22:20.030148 containerd[1425]: time="2025-07-11T00:22:20.029952730Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 11 00:22:20.030148 containerd[1425]: time="2025-07-11T00:22:20.030011007Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 11 00:22:20.030148 containerd[1425]: time="2025-07-11T00:22:20.030026567Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 11 00:22:20.030439 containerd[1425]: time="2025-07-11T00:22:20.030396830Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 11 00:22:20.035589 containerd[1425]: time="2025-07-11T00:22:20.035381890Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 11 00:22:20.035589 containerd[1425]: time="2025-07-11T00:22:20.035527044Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 11 00:22:20.035589 containerd[1425]: time="2025-07-11T00:22:20.035545163Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 11 00:22:20.035979 containerd[1425]: time="2025-07-11T00:22:20.035768873Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 11 00:22:20.046238 systemd[1]: Started cri-containerd-411a07d231850851dc93331fb783001741dbb76211ad34f1d5895e76b6bd35dc.scope - libcontainer container 411a07d231850851dc93331fb783001741dbb76211ad34f1d5895e76b6bd35dc. Jul 11 00:22:20.051657 systemd[1]: Started cri-containerd-a152906c0eaf01bfdce0f2df3c1944e18664831c2b4e0400818de9eff8a80764.scope - libcontainer container a152906c0eaf01bfdce0f2df3c1944e18664831c2b4e0400818de9eff8a80764. Jul 11 00:22:20.094360 systemd[1]: Created slice kubepods-besteffort-pod1e88fbd0_eb01_4885_9eb0_4bc016c1717a.slice - libcontainer container kubepods-besteffort-pod1e88fbd0_eb01_4885_9eb0_4bc016c1717a.slice. 
Jul 11 00:22:20.100167 containerd[1425]: time="2025-07-11T00:22:20.099645132Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-mxq6r,Uid:9fc60cec-2652-4017-95d1-d2881971e305,Namespace:kube-system,Attempt:0,} returns sandbox id \"411a07d231850851dc93331fb783001741dbb76211ad34f1d5895e76b6bd35dc\"" Jul 11 00:22:20.101357 kubelet[2449]: E0711 00:22:20.101247 2449 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:22:20.102231 containerd[1425]: time="2025-07-11T00:22:20.102195459Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-v2gjl,Uid:9e64dd31-5370-4fa7-b77b-3b48af1a6c68,Namespace:kube-system,Attempt:0,} returns sandbox id \"a152906c0eaf01bfdce0f2df3c1944e18664831c2b4e0400818de9eff8a80764\"" Jul 11 00:22:20.105428 kubelet[2449]: E0711 00:22:20.105353 2449 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:22:20.105528 containerd[1425]: time="2025-07-11T00:22:20.105464515Z" level=info msg="CreateContainer within sandbox \"411a07d231850851dc93331fb783001741dbb76211ad34f1d5895e76b6bd35dc\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jul 11 00:22:20.106541 containerd[1425]: time="2025-07-11T00:22:20.106514909Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Jul 11 00:22:20.125478 containerd[1425]: time="2025-07-11T00:22:20.125434673Z" level=info msg="CreateContainer within sandbox \"411a07d231850851dc93331fb783001741dbb76211ad34f1d5895e76b6bd35dc\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"a6cccd1b61472bb72bcaea8fb34c6f821f48fd2c5178a7172b3471acc33ad8cd\"" Jul 11 00:22:20.137755 containerd[1425]: time="2025-07-11T00:22:20.137709171Z" level=info msg="StartContainer for \"a6cccd1b61472bb72bcaea8fb34c6f821f48fd2c5178a7172b3471acc33ad8cd\"" Jul 11 00:22:20.152177 kubelet[2449]: I0711 00:22:20.152118 2449 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/1e88fbd0-eb01-4885-9eb0-4bc016c1717a-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-qnkrc\" (UID: \"1e88fbd0-eb01-4885-9eb0-4bc016c1717a\") " pod="kube-system/cilium-operator-6c4d7847fc-qnkrc" Jul 11 00:22:20.152177 kubelet[2449]: I0711 00:22:20.152175 2449 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jlctd\" (UniqueName: \"kubernetes.io/projected/1e88fbd0-eb01-4885-9eb0-4bc016c1717a-kube-api-access-jlctd\") pod \"cilium-operator-6c4d7847fc-qnkrc\" (UID: \"1e88fbd0-eb01-4885-9eb0-4bc016c1717a\") " pod="kube-system/cilium-operator-6c4d7847fc-qnkrc" Jul 11 00:22:20.164249 systemd[1]: Started cri-containerd-a6cccd1b61472bb72bcaea8fb34c6f821f48fd2c5178a7172b3471acc33ad8cd.scope - libcontainer container a6cccd1b61472bb72bcaea8fb34c6f821f48fd2c5178a7172b3471acc33ad8cd. 
Jul 11 00:22:20.186837 containerd[1425]: time="2025-07-11T00:22:20.186731166Z" level=info msg="StartContainer for \"a6cccd1b61472bb72bcaea8fb34c6f821f48fd2c5178a7172b3471acc33ad8cd\" returns successfully" Jul 11 00:22:20.398471 kubelet[2449]: E0711 00:22:20.398437 2449 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:22:20.399063 containerd[1425]: time="2025-07-11T00:22:20.398892556Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-qnkrc,Uid:1e88fbd0-eb01-4885-9eb0-4bc016c1717a,Namespace:kube-system,Attempt:0,}" Jul 11 00:22:20.427429 containerd[1425]: time="2025-07-11T00:22:20.427235904Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 11 00:22:20.427429 containerd[1425]: time="2025-07-11T00:22:20.427284382Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 11 00:22:20.427429 containerd[1425]: time="2025-07-11T00:22:20.427296782Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 11 00:22:20.427429 containerd[1425]: time="2025-07-11T00:22:20.427385218Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 11 00:22:20.445228 systemd[1]: Started cri-containerd-e66cab895cb8c34b6b10cb3ce5a309c2f35694a6e22fb50a6e8f87718f283b58.scope - libcontainer container e66cab895cb8c34b6b10cb3ce5a309c2f35694a6e22fb50a6e8f87718f283b58. Jul 11 00:22:20.478143 containerd[1425]: time="2025-07-11T00:22:20.477492925Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-qnkrc,Uid:1e88fbd0-eb01-4885-9eb0-4bc016c1717a,Namespace:kube-system,Attempt:0,} returns sandbox id \"e66cab895cb8c34b6b10cb3ce5a309c2f35694a6e22fb50a6e8f87718f283b58\"" Jul 11 00:22:20.478503 kubelet[2449]: E0711 00:22:20.478479 2449 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:22:21.046695 kubelet[2449]: E0711 00:22:21.046395 2449 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:22:26.130393 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1122330987.mount: Deactivated successfully. 
Jul 11 00:22:27.256887 kubelet[2449]: E0711 00:22:27.256389 2449 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:22:27.260609 kubelet[2449]: E0711 00:22:27.260254 2449 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:22:27.281319 kubelet[2449]: I0711 00:22:27.281269 2449 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-mxq6r" podStartSLOduration=8.281242271 podStartE2EDuration="8.281242271s" podCreationTimestamp="2025-07-11 00:22:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-11 00:22:21.055251176 +0000 UTC m=+8.118081108" watchObservedRunningTime="2025-07-11 00:22:27.281242271 +0000 UTC m=+14.344072203" Jul 11 00:22:27.452312 containerd[1425]: time="2025-07-11T00:22:27.452244663Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:22:27.452863 containerd[1425]: time="2025-07-11T00:22:27.452823125Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157646710" Jul 11 00:22:27.453664 containerd[1425]: time="2025-07-11T00:22:27.453635421Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:22:27.456114 containerd[1425]: time="2025-07-11T00:22:27.455568962Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 7.348932978s" Jul 11 00:22:27.456114 containerd[1425]: time="2025-07-11T00:22:27.455609761Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" Jul 11 00:22:27.473560 containerd[1425]: time="2025-07-11T00:22:27.473512100Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Jul 11 00:22:27.474172 containerd[1425]: time="2025-07-11T00:22:27.474128841Z" level=info msg="CreateContainer within sandbox \"a152906c0eaf01bfdce0f2df3c1944e18664831c2b4e0400818de9eff8a80764\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jul 11 00:22:27.499482 containerd[1425]: time="2025-07-11T00:22:27.499426437Z" level=info msg="CreateContainer within sandbox \"a152906c0eaf01bfdce0f2df3c1944e18664831c2b4e0400818de9eff8a80764\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"cb8e201c7eb814fac85a960f083b87991d87448f7879ce020098e65e98a804f0\"" Jul 11 00:22:27.499957 containerd[1425]: time="2025-07-11T00:22:27.499925581Z" level=info msg="StartContainer for \"cb8e201c7eb814fac85a960f083b87991d87448f7879ce020098e65e98a804f0\""
Jul 11 00:22:27.527224 systemd[1]: Started cri-containerd-cb8e201c7eb814fac85a960f083b87991d87448f7879ce020098e65e98a804f0.scope - libcontainer container cb8e201c7eb814fac85a960f083b87991d87448f7879ce020098e65e98a804f0. Jul 11 00:22:27.551960 containerd[1425]: time="2025-07-11T00:22:27.551831533Z" level=info msg="StartContainer for \"cb8e201c7eb814fac85a960f083b87991d87448f7879ce020098e65e98a804f0\" returns successfully" Jul 11 00:22:27.596610 systemd[1]: cri-containerd-cb8e201c7eb814fac85a960f083b87991d87448f7879ce020098e65e98a804f0.scope: Deactivated successfully. Jul 11 00:22:27.895776 containerd[1425]: time="2025-07-11T00:22:27.889007502Z" level=info msg="shim disconnected" id=cb8e201c7eb814fac85a960f083b87991d87448f7879ce020098e65e98a804f0 namespace=k8s.io Jul 11 00:22:27.895776 containerd[1425]: time="2025-07-11T00:22:27.895772817Z" level=warning msg="cleaning up after shim disconnected" id=cb8e201c7eb814fac85a960f083b87991d87448f7879ce020098e65e98a804f0 namespace=k8s.io Jul 11 00:22:27.895776 containerd[1425]: time="2025-07-11T00:22:27.895787417Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 11 00:22:28.061754 kubelet[2449]: E0711 00:22:28.061711 2449 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:22:28.061910 kubelet[2449]: E0711 00:22:28.061776 2449 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:22:28.063996 containerd[1425]: time="2025-07-11T00:22:28.063957142Z" level=info msg="CreateContainer within sandbox \"a152906c0eaf01bfdce0f2df3c1944e18664831c2b4e0400818de9eff8a80764\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jul 11 00:22:28.083907 containerd[1425]: time="2025-07-11T00:22:28.083856211Z" level=info msg="CreateContainer within sandbox \"a152906c0eaf01bfdce0f2df3c1944e18664831c2b4e0400818de9eff8a80764\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"235721277c463966026620f36116b00cd1ffd4a02af4e3028e903e5e51455ecb\"" Jul 11 00:22:28.085110 containerd[1425]: time="2025-07-11T00:22:28.084357836Z" level=info msg="StartContainer for \"235721277c463966026620f36116b00cd1ffd4a02af4e3028e903e5e51455ecb\"" Jul 11 00:22:28.117270 systemd[1]: Started cri-containerd-235721277c463966026620f36116b00cd1ffd4a02af4e3028e903e5e51455ecb.scope - libcontainer container 235721277c463966026620f36116b00cd1ffd4a02af4e3028e903e5e51455ecb. Jul 11 00:22:28.140087 containerd[1425]: time="2025-07-11T00:22:28.140028839Z" level=info msg="StartContainer for \"235721277c463966026620f36116b00cd1ffd4a02af4e3028e903e5e51455ecb\" returns successfully" Jul 11 00:22:28.162816 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jul 11 00:22:28.163043 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jul 11 00:22:28.163125 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Jul 11 00:22:28.170522 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jul 11 00:22:28.170706 systemd[1]: cri-containerd-235721277c463966026620f36116b00cd1ffd4a02af4e3028e903e5e51455ecb.scope: Deactivated successfully.
Jul 11 00:22:28.188209 containerd[1425]: time="2025-07-11T00:22:28.188031381Z" level=info msg="shim disconnected" id=235721277c463966026620f36116b00cd1ffd4a02af4e3028e903e5e51455ecb namespace=k8s.io Jul 11 00:22:28.188209 containerd[1425]: time="2025-07-11T00:22:28.188103219Z" level=warning msg="cleaning up after shim disconnected" id=235721277c463966026620f36116b00cd1ffd4a02af4e3028e903e5e51455ecb namespace=k8s.io Jul 11 00:22:28.188209 containerd[1425]: time="2025-07-11T00:22:28.188111539Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 11 00:22:28.200918 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jul 11 00:22:28.497880 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-cb8e201c7eb814fac85a960f083b87991d87448f7879ce020098e65e98a804f0-rootfs.mount: Deactivated successfully. Jul 11 00:22:28.950523 containerd[1425]: time="2025-07-11T00:22:28.950467100Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:22:28.952023 containerd[1425]: time="2025-07-11T00:22:28.951952697Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17135306" Jul 11 00:22:28.953105 containerd[1425]: time="2025-07-11T00:22:28.952874751Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:22:28.954377 containerd[1425]: time="2025-07-11T00:22:28.954340589Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 1.48078437s" Jul 11 00:22:28.954443 containerd[1425]: time="2025-07-11T00:22:28.954381027Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" Jul 11 00:22:28.960090 containerd[1425]: time="2025-07-11T00:22:28.958170959Z" level=info msg="CreateContainer within sandbox \"e66cab895cb8c34b6b10cb3ce5a309c2f35694a6e22fb50a6e8f87718f283b58\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Jul 11 00:22:28.977347 containerd[1425]: time="2025-07-11T00:22:28.977289690Z" level=info msg="CreateContainer within sandbox \"e66cab895cb8c34b6b10cb3ce5a309c2f35694a6e22fb50a6e8f87718f283b58\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"3318370c20da32dfc481d9e7d4f681d450579920b977ffbd3cdd97ceeb860b8d\"" Jul 11 00:22:28.977917 containerd[1425]: time="2025-07-11T00:22:28.977891993Z" level=info msg="StartContainer for \"3318370c20da32dfc481d9e7d4f681d450579920b977ffbd3cdd97ceeb860b8d\"" Jul 11 00:22:29.000222 systemd[1]: Started cri-containerd-3318370c20da32dfc481d9e7d4f681d450579920b977ffbd3cdd97ceeb860b8d.scope - libcontainer container 3318370c20da32dfc481d9e7d4f681d450579920b977ffbd3cdd97ceeb860b8d. 
Jul 11 00:22:29.025158 containerd[1425]: time="2025-07-11T00:22:29.025106632Z" level=info msg="StartContainer for \"3318370c20da32dfc481d9e7d4f681d450579920b977ffbd3cdd97ceeb860b8d\" returns successfully" Jul 11 00:22:29.065020 kubelet[2449]: E0711 00:22:29.064857 2449 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:22:29.069802 kubelet[2449]: E0711 00:22:29.069720 2449 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:22:29.074586 containerd[1425]: time="2025-07-11T00:22:29.074511725Z" level=info msg="CreateContainer within sandbox \"a152906c0eaf01bfdce0f2df3c1944e18664831c2b4e0400818de9eff8a80764\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jul 11 00:22:29.078631 kubelet[2449]: I0711 00:22:29.078544 2449 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-qnkrc" podStartSLOduration=0.602572698 podStartE2EDuration="9.078528695s" podCreationTimestamp="2025-07-11 00:22:20 +0000 UTC" firstStartedPulling="2025-07-11 00:22:20.479307765 +0000 UTC m=+7.542137697" lastFinishedPulling="2025-07-11 00:22:28.955263802 +0000 UTC m=+16.018093694" observedRunningTime="2025-07-11 00:22:29.076674346 +0000 UTC m=+16.139504278" watchObservedRunningTime="2025-07-11 00:22:29.078528695 +0000 UTC m=+16.141358627" Jul 11 00:22:29.133183 containerd[1425]: time="2025-07-11T00:22:29.133128407Z" level=info msg="CreateContainer within sandbox \"a152906c0eaf01bfdce0f2df3c1944e18664831c2b4e0400818de9eff8a80764\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"660ab4370835efef9badd86d80d049cba479b53d65f1b687ab67de1396554dbf\"" Jul 11 00:22:29.134043 containerd[1425]: time="2025-07-11T00:22:29.134005343Z" level=info msg="StartContainer for \"660ab4370835efef9badd86d80d049cba479b53d65f1b687ab67de1396554dbf\"" Jul 11 00:22:29.162252 systemd[1]: Started cri-containerd-660ab4370835efef9badd86d80d049cba479b53d65f1b687ab67de1396554dbf.scope - libcontainer container 660ab4370835efef9badd86d80d049cba479b53d65f1b687ab67de1396554dbf. Jul 11 00:22:29.202612 containerd[1425]: time="2025-07-11T00:22:29.201772335Z" level=info msg="StartContainer for \"660ab4370835efef9badd86d80d049cba479b53d65f1b687ab67de1396554dbf\" returns successfully" Jul 11 00:22:29.218234 systemd[1]: cri-containerd-660ab4370835efef9badd86d80d049cba479b53d65f1b687ab67de1396554dbf.scope: Deactivated successfully. 
Jul 11 00:22:29.390767 containerd[1425]: time="2025-07-11T00:22:29.390702343Z" level=info msg="shim disconnected" id=660ab4370835efef9badd86d80d049cba479b53d65f1b687ab67de1396554dbf namespace=k8s.io Jul 11 00:22:29.390767 containerd[1425]: time="2025-07-11T00:22:29.390765701Z" level=warning msg="cleaning up after shim disconnected" id=660ab4370835efef9badd86d80d049cba479b53d65f1b687ab67de1396554dbf namespace=k8s.io Jul 11 00:22:29.390976 containerd[1425]: time="2025-07-11T00:22:29.390775541Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 11 00:22:30.073447 kubelet[2449]: E0711 00:22:30.073390 2449 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:22:30.073931 kubelet[2449]: E0711 00:22:30.073483 2449 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:22:30.075313 containerd[1425]: time="2025-07-11T00:22:30.075225331Z" level=info msg="CreateContainer within sandbox \"a152906c0eaf01bfdce0f2df3c1944e18664831c2b4e0400818de9eff8a80764\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jul 11 00:22:30.092689 containerd[1425]: time="2025-07-11T00:22:30.092474044Z" level=info msg="CreateContainer within sandbox \"a152906c0eaf01bfdce0f2df3c1944e18664831c2b4e0400818de9eff8a80764\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"ef178a9582d3c7f8646690fef246b2895cf55dd66884b6c636816428c0127172\"" Jul 11 00:22:30.093073 containerd[1425]: time="2025-07-11T00:22:30.092846954Z" level=info msg="StartContainer for \"ef178a9582d3c7f8646690fef246b2895cf55dd66884b6c636816428c0127172\"" Jul 11 00:22:30.107754 update_engine[1412]: I20250711 00:22:30.107133 1412 update_attempter.cc:509] Updating boot flags... Jul 11 00:22:30.123253 systemd[1]: Started cri-containerd-ef178a9582d3c7f8646690fef246b2895cf55dd66884b6c636816428c0127172.scope - libcontainer container ef178a9582d3c7f8646690fef246b2895cf55dd66884b6c636816428c0127172. Jul 11 00:22:30.132079 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (3113) Jul 11 00:22:30.165513 systemd[1]: cri-containerd-ef178a9582d3c7f8646690fef246b2895cf55dd66884b6c636816428c0127172.scope: Deactivated successfully. 
Jul 11 00:22:30.170146 containerd[1425]: time="2025-07-11T00:22:30.168359317Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9e64dd31_5370_4fa7_b77b_3b48af1a6c68.slice/cri-containerd-ef178a9582d3c7f8646690fef246b2895cf55dd66884b6c636816428c0127172.scope/memory.events\": no such file or directory" Jul 11 00:22:30.182109 containerd[1425]: time="2025-07-11T00:22:30.181549495Z" level=info msg="StartContainer for \"ef178a9582d3c7f8646690fef246b2895cf55dd66884b6c636816428c0127172\" returns successfully" Jul 11 00:22:30.198178 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (3117) Jul 11 00:22:30.209080 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (3117) Jul 11 00:22:30.243439 containerd[1425]: time="2025-07-11T00:22:30.243378651Z" level=info msg="shim disconnected" id=ef178a9582d3c7f8646690fef246b2895cf55dd66884b6c636816428c0127172 namespace=k8s.io Jul 11 00:22:30.243650 containerd[1425]: time="2025-07-11T00:22:30.243631885Z" level=warning msg="cleaning up after shim disconnected" id=ef178a9582d3c7f8646690fef246b2895cf55dd66884b6c636816428c0127172 namespace=k8s.io Jul 11 00:22:30.243780 containerd[1425]: time="2025-07-11T00:22:30.243763481Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 11 00:22:30.497721 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ef178a9582d3c7f8646690fef246b2895cf55dd66884b6c636816428c0127172-rootfs.mount: Deactivated successfully. Jul 11 00:22:31.077801 kubelet[2449]: E0711 00:22:31.077660 2449 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:22:31.080764 containerd[1425]: time="2025-07-11T00:22:31.080646602Z" level=info msg="CreateContainer within sandbox \"a152906c0eaf01bfdce0f2df3c1944e18664831c2b4e0400818de9eff8a80764\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jul 11 00:22:31.092777 containerd[1425]: time="2025-07-11T00:22:31.092667465Z" level=info msg="CreateContainer within sandbox \"a152906c0eaf01bfdce0f2df3c1944e18664831c2b4e0400818de9eff8a80764\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"e74c2abc4ca3e1cd23b706979ee96e5c77dc45603dc7538c871dc209beaf5be5\"" Jul 11 00:22:31.095475 containerd[1425]: time="2025-07-11T00:22:31.094315185Z" level=info msg="StartContainer for \"e74c2abc4ca3e1cd23b706979ee96e5c77dc45603dc7538c871dc209beaf5be5\"" Jul 11 00:22:31.144295 systemd[1]: Started cri-containerd-e74c2abc4ca3e1cd23b706979ee96e5c77dc45603dc7538c871dc209beaf5be5.scope - libcontainer container e74c2abc4ca3e1cd23b706979ee96e5c77dc45603dc7538c871dc209beaf5be5. Jul 11 00:22:31.168681 containerd[1425]: time="2025-07-11T00:22:31.168545233Z" level=info msg="StartContainer for \"e74c2abc4ca3e1cd23b706979ee96e5c77dc45603dc7538c871dc209beaf5be5\" returns successfully" Jul 11 00:22:31.324534 kubelet[2449]: I0711 00:22:31.324490 2449 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Jul 11 00:22:31.377186 systemd[1]: Created slice kubepods-burstable-pod86486268_b101_4b57_8e91_fd24674862f8.slice - libcontainer container kubepods-burstable-pod86486268_b101_4b57_8e91_fd24674862f8.slice. 
Jul 11 00:22:31.384794 systemd[1]: Created slice kubepods-burstable-podd1fd5939_c6d6_4ba7_a542_8fc0eba4b96a.slice - libcontainer container kubepods-burstable-podd1fd5939_c6d6_4ba7_a542_8fc0eba4b96a.slice. Jul 11 00:22:31.444952 kubelet[2449]: I0711 00:22:31.444909 2449 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/86486268-b101-4b57-8e91-fd24674862f8-config-volume\") pod \"coredns-668d6bf9bc-clgvv\" (UID: \"86486268-b101-4b57-8e91-fd24674862f8\") " pod="kube-system/coredns-668d6bf9bc-clgvv" Jul 11 00:22:31.445242 kubelet[2449]: I0711 00:22:31.445223 2449 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r8flh\" (UniqueName: \"kubernetes.io/projected/86486268-b101-4b57-8e91-fd24674862f8-kube-api-access-r8flh\") pod \"coredns-668d6bf9bc-clgvv\" (UID: \"86486268-b101-4b57-8e91-fd24674862f8\") " pod="kube-system/coredns-668d6bf9bc-clgvv" Jul 11 00:22:31.445406 kubelet[2449]: I0711 00:22:31.445382 2449 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d1fd5939-c6d6-4ba7-a542-8fc0eba4b96a-config-volume\") pod \"coredns-668d6bf9bc-76hxr\" (UID: \"d1fd5939-c6d6-4ba7-a542-8fc0eba4b96a\") " pod="kube-system/coredns-668d6bf9bc-76hxr" Jul 11 00:22:31.445540 kubelet[2449]: I0711 00:22:31.445525 2449 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8jvhx\" (UniqueName: \"kubernetes.io/projected/d1fd5939-c6d6-4ba7-a542-8fc0eba4b96a-kube-api-access-8jvhx\") pod \"coredns-668d6bf9bc-76hxr\" (UID: \"d1fd5939-c6d6-4ba7-a542-8fc0eba4b96a\") " pod="kube-system/coredns-668d6bf9bc-76hxr" Jul 11 00:22:31.681087 kubelet[2449]: E0711 00:22:31.680967 2449 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:22:31.682597 containerd[1425]: time="2025-07-11T00:22:31.682227800Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-clgvv,Uid:86486268-b101-4b57-8e91-fd24674862f8,Namespace:kube-system,Attempt:0,}" Jul 11 00:22:31.688785 kubelet[2449]: E0711 00:22:31.688760 2449 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:22:31.689737 containerd[1425]: time="2025-07-11T00:22:31.689476701Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-76hxr,Uid:d1fd5939-c6d6-4ba7-a542-8fc0eba4b96a,Namespace:kube-system,Attempt:0,}" Jul 11 00:22:32.081807 kubelet[2449]: E0711 00:22:32.081695 2449 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:22:32.096779 kubelet[2449]: I0711 00:22:32.096492 2449 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-v2gjl" podStartSLOduration=5.729896297 podStartE2EDuration="13.096474493s" podCreationTimestamp="2025-07-11 00:22:19 +0000 UTC" firstStartedPulling="2025-07-11 00:22:20.105947334 +0000 UTC m=+7.168777266" lastFinishedPulling="2025-07-11 00:22:27.47252557 +0000 UTC m=+14.535355462" observedRunningTime="2025-07-11 00:22:32.096243578 +0000 UTC m=+19.159073510" watchObservedRunningTime="2025-07-11 00:22:32.096474493 +0000 UTC m=+19.159304425"
Jul 11 00:22:33.086179 kubelet[2449]: E0711 00:22:33.086150 2449 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:22:33.365203 systemd-networkd[1363]: cilium_host: Link UP Jul 11 00:22:33.365504 systemd-networkd[1363]: cilium_net: Link UP Jul 11 00:22:33.366544 systemd-networkd[1363]: cilium_net: Gained carrier Jul 11 00:22:33.366790 systemd-networkd[1363]: cilium_host: Gained carrier Jul 11 00:22:33.366915 systemd-networkd[1363]: cilium_net: Gained IPv6LL Jul 11 00:22:33.367045 systemd-networkd[1363]: cilium_host: Gained IPv6LL Jul 11 00:22:33.446043 systemd-networkd[1363]: cilium_vxlan: Link UP Jul 11 00:22:33.446061 systemd-networkd[1363]: cilium_vxlan: Gained carrier Jul 11 00:22:33.735081 kernel: NET: Registered PF_ALG protocol family Jul 11 00:22:34.088322 kubelet[2449]: E0711 00:22:34.088292 2449 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:22:34.331809 systemd-networkd[1363]: lxc_health: Link UP Jul 11 00:22:34.336162 systemd-networkd[1363]: lxc_health: Gained carrier Jul 11 00:22:34.482294 systemd-networkd[1363]: cilium_vxlan: Gained IPv6LL Jul 11 00:22:34.813725 systemd-networkd[1363]: lxc25f92c383dc6: Link UP Jul 11 00:22:34.827188 kernel: eth0: renamed from tmpd3eec Jul 11 00:22:34.837277 systemd-networkd[1363]: lxc25f92c383dc6: Gained carrier Jul 11 00:22:34.838908 systemd-networkd[1363]: lxc0ade3d6f3116: Link UP Jul 11 00:22:34.850100 kernel: eth0: renamed from tmp4e9e3 Jul 11 00:22:34.861117 systemd-networkd[1363]: lxc0ade3d6f3116: Gained carrier Jul 11 00:22:35.762268 systemd-networkd[1363]: lxc_health: Gained IPv6LL Jul 11 00:22:36.023666 kubelet[2449]: E0711 00:22:36.023554 2449 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:22:36.722259 systemd-networkd[1363]: lxc0ade3d6f3116: Gained IPv6LL Jul 11 00:22:36.786277 systemd-networkd[1363]: lxc25f92c383dc6: Gained IPv6LL Jul 11 00:22:36.823650 kubelet[2449]: I0711 00:22:36.823615 2449 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 11 00:22:36.824035 kubelet[2449]: E0711 00:22:36.824009 2449 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:22:37.092141 kubelet[2449]: E0711 00:22:37.092045 2449 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:22:38.278642 containerd[1425]: time="2025-07-11T00:22:38.278529288Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 11 00:22:38.278642 containerd[1425]: time="2025-07-11T00:22:38.278623207Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 11 00:22:38.278642 containerd[1425]: time="2025-07-11T00:22:38.278634806Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 11 00:22:38.278997 containerd[1425]: time="2025-07-11T00:22:38.278707685Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 11 00:22:38.291528 containerd[1425]: time="2025-07-11T00:22:38.291430218Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 11 00:22:38.291528 containerd[1425]: time="2025-07-11T00:22:38.291488057Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 11 00:22:38.291528 containerd[1425]: time="2025-07-11T00:22:38.291502817Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 11 00:22:38.291667 containerd[1425]: time="2025-07-11T00:22:38.291585416Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 11 00:22:38.299216 systemd[1]: Started cri-containerd-d3eec4009122d2d98574dfac280de84406ba497b22d865e894796c06e98cf169.scope - libcontainer container d3eec4009122d2d98574dfac280de84406ba497b22d865e894796c06e98cf169. Jul 11 00:22:38.307505 systemd[1]: Started cri-containerd-4e9e3dbd8e66515179f86dce7586d05a0d8415c6ddf1acbd37b177573fdb8cad.scope - libcontainer container 4e9e3dbd8e66515179f86dce7586d05a0d8415c6ddf1acbd37b177573fdb8cad. Jul 11 00:22:38.312912 systemd-resolved[1307]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 11 00:22:38.320857 systemd-resolved[1307]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 11 00:22:38.333274 containerd[1425]: time="2025-07-11T00:22:38.333243073Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-clgvv,Uid:86486268-b101-4b57-8e91-fd24674862f8,Namespace:kube-system,Attempt:0,} returns sandbox id \"d3eec4009122d2d98574dfac280de84406ba497b22d865e894796c06e98cf169\"" Jul 11 00:22:38.333900 kubelet[2449]: E0711 00:22:38.333875 2449 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:22:38.335602 containerd[1425]: time="2025-07-11T00:22:38.335454314Z" level=info msg="CreateContainer within sandbox \"d3eec4009122d2d98574dfac280de84406ba497b22d865e894796c06e98cf169\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 11 00:22:38.349284 containerd[1425]: time="2025-07-11T00:22:38.349254708Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-76hxr,Uid:d1fd5939-c6d6-4ba7-a542-8fc0eba4b96a,Namespace:kube-system,Attempt:0,} returns sandbox id \"4e9e3dbd8e66515179f86dce7586d05a0d8415c6ddf1acbd37b177573fdb8cad\"" Jul 11 00:22:38.349903 kubelet[2449]: E0711 00:22:38.349884 2449 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:22:38.351347 containerd[1425]: time="2025-07-11T00:22:38.350509646Z" level=info msg="CreateContainer within sandbox \"d3eec4009122d2d98574dfac280de84406ba497b22d865e894796c06e98cf169\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"c8029e928c039f0573c3ea1c5ff89f8be8e35438830c737c913e804cf5e0aba5\""
Jul 11 00:22:38.351347 containerd[1425]: time="2025-07-11T00:22:38.350854320Z" level=info msg="StartContainer for \"c8029e928c039f0573c3ea1c5ff89f8be8e35438830c737c913e804cf5e0aba5\"" Jul 11 00:22:38.351347 containerd[1425]: time="2025-07-11T00:22:38.351388630Z" level=info msg="CreateContainer within sandbox \"4e9e3dbd8e66515179f86dce7586d05a0d8415c6ddf1acbd37b177573fdb8cad\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 11 00:22:38.364341 containerd[1425]: time="2025-07-11T00:22:38.364301920Z" level=info msg="CreateContainer within sandbox \"4e9e3dbd8e66515179f86dce7586d05a0d8415c6ddf1acbd37b177573fdb8cad\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"03b7090b0a59bca7f99f4cf3a2706945f95b6a33684c678dabf4645f71a5a239\"" Jul 11 00:22:38.364968 containerd[1425]: time="2025-07-11T00:22:38.364933309Z" level=info msg="StartContainer for \"03b7090b0a59bca7f99f4cf3a2706945f95b6a33684c678dabf4645f71a5a239\"" Jul 11 00:22:38.382825 systemd[1]: Started cri-containerd-c8029e928c039f0573c3ea1c5ff89f8be8e35438830c737c913e804cf5e0aba5.scope - libcontainer container c8029e928c039f0573c3ea1c5ff89f8be8e35438830c737c913e804cf5e0aba5. Jul 11 00:22:38.406344 systemd[1]: Started cri-containerd-03b7090b0a59bca7f99f4cf3a2706945f95b6a33684c678dabf4645f71a5a239.scope - libcontainer container 03b7090b0a59bca7f99f4cf3a2706945f95b6a33684c678dabf4645f71a5a239. Jul 11 00:22:38.417695 containerd[1425]: time="2025-07-11T00:22:38.417652969Z" level=info msg="StartContainer for \"c8029e928c039f0573c3ea1c5ff89f8be8e35438830c737c913e804cf5e0aba5\" returns successfully" Jul 11 00:22:38.438248 containerd[1425]: time="2025-07-11T00:22:38.438209843Z" level=info msg="StartContainer for \"03b7090b0a59bca7f99f4cf3a2706945f95b6a33684c678dabf4645f71a5a239\" returns successfully" Jul 11 00:22:39.096997 kubelet[2449]: E0711 00:22:39.096960 2449 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:22:39.104068 kubelet[2449]: E0711 00:22:39.103518 2449 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:22:39.118080 kubelet[2449]: I0711 00:22:39.116531 2449 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-76hxr" podStartSLOduration=19.11651176 podStartE2EDuration="19.11651176s" podCreationTimestamp="2025-07-11 00:22:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-11 00:22:39.116434562 +0000 UTC m=+26.179264494" watchObservedRunningTime="2025-07-11 00:22:39.11651176 +0000 UTC m=+26.179341692" Jul 11 00:22:39.130233 kubelet[2449]: I0711 00:22:39.130167 2449 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-clgvv" podStartSLOduration=19.130146808 podStartE2EDuration="19.130146808s" podCreationTimestamp="2025-07-11 00:22:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-11 00:22:39.129288382 +0000 UTC m=+26.192118314" watchObservedRunningTime="2025-07-11 00:22:39.130146808 +0000 UTC m=+26.192976740" Jul 11 00:22:40.105102 kubelet[2449]: E0711 00:22:40.105046 2449 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 11 00:22:40.105473 kubelet[2449]: E0711 00:22:40.105142 2449 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:22:41.106812 kubelet[2449]: E0711 00:22:41.106764 2449 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:22:41.107208 kubelet[2449]: E0711 00:22:41.106830 2449 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:22:42.373995 systemd[1]: Started sshd@7-10.0.0.102:22-10.0.0.1:52072.service - OpenSSH per-connection server daemon (10.0.0.1:52072). Jul 11 00:22:42.411518 sshd[3850]: Accepted publickey for core from 10.0.0.1 port 52072 ssh2: RSA SHA256:GK2LEBRiSxxQSb7NJczWsRz9vp5Z0addujXbSKx/c/M Jul 11 00:22:42.412976 sshd[3850]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 00:22:42.416934 systemd-logind[1406]: New session 8 of user core. Jul 11 00:22:42.428209 systemd[1]: Started session-8.scope - Session 8 of User core. Jul 11 00:22:42.545432 sshd[3850]: pam_unix(sshd:session): session closed for user core Jul 11 00:22:42.548622 systemd[1]: sshd@7-10.0.0.102:22-10.0.0.1:52072.service: Deactivated successfully. Jul 11 00:22:42.551086 systemd[1]: session-8.scope: Deactivated successfully. Jul 11 00:22:42.551966 systemd-logind[1406]: Session 8 logged out. Waiting for processes to exit. Jul 11 00:22:42.552822 systemd-logind[1406]: Removed session 8. Jul 11 00:22:47.558662 systemd[1]: Started sshd@8-10.0.0.102:22-10.0.0.1:45780.service - OpenSSH per-connection server daemon (10.0.0.1:45780). Jul 11 00:22:47.611325 sshd[3866]: Accepted publickey for core from 10.0.0.1 port 45780 ssh2: RSA SHA256:GK2LEBRiSxxQSb7NJczWsRz9vp5Z0addujXbSKx/c/M Jul 11 00:22:47.613013 sshd[3866]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 00:22:47.618366 systemd-logind[1406]: New session 9 of user core. Jul 11 00:22:47.629250 systemd[1]: Started session-9.scope - Session 9 of User core. Jul 11 00:22:47.741447 sshd[3866]: pam_unix(sshd:session): session closed for user core Jul 11 00:22:47.744461 systemd[1]: sshd@8-10.0.0.102:22-10.0.0.1:45780.service: Deactivated successfully. Jul 11 00:22:47.746177 systemd[1]: session-9.scope: Deactivated successfully. Jul 11 00:22:47.747531 systemd-logind[1406]: Session 9 logged out. Waiting for processes to exit. Jul 11 00:22:47.749514 systemd-logind[1406]: Removed session 9. Jul 11 00:22:52.751616 systemd[1]: Started sshd@9-10.0.0.102:22-10.0.0.1:51530.service - OpenSSH per-connection server daemon (10.0.0.1:51530). Jul 11 00:22:52.787510 sshd[3883]: Accepted publickey for core from 10.0.0.1 port 51530 ssh2: RSA SHA256:GK2LEBRiSxxQSb7NJczWsRz9vp5Z0addujXbSKx/c/M Jul 11 00:22:52.788931 sshd[3883]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 00:22:52.793145 systemd-logind[1406]: New session 10 of user core. Jul 11 00:22:52.801226 systemd[1]: Started session-10.scope - Session 10 of User core. Jul 11 00:22:52.910525 sshd[3883]: pam_unix(sshd:session): session closed for user core Jul 11 00:22:52.913844 systemd[1]: sshd@9-10.0.0.102:22-10.0.0.1:51530.service: Deactivated successfully.
Jul 11 00:22:52.915544 systemd[1]: session-10.scope: Deactivated successfully. Jul 11 00:22:52.916193 systemd-logind[1406]: Session 10 logged out. Waiting for processes to exit. Jul 11 00:22:52.917156 systemd-logind[1406]: Removed session 10. Jul 11 00:22:57.923997 systemd[1]: Started sshd@10-10.0.0.102:22-10.0.0.1:51538.service - OpenSSH per-connection server daemon (10.0.0.1:51538). Jul 11 00:22:57.980612 sshd[3898]: Accepted publickey for core from 10.0.0.1 port 51538 ssh2: RSA SHA256:GK2LEBRiSxxQSb7NJczWsRz9vp5Z0addujXbSKx/c/M Jul 11 00:22:57.982351 sshd[3898]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 00:22:57.991005 systemd-logind[1406]: New session 11 of user core. Jul 11 00:22:58.002342 systemd[1]: Started session-11.scope - Session 11 of User core. Jul 11 00:22:58.138966 sshd[3898]: pam_unix(sshd:session): session closed for user core Jul 11 00:22:58.145759 systemd[1]: sshd@10-10.0.0.102:22-10.0.0.1:51538.service: Deactivated successfully. Jul 11 00:22:58.148510 systemd[1]: session-11.scope: Deactivated successfully. Jul 11 00:22:58.149838 systemd-logind[1406]: Session 11 logged out. Waiting for processes to exit. Jul 11 00:22:58.159376 systemd[1]: Started sshd@11-10.0.0.102:22-10.0.0.1:51542.service - OpenSSH per-connection server daemon (10.0.0.1:51542). Jul 11 00:22:58.161164 systemd-logind[1406]: Removed session 11. Jul 11 00:22:58.202786 sshd[3913]: Accepted publickey for core from 10.0.0.1 port 51542 ssh2: RSA SHA256:GK2LEBRiSxxQSb7NJczWsRz9vp5Z0addujXbSKx/c/M Jul 11 00:22:58.203913 sshd[3913]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 00:22:58.212235 systemd-logind[1406]: New session 12 of user core. Jul 11 00:22:58.222298 systemd[1]: Started session-12.scope - Session 12 of User core. Jul 11 00:22:58.367736 sshd[3913]: pam_unix(sshd:session): session closed for user core Jul 11 00:22:58.374735 systemd[1]: sshd@11-10.0.0.102:22-10.0.0.1:51542.service: Deactivated successfully. Jul 11 00:22:58.377003 systemd[1]: session-12.scope: Deactivated successfully. Jul 11 00:22:58.378602 systemd-logind[1406]: Session 12 logged out. Waiting for processes to exit. Jul 11 00:22:58.387489 systemd[1]: Started sshd@12-10.0.0.102:22-10.0.0.1:51558.service - OpenSSH per-connection server daemon (10.0.0.1:51558). Jul 11 00:22:58.388642 systemd-logind[1406]: Removed session 12. Jul 11 00:22:58.425391 sshd[3925]: Accepted publickey for core from 10.0.0.1 port 51558 ssh2: RSA SHA256:GK2LEBRiSxxQSb7NJczWsRz9vp5Z0addujXbSKx/c/M Jul 11 00:22:58.426737 sshd[3925]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 00:22:58.430990 systemd-logind[1406]: New session 13 of user core. Jul 11 00:22:58.435269 systemd[1]: Started session-13.scope - Session 13 of User core. Jul 11 00:22:58.544594 sshd[3925]: pam_unix(sshd:session): session closed for user core Jul 11 00:22:58.547653 systemd[1]: sshd@12-10.0.0.102:22-10.0.0.1:51558.service: Deactivated successfully. Jul 11 00:22:58.549319 systemd[1]: session-13.scope: Deactivated successfully. Jul 11 00:22:58.550543 systemd-logind[1406]: Session 13 logged out. Waiting for processes to exit. Jul 11 00:22:58.551337 systemd-logind[1406]: Removed session 13. Jul 11 00:23:03.554891 systemd[1]: Started sshd@13-10.0.0.102:22-10.0.0.1:44526.service - OpenSSH per-connection server daemon (10.0.0.1:44526). 
Jul 11 00:23:03.599031 sshd[3939]: Accepted publickey for core from 10.0.0.1 port 44526 ssh2: RSA SHA256:GK2LEBRiSxxQSb7NJczWsRz9vp5Z0addujXbSKx/c/M Jul 11 00:23:03.600961 sshd[3939]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 00:23:03.607042 systemd-logind[1406]: New session 14 of user core. Jul 11 00:23:03.612346 systemd[1]: Started session-14.scope - Session 14 of User core. Jul 11 00:23:03.730344 sshd[3939]: pam_unix(sshd:session): session closed for user core Jul 11 00:23:03.733728 systemd[1]: sshd@13-10.0.0.102:22-10.0.0.1:44526.service: Deactivated successfully. Jul 11 00:23:03.735690 systemd[1]: session-14.scope: Deactivated successfully. Jul 11 00:23:03.736630 systemd-logind[1406]: Session 14 logged out. Waiting for processes to exit. Jul 11 00:23:03.737726 systemd-logind[1406]: Removed session 14. Jul 11 00:23:08.740859 systemd[1]: Started sshd@14-10.0.0.102:22-10.0.0.1:44534.service - OpenSSH per-connection server daemon (10.0.0.1:44534). Jul 11 00:23:08.776987 sshd[3954]: Accepted publickey for core from 10.0.0.1 port 44534 ssh2: RSA SHA256:GK2LEBRiSxxQSb7NJczWsRz9vp5Z0addujXbSKx/c/M Jul 11 00:23:08.778291 sshd[3954]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 00:23:08.782288 systemd-logind[1406]: New session 15 of user core. Jul 11 00:23:08.790236 systemd[1]: Started session-15.scope - Session 15 of User core. Jul 11 00:23:08.900495 sshd[3954]: pam_unix(sshd:session): session closed for user core Jul 11 00:23:08.912571 systemd[1]: sshd@14-10.0.0.102:22-10.0.0.1:44534.service: Deactivated successfully. Jul 11 00:23:08.913998 systemd[1]: session-15.scope: Deactivated successfully. Jul 11 00:23:08.915474 systemd-logind[1406]: Session 15 logged out. Waiting for processes to exit. Jul 11 00:23:08.925379 systemd[1]: Started sshd@15-10.0.0.102:22-10.0.0.1:44546.service - OpenSSH per-connection server daemon (10.0.0.1:44546). Jul 11 00:23:08.926833 systemd-logind[1406]: Removed session 15. Jul 11 00:23:08.957686 sshd[3968]: Accepted publickey for core from 10.0.0.1 port 44546 ssh2: RSA SHA256:GK2LEBRiSxxQSb7NJczWsRz9vp5Z0addujXbSKx/c/M Jul 11 00:23:08.958869 sshd[3968]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 00:23:08.963252 systemd-logind[1406]: New session 16 of user core. Jul 11 00:23:08.973211 systemd[1]: Started session-16.scope - Session 16 of User core. Jul 11 00:23:09.176937 sshd[3968]: pam_unix(sshd:session): session closed for user core Jul 11 00:23:09.189592 systemd[1]: sshd@15-10.0.0.102:22-10.0.0.1:44546.service: Deactivated successfully. Jul 11 00:23:09.191027 systemd[1]: session-16.scope: Deactivated successfully. Jul 11 00:23:09.193255 systemd-logind[1406]: Session 16 logged out. Waiting for processes to exit. Jul 11 00:23:09.194398 systemd[1]: Started sshd@16-10.0.0.102:22-10.0.0.1:44560.service - OpenSSH per-connection server daemon (10.0.0.1:44560). Jul 11 00:23:09.196125 systemd-logind[1406]: Removed session 16. Jul 11 00:23:09.236859 sshd[3980]: Accepted publickey for core from 10.0.0.1 port 44560 ssh2: RSA SHA256:GK2LEBRiSxxQSb7NJczWsRz9vp5Z0addujXbSKx/c/M Jul 11 00:23:09.238253 sshd[3980]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 00:23:09.242187 systemd-logind[1406]: New session 17 of user core. Jul 11 00:23:09.252199 systemd[1]: Started session-17.scope - Session 17 of User core. 
Jul 11 00:23:09.987824 sshd[3980]: pam_unix(sshd:session): session closed for user core Jul 11 00:23:09.997596 systemd[1]: sshd@16-10.0.0.102:22-10.0.0.1:44560.service: Deactivated successfully. Jul 11 00:23:09.999008 systemd[1]: session-17.scope: Deactivated successfully. Jul 11 00:23:10.000605 systemd-logind[1406]: Session 17 logged out. Waiting for processes to exit. Jul 11 00:23:10.007726 systemd[1]: Started sshd@17-10.0.0.102:22-10.0.0.1:44562.service - OpenSSH per-connection server daemon (10.0.0.1:44562). Jul 11 00:23:10.011963 systemd-logind[1406]: Removed session 17. Jul 11 00:23:10.042861 sshd[4001]: Accepted publickey for core from 10.0.0.1 port 44562 ssh2: RSA SHA256:GK2LEBRiSxxQSb7NJczWsRz9vp5Z0addujXbSKx/c/M Jul 11 00:23:10.044368 sshd[4001]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 00:23:10.048360 systemd-logind[1406]: New session 18 of user core. Jul 11 00:23:10.059199 systemd[1]: Started session-18.scope - Session 18 of User core. Jul 11 00:23:10.277952 sshd[4001]: pam_unix(sshd:session): session closed for user core Jul 11 00:23:10.287544 systemd[1]: sshd@17-10.0.0.102:22-10.0.0.1:44562.service: Deactivated successfully. Jul 11 00:23:10.289498 systemd[1]: session-18.scope: Deactivated successfully. Jul 11 00:23:10.293256 systemd-logind[1406]: Session 18 logged out. Waiting for processes to exit. Jul 11 00:23:10.304388 systemd[1]: Started sshd@18-10.0.0.102:22-10.0.0.1:44566.service - OpenSSH per-connection server daemon (10.0.0.1:44566). Jul 11 00:23:10.305362 systemd-logind[1406]: Removed session 18. Jul 11 00:23:10.339625 sshd[4014]: Accepted publickey for core from 10.0.0.1 port 44566 ssh2: RSA SHA256:GK2LEBRiSxxQSb7NJczWsRz9vp5Z0addujXbSKx/c/M Jul 11 00:23:10.340364 sshd[4014]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 00:23:10.344153 systemd-logind[1406]: New session 19 of user core. Jul 11 00:23:10.354298 systemd[1]: Started session-19.scope - Session 19 of User core. Jul 11 00:23:10.465923 sshd[4014]: pam_unix(sshd:session): session closed for user core Jul 11 00:23:10.470321 systemd[1]: sshd@18-10.0.0.102:22-10.0.0.1:44566.service: Deactivated successfully. Jul 11 00:23:10.472134 systemd[1]: session-19.scope: Deactivated successfully. Jul 11 00:23:10.474360 systemd-logind[1406]: Session 19 logged out. Waiting for processes to exit. Jul 11 00:23:10.477340 systemd-logind[1406]: Removed session 19. Jul 11 00:23:15.479916 systemd[1]: Started sshd@19-10.0.0.102:22-10.0.0.1:37990.service - OpenSSH per-connection server daemon (10.0.0.1:37990). Jul 11 00:23:15.522268 sshd[4032]: Accepted publickey for core from 10.0.0.1 port 37990 ssh2: RSA SHA256:GK2LEBRiSxxQSb7NJczWsRz9vp5Z0addujXbSKx/c/M Jul 11 00:23:15.523820 sshd[4032]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 00:23:15.528119 systemd-logind[1406]: New session 20 of user core. Jul 11 00:23:15.543611 systemd[1]: Started session-20.scope - Session 20 of User core. Jul 11 00:23:15.657144 sshd[4032]: pam_unix(sshd:session): session closed for user core Jul 11 00:23:15.661236 systemd-logind[1406]: Session 20 logged out. Waiting for processes to exit. Jul 11 00:23:15.662157 systemd[1]: sshd@19-10.0.0.102:22-10.0.0.1:37990.service: Deactivated successfully. Jul 11 00:23:15.664504 systemd[1]: session-20.scope: Deactivated successfully. Jul 11 00:23:15.665605 systemd-logind[1406]: Removed session 20. 
Jul 11 00:23:20.021208 kubelet[2449]: E0711 00:23:20.021176 2449 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:23:20.669260 systemd[1]: Started sshd@20-10.0.0.102:22-10.0.0.1:37998.service - OpenSSH per-connection server daemon (10.0.0.1:37998). Jul 11 00:23:20.704672 sshd[4050]: Accepted publickey for core from 10.0.0.1 port 37998 ssh2: RSA SHA256:GK2LEBRiSxxQSb7NJczWsRz9vp5Z0addujXbSKx/c/M Jul 11 00:23:20.705993 sshd[4050]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 00:23:20.710580 systemd-logind[1406]: New session 21 of user core. Jul 11 00:23:20.719269 systemd[1]: Started session-21.scope - Session 21 of User core. Jul 11 00:23:20.843227 sshd[4050]: pam_unix(sshd:session): session closed for user core Jul 11 00:23:20.847780 systemd-logind[1406]: Session 21 logged out. Waiting for processes to exit. Jul 11 00:23:20.847917 systemd[1]: sshd@20-10.0.0.102:22-10.0.0.1:37998.service: Deactivated successfully. Jul 11 00:23:20.850179 systemd[1]: session-21.scope: Deactivated successfully. Jul 11 00:23:20.851892 systemd-logind[1406]: Removed session 21. Jul 11 00:23:25.853731 systemd[1]: Started sshd@21-10.0.0.102:22-10.0.0.1:56214.service - OpenSSH per-connection server daemon (10.0.0.1:56214). Jul 11 00:23:25.893928 sshd[4064]: Accepted publickey for core from 10.0.0.1 port 56214 ssh2: RSA SHA256:GK2LEBRiSxxQSb7NJczWsRz9vp5Z0addujXbSKx/c/M Jul 11 00:23:25.895342 sshd[4064]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 00:23:25.899641 systemd-logind[1406]: New session 22 of user core. Jul 11 00:23:25.909255 systemd[1]: Started session-22.scope - Session 22 of User core. Jul 11 00:23:26.017544 sshd[4064]: pam_unix(sshd:session): session closed for user core Jul 11 00:23:26.029756 systemd[1]: sshd@21-10.0.0.102:22-10.0.0.1:56214.service: Deactivated successfully. Jul 11 00:23:26.031460 systemd[1]: session-22.scope: Deactivated successfully. Jul 11 00:23:26.033002 systemd-logind[1406]: Session 22 logged out. Waiting for processes to exit. Jul 11 00:23:26.034370 systemd[1]: Started sshd@22-10.0.0.102:22-10.0.0.1:56216.service - OpenSSH per-connection server daemon (10.0.0.1:56216). Jul 11 00:23:26.041822 systemd-logind[1406]: Removed session 22. Jul 11 00:23:26.073686 sshd[4078]: Accepted publickey for core from 10.0.0.1 port 56216 ssh2: RSA SHA256:GK2LEBRiSxxQSb7NJczWsRz9vp5Z0addujXbSKx/c/M Jul 11 00:23:26.074895 sshd[4078]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 00:23:26.079145 systemd-logind[1406]: New session 23 of user core. Jul 11 00:23:26.085195 systemd[1]: Started session-23.scope - Session 23 of User core. Jul 11 00:23:27.929684 containerd[1425]: time="2025-07-11T00:23:27.929584857Z" level=info msg="StopContainer for \"3318370c20da32dfc481d9e7d4f681d450579920b977ffbd3cdd97ceeb860b8d\" with timeout 30 (s)" Jul 11 00:23:27.930268 containerd[1425]: time="2025-07-11T00:23:27.930077828Z" level=info msg="Stop container \"3318370c20da32dfc481d9e7d4f681d450579920b977ffbd3cdd97ceeb860b8d\" with signal terminated" Jul 11 00:23:27.941793 systemd[1]: cri-containerd-3318370c20da32dfc481d9e7d4f681d450579920b977ffbd3cdd97ceeb860b8d.scope: Deactivated successfully. Jul 11 00:23:27.954296 systemd[1]: run-containerd-runc-k8s.io-e74c2abc4ca3e1cd23b706979ee96e5c77dc45603dc7538c871dc209beaf5be5-runc.OEAoSl.mount: Deactivated successfully. 
Jul 11 00:23:27.966240 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3318370c20da32dfc481d9e7d4f681d450579920b977ffbd3cdd97ceeb860b8d-rootfs.mount: Deactivated successfully. Jul 11 00:23:27.973868 containerd[1425]: time="2025-07-11T00:23:27.973800699Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 11 00:23:27.974121 containerd[1425]: time="2025-07-11T00:23:27.974044984Z" level=info msg="StopContainer for \"e74c2abc4ca3e1cd23b706979ee96e5c77dc45603dc7538c871dc209beaf5be5\" with timeout 2 (s)" Jul 11 00:23:27.974371 containerd[1425]: time="2025-07-11T00:23:27.974285229Z" level=info msg="Stop container \"e74c2abc4ca3e1cd23b706979ee96e5c77dc45603dc7538c871dc209beaf5be5\" with signal terminated" Jul 11 00:23:27.976360 containerd[1425]: time="2025-07-11T00:23:27.976178308Z" level=info msg="shim disconnected" id=3318370c20da32dfc481d9e7d4f681d450579920b977ffbd3cdd97ceeb860b8d namespace=k8s.io Jul 11 00:23:27.976360 containerd[1425]: time="2025-07-11T00:23:27.976225509Z" level=warning msg="cleaning up after shim disconnected" id=3318370c20da32dfc481d9e7d4f681d450579920b977ffbd3cdd97ceeb860b8d namespace=k8s.io Jul 11 00:23:27.976360 containerd[1425]: time="2025-07-11T00:23:27.976234549Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 11 00:23:27.980637 systemd-networkd[1363]: lxc_health: Link DOWN Jul 11 00:23:27.980974 systemd-networkd[1363]: lxc_health: Lost carrier Jul 11 00:23:28.010576 systemd[1]: cri-containerd-e74c2abc4ca3e1cd23b706979ee96e5c77dc45603dc7538c871dc209beaf5be5.scope: Deactivated successfully. Jul 11 00:23:28.010848 systemd[1]: cri-containerd-e74c2abc4ca3e1cd23b706979ee96e5c77dc45603dc7538c871dc209beaf5be5.scope: Consumed 6.440s CPU time. Jul 11 00:23:28.023158 containerd[1425]: time="2025-07-11T00:23:28.023115831Z" level=info msg="StopContainer for \"3318370c20da32dfc481d9e7d4f681d450579920b977ffbd3cdd97ceeb860b8d\" returns successfully" Jul 11 00:23:28.023863 containerd[1425]: time="2025-07-11T00:23:28.023840565Z" level=info msg="StopPodSandbox for \"e66cab895cb8c34b6b10cb3ce5a309c2f35694a6e22fb50a6e8f87718f283b58\"" Jul 11 00:23:28.023979 containerd[1425]: time="2025-07-11T00:23:28.023960488Z" level=info msg="Container to stop \"3318370c20da32dfc481d9e7d4f681d450579920b977ffbd3cdd97ceeb860b8d\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 11 00:23:28.025994 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-e66cab895cb8c34b6b10cb3ce5a309c2f35694a6e22fb50a6e8f87718f283b58-shm.mount: Deactivated successfully. Jul 11 00:23:28.032790 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e74c2abc4ca3e1cd23b706979ee96e5c77dc45603dc7538c871dc209beaf5be5-rootfs.mount: Deactivated successfully. Jul 11 00:23:28.037176 systemd[1]: cri-containerd-e66cab895cb8c34b6b10cb3ce5a309c2f35694a6e22fb50a6e8f87718f283b58.scope: Deactivated successfully. 
Jul 11 00:23:28.039189 containerd[1425]: time="2025-07-11T00:23:28.039145233Z" level=info msg="shim disconnected" id=e74c2abc4ca3e1cd23b706979ee96e5c77dc45603dc7538c871dc209beaf5be5 namespace=k8s.io Jul 11 00:23:28.039189 containerd[1425]: time="2025-07-11T00:23:28.039217354Z" level=warning msg="cleaning up after shim disconnected" id=e74c2abc4ca3e1cd23b706979ee96e5c77dc45603dc7538c871dc209beaf5be5 namespace=k8s.io Jul 11 00:23:28.039189 containerd[1425]: time="2025-07-11T00:23:28.039227835Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 11 00:23:28.062073 containerd[1425]: time="2025-07-11T00:23:28.061840369Z" level=info msg="shim disconnected" id=e66cab895cb8c34b6b10cb3ce5a309c2f35694a6e22fb50a6e8f87718f283b58 namespace=k8s.io Jul 11 00:23:28.062073 containerd[1425]: time="2025-07-11T00:23:28.061896651Z" level=warning msg="cleaning up after shim disconnected" id=e66cab895cb8c34b6b10cb3ce5a309c2f35694a6e22fb50a6e8f87718f283b58 namespace=k8s.io Jul 11 00:23:28.062073 containerd[1425]: time="2025-07-11T00:23:28.061904731Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 11 00:23:28.064491 containerd[1425]: time="2025-07-11T00:23:28.064365660Z" level=info msg="StopContainer for \"e74c2abc4ca3e1cd23b706979ee96e5c77dc45603dc7538c871dc209beaf5be5\" returns successfully" Jul 11 00:23:28.064855 containerd[1425]: time="2025-07-11T00:23:28.064829030Z" level=info msg="StopPodSandbox for \"a152906c0eaf01bfdce0f2df3c1944e18664831c2b4e0400818de9eff8a80764\"" Jul 11 00:23:28.065180 containerd[1425]: time="2025-07-11T00:23:28.065025834Z" level=info msg="Container to stop \"660ab4370835efef9badd86d80d049cba479b53d65f1b687ab67de1396554dbf\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 11 00:23:28.065180 containerd[1425]: time="2025-07-11T00:23:28.065045154Z" level=info msg="Container to stop \"e74c2abc4ca3e1cd23b706979ee96e5c77dc45603dc7538c871dc209beaf5be5\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 11 00:23:28.065180 containerd[1425]: time="2025-07-11T00:23:28.065082475Z" level=info msg="Container to stop \"cb8e201c7eb814fac85a960f083b87991d87448f7879ce020098e65e98a804f0\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 11 00:23:28.065180 containerd[1425]: time="2025-07-11T00:23:28.065092315Z" level=info msg="Container to stop \"235721277c463966026620f36116b00cd1ffd4a02af4e3028e903e5e51455ecb\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 11 00:23:28.065180 containerd[1425]: time="2025-07-11T00:23:28.065104155Z" level=info msg="Container to stop \"ef178a9582d3c7f8646690fef246b2895cf55dd66884b6c636816428c0127172\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 11 00:23:28.069725 kubelet[2449]: E0711 00:23:28.069664 2449 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jul 11 00:23:28.071467 systemd[1]: cri-containerd-a152906c0eaf01bfdce0f2df3c1944e18664831c2b4e0400818de9eff8a80764.scope: Deactivated successfully. 
Jul 11 00:23:28.076153 containerd[1425]: time="2025-07-11T00:23:28.076111057Z" level=info msg="TearDown network for sandbox \"e66cab895cb8c34b6b10cb3ce5a309c2f35694a6e22fb50a6e8f87718f283b58\" successfully"
Jul 11 00:23:28.076153 containerd[1425]: time="2025-07-11T00:23:28.076147297Z" level=info msg="StopPodSandbox for \"e66cab895cb8c34b6b10cb3ce5a309c2f35694a6e22fb50a6e8f87718f283b58\" returns successfully"
Jul 11 00:23:28.119832 containerd[1425]: time="2025-07-11T00:23:28.119606732Z" level=info msg="shim disconnected" id=a152906c0eaf01bfdce0f2df3c1944e18664831c2b4e0400818de9eff8a80764 namespace=k8s.io
Jul 11 00:23:28.119832 containerd[1425]: time="2025-07-11T00:23:28.119662173Z" level=warning msg="cleaning up after shim disconnected" id=a152906c0eaf01bfdce0f2df3c1944e18664831c2b4e0400818de9eff8a80764 namespace=k8s.io
Jul 11 00:23:28.119832 containerd[1425]: time="2025-07-11T00:23:28.119671453Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 11 00:23:28.132391 containerd[1425]: time="2025-07-11T00:23:28.132263346Z" level=info msg="TearDown network for sandbox \"a152906c0eaf01bfdce0f2df3c1944e18664831c2b4e0400818de9eff8a80764\" successfully"
Jul 11 00:23:28.132391 containerd[1425]: time="2025-07-11T00:23:28.132298067Z" level=info msg="StopPodSandbox for \"a152906c0eaf01bfdce0f2df3c1944e18664831c2b4e0400818de9eff8a80764\" returns successfully"
Jul 11 00:23:28.213294 kubelet[2449]: I0711 00:23:28.213172 2449 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/9e64dd31-5370-4fa7-b77b-3b48af1a6c68-hubble-tls\") pod \"9e64dd31-5370-4fa7-b77b-3b48af1a6c68\" (UID: \"9e64dd31-5370-4fa7-b77b-3b48af1a6c68\") "
Jul 11 00:23:28.213294 kubelet[2449]: I0711 00:23:28.213214 2449 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/1e88fbd0-eb01-4885-9eb0-4bc016c1717a-cilium-config-path\") pod \"1e88fbd0-eb01-4885-9eb0-4bc016c1717a\" (UID: \"1e88fbd0-eb01-4885-9eb0-4bc016c1717a\") "
Jul 11 00:23:28.213294 kubelet[2449]: I0711 00:23:28.213236 2449 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jlctd\" (UniqueName: \"kubernetes.io/projected/1e88fbd0-eb01-4885-9eb0-4bc016c1717a-kube-api-access-jlctd\") pod \"1e88fbd0-eb01-4885-9eb0-4bc016c1717a\" (UID: \"1e88fbd0-eb01-4885-9eb0-4bc016c1717a\") "
Jul 11 00:23:28.213294 kubelet[2449]: I0711 00:23:28.213257 2449 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/9e64dd31-5370-4fa7-b77b-3b48af1a6c68-etc-cni-netd\") pod \"9e64dd31-5370-4fa7-b77b-3b48af1a6c68\" (UID: \"9e64dd31-5370-4fa7-b77b-3b48af1a6c68\") "
Jul 11 00:23:28.213294 kubelet[2449]: I0711 00:23:28.213274 2449 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/9e64dd31-5370-4fa7-b77b-3b48af1a6c68-clustermesh-secrets\") pod \"9e64dd31-5370-4fa7-b77b-3b48af1a6c68\" (UID: \"9e64dd31-5370-4fa7-b77b-3b48af1a6c68\") "
Jul 11 00:23:28.213294 kubelet[2449]: I0711 00:23:28.213295 2449 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9e64dd31-5370-4fa7-b77b-3b48af1a6c68-xtables-lock\") pod \"9e64dd31-5370-4fa7-b77b-3b48af1a6c68\" (UID: \"9e64dd31-5370-4fa7-b77b-3b48af1a6c68\") "
Jul 11 00:23:28.214372 kubelet[2449]: I0711 00:23:28.213311 2449 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9e64dd31-5370-4fa7-b77b-3b48af1a6c68-lib-modules\") pod \"9e64dd31-5370-4fa7-b77b-3b48af1a6c68\" (UID: \"9e64dd31-5370-4fa7-b77b-3b48af1a6c68\") "
Jul 11 00:23:28.214372 kubelet[2449]: I0711 00:23:28.213328 2449 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/9e64dd31-5370-4fa7-b77b-3b48af1a6c68-bpf-maps\") pod \"9e64dd31-5370-4fa7-b77b-3b48af1a6c68\" (UID: \"9e64dd31-5370-4fa7-b77b-3b48af1a6c68\") "
Jul 11 00:23:28.214372 kubelet[2449]: I0711 00:23:28.213345 2449 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bz5mb\" (UniqueName: \"kubernetes.io/projected/9e64dd31-5370-4fa7-b77b-3b48af1a6c68-kube-api-access-bz5mb\") pod \"9e64dd31-5370-4fa7-b77b-3b48af1a6c68\" (UID: \"9e64dd31-5370-4fa7-b77b-3b48af1a6c68\") "
Jul 11 00:23:28.214372 kubelet[2449]: I0711 00:23:28.213361 2449 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/9e64dd31-5370-4fa7-b77b-3b48af1a6c68-cilium-run\") pod \"9e64dd31-5370-4fa7-b77b-3b48af1a6c68\" (UID: \"9e64dd31-5370-4fa7-b77b-3b48af1a6c68\") "
Jul 11 00:23:28.214372 kubelet[2449]: I0711 00:23:28.213374 2449 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/9e64dd31-5370-4fa7-b77b-3b48af1a6c68-cni-path\") pod \"9e64dd31-5370-4fa7-b77b-3b48af1a6c68\" (UID: \"9e64dd31-5370-4fa7-b77b-3b48af1a6c68\") "
Jul 11 00:23:28.214372 kubelet[2449]: I0711 00:23:28.213395 2449 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/9e64dd31-5370-4fa7-b77b-3b48af1a6c68-cilium-config-path\") pod \"9e64dd31-5370-4fa7-b77b-3b48af1a6c68\" (UID: \"9e64dd31-5370-4fa7-b77b-3b48af1a6c68\") "
Jul 11 00:23:28.214520 kubelet[2449]: I0711 00:23:28.213412 2449 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/9e64dd31-5370-4fa7-b77b-3b48af1a6c68-cilium-cgroup\") pod \"9e64dd31-5370-4fa7-b77b-3b48af1a6c68\" (UID: \"9e64dd31-5370-4fa7-b77b-3b48af1a6c68\") "
Jul 11 00:23:28.214520 kubelet[2449]: I0711 00:23:28.213428 2449 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/9e64dd31-5370-4fa7-b77b-3b48af1a6c68-host-proc-sys-net\") pod \"9e64dd31-5370-4fa7-b77b-3b48af1a6c68\" (UID: \"9e64dd31-5370-4fa7-b77b-3b48af1a6c68\") "
Jul 11 00:23:28.214520 kubelet[2449]: I0711 00:23:28.213444 2449 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/9e64dd31-5370-4fa7-b77b-3b48af1a6c68-host-proc-sys-kernel\") pod \"9e64dd31-5370-4fa7-b77b-3b48af1a6c68\" (UID: \"9e64dd31-5370-4fa7-b77b-3b48af1a6c68\") "
Jul 11 00:23:28.214520 kubelet[2449]: I0711 00:23:28.213459 2449 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/9e64dd31-5370-4fa7-b77b-3b48af1a6c68-hostproc\") pod \"9e64dd31-5370-4fa7-b77b-3b48af1a6c68\" (UID: \"9e64dd31-5370-4fa7-b77b-3b48af1a6c68\") "
Jul 11 00:23:28.217181 kubelet[2449]: I0711 00:23:28.216886 2449 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9e64dd31-5370-4fa7-b77b-3b48af1a6c68-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "9e64dd31-5370-4fa7-b77b-3b48af1a6c68" (UID: "9e64dd31-5370-4fa7-b77b-3b48af1a6c68"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jul 11 00:23:28.217181 kubelet[2449]: I0711 00:23:28.217087 2449 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9e64dd31-5370-4fa7-b77b-3b48af1a6c68-hostproc" (OuterVolumeSpecName: "hostproc") pod "9e64dd31-5370-4fa7-b77b-3b48af1a6c68" (UID: "9e64dd31-5370-4fa7-b77b-3b48af1a6c68"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jul 11 00:23:28.218423 kubelet[2449]: I0711 00:23:28.218400 2449 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9e64dd31-5370-4fa7-b77b-3b48af1a6c68-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "9e64dd31-5370-4fa7-b77b-3b48af1a6c68" (UID: "9e64dd31-5370-4fa7-b77b-3b48af1a6c68"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jul 11 00:23:28.218543 kubelet[2449]: I0711 00:23:28.218528 2449 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9e64dd31-5370-4fa7-b77b-3b48af1a6c68-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "9e64dd31-5370-4fa7-b77b-3b48af1a6c68" (UID: "9e64dd31-5370-4fa7-b77b-3b48af1a6c68"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jul 11 00:23:28.218625 kubelet[2449]: I0711 00:23:28.218612 2449 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9e64dd31-5370-4fa7-b77b-3b48af1a6c68-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "9e64dd31-5370-4fa7-b77b-3b48af1a6c68" (UID: "9e64dd31-5370-4fa7-b77b-3b48af1a6c68"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jul 11 00:23:28.218694 kubelet[2449]: I0711 00:23:28.218681 2449 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9e64dd31-5370-4fa7-b77b-3b48af1a6c68-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "9e64dd31-5370-4fa7-b77b-3b48af1a6c68" (UID: "9e64dd31-5370-4fa7-b77b-3b48af1a6c68"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jul 11 00:23:28.219008 kubelet[2449]: I0711 00:23:28.218977 2449 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1e88fbd0-eb01-4885-9eb0-4bc016c1717a-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "1e88fbd0-eb01-4885-9eb0-4bc016c1717a" (UID: "1e88fbd0-eb01-4885-9eb0-4bc016c1717a"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jul 11 00:23:28.219100 kubelet[2449]: I0711 00:23:28.219032 2449 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9e64dd31-5370-4fa7-b77b-3b48af1a6c68-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "9e64dd31-5370-4fa7-b77b-3b48af1a6c68" (UID: "9e64dd31-5370-4fa7-b77b-3b48af1a6c68"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jul 11 00:23:28.219100 kubelet[2449]: I0711 00:23:28.219063 2449 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9e64dd31-5370-4fa7-b77b-3b48af1a6c68-cni-path" (OuterVolumeSpecName: "cni-path") pod "9e64dd31-5370-4fa7-b77b-3b48af1a6c68" (UID: "9e64dd31-5370-4fa7-b77b-3b48af1a6c68"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jul 11 00:23:28.219838 kubelet[2449]: I0711 00:23:28.219653 2449 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9e64dd31-5370-4fa7-b77b-3b48af1a6c68-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "9e64dd31-5370-4fa7-b77b-3b48af1a6c68" (UID: "9e64dd31-5370-4fa7-b77b-3b48af1a6c68"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jul 11 00:23:28.219838 kubelet[2449]: I0711 00:23:28.219700 2449 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9e64dd31-5370-4fa7-b77b-3b48af1a6c68-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "9e64dd31-5370-4fa7-b77b-3b48af1a6c68" (UID: "9e64dd31-5370-4fa7-b77b-3b48af1a6c68"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jul 11 00:23:28.228069 kubelet[2449]: I0711 00:23:28.220941 2449 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9e64dd31-5370-4fa7-b77b-3b48af1a6c68-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "9e64dd31-5370-4fa7-b77b-3b48af1a6c68" (UID: "9e64dd31-5370-4fa7-b77b-3b48af1a6c68"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jul 11 00:23:28.229029 kubelet[2449]: I0711 00:23:28.228988 2449 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1e88fbd0-eb01-4885-9eb0-4bc016c1717a-kube-api-access-jlctd" (OuterVolumeSpecName: "kube-api-access-jlctd") pod "1e88fbd0-eb01-4885-9eb0-4bc016c1717a" (UID: "1e88fbd0-eb01-4885-9eb0-4bc016c1717a"). InnerVolumeSpecName "kube-api-access-jlctd". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jul 11 00:23:28.229280 kubelet[2449]: I0711 00:23:28.229219 2449 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9e64dd31-5370-4fa7-b77b-3b48af1a6c68-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "9e64dd31-5370-4fa7-b77b-3b48af1a6c68" (UID: "9e64dd31-5370-4fa7-b77b-3b48af1a6c68"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Jul 11 00:23:28.229961 kubelet[2449]: I0711 00:23:28.229937 2449 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9e64dd31-5370-4fa7-b77b-3b48af1a6c68-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "9e64dd31-5370-4fa7-b77b-3b48af1a6c68" (UID: "9e64dd31-5370-4fa7-b77b-3b48af1a6c68"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jul 11 00:23:28.230151 kubelet[2449]: I0711 00:23:28.230126 2449 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9e64dd31-5370-4fa7-b77b-3b48af1a6c68-kube-api-access-bz5mb" (OuterVolumeSpecName: "kube-api-access-bz5mb") pod "9e64dd31-5370-4fa7-b77b-3b48af1a6c68" (UID: "9e64dd31-5370-4fa7-b77b-3b48af1a6c68"). InnerVolumeSpecName "kube-api-access-bz5mb". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jul 11 00:23:28.231045 kubelet[2449]: I0711 00:23:28.231010 2449 scope.go:117] "RemoveContainer" containerID="3318370c20da32dfc481d9e7d4f681d450579920b977ffbd3cdd97ceeb860b8d"
Jul 11 00:23:28.235435 containerd[1425]: time="2025-07-11T00:23:28.235295819Z" level=info msg="RemoveContainer for \"3318370c20da32dfc481d9e7d4f681d450579920b977ffbd3cdd97ceeb860b8d\""
Jul 11 00:23:28.236182 systemd[1]: Removed slice kubepods-besteffort-pod1e88fbd0_eb01_4885_9eb0_4bc016c1717a.slice - libcontainer container kubepods-besteffort-pod1e88fbd0_eb01_4885_9eb0_4bc016c1717a.slice.
Jul 11 00:23:28.242141 containerd[1425]: time="2025-07-11T00:23:28.242098476Z" level=info msg="RemoveContainer for \"3318370c20da32dfc481d9e7d4f681d450579920b977ffbd3cdd97ceeb860b8d\" returns successfully"
Jul 11 00:23:28.242342 kubelet[2449]: I0711 00:23:28.242307 2449 scope.go:117] "RemoveContainer" containerID="3318370c20da32dfc481d9e7d4f681d450579920b977ffbd3cdd97ceeb860b8d"
Jul 11 00:23:28.243405 containerd[1425]: time="2025-07-11T00:23:28.243256579Z" level=error msg="ContainerStatus for \"3318370c20da32dfc481d9e7d4f681d450579920b977ffbd3cdd97ceeb860b8d\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"3318370c20da32dfc481d9e7d4f681d450579920b977ffbd3cdd97ceeb860b8d\": not found"
Jul 11 00:23:28.249847 kubelet[2449]: E0711 00:23:28.249809 2449 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"3318370c20da32dfc481d9e7d4f681d450579920b977ffbd3cdd97ceeb860b8d\": not found" containerID="3318370c20da32dfc481d9e7d4f681d450579920b977ffbd3cdd97ceeb860b8d"
Jul 11 00:23:28.250321 systemd[1]: Removed slice kubepods-burstable-pod9e64dd31_5370_4fa7_b77b_3b48af1a6c68.slice - libcontainer container kubepods-burstable-pod9e64dd31_5370_4fa7_b77b_3b48af1a6c68.slice.
Jul 11 00:23:28.250407 systemd[1]: kubepods-burstable-pod9e64dd31_5370_4fa7_b77b_3b48af1a6c68.slice: Consumed 6.575s CPU time.
Jul 11 00:23:28.254702 kubelet[2449]: I0711 00:23:28.254600 2449 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"3318370c20da32dfc481d9e7d4f681d450579920b977ffbd3cdd97ceeb860b8d"} err="failed to get container status \"3318370c20da32dfc481d9e7d4f681d450579920b977ffbd3cdd97ceeb860b8d\": rpc error: code = NotFound desc = an error occurred when try to find container \"3318370c20da32dfc481d9e7d4f681d450579920b977ffbd3cdd97ceeb860b8d\": not found"
Jul 11 00:23:28.254812 kubelet[2449]: I0711 00:23:28.254709 2449 scope.go:117] "RemoveContainer" containerID="e74c2abc4ca3e1cd23b706979ee96e5c77dc45603dc7538c871dc209beaf5be5"
Jul 11 00:23:28.256613 containerd[1425]: time="2025-07-11T00:23:28.256549926Z" level=info msg="RemoveContainer for \"e74c2abc4ca3e1cd23b706979ee96e5c77dc45603dc7538c871dc209beaf5be5\""
Jul 11 00:23:28.262339 containerd[1425]: time="2025-07-11T00:23:28.262149479Z" level=info msg="RemoveContainer for \"e74c2abc4ca3e1cd23b706979ee96e5c77dc45603dc7538c871dc209beaf5be5\" returns successfully"
Jul 11 00:23:28.262855 kubelet[2449]: I0711 00:23:28.262580 2449 scope.go:117] "RemoveContainer" containerID="ef178a9582d3c7f8646690fef246b2895cf55dd66884b6c636816428c0127172"
Jul 11 00:23:28.264830 containerd[1425]: time="2025-07-11T00:23:28.264587488Z" level=info msg="RemoveContainer for \"ef178a9582d3c7f8646690fef246b2895cf55dd66884b6c636816428c0127172\""
Jul 11 00:23:28.266933 containerd[1425]: time="2025-07-11T00:23:28.266881014Z" level=info msg="RemoveContainer for \"ef178a9582d3c7f8646690fef246b2895cf55dd66884b6c636816428c0127172\" returns successfully"
Jul 11 00:23:28.267228 kubelet[2449]: I0711 00:23:28.267207 2449 scope.go:117] "RemoveContainer" containerID="660ab4370835efef9badd86d80d049cba479b53d65f1b687ab67de1396554dbf"
Jul 11 00:23:28.268168 containerd[1425]: time="2025-07-11T00:23:28.268114519Z" level=info msg="RemoveContainer for \"660ab4370835efef9badd86d80d049cba479b53d65f1b687ab67de1396554dbf\""
Jul 11 00:23:28.270367 containerd[1425]: time="2025-07-11T00:23:28.270272682Z" level=info msg="RemoveContainer for \"660ab4370835efef9badd86d80d049cba479b53d65f1b687ab67de1396554dbf\" returns successfully"
Jul 11 00:23:28.270441 kubelet[2449]: I0711 00:23:28.270402 2449 scope.go:117] "RemoveContainer" containerID="235721277c463966026620f36116b00cd1ffd4a02af4e3028e903e5e51455ecb"
Jul 11 00:23:28.271528 containerd[1425]: time="2025-07-11T00:23:28.271245102Z" level=info msg="RemoveContainer for \"235721277c463966026620f36116b00cd1ffd4a02af4e3028e903e5e51455ecb\""
Jul 11 00:23:28.283972 containerd[1425]: time="2025-07-11T00:23:28.283848995Z" level=info msg="RemoveContainer for \"235721277c463966026620f36116b00cd1ffd4a02af4e3028e903e5e51455ecb\" returns successfully"
Jul 11 00:23:28.284079 kubelet[2449]: I0711 00:23:28.284032 2449 scope.go:117] "RemoveContainer" containerID="cb8e201c7eb814fac85a960f083b87991d87448f7879ce020098e65e98a804f0"
Jul 11 00:23:28.285204 containerd[1425]: time="2025-07-11T00:23:28.285159582Z" level=info msg="RemoveContainer for \"cb8e201c7eb814fac85a960f083b87991d87448f7879ce020098e65e98a804f0\""
Jul 11 00:23:28.288095 containerd[1425]: time="2025-07-11T00:23:28.287588151Z" level=info msg="RemoveContainer for \"cb8e201c7eb814fac85a960f083b87991d87448f7879ce020098e65e98a804f0\" returns successfully"
Jul 11 00:23:28.291091 kubelet[2449]: I0711 00:23:28.288421 2449 scope.go:117] "RemoveContainer" containerID="e74c2abc4ca3e1cd23b706979ee96e5c77dc45603dc7538c871dc209beaf5be5"
Jul 11 00:23:28.291091 kubelet[2449]: E0711 00:23:28.289271 2449 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"e74c2abc4ca3e1cd23b706979ee96e5c77dc45603dc7538c871dc209beaf5be5\": not found" containerID="e74c2abc4ca3e1cd23b706979ee96e5c77dc45603dc7538c871dc209beaf5be5"
Jul 11 00:23:28.291091 kubelet[2449]: I0711 00:23:28.289330 2449 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"e74c2abc4ca3e1cd23b706979ee96e5c77dc45603dc7538c871dc209beaf5be5"} err="failed to get container status \"e74c2abc4ca3e1cd23b706979ee96e5c77dc45603dc7538c871dc209beaf5be5\": rpc error: code = NotFound desc = an error occurred when try to find container \"e74c2abc4ca3e1cd23b706979ee96e5c77dc45603dc7538c871dc209beaf5be5\": not found"
Jul 11 00:23:28.291091 kubelet[2449]: I0711 00:23:28.289354 2449 scope.go:117] "RemoveContainer" containerID="ef178a9582d3c7f8646690fef246b2895cf55dd66884b6c636816428c0127172"
Jul 11 00:23:28.291091 kubelet[2449]: E0711 00:23:28.289820 2449 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"ef178a9582d3c7f8646690fef246b2895cf55dd66884b6c636816428c0127172\": not found" containerID="ef178a9582d3c7f8646690fef246b2895cf55dd66884b6c636816428c0127172"
Jul 11 00:23:28.291091 kubelet[2449]: I0711 00:23:28.289836 2449 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"ef178a9582d3c7f8646690fef246b2895cf55dd66884b6c636816428c0127172"} err="failed to get container status \"ef178a9582d3c7f8646690fef246b2895cf55dd66884b6c636816428c0127172\": rpc error: code = NotFound desc = an error occurred when try to find container \"ef178a9582d3c7f8646690fef246b2895cf55dd66884b6c636816428c0127172\": not found"
Jul 11 00:23:28.291091 kubelet[2449]: I0711 00:23:28.289850 2449 scope.go:117] "RemoveContainer" containerID="660ab4370835efef9badd86d80d049cba479b53d65f1b687ab67de1396554dbf"
Jul 11 00:23:28.291325 containerd[1425]: time="2025-07-11T00:23:28.288931378Z" level=error msg="ContainerStatus for \"e74c2abc4ca3e1cd23b706979ee96e5c77dc45603dc7538c871dc209beaf5be5\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"e74c2abc4ca3e1cd23b706979ee96e5c77dc45603dc7538c871dc209beaf5be5\": not found"
Jul 11 00:23:28.291325 containerd[1425]: time="2025-07-11T00:23:28.289691673Z" level=error msg="ContainerStatus for \"ef178a9582d3c7f8646690fef246b2895cf55dd66884b6c636816428c0127172\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"ef178a9582d3c7f8646690fef246b2895cf55dd66884b6c636816428c0127172\": not found"
Jul 11 00:23:28.291325 containerd[1425]: time="2025-07-11T00:23:28.290224604Z" level=error msg="ContainerStatus for \"660ab4370835efef9badd86d80d049cba479b53d65f1b687ab67de1396554dbf\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"660ab4370835efef9badd86d80d049cba479b53d65f1b687ab67de1396554dbf\": not found"
Jul 11 00:23:28.291325 containerd[1425]: time="2025-07-11T00:23:28.290665653Z" level=error msg="ContainerStatus for \"235721277c463966026620f36116b00cd1ffd4a02af4e3028e903e5e51455ecb\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"235721277c463966026620f36116b00cd1ffd4a02af4e3028e903e5e51455ecb\": not found"
Jul 11 00:23:28.291325 containerd[1425]: time="2025-07-11T00:23:28.291121342Z" level=error msg="ContainerStatus for \"cb8e201c7eb814fac85a960f083b87991d87448f7879ce020098e65e98a804f0\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"cb8e201c7eb814fac85a960f083b87991d87448f7879ce020098e65e98a804f0\": not found"
Jul 11 00:23:28.291439 kubelet[2449]: E0711 00:23:28.290329 2449 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"660ab4370835efef9badd86d80d049cba479b53d65f1b687ab67de1396554dbf\": not found" containerID="660ab4370835efef9badd86d80d049cba479b53d65f1b687ab67de1396554dbf"
Jul 11 00:23:28.291439 kubelet[2449]: I0711 00:23:28.290344 2449 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"660ab4370835efef9badd86d80d049cba479b53d65f1b687ab67de1396554dbf"} err="failed to get container status \"660ab4370835efef9badd86d80d049cba479b53d65f1b687ab67de1396554dbf\": rpc error: code = NotFound desc = an error occurred when try to find container \"660ab4370835efef9badd86d80d049cba479b53d65f1b687ab67de1396554dbf\": not found"
Jul 11 00:23:28.291439 kubelet[2449]: I0711 00:23:28.290356 2449 scope.go:117] "RemoveContainer" containerID="235721277c463966026620f36116b00cd1ffd4a02af4e3028e903e5e51455ecb"
Jul 11 00:23:28.291439 kubelet[2449]: E0711 00:23:28.290796 2449 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"235721277c463966026620f36116b00cd1ffd4a02af4e3028e903e5e51455ecb\": not found" containerID="235721277c463966026620f36116b00cd1ffd4a02af4e3028e903e5e51455ecb"
Jul 11 00:23:28.291439 kubelet[2449]: I0711 00:23:28.290821 2449 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"235721277c463966026620f36116b00cd1ffd4a02af4e3028e903e5e51455ecb"} err="failed to get container status \"235721277c463966026620f36116b00cd1ffd4a02af4e3028e903e5e51455ecb\": rpc error: code = NotFound desc = an error occurred when try to find container \"235721277c463966026620f36116b00cd1ffd4a02af4e3028e903e5e51455ecb\": not found"
Jul 11 00:23:28.291439 kubelet[2449]: I0711 00:23:28.290840 2449 scope.go:117] "RemoveContainer" containerID="cb8e201c7eb814fac85a960f083b87991d87448f7879ce020098e65e98a804f0"
Jul 11 00:23:28.291622 kubelet[2449]: E0711 00:23:28.291230 2449 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"cb8e201c7eb814fac85a960f083b87991d87448f7879ce020098e65e98a804f0\": not found" containerID="cb8e201c7eb814fac85a960f083b87991d87448f7879ce020098e65e98a804f0"
Jul 11 00:23:28.291622 kubelet[2449]: I0711 00:23:28.291248 2449 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"cb8e201c7eb814fac85a960f083b87991d87448f7879ce020098e65e98a804f0"} err="failed to get container status \"cb8e201c7eb814fac85a960f083b87991d87448f7879ce020098e65e98a804f0\": rpc error: code = NotFound desc = an error occurred when try to find container \"cb8e201c7eb814fac85a960f083b87991d87448f7879ce020098e65e98a804f0\": not found"
Jul 11 00:23:28.313865 kubelet[2449]: I0711 00:23:28.313821 2449 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/9e64dd31-5370-4fa7-b77b-3b48af1a6c68-cilium-cgroup\") on node \"localhost\" DevicePath \"\""
Jul 11 00:23:28.313865 kubelet[2449]: I0711 00:23:28.313857 2449 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/9e64dd31-5370-4fa7-b77b-3b48af1a6c68-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\""
Jul 11 00:23:28.313865 kubelet[2449]: I0711 00:23:28.313869 2449 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/9e64dd31-5370-4fa7-b77b-3b48af1a6c68-hostproc\") on node \"localhost\" DevicePath \"\""
Jul 11 00:23:28.313865 kubelet[2449]: I0711 00:23:28.313877 2449 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/9e64dd31-5370-4fa7-b77b-3b48af1a6c68-host-proc-sys-net\") on node \"localhost\" DevicePath \"\""
Jul 11 00:23:28.314100 kubelet[2449]: I0711 00:23:28.313885 2449 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/9e64dd31-5370-4fa7-b77b-3b48af1a6c68-etc-cni-netd\") on node \"localhost\" DevicePath \"\""
Jul 11 00:23:28.314100 kubelet[2449]: I0711 00:23:28.313920 2449 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/9e64dd31-5370-4fa7-b77b-3b48af1a6c68-hubble-tls\") on node \"localhost\" DevicePath \"\""
Jul 11 00:23:28.314100 kubelet[2449]: I0711 00:23:28.313928 2449 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/1e88fbd0-eb01-4885-9eb0-4bc016c1717a-cilium-config-path\") on node \"localhost\" DevicePath \"\""
Jul 11 00:23:28.314100 kubelet[2449]: I0711 00:23:28.313936 2449 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-jlctd\" (UniqueName: \"kubernetes.io/projected/1e88fbd0-eb01-4885-9eb0-4bc016c1717a-kube-api-access-jlctd\") on node \"localhost\" DevicePath \"\""
Jul 11 00:23:28.314100 kubelet[2449]: I0711 00:23:28.313944 2449 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/9e64dd31-5370-4fa7-b77b-3b48af1a6c68-clustermesh-secrets\") on node \"localhost\" DevicePath \"\""
Jul 11 00:23:28.314100 kubelet[2449]: I0711 00:23:28.313951 2449 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/9e64dd31-5370-4fa7-b77b-3b48af1a6c68-bpf-maps\") on node \"localhost\" DevicePath \"\""
Jul 11 00:23:28.314100 kubelet[2449]: I0711 00:23:28.313958 2449 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9e64dd31-5370-4fa7-b77b-3b48af1a6c68-xtables-lock\") on node \"localhost\" DevicePath \"\""
Jul 11 00:23:28.314100 kubelet[2449]: I0711 00:23:28.313965 2449 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9e64dd31-5370-4fa7-b77b-3b48af1a6c68-lib-modules\") on node \"localhost\" DevicePath \"\""
Jul 11 00:23:28.314269 kubelet[2449]: I0711 00:23:28.313973 2449 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-bz5mb\" (UniqueName: \"kubernetes.io/projected/9e64dd31-5370-4fa7-b77b-3b48af1a6c68-kube-api-access-bz5mb\") on node \"localhost\" DevicePath \"\""
Jul 11 00:23:28.314269 kubelet[2449]: I0711 00:23:28.313980 2449 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/9e64dd31-5370-4fa7-b77b-3b48af1a6c68-cilium-run\") on node \"localhost\" DevicePath \"\""
Jul 11 00:23:28.314269 kubelet[2449]: I0711 00:23:28.313987 2449 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/9e64dd31-5370-4fa7-b77b-3b48af1a6c68-cni-path\") on node \"localhost\" DevicePath \"\""
Jul 11 00:23:28.314269 kubelet[2449]: I0711 00:23:28.313995 2449 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/9e64dd31-5370-4fa7-b77b-3b48af1a6c68-cilium-config-path\") on node \"localhost\" DevicePath \"\""
Jul 11 00:23:28.949275 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e66cab895cb8c34b6b10cb3ce5a309c2f35694a6e22fb50a6e8f87718f283b58-rootfs.mount: Deactivated successfully.
Jul 11 00:23:28.949370 systemd[1]: var-lib-kubelet-pods-1e88fbd0\x2deb01\x2d4885\x2d9eb0\x2d4bc016c1717a-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2djlctd.mount: Deactivated successfully.
Jul 11 00:23:28.949423 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a152906c0eaf01bfdce0f2df3c1944e18664831c2b4e0400818de9eff8a80764-rootfs.mount: Deactivated successfully.
Jul 11 00:23:28.949475 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-a152906c0eaf01bfdce0f2df3c1944e18664831c2b4e0400818de9eff8a80764-shm.mount: Deactivated successfully.
Jul 11 00:23:28.949530 systemd[1]: var-lib-kubelet-pods-9e64dd31\x2d5370\x2d4fa7\x2db77b\x2d3b48af1a6c68-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dbz5mb.mount: Deactivated successfully.
Jul 11 00:23:28.949580 systemd[1]: var-lib-kubelet-pods-9e64dd31\x2d5370\x2d4fa7\x2db77b\x2d3b48af1a6c68-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
Jul 11 00:23:28.949628 systemd[1]: var-lib-kubelet-pods-9e64dd31\x2d5370\x2d4fa7\x2db77b\x2d3b48af1a6c68-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
Jul 11 00:23:29.023503 kubelet[2449]: I0711 00:23:29.023398 2449 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1e88fbd0-eb01-4885-9eb0-4bc016c1717a" path="/var/lib/kubelet/pods/1e88fbd0-eb01-4885-9eb0-4bc016c1717a/volumes"
Jul 11 00:23:29.024019 kubelet[2449]: I0711 00:23:29.023994 2449 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9e64dd31-5370-4fa7-b77b-3b48af1a6c68" path="/var/lib/kubelet/pods/9e64dd31-5370-4fa7-b77b-3b48af1a6c68/volumes"
Jul 11 00:23:29.893201 sshd[4078]: pam_unix(sshd:session): session closed for user core
Jul 11 00:23:29.902671 systemd[1]: sshd@22-10.0.0.102:22-10.0.0.1:56216.service: Deactivated successfully.
Jul 11 00:23:29.904352 systemd[1]: session-23.scope: Deactivated successfully.
Jul 11 00:23:29.904650 systemd[1]: session-23.scope: Consumed 1.161s CPU time.
Jul 11 00:23:29.905919 systemd-logind[1406]: Session 23 logged out. Waiting for processes to exit.
Jul 11 00:23:29.914332 systemd[1]: Started sshd@23-10.0.0.102:22-10.0.0.1:56230.service - OpenSSH per-connection server daemon (10.0.0.1:56230).
Jul 11 00:23:29.917144 systemd-logind[1406]: Removed session 23.
Jul 11 00:23:29.953728 sshd[4244]: Accepted publickey for core from 10.0.0.1 port 56230 ssh2: RSA SHA256:GK2LEBRiSxxQSb7NJczWsRz9vp5Z0addujXbSKx/c/M
Jul 11 00:23:29.955621 sshd[4244]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 11 00:23:29.959927 systemd-logind[1406]: New session 24 of user core.
Jul 11 00:23:29.973238 systemd[1]: Started session-24.scope - Session 24 of User core.
Jul 11 00:23:30.021636 kubelet[2449]: E0711 00:23:30.021174 2449 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 11 00:23:31.047273 sshd[4244]: pam_unix(sshd:session): session closed for user core
Jul 11 00:23:31.054073 systemd[1]: sshd@23-10.0.0.102:22-10.0.0.1:56230.service: Deactivated successfully.
Jul 11 00:23:31.056709 systemd[1]: session-24.scope: Deactivated successfully.
Jul 11 00:23:31.060226 systemd-logind[1406]: Session 24 logged out. Waiting for processes to exit.
Jul 11 00:23:31.065643 kubelet[2449]: I0711 00:23:31.065435 2449 memory_manager.go:355] "RemoveStaleState removing state" podUID="1e88fbd0-eb01-4885-9eb0-4bc016c1717a" containerName="cilium-operator"
Jul 11 00:23:31.065643 kubelet[2449]: I0711 00:23:31.065468 2449 memory_manager.go:355] "RemoveStaleState removing state" podUID="9e64dd31-5370-4fa7-b77b-3b48af1a6c68" containerName="cilium-agent"
Jul 11 00:23:31.069621 kubelet[2449]: I0711 00:23:31.068902 2449 status_manager.go:890] "Failed to get status for pod" podUID="6678a3ec-7a28-465d-b6d9-df0fabbe05ae" pod="kube-system/cilium-v5g4f" err="pods \"cilium-v5g4f\" is forbidden: User \"system:node:localhost\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'localhost' and this object"
Jul 11 00:23:31.069621 kubelet[2449]: W0711 00:23:31.068958 2449 reflector.go:569] object-"kube-system"/"hubble-server-certs": failed to list *v1.Secret: secrets "hubble-server-certs" is forbidden: User "system:node:localhost" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'localhost' and this object
Jul 11 00:23:31.069621 kubelet[2449]: W0711 00:23:31.069348 2449 reflector.go:569] object-"kube-system"/"cilium-ipsec-keys": failed to list *v1.Secret: secrets "cilium-ipsec-keys" is forbidden: User "system:node:localhost" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'localhost' and this object
Jul 11 00:23:31.066627 systemd[1]: Started sshd@24-10.0.0.102:22-10.0.0.1:56242.service - OpenSSH per-connection server daemon (10.0.0.1:56242).
Jul 11 00:23:31.073725 kubelet[2449]: E0711 00:23:31.073681 2449 reflector.go:166] "Unhandled Error" err="object-\"kube-system\"/\"cilium-ipsec-keys\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"cilium-ipsec-keys\" is forbidden: User \"system:node:localhost\" cannot list resource \"secrets\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'localhost' and this object" logger="UnhandledError"
Jul 11 00:23:31.074421 kubelet[2449]: E0711 00:23:31.074345 2449 reflector.go:166] "Unhandled Error" err="object-\"kube-system\"/\"hubble-server-certs\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"hubble-server-certs\" is forbidden: User \"system:node:localhost\" cannot list resource \"secrets\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'localhost' and this object" logger="UnhandledError"
Jul 11 00:23:31.077380 systemd-logind[1406]: Removed session 24.
Jul 11 00:23:31.100127 systemd[1]: Created slice kubepods-burstable-pod6678a3ec_7a28_465d_b6d9_df0fabbe05ae.slice - libcontainer container kubepods-burstable-pod6678a3ec_7a28_465d_b6d9_df0fabbe05ae.slice.
Jul 11 00:23:31.122822 sshd[4257]: Accepted publickey for core from 10.0.0.1 port 56242 ssh2: RSA SHA256:GK2LEBRiSxxQSb7NJczWsRz9vp5Z0addujXbSKx/c/M
Jul 11 00:23:31.125627 sshd[4257]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 11 00:23:31.128516 kubelet[2449]: I0711 00:23:31.128475 2449 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6678a3ec-7a28-465d-b6d9-df0fabbe05ae-xtables-lock\") pod \"cilium-v5g4f\" (UID: \"6678a3ec-7a28-465d-b6d9-df0fabbe05ae\") " pod="kube-system/cilium-v5g4f"
Jul 11 00:23:31.128516 kubelet[2449]: I0711 00:23:31.128515 2449 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/6678a3ec-7a28-465d-b6d9-df0fabbe05ae-hubble-tls\") pod \"cilium-v5g4f\" (UID: \"6678a3ec-7a28-465d-b6d9-df0fabbe05ae\") " pod="kube-system/cilium-v5g4f"
Jul 11 00:23:31.128618 kubelet[2449]: I0711 00:23:31.128533 2449 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/6678a3ec-7a28-465d-b6d9-df0fabbe05ae-host-proc-sys-kernel\") pod \"cilium-v5g4f\" (UID: \"6678a3ec-7a28-465d-b6d9-df0fabbe05ae\") " pod="kube-system/cilium-v5g4f"
Jul 11 00:23:31.128618 kubelet[2449]: I0711 00:23:31.128581 2449 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/6678a3ec-7a28-465d-b6d9-df0fabbe05ae-bpf-maps\") pod \"cilium-v5g4f\" (UID: \"6678a3ec-7a28-465d-b6d9-df0fabbe05ae\") " pod="kube-system/cilium-v5g4f"
Jul 11 00:23:31.128618 kubelet[2449]: I0711 00:23:31.128613 2449 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/6678a3ec-7a28-465d-b6d9-df0fabbe05ae-cilium-cgroup\") pod \"cilium-v5g4f\" (UID: \"6678a3ec-7a28-465d-b6d9-df0fabbe05ae\") " pod="kube-system/cilium-v5g4f"
Jul 11 00:23:31.128698 kubelet[2449]: I0711 00:23:31.128634 2449 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6678a3ec-7a28-465d-b6d9-df0fabbe05ae-lib-modules\") pod \"cilium-v5g4f\" (UID: \"6678a3ec-7a28-465d-b6d9-df0fabbe05ae\") " pod="kube-system/cilium-v5g4f"
Jul 11 00:23:31.128698 kubelet[2449]: I0711 00:23:31.128667 2449 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/6678a3ec-7a28-465d-b6d9-df0fabbe05ae-cni-path\") pod \"cilium-v5g4f\" (UID: \"6678a3ec-7a28-465d-b6d9-df0fabbe05ae\") " pod="kube-system/cilium-v5g4f"
Jul 11 00:23:31.128698 kubelet[2449]: I0711 00:23:31.128689 2449 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hzs2g\" (UniqueName: \"kubernetes.io/projected/6678a3ec-7a28-465d-b6d9-df0fabbe05ae-kube-api-access-hzs2g\") pod \"cilium-v5g4f\" (UID: \"6678a3ec-7a28-465d-b6d9-df0fabbe05ae\") " pod="kube-system/cilium-v5g4f"
Jul 11 00:23:31.128764 kubelet[2449]: I0711 00:23:31.128713 2449 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/6678a3ec-7a28-465d-b6d9-df0fabbe05ae-clustermesh-secrets\") pod \"cilium-v5g4f\" (UID: \"6678a3ec-7a28-465d-b6d9-df0fabbe05ae\") " pod="kube-system/cilium-v5g4f"
Jul 11 00:23:31.128764 kubelet[2449]: I0711 00:23:31.128728 2449 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6678a3ec-7a28-465d-b6d9-df0fabbe05ae-cilium-config-path\") pod \"cilium-v5g4f\" (UID: \"6678a3ec-7a28-465d-b6d9-df0fabbe05ae\") " pod="kube-system/cilium-v5g4f"
Jul 11 00:23:31.128764 kubelet[2449]: I0711 00:23:31.128744 2449 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/6678a3ec-7a28-465d-b6d9-df0fabbe05ae-cilium-ipsec-secrets\") pod \"cilium-v5g4f\" (UID: \"6678a3ec-7a28-465d-b6d9-df0fabbe05ae\") " pod="kube-system/cilium-v5g4f"
Jul 11 00:23:31.128836 kubelet[2449]: I0711 00:23:31.128786 2449 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/6678a3ec-7a28-465d-b6d9-df0fabbe05ae-hostproc\") pod \"cilium-v5g4f\" (UID: \"6678a3ec-7a28-465d-b6d9-df0fabbe05ae\") " pod="kube-system/cilium-v5g4f"
Jul 11 00:23:31.128836 kubelet[2449]: I0711 00:23:31.128805 2449 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6678a3ec-7a28-465d-b6d9-df0fabbe05ae-etc-cni-netd\") pod \"cilium-v5g4f\" (UID: \"6678a3ec-7a28-465d-b6d9-df0fabbe05ae\") " pod="kube-system/cilium-v5g4f"
Jul 11 00:23:31.128836 kubelet[2449]: I0711 00:23:31.128824 2449 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/6678a3ec-7a28-465d-b6d9-df0fabbe05ae-cilium-run\") pod \"cilium-v5g4f\" (UID: \"6678a3ec-7a28-465d-b6d9-df0fabbe05ae\") " pod="kube-system/cilium-v5g4f"
Jul 11 00:23:31.128898 kubelet[2449]: I0711 00:23:31.128840 2449 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/6678a3ec-7a28-465d-b6d9-df0fabbe05ae-host-proc-sys-net\") pod \"cilium-v5g4f\" (UID: \"6678a3ec-7a28-465d-b6d9-df0fabbe05ae\") " pod="kube-system/cilium-v5g4f"
Jul 11 00:23:31.133404 systemd-logind[1406]: New session 25 of user core.
Jul 11 00:23:31.137197 systemd[1]: Started session-25.scope - Session 25 of User core.
Jul 11 00:23:31.189244 sshd[4257]: pam_unix(sshd:session): session closed for user core
Jul 11 00:23:31.200508 systemd[1]: sshd@24-10.0.0.102:22-10.0.0.1:56242.service: Deactivated successfully.
Jul 11 00:23:31.203509 systemd[1]: session-25.scope: Deactivated successfully.
Jul 11 00:23:31.204776 systemd-logind[1406]: Session 25 logged out. Waiting for processes to exit.
Jul 11 00:23:31.207449 systemd[1]: Started sshd@25-10.0.0.102:22-10.0.0.1:56250.service - OpenSSH per-connection server daemon (10.0.0.1:56250).
Jul 11 00:23:31.208450 systemd-logind[1406]: Removed session 25.
Jul 11 00:23:31.250994 sshd[4265]: Accepted publickey for core from 10.0.0.1 port 56250 ssh2: RSA SHA256:GK2LEBRiSxxQSb7NJczWsRz9vp5Z0addujXbSKx/c/M
Jul 11 00:23:31.252430 sshd[4265]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 11 00:23:31.256136 systemd-logind[1406]: New session 26 of user core.
Jul 11 00:23:31.262287 systemd[1]: Started session-26.scope - Session 26 of User core.
Jul 11 00:23:32.230182 kubelet[2449]: E0711 00:23:32.230129 2449 projected.go:263] Couldn't get secret kube-system/hubble-server-certs: failed to sync secret cache: timed out waiting for the condition
Jul 11 00:23:32.231129 kubelet[2449]: E0711 00:23:32.230138 2449 secret.go:189] Couldn't get secret kube-system/cilium-ipsec-keys: failed to sync secret cache: timed out waiting for the condition
Jul 11 00:23:32.231129 kubelet[2449]: E0711 00:23:32.230834 2449 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6678a3ec-7a28-465d-b6d9-df0fabbe05ae-cilium-ipsec-secrets podName:6678a3ec-7a28-465d-b6d9-df0fabbe05ae nodeName:}" failed. No retries permitted until 2025-07-11 00:23:32.7308097 +0000 UTC m=+79.793639632 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cilium-ipsec-secrets" (UniqueName: "kubernetes.io/secret/6678a3ec-7a28-465d-b6d9-df0fabbe05ae-cilium-ipsec-secrets") pod "cilium-v5g4f" (UID: "6678a3ec-7a28-465d-b6d9-df0fabbe05ae") : failed to sync secret cache: timed out waiting for the condition
Jul 11 00:23:32.232141 kubelet[2449]: E0711 00:23:32.230168 2449 projected.go:194] Error preparing data for projected volume hubble-tls for pod kube-system/cilium-v5g4f: failed to sync secret cache: timed out waiting for the condition
Jul 11 00:23:32.232221 kubelet[2449]: E0711 00:23:32.232190 2449 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/6678a3ec-7a28-465d-b6d9-df0fabbe05ae-hubble-tls podName:6678a3ec-7a28-465d-b6d9-df0fabbe05ae nodeName:}" failed. No retries permitted until 2025-07-11 00:23:32.732173644 +0000 UTC m=+79.795003576 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "hubble-tls" (UniqueName: "kubernetes.io/projected/6678a3ec-7a28-465d-b6d9-df0fabbe05ae-hubble-tls") pod "cilium-v5g4f" (UID: "6678a3ec-7a28-465d-b6d9-df0fabbe05ae") : failed to sync secret cache: timed out waiting for the condition
Jul 11 00:23:32.904321 kubelet[2449]: E0711 00:23:32.904280 2449 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 11 00:23:32.904835 containerd[1425]: time="2025-07-11T00:23:32.904770574Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-v5g4f,Uid:6678a3ec-7a28-465d-b6d9-df0fabbe05ae,Namespace:kube-system,Attempt:0,}"
Jul 11 00:23:32.920422 containerd[1425]: time="2025-07-11T00:23:32.920147883Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 11 00:23:32.920422 containerd[1425]: time="2025-07-11T00:23:32.920201883Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 11 00:23:32.920422 containerd[1425]: time="2025-07-11T00:23:32.920216804Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 11 00:23:32.920422 containerd[1425]: time="2025-07-11T00:23:32.920286445Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 11 00:23:32.937306 systemd[1]: Started cri-containerd-18d2dca9278a772cad41ba6e84609e7b316db9281cd71c2bb36daefdc495c5f0.scope - libcontainer container 18d2dca9278a772cad41ba6e84609e7b316db9281cd71c2bb36daefdc495c5f0.
Jul 11 00:23:32.953902 containerd[1425]: time="2025-07-11T00:23:32.953855190Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-v5g4f,Uid:6678a3ec-7a28-465d-b6d9-df0fabbe05ae,Namespace:kube-system,Attempt:0,} returns sandbox id \"18d2dca9278a772cad41ba6e84609e7b316db9281cd71c2bb36daefdc495c5f0\""
Jul 11 00:23:32.954589 kubelet[2449]: E0711 00:23:32.954569 2449 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 11 00:23:32.956318 containerd[1425]: time="2025-07-11T00:23:32.956280513Z" level=info msg="CreateContainer within sandbox \"18d2dca9278a772cad41ba6e84609e7b316db9281cd71c2bb36daefdc495c5f0\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Jul 11 00:23:32.965557 containerd[1425]: time="2025-07-11T00:23:32.965513274Z" level=info msg="CreateContainer within sandbox \"18d2dca9278a772cad41ba6e84609e7b316db9281cd71c2bb36daefdc495c5f0\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"6033f64ef74f8dee7837f58dae96707492fc90977daa171382b03e3bc8c097a4\""
Jul 11 00:23:32.967356 containerd[1425]: time="2025-07-11T00:23:32.967322505Z" level=info msg="StartContainer for \"6033f64ef74f8dee7837f58dae96707492fc90977daa171382b03e3bc8c097a4\""
Jul 11 00:23:32.993211 systemd[1]: Started cri-containerd-6033f64ef74f8dee7837f58dae96707492fc90977daa171382b03e3bc8c097a4.scope - libcontainer container 6033f64ef74f8dee7837f58dae96707492fc90977daa171382b03e3bc8c097a4.
Jul 11 00:23:33.013243 containerd[1425]: time="2025-07-11T00:23:33.013204778Z" level=info msg="StartContainer for \"6033f64ef74f8dee7837f58dae96707492fc90977daa171382b03e3bc8c097a4\" returns successfully"
Jul 11 00:23:33.026850 systemd[1]: cri-containerd-6033f64ef74f8dee7837f58dae96707492fc90977daa171382b03e3bc8c097a4.scope: Deactivated successfully.
Jul 11 00:23:33.053866 containerd[1425]: time="2025-07-11T00:23:33.053806421Z" level=info msg="shim disconnected" id=6033f64ef74f8dee7837f58dae96707492fc90977daa171382b03e3bc8c097a4 namespace=k8s.io
Jul 11 00:23:33.053866 containerd[1425]: time="2025-07-11T00:23:33.053862782Z" level=warning msg="cleaning up after shim disconnected" id=6033f64ef74f8dee7837f58dae96707492fc90977daa171382b03e3bc8c097a4 namespace=k8s.io
Jul 11 00:23:33.053866 containerd[1425]: time="2025-07-11T00:23:33.053872062Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 11 00:23:33.071256 kubelet[2449]: E0711 00:23:33.071205 2449 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Jul 11 00:23:33.253731 kubelet[2449]: E0711 00:23:33.253608 2449 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 11 00:23:33.257556 containerd[1425]: time="2025-07-11T00:23:33.257196363Z" level=info msg="CreateContainer within sandbox \"18d2dca9278a772cad41ba6e84609e7b316db9281cd71c2bb36daefdc495c5f0\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Jul 11 00:23:33.277401 containerd[1425]: time="2025-07-11T00:23:33.277278941Z" level=info msg="CreateContainer within sandbox \"18d2dca9278a772cad41ba6e84609e7b316db9281cd71c2bb36daefdc495c5f0\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"8224a1f29b8e455301ea7dcbe9899746d2e000c85edde2b93cadf20acdcc8358\""
Jul 11 00:23:33.280434 containerd[1425]: time="2025-07-11T00:23:33.280398193Z" level=info msg="StartContainer for \"8224a1f29b8e455301ea7dcbe9899746d2e000c85edde2b93cadf20acdcc8358\""
Jul 11 00:23:33.309292 systemd[1]: Started cri-containerd-8224a1f29b8e455301ea7dcbe9899746d2e000c85edde2b93cadf20acdcc8358.scope - libcontainer container 8224a1f29b8e455301ea7dcbe9899746d2e000c85edde2b93cadf20acdcc8358.
Jul 11 00:23:33.336569 containerd[1425]: time="2025-07-11T00:23:33.336529417Z" level=info msg="StartContainer for \"8224a1f29b8e455301ea7dcbe9899746d2e000c85edde2b93cadf20acdcc8358\" returns successfully"
Jul 11 00:23:33.345977 systemd[1]: cri-containerd-8224a1f29b8e455301ea7dcbe9899746d2e000c85edde2b93cadf20acdcc8358.scope: Deactivated successfully.
Jul 11 00:23:33.363513 containerd[1425]: time="2025-07-11T00:23:33.363394949Z" level=info msg="shim disconnected" id=8224a1f29b8e455301ea7dcbe9899746d2e000c85edde2b93cadf20acdcc8358 namespace=k8s.io
Jul 11 00:23:33.363513 containerd[1425]: time="2025-07-11T00:23:33.363445750Z" level=warning msg="cleaning up after shim disconnected" id=8224a1f29b8e455301ea7dcbe9899746d2e000c85edde2b93cadf20acdcc8358 namespace=k8s.io
Jul 11 00:23:33.363513 containerd[1425]: time="2025-07-11T00:23:33.363454990Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 11 00:23:34.261174 kubelet[2449]: E0711 00:23:34.260924 2449 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 11 00:23:34.265133 containerd[1425]: time="2025-07-11T00:23:34.264943639Z" level=info msg="CreateContainer within sandbox \"18d2dca9278a772cad41ba6e84609e7b316db9281cd71c2bb36daefdc495c5f0\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Jul 11 00:23:34.278691 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2685382976.mount: Deactivated successfully.
Jul 11 00:23:34.282084 containerd[1425]: time="2025-07-11T00:23:34.282036756Z" level=info msg="CreateContainer within sandbox \"18d2dca9278a772cad41ba6e84609e7b316db9281cd71c2bb36daefdc495c5f0\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"7b56132af791d61ee975e581032ebe9b506ff1943f408891b82955a6f15423df\""
Jul 11 00:23:34.282645 containerd[1425]: time="2025-07-11T00:23:34.282534084Z" level=info msg="StartContainer for \"7b56132af791d61ee975e581032ebe9b506ff1943f408891b82955a6f15423df\""
Jul 11 00:23:34.310323 systemd[1]: Started cri-containerd-7b56132af791d61ee975e581032ebe9b506ff1943f408891b82955a6f15423df.scope - libcontainer container 7b56132af791d61ee975e581032ebe9b506ff1943f408891b82955a6f15423df.
Jul 11 00:23:34.333188 systemd[1]: cri-containerd-7b56132af791d61ee975e581032ebe9b506ff1943f408891b82955a6f15423df.scope: Deactivated successfully.
Jul 11 00:23:34.335072 containerd[1425]: time="2025-07-11T00:23:34.333686554Z" level=info msg="StartContainer for \"7b56132af791d61ee975e581032ebe9b506ff1943f408891b82955a6f15423df\" returns successfully"
Jul 11 00:23:34.353353 containerd[1425]: time="2025-07-11T00:23:34.353300873Z" level=info msg="shim disconnected" id=7b56132af791d61ee975e581032ebe9b506ff1943f408891b82955a6f15423df namespace=k8s.io
Jul 11 00:23:34.353353 containerd[1425]: time="2025-07-11T00:23:34.353349633Z" level=warning msg="cleaning up after shim disconnected" id=7b56132af791d61ee975e581032ebe9b506ff1943f408891b82955a6f15423df namespace=k8s.io
Jul 11 00:23:34.353353 containerd[1425]: time="2025-07-11T00:23:34.353358834Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 11 00:23:34.606511 kubelet[2449]: I0711 00:23:34.606459 2449 setters.go:602] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-07-11T00:23:34Z","lastTransitionTime":"2025-07-11T00:23:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Jul 11 00:23:34.741595 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7b56132af791d61ee975e581032ebe9b506ff1943f408891b82955a6f15423df-rootfs.mount: Deactivated successfully.
Jul 11 00:23:35.265159 kubelet[2449]: E0711 00:23:35.265088 2449 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 11 00:23:35.272164 containerd[1425]: time="2025-07-11T00:23:35.272108983Z" level=info msg="CreateContainer within sandbox \"18d2dca9278a772cad41ba6e84609e7b316db9281cd71c2bb36daefdc495c5f0\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Jul 11 00:23:35.285563 containerd[1425]: time="2025-07-11T00:23:35.285523713Z" level=info msg="CreateContainer within sandbox \"18d2dca9278a772cad41ba6e84609e7b316db9281cd71c2bb36daefdc495c5f0\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"d8f3a8ecbba3fe27e5164cc1cc0d7ceca548b2a3b86df724b031a1de934cbe51\""
Jul 11 00:23:35.287160 containerd[1425]: time="2025-07-11T00:23:35.287126818Z" level=info msg="StartContainer for \"d8f3a8ecbba3fe27e5164cc1cc0d7ceca548b2a3b86df724b031a1de934cbe51\""
Jul 11 00:23:35.315215 systemd[1]: Started cri-containerd-d8f3a8ecbba3fe27e5164cc1cc0d7ceca548b2a3b86df724b031a1de934cbe51.scope - libcontainer container d8f3a8ecbba3fe27e5164cc1cc0d7ceca548b2a3b86df724b031a1de934cbe51.
Jul 11 00:23:35.335572 systemd[1]: cri-containerd-d8f3a8ecbba3fe27e5164cc1cc0d7ceca548b2a3b86df724b031a1de934cbe51.scope: Deactivated successfully.
Jul 11 00:23:35.337959 containerd[1425]: time="2025-07-11T00:23:35.337758850Z" level=info msg="StartContainer for \"d8f3a8ecbba3fe27e5164cc1cc0d7ceca548b2a3b86df724b031a1de934cbe51\" returns successfully"
Jul 11 00:23:35.358074 containerd[1425]: time="2025-07-11T00:23:35.357931406Z" level=info msg="shim disconnected" id=d8f3a8ecbba3fe27e5164cc1cc0d7ceca548b2a3b86df724b031a1de934cbe51 namespace=k8s.io
Jul 11 00:23:35.358074 containerd[1425]: time="2025-07-11T00:23:35.357998407Z" level=warning msg="cleaning up after shim disconnected" id=d8f3a8ecbba3fe27e5164cc1cc0d7ceca548b2a3b86df724b031a1de934cbe51 namespace=k8s.io
Jul 11 00:23:35.358074 containerd[1425]: time="2025-07-11T00:23:35.358007967Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 11 00:23:35.741677 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d8f3a8ecbba3fe27e5164cc1cc0d7ceca548b2a3b86df724b031a1de934cbe51-rootfs.mount: Deactivated successfully.
Jul 11 00:23:36.269028 kubelet[2449]: E0711 00:23:36.268997 2449 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 11 00:23:36.272285 containerd[1425]: time="2025-07-11T00:23:36.272246681Z" level=info msg="CreateContainer within sandbox \"18d2dca9278a772cad41ba6e84609e7b316db9281cd71c2bb36daefdc495c5f0\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Jul 11 00:23:36.286471 containerd[1425]: time="2025-07-11T00:23:36.286428135Z" level=info msg="CreateContainer within sandbox \"18d2dca9278a772cad41ba6e84609e7b316db9281cd71c2bb36daefdc495c5f0\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"9296b45db7468d83adb6ed787e5ec7ea9e3d046f7426c3e246208747fa32f6dd\""
Jul 11 00:23:36.290708 containerd[1425]: time="2025-07-11T00:23:36.289393020Z" level=info msg="StartContainer for \"9296b45db7468d83adb6ed787e5ec7ea9e3d046f7426c3e246208747fa32f6dd\""
Jul 11 00:23:36.313204 systemd[1]: Started cri-containerd-9296b45db7468d83adb6ed787e5ec7ea9e3d046f7426c3e246208747fa32f6dd.scope - libcontainer container 9296b45db7468d83adb6ed787e5ec7ea9e3d046f7426c3e246208747fa32f6dd.
Jul 11 00:23:36.335696 containerd[1425]: time="2025-07-11T00:23:36.335654878Z" level=info msg="StartContainer for \"9296b45db7468d83adb6ed787e5ec7ea9e3d046f7426c3e246208747fa32f6dd\" returns successfully"
Jul 11 00:23:36.597109 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce))
Jul 11 00:23:37.276759 kubelet[2449]: E0711 00:23:37.276367 2449 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 11 00:23:37.293374 kubelet[2449]: I0711 00:23:37.292314 2449 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-v5g4f" podStartSLOduration=6.292298391 podStartE2EDuration="6.292298391s" podCreationTimestamp="2025-07-11 00:23:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-11 00:23:37.292120068 +0000 UTC m=+84.354950000" watchObservedRunningTime="2025-07-11 00:23:37.292298391 +0000 UTC m=+84.355128323"
Jul 11 00:23:38.905913 kubelet[2449]: E0711 00:23:38.905864 2449 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 11 00:23:39.401623 systemd-networkd[1363]: lxc_health: Link UP
Jul 11 00:23:39.413030 systemd-networkd[1363]: lxc_health: Gained carrier
Jul 11 00:23:40.907352 kubelet[2449]: E0711 00:23:40.907040 2449 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 11 00:23:41.234254 systemd-networkd[1363]: lxc_health: Gained IPv6LL
Jul 11 00:23:41.286572 kubelet[2449]: E0711 00:23:41.286540 2449 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 11 00:23:42.288331 kubelet[2449]: E0711 00:23:42.288197 2449 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 11 00:23:46.068033 sshd[4265]: pam_unix(sshd:session): session closed for user core
Jul 11 00:23:46.071953 systemd[1]: sshd@25-10.0.0.102:22-10.0.0.1:56250.service: Deactivated successfully.
Jul 11 00:23:46.073825 systemd[1]: session-26.scope: Deactivated successfully.
Jul 11 00:23:46.075718 systemd-logind[1406]: Session 26 logged out. Waiting for processes to exit.
Jul 11 00:23:46.077411 systemd-logind[1406]: Removed session 26.