Jul 12 00:16:45.944698 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Jul 12 00:16:45.944720 kernel: Linux version 6.6.96-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Fri Jul 11 22:42:11 -00 2025
Jul 12 00:16:45.944730 kernel: KASLR enabled
Jul 12 00:16:45.944736 kernel: efi: EFI v2.7 by EDK II
Jul 12 00:16:45.944742 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdba86018 ACPI 2.0=0xd9710018 RNG=0xd971e498 MEMRESERVE=0xd9b43d18
Jul 12 00:16:45.944748 kernel: random: crng init done
Jul 12 00:16:45.944755 kernel: ACPI: Early table checksum verification disabled
Jul 12 00:16:45.944761 kernel: ACPI: RSDP 0x00000000D9710018 000024 (v02 BOCHS )
Jul 12 00:16:45.944767 kernel: ACPI: XSDT 0x00000000D971FE98 000064 (v01 BOCHS BXPC 00000001 01000013)
Jul 12 00:16:45.944775 kernel: ACPI: FACP 0x00000000D971FA98 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Jul 12 00:16:45.944781 kernel: ACPI: DSDT 0x00000000D9717518 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jul 12 00:16:45.944787 kernel: ACPI: APIC 0x00000000D971FC18 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Jul 12 00:16:45.944793 kernel: ACPI: PPTT 0x00000000D971D898 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jul 12 00:16:45.944799 kernel: ACPI: GTDT 0x00000000D971E818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jul 12 00:16:45.944807 kernel: ACPI: MCFG 0x00000000D971E918 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 12 00:16:45.944815 kernel: ACPI: SPCR 0x00000000D971FF98 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jul 12 00:16:45.944822 kernel: ACPI: DBG2 0x00000000D971E418 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Jul 12 00:16:45.944828 kernel: ACPI: IORT 0x00000000D971E718 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jul 12 00:16:45.944835 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
Jul 12 00:16:45.944841 kernel: NUMA: Failed to initialise from firmware
Jul 12 00:16:45.944848 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
Jul 12 00:16:45.944854 kernel: NUMA: NODE_DATA [mem 0xdc957800-0xdc95cfff]
Jul 12 00:16:45.944861 kernel: Zone ranges:
Jul 12 00:16:45.944867 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff]
Jul 12 00:16:45.944874 kernel: DMA32 empty
Jul 12 00:16:45.944882 kernel: Normal empty
Jul 12 00:16:45.944888 kernel: Movable zone start for each node
Jul 12 00:16:45.944894 kernel: Early memory node ranges
Jul 12 00:16:45.944901 kernel: node 0: [mem 0x0000000040000000-0x00000000d976ffff]
Jul 12 00:16:45.944907 kernel: node 0: [mem 0x00000000d9770000-0x00000000d9b3ffff]
Jul 12 00:16:45.944913 kernel: node 0: [mem 0x00000000d9b40000-0x00000000dce1ffff]
Jul 12 00:16:45.944920 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff]
Jul 12 00:16:45.944926 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff]
Jul 12 00:16:45.944933 kernel: node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff]
Jul 12 00:16:45.944939 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff]
Jul 12 00:16:45.944945 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
Jul 12 00:16:45.944952 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
Jul 12 00:16:45.944960 kernel: psci: probing for conduit method from ACPI.
Jul 12 00:16:45.944967 kernel: psci: PSCIv1.1 detected in firmware.
Jul 12 00:16:45.944973 kernel: psci: Using standard PSCI v0.2 function IDs
Jul 12 00:16:45.944982 kernel: psci: Trusted OS migration not required
Jul 12 00:16:45.944989 kernel: psci: SMC Calling Convention v1.1
Jul 12 00:16:45.944996 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Jul 12 00:16:45.945005 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976
Jul 12 00:16:45.945012 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096
Jul 12 00:16:45.945019 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
Jul 12 00:16:45.945026 kernel: Detected PIPT I-cache on CPU0
Jul 12 00:16:45.945033 kernel: CPU features: detected: GIC system register CPU interface
Jul 12 00:16:45.945039 kernel: CPU features: detected: Hardware dirty bit management
Jul 12 00:16:45.945046 kernel: CPU features: detected: Spectre-v4
Jul 12 00:16:45.945053 kernel: CPU features: detected: Spectre-BHB
Jul 12 00:16:45.945060 kernel: CPU features: kernel page table isolation forced ON by KASLR
Jul 12 00:16:45.945067 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Jul 12 00:16:45.945075 kernel: CPU features: detected: ARM erratum 1418040
Jul 12 00:16:45.945082 kernel: CPU features: detected: SSBS not fully self-synchronizing
Jul 12 00:16:45.945113 kernel: alternatives: applying boot alternatives
Jul 12 00:16:45.945121 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=52e0eba0325ad9e58f7b221f0132165c94b480ebf93a398f4fe935660ba9e15c
Jul 12 00:16:45.945129 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jul 12 00:16:45.945135 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jul 12 00:16:45.945143 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jul 12 00:16:45.945149 kernel: Fallback order for Node 0: 0
Jul 12 00:16:45.945156 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024
Jul 12 00:16:45.945163 kernel: Policy zone: DMA
Jul 12 00:16:45.945170 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jul 12 00:16:45.945179 kernel: software IO TLB: area num 4.
Jul 12 00:16:45.945186 kernel: software IO TLB: mapped [mem 0x00000000d2e00000-0x00000000d6e00000] (64MB)
Jul 12 00:16:45.945193 kernel: Memory: 2386400K/2572288K available (10304K kernel code, 2186K rwdata, 8108K rodata, 39424K init, 897K bss, 185888K reserved, 0K cma-reserved)
Jul 12 00:16:45.945200 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Jul 12 00:16:45.945207 kernel: rcu: Preemptible hierarchical RCU implementation.
Jul 12 00:16:45.945214 kernel: rcu: RCU event tracing is enabled.
Jul 12 00:16:45.945221 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Jul 12 00:16:45.945228 kernel: Trampoline variant of Tasks RCU enabled.
Jul 12 00:16:45.945235 kernel: Tracing variant of Tasks RCU enabled.
Jul 12 00:16:45.945242 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jul 12 00:16:45.945249 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Jul 12 00:16:45.945256 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Jul 12 00:16:45.945264 kernel: GICv3: 256 SPIs implemented
Jul 12 00:16:45.945271 kernel: GICv3: 0 Extended SPIs implemented
Jul 12 00:16:45.945277 kernel: Root IRQ handler: gic_handle_irq
Jul 12 00:16:45.945290 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Jul 12 00:16:45.945297 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Jul 12 00:16:45.945304 kernel: ITS [mem 0x08080000-0x0809ffff]
Jul 12 00:16:45.945311 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400c0000 (indirect, esz 8, psz 64K, shr 1)
Jul 12 00:16:45.945318 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400d0000 (flat, esz 8, psz 64K, shr 1)
Jul 12 00:16:45.945325 kernel: GICv3: using LPI property table @0x00000000400f0000
Jul 12 00:16:45.945332 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040100000
Jul 12 00:16:45.945338 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jul 12 00:16:45.945347 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 12 00:16:45.945354 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Jul 12 00:16:45.945361 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Jul 12 00:16:45.945368 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Jul 12 00:16:45.945374 kernel: arm-pv: using stolen time PV
Jul 12 00:16:45.945382 kernel: Console: colour dummy device 80x25
Jul 12 00:16:45.945389 kernel: ACPI: Core revision 20230628
Jul 12 00:16:45.945396 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Jul 12 00:16:45.945403 kernel: pid_max: default: 32768 minimum: 301
Jul 12 00:16:45.945410 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jul 12 00:16:45.945418 kernel: landlock: Up and running.
Jul 12 00:16:45.945425 kernel: SELinux: Initializing.
Jul 12 00:16:45.945432 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jul 12 00:16:45.945439 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jul 12 00:16:45.945446 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jul 12 00:16:45.945453 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jul 12 00:16:45.945460 kernel: rcu: Hierarchical SRCU implementation.
Jul 12 00:16:45.945467 kernel: rcu: Max phase no-delay instances is 400.
Jul 12 00:16:45.945474 kernel: Platform MSI: ITS@0x8080000 domain created
Jul 12 00:16:45.945482 kernel: PCI/MSI: ITS@0x8080000 domain created
Jul 12 00:16:45.945489 kernel: Remapping and enabling EFI services.
Jul 12 00:16:45.945496 kernel: smp: Bringing up secondary CPUs ...
Jul 12 00:16:45.945503 kernel: Detected PIPT I-cache on CPU1
Jul 12 00:16:45.945510 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Jul 12 00:16:45.945517 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040110000
Jul 12 00:16:45.945524 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 12 00:16:45.945531 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Jul 12 00:16:45.945539 kernel: Detected PIPT I-cache on CPU2
Jul 12 00:16:45.945546 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
Jul 12 00:16:45.945555 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040120000
Jul 12 00:16:45.945562 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 12 00:16:45.945574 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
Jul 12 00:16:45.945582 kernel: Detected PIPT I-cache on CPU3
Jul 12 00:16:45.945590 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
Jul 12 00:16:45.945597 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040130000
Jul 12 00:16:45.945605 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 12 00:16:45.945612 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
Jul 12 00:16:45.945619 kernel: smp: Brought up 1 node, 4 CPUs
Jul 12 00:16:45.945628 kernel: SMP: Total of 4 processors activated.
Jul 12 00:16:45.945635 kernel: CPU features: detected: 32-bit EL0 Support
Jul 12 00:16:45.945643 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Jul 12 00:16:45.945650 kernel: CPU features: detected: Common not Private translations
Jul 12 00:16:45.945657 kernel: CPU features: detected: CRC32 instructions
Jul 12 00:16:45.945665 kernel: CPU features: detected: Enhanced Virtualization Traps
Jul 12 00:16:45.945672 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Jul 12 00:16:45.945679 kernel: CPU features: detected: LSE atomic instructions
Jul 12 00:16:45.945688 kernel: CPU features: detected: Privileged Access Never
Jul 12 00:16:45.945695 kernel: CPU features: detected: RAS Extension Support
Jul 12 00:16:45.945703 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Jul 12 00:16:45.945710 kernel: CPU: All CPU(s) started at EL1
Jul 12 00:16:45.945717 kernel: alternatives: applying system-wide alternatives
Jul 12 00:16:45.945725 kernel: devtmpfs: initialized
Jul 12 00:16:45.945732 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jul 12 00:16:45.945740 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Jul 12 00:16:45.945747 kernel: pinctrl core: initialized pinctrl subsystem
Jul 12 00:16:45.945756 kernel: SMBIOS 3.0.0 present.
Jul 12 00:16:45.945763 kernel: DMI: QEMU KVM Virtual Machine, BIOS edk2-20230524-3.fc38 05/24/2023
Jul 12 00:16:45.945770 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jul 12 00:16:45.945778 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Jul 12 00:16:45.945785 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Jul 12 00:16:45.945793 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Jul 12 00:16:45.945800 kernel: audit: initializing netlink subsys (disabled)
Jul 12 00:16:45.945808 kernel: audit: type=2000 audit(0.026:1): state=initialized audit_enabled=0 res=1
Jul 12 00:16:45.945815 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jul 12 00:16:45.945824 kernel: cpuidle: using governor menu
Jul 12 00:16:45.945831 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Jul 12 00:16:45.945838 kernel: ASID allocator initialised with 32768 entries
Jul 12 00:16:45.945846 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jul 12 00:16:45.945853 kernel: Serial: AMBA PL011 UART driver
Jul 12 00:16:45.945861 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Jul 12 00:16:45.945868 kernel: Modules: 0 pages in range for non-PLT usage
Jul 12 00:16:45.945875 kernel: Modules: 509008 pages in range for PLT usage
Jul 12 00:16:45.945883 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jul 12 00:16:45.945891 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Jul 12 00:16:45.945899 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Jul 12 00:16:45.945906 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Jul 12 00:16:45.945914 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jul 12 00:16:45.945921 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Jul 12 00:16:45.945928 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Jul 12 00:16:45.945936 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Jul 12 00:16:45.945943 kernel: ACPI: Added _OSI(Module Device)
Jul 12 00:16:45.945950 kernel: ACPI: Added _OSI(Processor Device)
Jul 12 00:16:45.945959 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jul 12 00:16:45.945970 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jul 12 00:16:45.945981 kernel: ACPI: Interpreter enabled
Jul 12 00:16:45.945988 kernel: ACPI: Using GIC for interrupt routing
Jul 12 00:16:45.945996 kernel: ACPI: MCFG table detected, 1 entries
Jul 12 00:16:45.946003 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Jul 12 00:16:45.946011 kernel: printk: console [ttyAMA0] enabled
Jul 12 00:16:45.946018 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jul 12 00:16:45.946206 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jul 12 00:16:45.946299 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Jul 12 00:16:45.946370 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Jul 12 00:16:45.946436 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Jul 12 00:16:45.946501 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Jul 12 00:16:45.946511 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Jul 12 00:16:45.946519 kernel: PCI host bridge to bus 0000:00
Jul 12 00:16:45.946590 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Jul 12 00:16:45.946655 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Jul 12 00:16:45.946714 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Jul 12 00:16:45.946773 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jul 12 00:16:45.946853 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000
Jul 12 00:16:45.946932 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00
Jul 12 00:16:45.947002 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f]
Jul 12 00:16:45.947074 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff]
Jul 12 00:16:45.947173 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref]
Jul 12 00:16:45.947247 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref]
Jul 12 00:16:45.947324 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff]
Jul 12 00:16:45.947394 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f]
Jul 12 00:16:45.947457 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Jul 12 00:16:45.947521 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Jul 12 00:16:45.947586 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Jul 12 00:16:45.947596 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Jul 12 00:16:45.947603 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Jul 12 00:16:45.947611 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Jul 12 00:16:45.947618 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Jul 12 00:16:45.947625 kernel: iommu: Default domain type: Translated
Jul 12 00:16:45.947633 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Jul 12 00:16:45.947640 kernel: efivars: Registered efivars operations
Jul 12 00:16:45.947650 kernel: vgaarb: loaded
Jul 12 00:16:45.947657 kernel: clocksource: Switched to clocksource arch_sys_counter
Jul 12 00:16:45.947665 kernel: VFS: Disk quotas dquot_6.6.0
Jul 12 00:16:45.947672 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jul 12 00:16:45.947680 kernel: pnp: PnP ACPI init
Jul 12 00:16:45.947759 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Jul 12 00:16:45.947770 kernel: pnp: PnP ACPI: found 1 devices
Jul 12 00:16:45.947778 kernel: NET: Registered PF_INET protocol family
Jul 12 00:16:45.947786 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jul 12 00:16:45.947795 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jul 12 00:16:45.947803 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jul 12 00:16:45.947810 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jul 12 00:16:45.947818 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jul 12 00:16:45.947825 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jul 12 00:16:45.947832 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jul 12 00:16:45.947840 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jul 12 00:16:45.947847 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jul 12 00:16:45.947856 kernel: PCI: CLS 0 bytes, default 64
Jul 12 00:16:45.947864 kernel: kvm [1]: HYP mode not available
Jul 12 00:16:45.947871 kernel: Initialise system trusted keyrings
Jul 12 00:16:45.947879 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jul 12 00:16:45.947886 kernel: Key type asymmetric registered
Jul 12 00:16:45.947893 kernel: Asymmetric key parser 'x509' registered
Jul 12 00:16:45.947901 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Jul 12 00:16:45.947908 kernel: io scheduler mq-deadline registered
Jul 12 00:16:45.947915 kernel: io scheduler kyber registered
Jul 12 00:16:45.947923 kernel: io scheduler bfq registered
Jul 12 00:16:45.947932 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Jul 12 00:16:45.947939 kernel: ACPI: button: Power Button [PWRB]
Jul 12 00:16:45.947947 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Jul 12 00:16:45.948018 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
Jul 12 00:16:45.948029 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jul 12 00:16:45.948036 kernel: thunder_xcv, ver 1.0
Jul 12 00:16:45.948043 kernel: thunder_bgx, ver 1.0
Jul 12 00:16:45.948051 kernel: nicpf, ver 1.0
Jul 12 00:16:45.948058 kernel: nicvf, ver 1.0
Jul 12 00:16:45.948159 kernel: rtc-efi rtc-efi.0: registered as rtc0
Jul 12 00:16:45.948237 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-07-12T00:16:45 UTC (1752279405)
Jul 12 00:16:45.948247 kernel: hid: raw HID events driver (C) Jiri Kosina
Jul 12 00:16:45.948255 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available
Jul 12 00:16:45.948265 kernel: watchdog: Delayed init of the lockup detector failed: -19
Jul 12 00:16:45.948272 kernel: watchdog: Hard watchdog permanently disabled
Jul 12 00:16:45.948284 kernel: NET: Registered PF_INET6 protocol family
Jul 12 00:16:45.948293 kernel: Segment Routing with IPv6
Jul 12 00:16:45.948305 kernel: In-situ OAM (IOAM) with IPv6
Jul 12 00:16:45.948324 kernel: NET: Registered PF_PACKET protocol family
Jul 12 00:16:45.948333 kernel: Key type dns_resolver registered
Jul 12 00:16:45.948343 kernel: registered taskstats version 1
Jul 12 00:16:45.948350 kernel: Loading compiled-in X.509 certificates
Jul 12 00:16:45.948357 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.96-flatcar: ed6b382df707adbd5942eaa048a1031fe26cbf15'
Jul 12 00:16:45.948364 kernel: Key type .fscrypt registered
Jul 12 00:16:45.948371 kernel: Key type fscrypt-provisioning registered
Jul 12 00:16:45.948379 kernel: ima: No TPM chip found, activating TPM-bypass!
Jul 12 00:16:45.948388 kernel: ima: Allocated hash algorithm: sha1
Jul 12 00:16:45.948396 kernel: ima: No architecture policies found
Jul 12 00:16:45.948403 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Jul 12 00:16:45.948410 kernel: clk: Disabling unused clocks
Jul 12 00:16:45.948417 kernel: Freeing unused kernel memory: 39424K
Jul 12 00:16:45.948424 kernel: Run /init as init process
Jul 12 00:16:45.948431 kernel: with arguments:
Jul 12 00:16:45.948438 kernel: /init
Jul 12 00:16:45.948446 kernel: with environment:
Jul 12 00:16:45.948454 kernel: HOME=/
Jul 12 00:16:45.948461 kernel: TERM=linux
Jul 12 00:16:45.948468 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jul 12 00:16:45.948478 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jul 12 00:16:45.948487 systemd[1]: Detected virtualization kvm.
Jul 12 00:16:45.948495 systemd[1]: Detected architecture arm64.
Jul 12 00:16:45.948503 systemd[1]: Running in initrd.
Jul 12 00:16:45.948512 systemd[1]: No hostname configured, using default hostname.
Jul 12 00:16:45.948520 systemd[1]: Hostname set to .
Jul 12 00:16:45.948528 systemd[1]: Initializing machine ID from VM UUID.
Jul 12 00:16:45.948536 systemd[1]: Queued start job for default target initrd.target.
Jul 12 00:16:45.948544 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 12 00:16:45.948551 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 12 00:16:45.948560 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jul 12 00:16:45.948568 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jul 12 00:16:45.948577 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jul 12 00:16:45.948586 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jul 12 00:16:45.948595 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jul 12 00:16:45.948603 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jul 12 00:16:45.948611 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 12 00:16:45.948619 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jul 12 00:16:45.948627 systemd[1]: Reached target paths.target - Path Units.
Jul 12 00:16:45.948651 systemd[1]: Reached target slices.target - Slice Units.
Jul 12 00:16:45.948659 systemd[1]: Reached target swap.target - Swaps.
Jul 12 00:16:45.948666 systemd[1]: Reached target timers.target - Timer Units.
Jul 12 00:16:45.948674 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jul 12 00:16:45.948682 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jul 12 00:16:45.948690 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jul 12 00:16:45.948698 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jul 12 00:16:45.948706 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jul 12 00:16:45.948715 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jul 12 00:16:45.948724 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 12 00:16:45.948732 systemd[1]: Reached target sockets.target - Socket Units.
Jul 12 00:16:45.948740 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jul 12 00:16:45.948748 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jul 12 00:16:45.948755 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jul 12 00:16:45.948763 systemd[1]: Starting systemd-fsck-usr.service...
Jul 12 00:16:45.948771 systemd[1]: Starting systemd-journald.service - Journal Service...
Jul 12 00:16:45.948779 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jul 12 00:16:45.948789 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 12 00:16:45.948797 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jul 12 00:16:45.948805 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 12 00:16:45.948812 systemd[1]: Finished systemd-fsck-usr.service.
Jul 12 00:16:45.948821 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jul 12 00:16:45.948830 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 12 00:16:45.948838 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jul 12 00:16:45.948864 systemd-journald[239]: Collecting audit messages is disabled.
Jul 12 00:16:45.948884 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jul 12 00:16:45.948894 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jul 12 00:16:45.948902 systemd-journald[239]: Journal started
Jul 12 00:16:45.948921 systemd-journald[239]: Runtime Journal (/run/log/journal/00f2f2e79ffa49c28efb6ddfe4fd56f8) is 5.9M, max 47.3M, 41.4M free.
Jul 12 00:16:45.932530 systemd-modules-load[240]: Inserted module 'overlay'
Jul 12 00:16:45.958652 kernel: Bridge firewalling registered
Jul 12 00:16:45.958676 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jul 12 00:16:45.951621 systemd-modules-load[240]: Inserted module 'br_netfilter'
Jul 12 00:16:45.961218 systemd[1]: Started systemd-journald.service - Journal Service.
Jul 12 00:16:45.962334 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jul 12 00:16:45.965686 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jul 12 00:16:45.968015 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jul 12 00:16:45.970381 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 12 00:16:45.976848 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 12 00:16:45.978029 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jul 12 00:16:45.981134 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jul 12 00:16:45.983704 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jul 12 00:16:45.985632 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jul 12 00:16:46.001095 dracut-cmdline[277]: dracut-dracut-053
Jul 12 00:16:46.001095 dracut-cmdline[277]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=52e0eba0325ad9e58f7b221f0132165c94b480ebf93a398f4fe935660ba9e15c
Jul 12 00:16:46.014077 systemd-resolved[278]: Positive Trust Anchors:
Jul 12 00:16:46.014110 systemd-resolved[278]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jul 12 00:16:46.014145 systemd-resolved[278]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jul 12 00:16:46.018925 systemd-resolved[278]: Defaulting to hostname 'linux'.
Jul 12 00:16:46.019965 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jul 12 00:16:46.020971 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jul 12 00:16:46.064119 kernel: SCSI subsystem initialized
Jul 12 00:16:46.068102 kernel: Loading iSCSI transport class v2.0-870.
Jul 12 00:16:46.076121 kernel: iscsi: registered transport (tcp)
Jul 12 00:16:46.090236 kernel: iscsi: registered transport (qla4xxx)
Jul 12 00:16:46.090294 kernel: QLogic iSCSI HBA Driver
Jul 12 00:16:46.133602 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jul 12 00:16:46.146290 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jul 12 00:16:46.164394 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jul 12 00:16:46.164442 kernel: device-mapper: uevent: version 1.0.3
Jul 12 00:16:46.165199 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Jul 12 00:16:46.219119 kernel: raid6: neonx8 gen() 15628 MB/s
Jul 12 00:16:46.236104 kernel: raid6: neonx4 gen() 15629 MB/s
Jul 12 00:16:46.253156 kernel: raid6: neonx2 gen() 13236 MB/s
Jul 12 00:16:46.270124 kernel: raid6: neonx1 gen() 10352 MB/s
Jul 12 00:16:46.287108 kernel: raid6: int64x8 gen() 6938 MB/s
Jul 12 00:16:46.304123 kernel: raid6: int64x4 gen() 7338 MB/s
Jul 12 00:16:46.321101 kernel: raid6: int64x2 gen() 6130 MB/s
Jul 12 00:16:46.338107 kernel: raid6: int64x1 gen() 5059 MB/s
Jul 12 00:16:46.338124 kernel: raid6: using algorithm neonx4 gen() 15629 MB/s
Jul 12 00:16:46.355120 kernel: raid6: .... xor() 12465 MB/s, rmw enabled
Jul 12 00:16:46.355153 kernel: raid6: using neon recovery algorithm
Jul 12 00:16:46.360246 kernel: xor: measuring software checksum speed
Jul 12 00:16:46.360261 kernel: 8regs : 19807 MB/sec
Jul 12 00:16:46.361314 kernel: 32regs : 19141 MB/sec
Jul 12 00:16:46.361326 kernel: arm64_neon : 27087 MB/sec
Jul 12 00:16:46.361336 kernel: xor: using function: arm64_neon (27087 MB/sec)
Jul 12 00:16:46.416120 kernel: Btrfs loaded, zoned=no, fsverity=no
Jul 12 00:16:46.427440 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jul 12 00:16:46.440338 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 12 00:16:46.451839 systemd-udevd[461]: Using default interface naming scheme 'v255'.
Jul 12 00:16:46.455307 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 12 00:16:46.471290 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jul 12 00:16:46.483755 dracut-pre-trigger[465]: rd.md=0: removing MD RAID activation
Jul 12 00:16:46.510905 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jul 12 00:16:46.519247 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jul 12 00:16:46.559671 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 12 00:16:46.568780 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jul 12 00:16:46.580632 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jul 12 00:16:46.583721 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jul 12 00:16:46.584801 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 12 00:16:46.586327 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jul 12 00:16:46.593336 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jul 12 00:16:46.604242 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jul 12 00:16:46.616852 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues
Jul 12 00:16:46.617294 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Jul 12 00:16:46.631036 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jul 12 00:16:46.631142 kernel: GPT:9289727 != 19775487
Jul 12 00:16:46.631156 kernel: GPT:Alternate GPT header not at the end of the disk.
Jul 12 00:16:46.631166 kernel: GPT:9289727 != 19775487
Jul 12 00:16:46.631175 kernel: GPT: Use GNU Parted to correct GPT errors.
Jul 12 00:16:46.632197 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jul 12 00:16:46.633790 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jul 12 00:16:46.633902 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 12 00:16:46.642198 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jul 12 00:16:46.643182 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jul 12 00:16:46.643385 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jul 12 00:16:46.645368 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jul 12 00:16:46.653154 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 12 00:16:46.655718 kernel: BTRFS: device fsid 394cecf3-1fd4-438a-991e-dc2b4121da0c devid 1 transid 39 /dev/vda3 scanned by (udev-worker) (518)
Jul 12 00:16:46.659124 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by (udev-worker) (511)
Jul 12 00:16:46.665844 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Jul 12 00:16:46.668472 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 12 00:16:46.673671 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Jul 12 00:16:46.683674 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Jul 12 00:16:46.685017 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Jul 12 00:16:46.690896 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jul 12 00:16:46.703259 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jul 12 00:16:46.705284 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jul 12 00:16:46.711930 disk-uuid[551]: Primary Header is updated.
Jul 12 00:16:46.711930 disk-uuid[551]: Secondary Entries is updated.
Jul 12 00:16:46.711930 disk-uuid[551]: Secondary Header is updated.
Jul 12 00:16:46.719232 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jul 12 00:16:46.726140 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 12 00:16:47.730111 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jul 12 00:16:47.730244 disk-uuid[552]: The operation has completed successfully.
Jul 12 00:16:47.753865 systemd[1]: disk-uuid.service: Deactivated successfully.
Jul 12 00:16:47.753966 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jul 12 00:16:47.774303 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jul 12 00:16:47.778818 sh[574]: Success
Jul 12 00:16:47.790188 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Jul 12 00:16:47.818346 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jul 12 00:16:47.833504 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jul 12 00:16:47.835132 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jul 12 00:16:47.846119 kernel: BTRFS info (device dm-0): first mount of filesystem 394cecf3-1fd4-438a-991e-dc2b4121da0c
Jul 12 00:16:47.846170 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Jul 12 00:16:47.846191 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Jul 12 00:16:47.846202 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jul 12 00:16:47.846211 kernel: BTRFS info (device dm-0): using free space tree
Jul 12 00:16:47.849687 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jul 12 00:16:47.850833 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jul 12 00:16:47.861299 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jul 12 00:16:47.862659 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jul 12 00:16:47.871219 kernel: BTRFS info (device vda6): first mount of filesystem 2ba3179f-4493-4560-9191-8e514f82bd95
Jul 12 00:16:47.871257 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Jul 12 00:16:47.871268 kernel: BTRFS info (device vda6): using free space tree
Jul 12 00:16:47.874141 kernel: BTRFS info (device vda6): auto enabling async discard
Jul 12 00:16:47.881864 systemd[1]: mnt-oem.mount: Deactivated successfully.
Jul 12 00:16:47.882902 kernel: BTRFS info (device vda6): last unmount of filesystem 2ba3179f-4493-4560-9191-8e514f82bd95
Jul 12 00:16:47.888030 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jul 12 00:16:47.900251 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jul 12 00:16:47.962437 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jul 12 00:16:47.978318 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jul 12 00:16:48.019129 systemd-networkd[761]: lo: Link UP
Jul 12 00:16:48.019137 systemd-networkd[761]: lo: Gained carrier
Jul 12 00:16:48.019817 systemd-networkd[761]: Enumeration completed
Jul 12 00:16:48.019946 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jul 12 00:16:48.020472 systemd-networkd[761]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 12 00:16:48.020475 systemd-networkd[761]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jul 12 00:16:48.021335 systemd-networkd[761]: eth0: Link UP
Jul 12 00:16:48.021338 systemd-networkd[761]: eth0: Gained carrier
Jul 12 00:16:48.021344 systemd-networkd[761]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 12 00:16:48.022071 systemd[1]: Reached target network.target - Network.
Jul 12 00:16:48.039384 ignition[664]: Ignition 2.19.0
Jul 12 00:16:48.039400 ignition[664]: Stage: fetch-offline
Jul 12 00:16:48.039434 ignition[664]: no configs at "/usr/lib/ignition/base.d"
Jul 12 00:16:48.039443 ignition[664]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 12 00:16:48.039647 ignition[664]: parsed url from cmdline: ""
Jul 12 00:16:48.039651 ignition[664]: no config URL provided
Jul 12 00:16:48.039655 ignition[664]: reading system config file "/usr/lib/ignition/user.ign"
Jul 12 00:16:48.039662 ignition[664]: no config at "/usr/lib/ignition/user.ign"
Jul 12 00:16:48.043142 systemd-networkd[761]: eth0: DHCPv4 address 10.0.0.81/16, gateway 10.0.0.1 acquired from 10.0.0.1
Jul 12 00:16:48.039685 ignition[664]: op(1): [started] loading QEMU firmware config module
Jul 12 00:16:48.039690 ignition[664]: op(1): executing: "modprobe" "qemu_fw_cfg"
Jul 12 00:16:48.045032 ignition[664]: op(1): [finished] loading QEMU firmware config module
Jul 12 00:16:48.082995 ignition[664]: parsing config with SHA512: 341c08a3ff2d867f41a7255e49ba21cb0ba0d3901690245cc2b192b50ce597fbf4b4eec50704fd8b4dd007e38a15cec2990c6c3d6b8eacfe67d63d1e5ed43a95
Jul 12 00:16:48.087888 unknown[664]: fetched base config from "system"
Jul 12 00:16:48.087905 unknown[664]: fetched user config from "qemu"
Jul 12 00:16:48.088862 ignition[664]: fetch-offline: fetch-offline passed
Jul 12 00:16:48.088935 ignition[664]: Ignition finished successfully
Jul 12 00:16:48.090487 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jul 12 00:16:48.091523 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Jul 12 00:16:48.102226 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jul 12 00:16:48.111713 ignition[772]: Ignition 2.19.0
Jul 12 00:16:48.111724 ignition[772]: Stage: kargs
Jul 12 00:16:48.111882 ignition[772]: no configs at "/usr/lib/ignition/base.d"
Jul 12 00:16:48.111891 ignition[772]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 12 00:16:48.112866 ignition[772]: kargs: kargs passed
Jul 12 00:16:48.112911 ignition[772]: Ignition finished successfully
Jul 12 00:16:48.115070 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jul 12 00:16:48.116839 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jul 12 00:16:48.130079 ignition[780]: Ignition 2.19.0
Jul 12 00:16:48.130109 ignition[780]: Stage: disks
Jul 12 00:16:48.130287 ignition[780]: no configs at "/usr/lib/ignition/base.d"
Jul 12 00:16:48.130298 ignition[780]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 12 00:16:48.131160 ignition[780]: disks: disks passed
Jul 12 00:16:48.133587 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jul 12 00:16:48.131209 ignition[780]: Ignition finished successfully
Jul 12 00:16:48.136343 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jul 12 00:16:48.137706 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jul 12 00:16:48.138576 systemd[1]: Reached target local-fs.target - Local File Systems.
Jul 12 00:16:48.139700 systemd[1]: Reached target sysinit.target - System Initialization.
Jul 12 00:16:48.141054 systemd[1]: Reached target basic.target - Basic System.
Jul 12 00:16:48.152258 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jul 12 00:16:48.161851 systemd-fsck[790]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Jul 12 00:16:48.166980 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jul 12 00:16:48.174213 systemd[1]: Mounting sysroot.mount - /sysroot...
Jul 12 00:16:48.214932 systemd[1]: Mounted sysroot.mount - /sysroot.
Jul 12 00:16:48.216081 kernel: EXT4-fs (vda9): mounted filesystem 44c8362f-9431-4909-bc9a-f90e514bd0e9 r/w with ordered data mode. Quota mode: none.
Jul 12 00:16:48.215992 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jul 12 00:16:48.227164 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jul 12 00:16:48.228645 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jul 12 00:16:48.229645 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Jul 12 00:16:48.229711 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jul 12 00:16:48.229760 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jul 12 00:16:48.235592 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by mount (798)
Jul 12 00:16:48.235329 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jul 12 00:16:48.236806 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jul 12 00:16:48.240322 kernel: BTRFS info (device vda6): first mount of filesystem 2ba3179f-4493-4560-9191-8e514f82bd95
Jul 12 00:16:48.240340 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Jul 12 00:16:48.240355 kernel: BTRFS info (device vda6): using free space tree
Jul 12 00:16:48.242242 kernel: BTRFS info (device vda6): auto enabling async discard
Jul 12 00:16:48.242999 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jul 12 00:16:48.281403 initrd-setup-root[823]: cut: /sysroot/etc/passwd: No such file or directory
Jul 12 00:16:48.285668 initrd-setup-root[830]: cut: /sysroot/etc/group: No such file or directory
Jul 12 00:16:48.289523 initrd-setup-root[837]: cut: /sysroot/etc/shadow: No such file or directory
Jul 12 00:16:48.292571 initrd-setup-root[844]: cut: /sysroot/etc/gshadow: No such file or directory
Jul 12 00:16:48.363750 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jul 12 00:16:48.374215 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jul 12 00:16:48.375594 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jul 12 00:16:48.380111 kernel: BTRFS info (device vda6): last unmount of filesystem 2ba3179f-4493-4560-9191-8e514f82bd95
Jul 12 00:16:48.397301 ignition[912]: INFO : Ignition 2.19.0
Jul 12 00:16:48.398149 ignition[912]: INFO : Stage: mount
Jul 12 00:16:48.398673 ignition[912]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 12 00:16:48.398673 ignition[912]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 12 00:16:48.400150 ignition[912]: INFO : mount: mount passed
Jul 12 00:16:48.400150 ignition[912]: INFO : Ignition finished successfully
Jul 12 00:16:48.403156 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jul 12 00:16:48.404181 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jul 12 00:16:48.419768 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jul 12 00:16:48.843942 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jul 12 00:16:48.855351 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jul 12 00:16:48.860102 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by mount (925)
Jul 12 00:16:48.862252 kernel: BTRFS info (device vda6): first mount of filesystem 2ba3179f-4493-4560-9191-8e514f82bd95
Jul 12 00:16:48.862288 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Jul 12 00:16:48.862299 kernel: BTRFS info (device vda6): using free space tree
Jul 12 00:16:48.864103 kernel: BTRFS info (device vda6): auto enabling async discard
Jul 12 00:16:48.865207 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jul 12 00:16:48.880625 ignition[942]: INFO : Ignition 2.19.0
Jul 12 00:16:48.880625 ignition[942]: INFO : Stage: files
Jul 12 00:16:48.881936 ignition[942]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 12 00:16:48.881936 ignition[942]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 12 00:16:48.881936 ignition[942]: DEBUG : files: compiled without relabeling support, skipping
Jul 12 00:16:48.884446 ignition[942]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jul 12 00:16:48.884446 ignition[942]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jul 12 00:16:48.887374 ignition[942]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jul 12 00:16:48.888425 ignition[942]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jul 12 00:16:48.888425 ignition[942]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jul 12 00:16:48.887868 unknown[942]: wrote ssh authorized keys file for user: core
Jul 12 00:16:48.891395 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz"
Jul 12 00:16:48.891395 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-arm64.tar.gz: attempt #1
Jul 12 00:16:49.056821 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Jul 12 00:16:49.221407 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz"
Jul 12 00:16:49.221407 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Jul 12 00:16:49.224201 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1
Jul 12 00:16:49.557945 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Jul 12 00:16:49.575398 systemd-networkd[761]: eth0: Gained IPv6LL
Jul 12 00:16:49.643978 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Jul 12 00:16:49.645719 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Jul 12 00:16:49.645719 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Jul 12 00:16:49.645719 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Jul 12 00:16:49.645719 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jul 12 00:16:49.645719 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jul 12 00:16:49.645719 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jul 12 00:16:49.645719 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jul 12 00:16:49.645719 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jul 12 00:16:49.645719 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jul 12 00:16:49.645719 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jul 12 00:16:49.645719 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw"
Jul 12 00:16:49.645719 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw"
Jul 12 00:16:49.645719 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw"
Jul 12 00:16:49.645719 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-arm64.raw: attempt #1
Jul 12 00:16:50.074823 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Jul 12 00:16:50.435902 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw"
Jul 12 00:16:50.435902 ignition[942]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Jul 12 00:16:50.438468 ignition[942]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jul 12 00:16:50.438468 ignition[942]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jul 12 00:16:50.438468 ignition[942]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Jul 12 00:16:50.438468 ignition[942]: INFO : files: op(e): [started] processing unit "coreos-metadata.service"
Jul 12 00:16:50.438468 ignition[942]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Jul 12 00:16:50.438468 ignition[942]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Jul 12 00:16:50.438468 ignition[942]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service"
Jul 12 00:16:50.438468 ignition[942]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service"
Jul 12 00:16:50.460550 ignition[942]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service"
Jul 12 00:16:50.464124 ignition[942]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Jul 12 00:16:50.465257 ignition[942]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service"
Jul 12 00:16:50.465257 ignition[942]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service"
Jul 12 00:16:50.465257 ignition[942]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service"
Jul 12 00:16:50.465257 ignition[942]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json"
Jul 12 00:16:50.465257 ignition[942]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jul 12 00:16:50.465257 ignition[942]: INFO : files: files passed
Jul 12 00:16:50.465257 ignition[942]: INFO : Ignition finished successfully
Jul 12 00:16:50.466762 systemd[1]: Finished ignition-files.service - Ignition (files).
Jul 12 00:16:50.477302 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jul 12 00:16:50.480195 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jul 12 00:16:50.481461 systemd[1]: ignition-quench.service: Deactivated successfully.
Jul 12 00:16:50.483143 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jul 12 00:16:50.487538 initrd-setup-root-after-ignition[970]: grep: /sysroot/oem/oem-release: No such file or directory
Jul 12 00:16:50.490552 initrd-setup-root-after-ignition[972]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jul 12 00:16:50.490552 initrd-setup-root-after-ignition[972]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jul 12 00:16:50.493462 initrd-setup-root-after-ignition[976]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jul 12 00:16:50.492177 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jul 12 00:16:50.495020 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jul 12 00:16:50.502275 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jul 12 00:16:50.524038 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jul 12 00:16:50.524151 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jul 12 00:16:50.526166 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jul 12 00:16:50.529281 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jul 12 00:16:50.530808 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jul 12 00:16:50.531593 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jul 12 00:16:50.545861 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jul 12 00:16:50.555226 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jul 12 00:16:50.564850 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jul 12 00:16:50.566799 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 12 00:16:50.567730 systemd[1]: Stopped target timers.target - Timer Units.
Jul 12 00:16:50.569293 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jul 12 00:16:50.569415 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jul 12 00:16:50.571554 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jul 12 00:16:50.573247 systemd[1]: Stopped target basic.target - Basic System.
Jul 12 00:16:50.574650 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jul 12 00:16:50.576005 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jul 12 00:16:50.577654 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jul 12 00:16:50.579256 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jul 12 00:16:50.580778 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jul 12 00:16:50.582367 systemd[1]: Stopped target sysinit.target - System Initialization.
Jul 12 00:16:50.583977 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jul 12 00:16:50.585428 systemd[1]: Stopped target swap.target - Swaps.
Jul 12 00:16:50.586715 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jul 12 00:16:50.586834 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jul 12 00:16:50.588840 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jul 12 00:16:50.590429 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 12 00:16:50.591968 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jul 12 00:16:50.596123 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 12 00:16:50.597026 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jul 12 00:16:50.597160 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jul 12 00:16:50.599635 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jul 12 00:16:50.599752 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jul 12 00:16:50.601364 systemd[1]: Stopped target paths.target - Path Units.
Jul 12 00:16:50.602695 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jul 12 00:16:50.606142 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 12 00:16:50.607349 systemd[1]: Stopped target slices.target - Slice Units.
Jul 12 00:16:50.609279 systemd[1]: Stopped target sockets.target - Socket Units.
Jul 12 00:16:50.610749 systemd[1]: iscsid.socket: Deactivated successfully.
Jul 12 00:16:50.610837 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jul 12 00:16:50.612292 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jul 12 00:16:50.612372 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jul 12 00:16:50.613816 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jul 12 00:16:50.613926 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jul 12 00:16:50.615585 systemd[1]: ignition-files.service: Deactivated successfully.
Jul 12 00:16:50.615680 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jul 12 00:16:50.627303 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jul 12 00:16:50.629498 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jul 12 00:16:50.631226 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jul 12 00:16:50.632289 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 12 00:16:50.634446 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jul 12 00:16:50.634781 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jul 12 00:16:50.638865 ignition[997]: INFO : Ignition 2.19.0
Jul 12 00:16:50.638865 ignition[997]: INFO : Stage: umount
Jul 12 00:16:50.645189 ignition[997]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 12 00:16:50.645189 ignition[997]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 12 00:16:50.645189 ignition[997]: INFO : umount: umount passed
Jul 12 00:16:50.645189 ignition[997]: INFO : Ignition finished successfully
Jul 12 00:16:50.641433 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jul 12 00:16:50.641520 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jul 12 00:16:50.646315 systemd[1]: ignition-mount.service: Deactivated successfully.
Jul 12 00:16:50.646402 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jul 12 00:16:50.651571 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jul 12 00:16:50.652625 systemd[1]: Stopped target network.target - Network.
Jul 12 00:16:50.653866 systemd[1]: ignition-disks.service: Deactivated successfully.
Jul 12 00:16:50.653924 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jul 12 00:16:50.655650 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jul 12 00:16:50.655699 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jul 12 00:16:50.660654 systemd[1]: ignition-setup.service: Deactivated successfully.
Jul 12 00:16:50.660705 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Jul 12 00:16:50.662414 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Jul 12 00:16:50.662453 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Jul 12 00:16:50.665064 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Jul 12 00:16:50.666540 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Jul 12 00:16:50.675124 systemd-networkd[761]: eth0: DHCPv6 lease lost
Jul 12 00:16:50.676639 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jul 12 00:16:50.677621 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Jul 12 00:16:50.679333 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jul 12 00:16:50.679376 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Jul 12 00:16:50.690176 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Jul 12 00:16:50.690810 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jul 12 00:16:50.690861 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jul 12 00:16:50.692504 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 12 00:16:50.695263 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jul 12 00:16:50.695385 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Jul 12 00:16:50.698755 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jul 12 00:16:50.698828 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jul 12 00:16:50.702159 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jul 12 00:16:50.702204 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Jul 12 00:16:50.703663 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jul 12 00:16:50.703710 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Jul 12 00:16:50.706857 systemd[1]: network-cleanup.service: Deactivated successfully.
Jul 12 00:16:50.707699 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jul 12 00:16:50.713037 systemd[1]: systemd-udevd.service: Deactivated successfully. Jul 12 00:16:50.713194 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 12 00:16:50.715356 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jul 12 00:16:50.715403 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jul 12 00:16:50.716735 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jul 12 00:16:50.716768 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jul 12 00:16:50.718376 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jul 12 00:16:50.718420 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jul 12 00:16:50.720719 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jul 12 00:16:50.720764 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jul 12 00:16:50.722691 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jul 12 00:16:50.722737 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jul 12 00:16:50.735236 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jul 12 00:16:50.735989 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jul 12 00:16:50.736046 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 12 00:16:50.737667 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Jul 12 00:16:50.737709 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jul 12 00:16:50.739225 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jul 12 00:16:50.739271 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. 
Jul 12 00:16:50.740968 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 12 00:16:50.741004 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jul 12 00:16:50.742744 systemd[1]: sysroot-boot.service: Deactivated successfully. Jul 12 00:16:50.742835 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jul 12 00:16:50.744958 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jul 12 00:16:50.745036 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jul 12 00:16:50.746671 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jul 12 00:16:50.747539 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jul 12 00:16:50.747600 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jul 12 00:16:50.749848 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jul 12 00:16:50.763156 systemd[1]: Switching root. Jul 12 00:16:50.793852 systemd-journald[239]: Journal stopped Jul 12 00:16:51.493024 systemd-journald[239]: Received SIGTERM from PID 1 (systemd). Jul 12 00:16:51.493080 kernel: SELinux: policy capability network_peer_controls=1 Jul 12 00:16:51.493108 kernel: SELinux: policy capability open_perms=1 Jul 12 00:16:51.493119 kernel: SELinux: policy capability extended_socket_class=1 Jul 12 00:16:51.493129 kernel: SELinux: policy capability always_check_network=0 Jul 12 00:16:51.493140 kernel: SELinux: policy capability cgroup_seclabel=1 Jul 12 00:16:51.493154 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jul 12 00:16:51.493164 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jul 12 00:16:51.493174 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jul 12 00:16:51.493184 kernel: audit: type=1403 audit(1752279410.971:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jul 12 00:16:51.493196 systemd[1]: Successfully loaded SELinux policy in 35.201ms. 
Jul 12 00:16:51.493221 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 9.344ms. Jul 12 00:16:51.493234 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jul 12 00:16:51.493246 systemd[1]: Detected virtualization kvm. Jul 12 00:16:51.493257 systemd[1]: Detected architecture arm64. Jul 12 00:16:51.493278 systemd[1]: Detected first boot. Jul 12 00:16:51.493290 systemd[1]: Initializing machine ID from VM UUID. Jul 12 00:16:51.493302 zram_generator::config[1041]: No configuration found. Jul 12 00:16:51.493314 systemd[1]: Populated /etc with preset unit settings. Jul 12 00:16:51.493330 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jul 12 00:16:51.493341 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jul 12 00:16:51.493352 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jul 12 00:16:51.493365 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jul 12 00:16:51.493378 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jul 12 00:16:51.493390 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jul 12 00:16:51.493401 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jul 12 00:16:51.493412 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jul 12 00:16:51.493423 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jul 12 00:16:51.493434 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jul 12 00:16:51.493446 systemd[1]: Created slice user.slice - User and Session Slice. 
Jul 12 00:16:51.493458 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jul 12 00:16:51.493469 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 12 00:16:51.493483 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jul 12 00:16:51.493494 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jul 12 00:16:51.493505 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jul 12 00:16:51.493517 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jul 12 00:16:51.493528 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Jul 12 00:16:51.493539 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jul 12 00:16:51.493551 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jul 12 00:16:51.493562 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jul 12 00:16:51.493573 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jul 12 00:16:51.493586 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jul 12 00:16:51.493597 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 12 00:16:51.493609 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jul 12 00:16:51.493621 systemd[1]: Reached target slices.target - Slice Units. Jul 12 00:16:51.493632 systemd[1]: Reached target swap.target - Swaps. Jul 12 00:16:51.493643 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jul 12 00:16:51.493655 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jul 12 00:16:51.493666 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. 
Jul 12 00:16:51.493679 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jul 12 00:16:51.493690 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jul 12 00:16:51.493702 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jul 12 00:16:51.493713 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jul 12 00:16:51.493724 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jul 12 00:16:51.493736 systemd[1]: Mounting media.mount - External Media Directory... Jul 12 00:16:51.493750 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jul 12 00:16:51.493762 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jul 12 00:16:51.493774 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jul 12 00:16:51.493787 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jul 12 00:16:51.493799 systemd[1]: Reached target machines.target - Containers. Jul 12 00:16:51.493810 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jul 12 00:16:51.493821 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 12 00:16:51.493833 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jul 12 00:16:51.493844 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jul 12 00:16:51.493856 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 12 00:16:51.493868 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jul 12 00:16:51.493881 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... 
Jul 12 00:16:51.493892 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jul 12 00:16:51.493903 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 12 00:16:51.493915 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jul 12 00:16:51.493927 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jul 12 00:16:51.493938 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jul 12 00:16:51.493949 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jul 12 00:16:51.493960 systemd[1]: Stopped systemd-fsck-usr.service. Jul 12 00:16:51.493973 kernel: loop: module loaded Jul 12 00:16:51.493983 kernel: fuse: init (API version 7.39) Jul 12 00:16:51.493994 systemd[1]: Starting systemd-journald.service - Journal Service... Jul 12 00:16:51.494005 kernel: ACPI: bus type drm_connector registered Jul 12 00:16:51.494016 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jul 12 00:16:51.494028 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jul 12 00:16:51.494039 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jul 12 00:16:51.494050 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jul 12 00:16:51.494076 systemd-journald[1112]: Collecting audit messages is disabled. Jul 12 00:16:51.494109 systemd[1]: verity-setup.service: Deactivated successfully. Jul 12 00:16:51.494122 systemd[1]: Stopped verity-setup.service. Jul 12 00:16:51.494135 systemd-journald[1112]: Journal started Jul 12 00:16:51.494157 systemd-journald[1112]: Runtime Journal (/run/log/journal/00f2f2e79ffa49c28efb6ddfe4fd56f8) is 5.9M, max 47.3M, 41.4M free. Jul 12 00:16:51.320200 systemd[1]: Queued start job for default target multi-user.target. 
Jul 12 00:16:51.337939 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Jul 12 00:16:51.338308 systemd[1]: systemd-journald.service: Deactivated successfully. Jul 12 00:16:51.497258 systemd[1]: Started systemd-journald.service - Journal Service. Jul 12 00:16:51.497820 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jul 12 00:16:51.498799 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jul 12 00:16:51.499710 systemd[1]: Mounted media.mount - External Media Directory. Jul 12 00:16:51.500530 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jul 12 00:16:51.501496 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jul 12 00:16:51.502388 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jul 12 00:16:51.503314 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jul 12 00:16:51.504401 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jul 12 00:16:51.505529 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jul 12 00:16:51.505656 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jul 12 00:16:51.506778 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 12 00:16:51.506906 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 12 00:16:51.509356 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 12 00:16:51.509490 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jul 12 00:16:51.510533 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 12 00:16:51.510649 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 12 00:16:51.511797 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jul 12 00:16:51.511934 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. 
Jul 12 00:16:51.512993 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 12 00:16:51.513155 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 12 00:16:51.514411 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jul 12 00:16:51.516376 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jul 12 00:16:51.517492 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jul 12 00:16:51.531198 systemd[1]: Reached target network-pre.target - Preparation for Network. Jul 12 00:16:51.537182 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jul 12 00:16:51.538864 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jul 12 00:16:51.539739 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jul 12 00:16:51.539773 systemd[1]: Reached target local-fs.target - Local File Systems. Jul 12 00:16:51.541664 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Jul 12 00:16:51.543701 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jul 12 00:16:51.545752 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jul 12 00:16:51.546638 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 12 00:16:51.549354 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jul 12 00:16:51.551111 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jul 12 00:16:51.551970 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). 
Jul 12 00:16:51.554257 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jul 12 00:16:51.555068 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jul 12 00:16:51.558243 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jul 12 00:16:51.563293 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jul 12 00:16:51.568946 systemd-journald[1112]: Time spent on flushing to /var/log/journal/00f2f2e79ffa49c28efb6ddfe4fd56f8 is 21.326ms for 858 entries. Jul 12 00:16:51.568946 systemd-journald[1112]: System Journal (/var/log/journal/00f2f2e79ffa49c28efb6ddfe4fd56f8) is 8.0M, max 195.6M, 187.6M free. Jul 12 00:16:51.609809 systemd-journald[1112]: Received client request to flush runtime journal. Jul 12 00:16:51.610467 kernel: loop0: detected capacity change from 0 to 114432 Jul 12 00:16:51.610511 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jul 12 00:16:51.567840 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jul 12 00:16:51.570531 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jul 12 00:16:51.571676 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jul 12 00:16:51.574349 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jul 12 00:16:51.575773 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jul 12 00:16:51.585147 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jul 12 00:16:51.586647 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jul 12 00:16:51.594601 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... 
Jul 12 00:16:51.599390 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Jul 12 00:16:51.603561 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jul 12 00:16:51.611632 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jul 12 00:16:51.618906 udevadm[1165]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Jul 12 00:16:51.619211 systemd-tmpfiles[1154]: ACLs are not supported, ignoring. Jul 12 00:16:51.619226 systemd-tmpfiles[1154]: ACLs are not supported, ignoring. Jul 12 00:16:51.625003 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jul 12 00:16:51.633253 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jul 12 00:16:51.634755 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jul 12 00:16:51.635396 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Jul 12 00:16:51.641365 kernel: loop1: detected capacity change from 0 to 114328 Jul 12 00:16:51.655064 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jul 12 00:16:51.664359 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jul 12 00:16:51.677594 systemd-tmpfiles[1175]: ACLs are not supported, ignoring. Jul 12 00:16:51.677927 systemd-tmpfiles[1175]: ACLs are not supported, ignoring. Jul 12 00:16:51.682359 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. 
Jul 12 00:16:51.688449 kernel: loop2: detected capacity change from 0 to 211168 Jul 12 00:16:51.719130 kernel: loop3: detected capacity change from 0 to 114432 Jul 12 00:16:51.724105 kernel: loop4: detected capacity change from 0 to 114328 Jul 12 00:16:51.729130 kernel: loop5: detected capacity change from 0 to 211168 Jul 12 00:16:51.733878 (sd-merge)[1179]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Jul 12 00:16:51.734667 (sd-merge)[1179]: Merged extensions into '/usr'. Jul 12 00:16:51.737993 systemd[1]: Reloading requested from client PID 1152 ('systemd-sysext') (unit systemd-sysext.service)... Jul 12 00:16:51.738011 systemd[1]: Reloading... Jul 12 00:16:51.793199 zram_generator::config[1202]: No configuration found. Jul 12 00:16:51.861150 ldconfig[1147]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jul 12 00:16:51.884645 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 12 00:16:51.921316 systemd[1]: Reloading finished in 182 ms. Jul 12 00:16:51.956535 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jul 12 00:16:51.959167 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jul 12 00:16:51.972663 systemd[1]: Starting ensure-sysext.service... Jul 12 00:16:51.974745 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jul 12 00:16:51.983024 systemd[1]: Reloading requested from client PID 1241 ('systemctl') (unit ensure-sysext.service)... Jul 12 00:16:51.983039 systemd[1]: Reloading... Jul 12 00:16:52.013491 systemd-tmpfiles[1242]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. 
Jul 12 00:16:52.013746 systemd-tmpfiles[1242]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jul 12 00:16:52.014391 systemd-tmpfiles[1242]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jul 12 00:16:52.014601 systemd-tmpfiles[1242]: ACLs are not supported, ignoring. Jul 12 00:16:52.014648 systemd-tmpfiles[1242]: ACLs are not supported, ignoring. Jul 12 00:16:52.017786 systemd-tmpfiles[1242]: Detected autofs mount point /boot during canonicalization of boot. Jul 12 00:16:52.017798 systemd-tmpfiles[1242]: Skipping /boot Jul 12 00:16:52.025124 zram_generator::config[1269]: No configuration found. Jul 12 00:16:52.029605 systemd-tmpfiles[1242]: Detected autofs mount point /boot during canonicalization of boot. Jul 12 00:16:52.029619 systemd-tmpfiles[1242]: Skipping /boot Jul 12 00:16:52.136294 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 12 00:16:52.172958 systemd[1]: Reloading finished in 189 ms. Jul 12 00:16:52.190249 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jul 12 00:16:52.202627 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jul 12 00:16:52.210519 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jul 12 00:16:52.213050 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jul 12 00:16:52.215389 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jul 12 00:16:52.220444 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jul 12 00:16:52.233542 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... 
Jul 12 00:16:52.235776 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jul 12 00:16:52.239394 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 12 00:16:52.243392 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 12 00:16:52.251430 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 12 00:16:52.256858 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 12 00:16:52.257769 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 12 00:16:52.261503 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jul 12 00:16:52.263553 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jul 12 00:16:52.265349 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 12 00:16:52.265498 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 12 00:16:52.271771 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 12 00:16:52.271933 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 12 00:16:52.275544 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 12 00:16:52.280491 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 12 00:16:52.281640 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 12 00:16:52.285630 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jul 12 00:16:52.287428 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 12 00:16:52.287622 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. 
Jul 12 00:16:52.288797 systemd-udevd[1316]: Using default interface naming scheme 'v255'. Jul 12 00:16:52.291335 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jul 12 00:16:52.293422 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jul 12 00:16:52.295166 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 12 00:16:52.295322 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 12 00:16:52.305032 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 12 00:16:52.322926 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 12 00:16:52.325164 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jul 12 00:16:52.328449 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 12 00:16:52.331343 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 12 00:16:52.332737 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 12 00:16:52.332821 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jul 12 00:16:52.333402 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jul 12 00:16:52.337152 systemd[1]: Finished ensure-sysext.service. Jul 12 00:16:52.338252 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jul 12 00:16:52.339525 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 12 00:16:52.339667 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 12 00:16:52.340767 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. 
Jul 12 00:16:52.342015 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 12 00:16:52.342158 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jul 12 00:16:52.344413 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 12 00:16:52.344542 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 12 00:16:52.346584 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 12 00:16:52.346757 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 12 00:16:52.350400 augenrules[1348]: No rules Jul 12 00:16:52.352496 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jul 12 00:16:52.366385 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jul 12 00:16:52.367182 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 12 00:16:52.367268 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jul 12 00:16:52.370204 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jul 12 00:16:52.397301 systemd-resolved[1309]: Positive Trust Anchors: Jul 12 00:16:52.399111 systemd-resolved[1309]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jul 12 00:16:52.399146 systemd-resolved[1309]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jul 12 00:16:52.405192 systemd-resolved[1309]: Defaulting to hostname 'linux'. Jul 12 00:16:52.411287 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jul 12 00:16:52.412373 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jul 12 00:16:52.416116 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (1351) Jul 12 00:16:52.446897 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. Jul 12 00:16:52.448713 systemd-networkd[1373]: lo: Link UP Jul 12 00:16:52.448721 systemd-networkd[1373]: lo: Gained carrier Jul 12 00:16:52.449446 systemd-networkd[1373]: Enumeration completed Jul 12 00:16:52.449762 systemd[1]: Started systemd-networkd.service - Network Configuration. Jul 12 00:16:52.450037 systemd-networkd[1373]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 12 00:16:52.450048 systemd-networkd[1373]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
Jul 12 00:16:52.450798 systemd-networkd[1373]: eth0: Link UP Jul 12 00:16:52.450806 systemd-networkd[1373]: eth0: Gained carrier Jul 12 00:16:52.450824 systemd-networkd[1373]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 12 00:16:52.451748 systemd[1]: Reached target network.target - Network. Jul 12 00:16:52.459313 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jul 12 00:16:52.460988 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jul 12 00:16:52.462732 systemd[1]: Reached target time-set.target - System Time Set. Jul 12 00:16:52.467178 systemd-networkd[1373]: eth0: DHCPv4 address 10.0.0.81/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jul 12 00:16:52.467785 systemd-timesyncd[1378]: Network configuration changed, trying to establish connection. Jul 12 00:16:52.469051 systemd-timesyncd[1378]: Contacted time server 10.0.0.1:123 (10.0.0.1). Jul 12 00:16:52.469123 systemd-timesyncd[1378]: Initial clock synchronization to Sat 2025-07-12 00:16:52.722219 UTC. Jul 12 00:16:52.470148 systemd-networkd[1373]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 12 00:16:52.477978 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jul 12 00:16:52.487378 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jul 12 00:16:52.501006 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jul 12 00:16:52.516294 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 12 00:16:52.517563 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. 
Jul 12 00:16:52.520780 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jul 12 00:16:52.537279 lvm[1397]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jul 12 00:16:52.560167 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jul 12 00:16:52.571638 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jul 12 00:16:52.572851 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jul 12 00:16:52.573801 systemd[1]: Reached target sysinit.target - System Initialization. Jul 12 00:16:52.574688 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jul 12 00:16:52.575617 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jul 12 00:16:52.576723 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jul 12 00:16:52.577634 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jul 12 00:16:52.578551 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jul 12 00:16:52.579439 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jul 12 00:16:52.579473 systemd[1]: Reached target paths.target - Path Units. Jul 12 00:16:52.580113 systemd[1]: Reached target timers.target - Timer Units. Jul 12 00:16:52.581508 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jul 12 00:16:52.583644 systemd[1]: Starting docker.socket - Docker Socket for the API... Jul 12 00:16:52.593077 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jul 12 00:16:52.595014 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jul 12 00:16:52.596348 systemd[1]: Listening on docker.socket - Docker Socket for the API. 
Jul 12 00:16:52.597218 systemd[1]: Reached target sockets.target - Socket Units.
Jul 12 00:16:52.597907 systemd[1]: Reached target basic.target - Basic System.
Jul 12 00:16:52.598693 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Jul 12 00:16:52.598728 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Jul 12 00:16:52.599602 systemd[1]: Starting containerd.service - containerd container runtime...
Jul 12 00:16:52.601307 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Jul 12 00:16:52.602862 lvm[1406]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jul 12 00:16:52.603633 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Jul 12 00:16:52.606274 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Jul 12 00:16:52.607904 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Jul 12 00:16:52.610367 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Jul 12 00:16:52.613479 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Jul 12 00:16:52.618319 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Jul 12 00:16:52.620275 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Jul 12 00:16:52.626268 systemd[1]: Starting systemd-logind.service - User Login Management...
Jul 12 00:16:52.627234 jq[1409]: false
Jul 12 00:16:52.629602 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Jul 12 00:16:52.630018 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Jul 12 00:16:52.631203 systemd[1]: Starting update-engine.service - Update Engine...
Jul 12 00:16:52.634005 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Jul 12 00:16:52.638129 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Jul 12 00:16:52.640490 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Jul 12 00:16:52.641116 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Jul 12 00:16:52.644824 systemd[1]: motdgen.service: Deactivated successfully.
Jul 12 00:16:52.644987 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Jul 12 00:16:52.646839 jq[1426]: true
Jul 12 00:16:52.649560 dbus-daemon[1408]: [system] SELinux support is enabled
Jul 12 00:16:52.651391 extend-filesystems[1410]: Found loop3
Jul 12 00:16:52.651391 extend-filesystems[1410]: Found loop4
Jul 12 00:16:52.651391 extend-filesystems[1410]: Found loop5
Jul 12 00:16:52.651391 extend-filesystems[1410]: Found vda
Jul 12 00:16:52.651391 extend-filesystems[1410]: Found vda1
Jul 12 00:16:52.651391 extend-filesystems[1410]: Found vda2
Jul 12 00:16:52.651391 extend-filesystems[1410]: Found vda3
Jul 12 00:16:52.651391 extend-filesystems[1410]: Found usr
Jul 12 00:16:52.651391 extend-filesystems[1410]: Found vda4
Jul 12 00:16:52.651391 extend-filesystems[1410]: Found vda6
Jul 12 00:16:52.651391 extend-filesystems[1410]: Found vda7
Jul 12 00:16:52.651391 extend-filesystems[1410]: Found vda9
Jul 12 00:16:52.651391 extend-filesystems[1410]: Checking size of /dev/vda9
Jul 12 00:16:52.653467 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Jul 12 00:16:52.662064 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Jul 12 00:16:52.662125 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Jul 12 00:16:52.662490 (ntainerd)[1434]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Jul 12 00:16:52.663452 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Jul 12 00:16:52.663481 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Jul 12 00:16:52.665186 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Jul 12 00:16:52.665390 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Jul 12 00:16:52.673106 jq[1435]: true
Jul 12 00:16:52.685709 tar[1428]: linux-arm64/LICENSE
Jul 12 00:16:52.685709 tar[1428]: linux-arm64/helm
Jul 12 00:16:52.695027 extend-filesystems[1410]: Resized partition /dev/vda9
Jul 12 00:16:52.703227 extend-filesystems[1448]: resize2fs 1.47.1 (20-May-2024)
Jul 12 00:16:52.706866 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (1356)
Jul 12 00:16:52.706893 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
Jul 12 00:16:52.707621 update_engine[1423]: I20250712 00:16:52.707140 1423 main.cc:92] Flatcar Update Engine starting
Jul 12 00:16:52.710626 systemd[1]: Started update-engine.service - Update Engine.
Jul 12 00:16:52.710935 update_engine[1423]: I20250712 00:16:52.710665 1423 update_check_scheduler.cc:74] Next update check in 6m1s
Jul 12 00:16:52.719284 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Jul 12 00:16:52.731582 systemd-logind[1417]: Watching system buttons on /dev/input/event0 (Power Button)
Jul 12 00:16:52.732155 systemd-logind[1417]: New seat seat0.
Jul 12 00:16:52.733064 systemd[1]: Started systemd-logind.service - User Login Management.
Jul 12 00:16:52.754828 kernel: EXT4-fs (vda9): resized filesystem to 1864699
Jul 12 00:16:52.771828 extend-filesystems[1448]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Jul 12 00:16:52.771828 extend-filesystems[1448]: old_desc_blocks = 1, new_desc_blocks = 1
Jul 12 00:16:52.771828 extend-filesystems[1448]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
Jul 12 00:16:52.779369 extend-filesystems[1410]: Resized filesystem in /dev/vda9
Jul 12 00:16:52.775600 systemd[1]: extend-filesystems.service: Deactivated successfully.
Jul 12 00:16:52.775772 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Jul 12 00:16:52.784697 bash[1462]: Updated "/home/core/.ssh/authorized_keys"
Jul 12 00:16:52.786698 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Jul 12 00:16:52.788594 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Jul 12 00:16:52.791727 locksmithd[1458]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Jul 12 00:16:52.908200 containerd[1434]: time="2025-07-12T00:16:52.908058200Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21
Jul 12 00:16:52.932846 containerd[1434]: time="2025-07-12T00:16:52.932710760Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Jul 12 00:16:52.934149 containerd[1434]: time="2025-07-12T00:16:52.934112000Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.96-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Jul 12 00:16:52.934149 containerd[1434]: time="2025-07-12T00:16:52.934144920Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Jul 12 00:16:52.934214 containerd[1434]: time="2025-07-12T00:16:52.934161280Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Jul 12 00:16:52.934341 containerd[1434]: time="2025-07-12T00:16:52.934320800Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Jul 12 00:16:52.934382 containerd[1434]: time="2025-07-12T00:16:52.934345520Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Jul 12 00:16:52.934410 containerd[1434]: time="2025-07-12T00:16:52.934396000Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Jul 12 00:16:52.934434 containerd[1434]: time="2025-07-12T00:16:52.934410600Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Jul 12 00:16:52.934600 containerd[1434]: time="2025-07-12T00:16:52.934560360Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jul 12 00:16:52.934600 containerd[1434]: time="2025-07-12T00:16:52.934581840Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Jul 12 00:16:52.934655 containerd[1434]: time="2025-07-12T00:16:52.934594280Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Jul 12 00:16:52.934655 containerd[1434]: time="2025-07-12T00:16:52.934615400Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Jul 12 00:16:52.934710 containerd[1434]: time="2025-07-12T00:16:52.934685360Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Jul 12 00:16:52.934898 containerd[1434]: time="2025-07-12T00:16:52.934880480Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Jul 12 00:16:52.934996 containerd[1434]: time="2025-07-12T00:16:52.934977280Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jul 12 00:16:52.934996 containerd[1434]: time="2025-07-12T00:16:52.934994920Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Jul 12 00:16:52.935139 containerd[1434]: time="2025-07-12T00:16:52.935082600Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Jul 12 00:16:52.935169 containerd[1434]: time="2025-07-12T00:16:52.935151280Z" level=info msg="metadata content store policy set" policy=shared
Jul 12 00:16:52.941866 containerd[1434]: time="2025-07-12T00:16:52.941836080Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Jul 12 00:16:52.941944 containerd[1434]: time="2025-07-12T00:16:52.941903760Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Jul 12 00:16:52.941944 containerd[1434]: time="2025-07-12T00:16:52.941922600Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Jul 12 00:16:52.941944 containerd[1434]: time="2025-07-12T00:16:52.941940480Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Jul 12 00:16:52.942003 containerd[1434]: time="2025-07-12T00:16:52.941954520Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Jul 12 00:16:52.942226 containerd[1434]: time="2025-07-12T00:16:52.942081600Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Jul 12 00:16:52.942560 containerd[1434]: time="2025-07-12T00:16:52.942403840Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Jul 12 00:16:52.942560 containerd[1434]: time="2025-07-12T00:16:52.942511400Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Jul 12 00:16:52.942560 containerd[1434]: time="2025-07-12T00:16:52.942526760Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Jul 12 00:16:52.942560 containerd[1434]: time="2025-07-12T00:16:52.942538960Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Jul 12 00:16:52.942560 containerd[1434]: time="2025-07-12T00:16:52.942552680Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Jul 12 00:16:52.942560 containerd[1434]: time="2025-07-12T00:16:52.942565560Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Jul 12 00:16:52.942707 containerd[1434]: time="2025-07-12T00:16:52.942577880Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Jul 12 00:16:52.942707 containerd[1434]: time="2025-07-12T00:16:52.942591320Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Jul 12 00:16:52.942707 containerd[1434]: time="2025-07-12T00:16:52.942605360Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Jul 12 00:16:52.942707 containerd[1434]: time="2025-07-12T00:16:52.942617560Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Jul 12 00:16:52.942707 containerd[1434]: time="2025-07-12T00:16:52.942629680Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Jul 12 00:16:52.942707 containerd[1434]: time="2025-07-12T00:16:52.942641360Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Jul 12 00:16:52.942707 containerd[1434]: time="2025-07-12T00:16:52.942660680Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Jul 12 00:16:52.942707 containerd[1434]: time="2025-07-12T00:16:52.942674480Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Jul 12 00:16:52.942707 containerd[1434]: time="2025-07-12T00:16:52.942687600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Jul 12 00:16:52.942707 containerd[1434]: time="2025-07-12T00:16:52.942699920Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Jul 12 00:16:52.942707 containerd[1434]: time="2025-07-12T00:16:52.942712280Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Jul 12 00:16:52.942901 containerd[1434]: time="2025-07-12T00:16:52.942726120Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Jul 12 00:16:52.942901 containerd[1434]: time="2025-07-12T00:16:52.942739720Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Jul 12 00:16:52.942901 containerd[1434]: time="2025-07-12T00:16:52.942753080Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Jul 12 00:16:52.942901 containerd[1434]: time="2025-07-12T00:16:52.942765640Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Jul 12 00:16:52.942901 containerd[1434]: time="2025-07-12T00:16:52.942779880Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Jul 12 00:16:52.942901 containerd[1434]: time="2025-07-12T00:16:52.942792120Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Jul 12 00:16:52.942901 containerd[1434]: time="2025-07-12T00:16:52.942804720Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Jul 12 00:16:52.942901 containerd[1434]: time="2025-07-12T00:16:52.942817600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Jul 12 00:16:52.942901 containerd[1434]: time="2025-07-12T00:16:52.942835840Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Jul 12 00:16:52.942901 containerd[1434]: time="2025-07-12T00:16:52.942863400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Jul 12 00:16:52.942901 containerd[1434]: time="2025-07-12T00:16:52.942875600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Jul 12 00:16:52.942901 containerd[1434]: time="2025-07-12T00:16:52.942887240Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Jul 12 00:16:52.943547 containerd[1434]: time="2025-07-12T00:16:52.943521880Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Jul 12 00:16:52.943599 containerd[1434]: time="2025-07-12T00:16:52.943554040Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Jul 12 00:16:52.943599 containerd[1434]: time="2025-07-12T00:16:52.943566240Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Jul 12 00:16:52.943599 containerd[1434]: time="2025-07-12T00:16:52.943578320Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Jul 12 00:16:52.943599 containerd[1434]: time="2025-07-12T00:16:52.943588520Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Jul 12 00:16:52.943950 containerd[1434]: time="2025-07-12T00:16:52.943601200Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Jul 12 00:16:52.943950 containerd[1434]: time="2025-07-12T00:16:52.943611400Z" level=info msg="NRI interface is disabled by configuration."
Jul 12 00:16:52.943950 containerd[1434]: time="2025-07-12T00:16:52.943621440Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Jul 12 00:16:52.944274 containerd[1434]: time="2025-07-12T00:16:52.944174440Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Jul 12 00:16:52.944274 containerd[1434]: time="2025-07-12T00:16:52.944277000Z" level=info msg="Connect containerd service"
Jul 12 00:16:52.944421 containerd[1434]: time="2025-07-12T00:16:52.944321200Z" level=info msg="using legacy CRI server"
Jul 12 00:16:52.944421 containerd[1434]: time="2025-07-12T00:16:52.944328880Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Jul 12 00:16:52.947641 containerd[1434]: time="2025-07-12T00:16:52.947112680Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Jul 12 00:16:52.948361 containerd[1434]: time="2025-07-12T00:16:52.948326160Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jul 12 00:16:52.948552 containerd[1434]: time="2025-07-12T00:16:52.948523440Z" level=info msg="Start subscribing containerd event"
Jul 12 00:16:52.948585 containerd[1434]: time="2025-07-12T00:16:52.948570360Z" level=info msg="Start recovering state"
Jul 12 00:16:52.948649 containerd[1434]: time="2025-07-12T00:16:52.948634920Z" level=info msg="Start event monitor"
Jul 12 00:16:52.948684 containerd[1434]: time="2025-07-12T00:16:52.948654000Z" level=info msg="Start snapshots syncer"
Jul 12 00:16:52.948684 containerd[1434]: time="2025-07-12T00:16:52.948664360Z" level=info msg="Start cni network conf syncer for default"
Jul 12 00:16:52.948684 containerd[1434]: time="2025-07-12T00:16:52.948676360Z" level=info msg="Start streaming server"
Jul 12 00:16:52.949267 containerd[1434]: time="2025-07-12T00:16:52.949238760Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Jul 12 00:16:52.949305 containerd[1434]: time="2025-07-12T00:16:52.949296040Z" level=info msg=serving... address=/run/containerd/containerd.sock
Jul 12 00:16:52.949438 systemd[1]: Started containerd.service - containerd container runtime.
Jul 12 00:16:52.950635 containerd[1434]: time="2025-07-12T00:16:52.950604800Z" level=info msg="containerd successfully booted in 0.043927s"
Jul 12 00:16:53.114300 tar[1428]: linux-arm64/README.md
Jul 12 00:16:53.124045 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Jul 12 00:16:53.710742 sshd_keygen[1425]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Jul 12 00:16:53.730767 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Jul 12 00:16:53.745399 systemd[1]: Starting issuegen.service - Generate /run/issue...
Jul 12 00:16:53.751073 systemd[1]: issuegen.service: Deactivated successfully.
Jul 12 00:16:53.752213 systemd[1]: Finished issuegen.service - Generate /run/issue.
Jul 12 00:16:53.754800 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Jul 12 00:16:53.768919 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Jul 12 00:16:53.773443 systemd[1]: Started getty@tty1.service - Getty on tty1.
Jul 12 00:16:53.775495 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0.
Jul 12 00:16:53.776699 systemd[1]: Reached target getty.target - Login Prompts.
Jul 12 00:16:54.247003 systemd-networkd[1373]: eth0: Gained IPv6LL
Jul 12 00:16:54.250027 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Jul 12 00:16:54.251743 systemd[1]: Reached target network-online.target - Network is Online.
Jul 12 00:16:54.267436 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent...
Jul 12 00:16:54.270082 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 12 00:16:54.272092 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Jul 12 00:16:54.293519 systemd[1]: coreos-metadata.service: Deactivated successfully.
Jul 12 00:16:54.294694 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent.
Jul 12 00:16:54.297037 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Jul 12 00:16:54.301638 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Jul 12 00:16:54.885081 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 12 00:16:54.886717 systemd[1]: Reached target multi-user.target - Multi-User System.
Jul 12 00:16:54.890329 (kubelet)[1521]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jul 12 00:16:54.892318 systemd[1]: Startup finished in 628ms (kernel) + 5.240s (initrd) + 3.960s (userspace) = 9.829s.
Jul 12 00:16:55.396290 kubelet[1521]: E0712 00:16:55.396231    1521 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jul 12 00:16:55.398942 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jul 12 00:16:55.399084 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jul 12 00:16:58.656049 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Jul 12 00:16:58.657477 systemd[1]: Started sshd@0-10.0.0.81:22-10.0.0.1:50912.service - OpenSSH per-connection server daemon (10.0.0.1:50912).
Jul 12 00:16:58.744997 sshd[1534]: Accepted publickey for core from 10.0.0.1 port 50912 ssh2: RSA SHA256:OQQn8rJodojt66LnxxVdgk0ZA2OqvboSVT5RIqmX+EU
Jul 12 00:16:58.747802 sshd[1534]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 12 00:16:58.761696 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Jul 12 00:16:58.773362 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Jul 12 00:16:58.776023 systemd-logind[1417]: New session 1 of user core.
Jul 12 00:16:58.787034 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Jul 12 00:16:58.790393 systemd[1]: Starting user@500.service - User Manager for UID 500...
Jul 12 00:16:58.804565 (systemd)[1538]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Jul 12 00:16:58.898752 systemd[1538]: Queued start job for default target default.target.
Jul 12 00:16:58.909178 systemd[1538]: Created slice app.slice - User Application Slice.
Jul 12 00:16:58.909208 systemd[1538]: Reached target paths.target - Paths.
Jul 12 00:16:58.909231 systemd[1538]: Reached target timers.target - Timers.
Jul 12 00:16:58.910447 systemd[1538]: Starting dbus.socket - D-Bus User Message Bus Socket...
Jul 12 00:16:58.920492 systemd[1538]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Jul 12 00:16:58.920557 systemd[1538]: Reached target sockets.target - Sockets.
Jul 12 00:16:58.920570 systemd[1538]: Reached target basic.target - Basic System.
Jul 12 00:16:58.920607 systemd[1538]: Reached target default.target - Main User Target.
Jul 12 00:16:58.920634 systemd[1538]: Startup finished in 105ms.
Jul 12 00:16:58.924726 systemd[1]: Started user@500.service - User Manager for UID 500.
Jul 12 00:16:58.933285 systemd[1]: Started session-1.scope - Session 1 of User core.
Jul 12 00:16:58.997016 systemd[1]: Started sshd@1-10.0.0.81:22-10.0.0.1:50922.service - OpenSSH per-connection server daemon (10.0.0.1:50922).
Jul 12 00:16:59.041887 sshd[1549]: Accepted publickey for core from 10.0.0.1 port 50922 ssh2: RSA SHA256:OQQn8rJodojt66LnxxVdgk0ZA2OqvboSVT5RIqmX+EU
Jul 12 00:16:59.043342 sshd[1549]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 12 00:16:59.047534 systemd-logind[1417]: New session 2 of user core.
Jul 12 00:16:59.058315 systemd[1]: Started session-2.scope - Session 2 of User core.
Jul 12 00:16:59.111237 sshd[1549]: pam_unix(sshd:session): session closed for user core
Jul 12 00:16:59.124627 systemd[1]: sshd@1-10.0.0.81:22-10.0.0.1:50922.service: Deactivated successfully.
Jul 12 00:16:59.126796 systemd[1]: session-2.scope: Deactivated successfully.
Jul 12 00:16:59.128123 systemd-logind[1417]: Session 2 logged out. Waiting for processes to exit.
Jul 12 00:16:59.135417 systemd[1]: Started sshd@2-10.0.0.81:22-10.0.0.1:50932.service - OpenSSH per-connection server daemon (10.0.0.1:50932).
Jul 12 00:16:59.139126 systemd-logind[1417]: Removed session 2.
Jul 12 00:16:59.170152 sshd[1556]: Accepted publickey for core from 10.0.0.1 port 50932 ssh2: RSA SHA256:OQQn8rJodojt66LnxxVdgk0ZA2OqvboSVT5RIqmX+EU
Jul 12 00:16:59.171473 sshd[1556]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 12 00:16:59.176278 systemd-logind[1417]: New session 3 of user core.
Jul 12 00:16:59.190292 systemd[1]: Started session-3.scope - Session 3 of User core.
Jul 12 00:16:59.239114 sshd[1556]: pam_unix(sshd:session): session closed for user core
Jul 12 00:16:59.245288 systemd[1]: sshd@2-10.0.0.81:22-10.0.0.1:50932.service: Deactivated successfully.
Jul 12 00:16:59.248294 systemd[1]: session-3.scope: Deactivated successfully.
Jul 12 00:16:59.249834 systemd-logind[1417]: Session 3 logged out. Waiting for processes to exit.
Jul 12 00:16:59.251020 systemd[1]: Started sshd@3-10.0.0.81:22-10.0.0.1:50948.service - OpenSSH per-connection server daemon (10.0.0.1:50948).
Jul 12 00:16:59.254965 systemd-logind[1417]: Removed session 3.
Jul 12 00:16:59.286629 sshd[1563]: Accepted publickey for core from 10.0.0.1 port 50948 ssh2: RSA SHA256:OQQn8rJodojt66LnxxVdgk0ZA2OqvboSVT5RIqmX+EU
Jul 12 00:16:59.287990 sshd[1563]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 12 00:16:59.294359 systemd-logind[1417]: New session 4 of user core.
Jul 12 00:16:59.305274 systemd[1]: Started session-4.scope - Session 4 of User core.
Jul 12 00:16:59.357437 sshd[1563]: pam_unix(sshd:session): session closed for user core
Jul 12 00:16:59.365268 systemd[1]: sshd@3-10.0.0.81:22-10.0.0.1:50948.service: Deactivated successfully.
Jul 12 00:16:59.366563 systemd[1]: session-4.scope: Deactivated successfully.
Jul 12 00:16:59.368205 systemd-logind[1417]: Session 4 logged out. Waiting for processes to exit.
Jul 12 00:16:59.369292 systemd[1]: Started sshd@4-10.0.0.81:22-10.0.0.1:50956.service - OpenSSH per-connection server daemon (10.0.0.1:50956).
Jul 12 00:16:59.373674 systemd-logind[1417]: Removed session 4.
Jul 12 00:16:59.411887 sshd[1570]: Accepted publickey for core from 10.0.0.1 port 50956 ssh2: RSA SHA256:OQQn8rJodojt66LnxxVdgk0ZA2OqvboSVT5RIqmX+EU
Jul 12 00:16:59.413192 sshd[1570]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 12 00:16:59.418154 systemd-logind[1417]: New session 5 of user core.
Jul 12 00:16:59.426278 systemd[1]: Started session-5.scope - Session 5 of User core.
Jul 12 00:16:59.491880 sudo[1573]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Jul 12 00:16:59.492182 sudo[1573]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jul 12 00:16:59.507919 sudo[1573]: pam_unix(sudo:session): session closed for user root
Jul 12 00:16:59.509643 sshd[1570]: pam_unix(sshd:session): session closed for user core
Jul 12 00:16:59.516516 systemd[1]: sshd@4-10.0.0.81:22-10.0.0.1:50956.service: Deactivated successfully.
Jul 12 00:16:59.517971 systemd[1]: session-5.scope: Deactivated successfully.
Jul 12 00:16:59.519631 systemd-logind[1417]: Session 5 logged out. Waiting for processes to exit.
Jul 12 00:16:59.530376 systemd[1]: Started sshd@5-10.0.0.81:22-10.0.0.1:50964.service - OpenSSH per-connection server daemon (10.0.0.1:50964).
Jul 12 00:16:59.531405 systemd-logind[1417]: Removed session 5.
Jul 12 00:16:59.565511 sshd[1578]: Accepted publickey for core from 10.0.0.1 port 50964 ssh2: RSA SHA256:OQQn8rJodojt66LnxxVdgk0ZA2OqvboSVT5RIqmX+EU
Jul 12 00:16:59.566843 sshd[1578]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 12 00:16:59.570842 systemd-logind[1417]: New session 6 of user core.
Jul 12 00:16:59.589276 systemd[1]: Started session-6.scope - Session 6 of User core.
Jul 12 00:16:59.641577 sudo[1582]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Jul 12 00:16:59.642322 sudo[1582]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jul 12 00:16:59.648653 sudo[1582]: pam_unix(sudo:session): session closed for user root
Jul 12 00:16:59.653656 sudo[1581]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules
Jul 12 00:16:59.653938 sudo[1581]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jul 12 00:16:59.671399 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules...
Jul 12 00:16:59.672817 auditctl[1585]: No rules
Jul 12 00:16:59.673702 systemd[1]: audit-rules.service: Deactivated successfully.
Jul 12 00:16:59.673918 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules.
Jul 12 00:16:59.675576 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Jul 12 00:16:59.702478 augenrules[1603]: No rules
Jul 12 00:16:59.703877 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Jul 12 00:16:59.705386 sudo[1581]: pam_unix(sudo:session): session closed for user root
Jul 12 00:16:59.707040 sshd[1578]: pam_unix(sshd:session): session closed for user core
Jul 12 00:16:59.719515 systemd[1]: sshd@5-10.0.0.81:22-10.0.0.1:50964.service: Deactivated successfully.
Jul 12 00:16:59.721016 systemd[1]: session-6.scope: Deactivated successfully.
Jul 12 00:16:59.722539 systemd-logind[1417]: Session 6 logged out. Waiting for processes to exit.
Jul 12 00:16:59.735483 systemd[1]: Started sshd@6-10.0.0.81:22-10.0.0.1:50974.service - OpenSSH per-connection server daemon (10.0.0.1:50974).
Jul 12 00:16:59.740195 systemd-logind[1417]: Removed session 6.
Jul 12 00:16:59.767733 sshd[1611]: Accepted publickey for core from 10.0.0.1 port 50974 ssh2: RSA SHA256:OQQn8rJodojt66LnxxVdgk0ZA2OqvboSVT5RIqmX+EU
Jul 12 00:16:59.768975 sshd[1611]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 12 00:16:59.773160 systemd-logind[1417]: New session 7 of user core.
Jul 12 00:16:59.784258 systemd[1]: Started session-7.scope - Session 7 of User core.
Jul 12 00:16:59.837085 sudo[1614]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Jul 12 00:16:59.837378 sudo[1614]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jul 12 00:17:00.185388 systemd[1]: Starting docker.service - Docker Application Container Engine...
Jul 12 00:17:00.185961 (dockerd)[1632]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Jul 12 00:17:00.495240 dockerd[1632]: time="2025-07-12T00:17:00.494983756Z" level=info msg="Starting up"
Jul 12 00:17:00.673993 dockerd[1632]: time="2025-07-12T00:17:00.673940155Z" level=info msg="Loading containers: start."
Jul 12 00:17:00.813137 kernel: Initializing XFRM netlink socket
Jul 12 00:17:00.882743 systemd-networkd[1373]: docker0: Link UP
Jul 12 00:17:00.898515 dockerd[1632]: time="2025-07-12T00:17:00.898466090Z" level=info msg="Loading containers: done."
Jul 12 00:17:00.913472 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck285130428-merged.mount: Deactivated successfully.
Jul 12 00:17:00.916788 dockerd[1632]: time="2025-07-12T00:17:00.916737074Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Jul 12 00:17:00.916865 dockerd[1632]: time="2025-07-12T00:17:00.916849454Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0
Jul 12 00:17:00.917011 dockerd[1632]: time="2025-07-12T00:17:00.916991194Z" level=info msg="Daemon has completed initialization"
Jul 12 00:17:00.955138 dockerd[1632]: time="2025-07-12T00:17:00.955000136Z" level=info msg="API listen on /run/docker.sock"
Jul 12 00:17:00.955377 systemd[1]: Started docker.service - Docker Application Container Engine.
Jul 12 00:17:01.488386 containerd[1434]: time="2025-07-12T00:17:01.488347325Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.2\""
Jul 12 00:17:02.130542 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2100568856.mount: Deactivated successfully.
Jul 12 00:17:03.354140 containerd[1434]: time="2025-07-12T00:17:03.354045593Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 12 00:17:03.355693 containerd[1434]: time="2025-07-12T00:17:03.355445516Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.2: active requests=0, bytes read=27351718"
Jul 12 00:17:03.356497 containerd[1434]: time="2025-07-12T00:17:03.356458200Z" level=info msg="ImageCreate event name:\"sha256:04ac773cca35cc457f24a6501b6b308d63a2cddd1aec14fe95559bccca3010a4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 12 00:17:03.359541 containerd[1434]: time="2025-07-12T00:17:03.359493227Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:e8ae58675899e946fabe38425f2b3bfd33120b7930d05b5898de97c81a7f6137\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 12 00:17:03.362156 containerd[1434]: time="2025-07-12T00:17:03.362091406Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.2\" with image id \"sha256:04ac773cca35cc457f24a6501b6b308d63a2cddd1aec14fe95559bccca3010a4\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.2\", repo digest \"registry.k8s.io/kube-apiserver@sha256:e8ae58675899e946fabe38425f2b3bfd33120b7930d05b5898de97c81a7f6137\", size \"27348516\" in 1.873694187s"
Jul 12 00:17:03.362156 containerd[1434]: time="2025-07-12T00:17:03.362147872Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.2\" returns image reference \"sha256:04ac773cca35cc457f24a6501b6b308d63a2cddd1aec14fe95559bccca3010a4\""
Jul 12 00:17:03.365807 containerd[1434]: time="2025-07-12T00:17:03.365775917Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.2\""
Jul 12 00:17:04.357177 containerd[1434]: time="2025-07-12T00:17:04.357127131Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 12 00:17:04.358363 containerd[1434]: time="2025-07-12T00:17:04.358330516Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.2: active requests=0, bytes read=23537625"
Jul 12 00:17:04.360915 containerd[1434]: time="2025-07-12T00:17:04.360850869Z" level=info msg="ImageCreate event name:\"sha256:99a259072231375ad69a369cdf5620d60cdff72d450951c603fad8a94667af65\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 12 00:17:04.364294 containerd[1434]: time="2025-07-12T00:17:04.364239786Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:2236e72a4be5dcc9c04600353ff8849db1557f5364947c520ff05471ae719081\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 12 00:17:04.366137 containerd[1434]: time="2025-07-12T00:17:04.365843306Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.2\" with image id \"sha256:99a259072231375ad69a369cdf5620d60cdff72d450951c603fad8a94667af65\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.2\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:2236e72a4be5dcc9c04600353ff8849db1557f5364947c520ff05471ae719081\", size \"25092541\" in 1.000023119s"
Jul 12 00:17:04.366137 containerd[1434]: time="2025-07-12T00:17:04.365880254Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.2\" returns image reference \"sha256:99a259072231375ad69a369cdf5620d60cdff72d450951c603fad8a94667af65\""
Jul 12 00:17:04.366593 containerd[1434]: time="2025-07-12T00:17:04.366563597Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.2\""
Jul 12 00:17:05.389534 containerd[1434]: time="2025-07-12T00:17:05.389462798Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 12 00:17:05.390162 containerd[1434]: time="2025-07-12T00:17:05.390126604Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.2: active requests=0, bytes read=18293517"
Jul 12 00:17:05.390872 containerd[1434]: time="2025-07-12T00:17:05.390833281Z" level=info msg="ImageCreate event name:\"sha256:bb3da57746ca4726b669d35145eb9b4085643c61bbc80b9df3bf1e6021ba9eaf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 12 00:17:05.393850 containerd[1434]: time="2025-07-12T00:17:05.393812074Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:304c28303133be7d927973bc9bd6c83945b3735c59d283c25b63d5b9ed53bca3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 12 00:17:05.395101 containerd[1434]: time="2025-07-12T00:17:05.395059095Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.2\" with image id \"sha256:bb3da57746ca4726b669d35145eb9b4085643c61bbc80b9df3bf1e6021ba9eaf\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.2\", repo digest \"registry.k8s.io/kube-scheduler@sha256:304c28303133be7d927973bc9bd6c83945b3735c59d283c25b63d5b9ed53bca3\", size \"19848451\" in 1.028464321s"
Jul 12 00:17:05.395143 containerd[1434]: time="2025-07-12T00:17:05.395107562Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.2\" returns image reference \"sha256:bb3da57746ca4726b669d35145eb9b4085643c61bbc80b9df3bf1e6021ba9eaf\""
Jul 12 00:17:05.395685 containerd[1434]: time="2025-07-12T00:17:05.395646336Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.2\""
Jul 12 00:17:05.574956 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Jul 12 00:17:05.584289 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 12 00:17:05.703239 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 12 00:17:05.707484 (kubelet)[1851]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jul 12 00:17:05.743980 kubelet[1851]: E0712 00:17:05.743931 1851 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jul 12 00:17:05.746668 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jul 12 00:17:05.746794 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jul 12 00:17:06.417709 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount922263219.mount: Deactivated successfully.
Jul 12 00:17:06.789722 containerd[1434]: time="2025-07-12T00:17:06.789445015Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 12 00:17:06.790565 containerd[1434]: time="2025-07-12T00:17:06.790347704Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.2: active requests=0, bytes read=28199474"
Jul 12 00:17:06.791291 containerd[1434]: time="2025-07-12T00:17:06.791238767Z" level=info msg="ImageCreate event name:\"sha256:c26522e54bad2e6bfbb1bf11500833c94433076a3fa38436a2ec496a422c5455\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 12 00:17:06.793209 containerd[1434]: time="2025-07-12T00:17:06.793170163Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:4796ef3e43efa5ed2a5b015c18f81d3c2fe3aea36f555ea643cc01827eb65e51\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 12 00:17:06.793887 containerd[1434]: time="2025-07-12T00:17:06.793849854Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.2\" with image id \"sha256:c26522e54bad2e6bfbb1bf11500833c94433076a3fa38436a2ec496a422c5455\", repo tag \"registry.k8s.io/kube-proxy:v1.33.2\", repo digest \"registry.k8s.io/kube-proxy@sha256:4796ef3e43efa5ed2a5b015c18f81d3c2fe3aea36f555ea643cc01827eb65e51\", size \"28198491\" in 1.398163755s"
Jul 12 00:17:06.793929 containerd[1434]: time="2025-07-12T00:17:06.793885532Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.2\" returns image reference \"sha256:c26522e54bad2e6bfbb1bf11500833c94433076a3fa38436a2ec496a422c5455\""
Jul 12 00:17:06.794403 containerd[1434]: time="2025-07-12T00:17:06.794367245Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\""
Jul 12 00:17:07.285667 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3231000494.mount: Deactivated successfully.
Jul 12 00:17:08.078853 containerd[1434]: time="2025-07-12T00:17:08.078801071Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 12 00:17:08.079700 containerd[1434]: time="2025-07-12T00:17:08.079651006Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=19152119"
Jul 12 00:17:08.080308 containerd[1434]: time="2025-07-12T00:17:08.080282452Z" level=info msg="ImageCreate event name:\"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 12 00:17:08.083988 containerd[1434]: time="2025-07-12T00:17:08.083946356Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 12 00:17:08.085290 containerd[1434]: time="2025-07-12T00:17:08.085250463Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"19148915\" in 1.290840636s"
Jul 12 00:17:08.085335 containerd[1434]: time="2025-07-12T00:17:08.085293968Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\""
Jul 12 00:17:08.086311 containerd[1434]: time="2025-07-12T00:17:08.086278074Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Jul 12 00:17:08.581224 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3658883082.mount: Deactivated successfully.
Jul 12 00:17:08.587852 containerd[1434]: time="2025-07-12T00:17:08.587798784Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 12 00:17:08.589391 containerd[1434]: time="2025-07-12T00:17:08.589311458Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268705"
Jul 12 00:17:08.590201 containerd[1434]: time="2025-07-12T00:17:08.590168423Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 12 00:17:08.592242 containerd[1434]: time="2025-07-12T00:17:08.592209223Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 12 00:17:08.593195 containerd[1434]: time="2025-07-12T00:17:08.593155970Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 506.840899ms"
Jul 12 00:17:08.593258 containerd[1434]: time="2025-07-12T00:17:08.593194132Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\""
Jul 12 00:17:08.593749 containerd[1434]: time="2025-07-12T00:17:08.593718041Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\""
Jul 12 00:17:09.007149 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3779295672.mount: Deactivated successfully.
Jul 12 00:17:10.499084 containerd[1434]: time="2025-07-12T00:17:10.499027676Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 12 00:17:10.499993 containerd[1434]: time="2025-07-12T00:17:10.499947193Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=69334601"
Jul 12 00:17:10.500592 containerd[1434]: time="2025-07-12T00:17:10.500566331Z" level=info msg="ImageCreate event name:\"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 12 00:17:10.503761 containerd[1434]: time="2025-07-12T00:17:10.503727877Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 12 00:17:10.505156 containerd[1434]: time="2025-07-12T00:17:10.505122503Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"70026017\" in 1.911315252s"
Jul 12 00:17:10.505199 containerd[1434]: time="2025-07-12T00:17:10.505159303Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\""
Jul 12 00:17:15.824588 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Jul 12 00:17:15.834308 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 12 00:17:15.929689 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 12 00:17:15.933434 (kubelet)[2011]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jul 12 00:17:15.966602 kubelet[2011]: E0712 00:17:15.966552 2011 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jul 12 00:17:15.969341 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jul 12 00:17:15.969580 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jul 12 00:17:16.524846 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 12 00:17:16.541312 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 12 00:17:16.561608 systemd[1]: Reloading requested from client PID 2027 ('systemctl') (unit session-7.scope)...
Jul 12 00:17:16.561758 systemd[1]: Reloading...
Jul 12 00:17:16.629125 zram_generator::config[2069]: No configuration found.
Jul 12 00:17:16.829716 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 12 00:17:16.884304 systemd[1]: Reloading finished in 322 ms.
Jul 12 00:17:16.925435 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Jul 12 00:17:16.925498 systemd[1]: kubelet.service: Failed with result 'signal'.
Jul 12 00:17:16.926227 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 12 00:17:16.928215 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 12 00:17:17.031246 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 12 00:17:17.036594 (kubelet)[2112]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Jul 12 00:17:17.068593 kubelet[2112]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jul 12 00:17:17.068593 kubelet[2112]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Jul 12 00:17:17.068593 kubelet[2112]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jul 12 00:17:17.068981 kubelet[2112]: I0712 00:17:17.068628 2112 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jul 12 00:17:18.040045 kubelet[2112]: I0712 00:17:18.039991 2112 server.go:530] "Kubelet version" kubeletVersion="v1.33.0"
Jul 12 00:17:18.040045 kubelet[2112]: I0712 00:17:18.040027 2112 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jul 12 00:17:18.040278 kubelet[2112]: I0712 00:17:18.040247 2112 server.go:956] "Client rotation is on, will bootstrap in background"
Jul 12 00:17:18.090765 kubelet[2112]: E0712 00:17:18.090715 2112 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.81:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.81:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Jul 12 00:17:18.093095 kubelet[2112]: I0712 00:17:18.092962 2112 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jul 12 00:17:18.102123 kubelet[2112]: E0712 00:17:18.102069 2112 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Jul 12 00:17:18.102123 kubelet[2112]: I0712 00:17:18.102123 2112 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Jul 12 00:17:18.105069 kubelet[2112]: I0712 00:17:18.104938 2112 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Jul 12 00:17:18.105385 kubelet[2112]: I0712 00:17:18.105354 2112 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jul 12 00:17:18.105531 kubelet[2112]: I0712 00:17:18.105387 2112 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Jul 12 00:17:18.105697 kubelet[2112]: I0712 00:17:18.105686 2112 topology_manager.go:138] "Creating topology manager with none policy"
Jul 12 00:17:18.105697 kubelet[2112]: I0712 00:17:18.105697 2112 container_manager_linux.go:303] "Creating device plugin manager"
Jul 12 00:17:18.105990 kubelet[2112]: I0712 00:17:18.105975 2112 state_mem.go:36] "Initialized new in-memory state store"
Jul 12 00:17:18.108992 kubelet[2112]: I0712 00:17:18.108961 2112 kubelet.go:480] "Attempting to sync node with API server"
Jul 12 00:17:18.108992 kubelet[2112]: I0712 00:17:18.108988 2112 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests"
Jul 12 00:17:18.109105 kubelet[2112]: I0712 00:17:18.109012 2112 kubelet.go:386] "Adding apiserver pod source"
Jul 12 00:17:18.110093 kubelet[2112]: I0712 00:17:18.110060 2112 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jul 12 00:17:18.114928 kubelet[2112]: E0712 00:17:18.113232 2112 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.81:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.81:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Jul 12 00:17:18.115700 kubelet[2112]: I0712 00:17:18.115289 2112 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Jul 12 00:17:18.117263 kubelet[2112]: E0712 00:17:18.117233 2112 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.81:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.81:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Jul 12 00:17:18.117340 kubelet[2112]: I0712 00:17:18.117263 2112 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Jul 12 00:17:18.117639 kubelet[2112]: W0712 00:17:18.117616 2112 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Jul 12 00:17:18.121533 kubelet[2112]: I0712 00:17:18.121509 2112 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Jul 12 00:17:18.123362 kubelet[2112]: I0712 00:17:18.121568 2112 server.go:1289] "Started kubelet"
Jul 12 00:17:18.123362 kubelet[2112]: I0712 00:17:18.121654 2112 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Jul 12 00:17:18.123362 kubelet[2112]: I0712 00:17:18.122215 2112 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jul 12 00:17:18.123362 kubelet[2112]: I0712 00:17:18.122546 2112 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jul 12 00:17:18.123362 kubelet[2112]: I0712 00:17:18.123039 2112 server.go:317] "Adding debug handlers to kubelet server"
Jul 12 00:17:18.125793 kubelet[2112]: I0712 00:17:18.125606 2112 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jul 12 00:17:18.126125 kubelet[2112]: I0712 00:17:18.126074 2112 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Jul 12 00:17:18.127267 kubelet[2112]: I0712 00:17:18.127241 2112 volume_manager.go:297] "Starting Kubelet Volume Manager"
Jul 12 00:17:18.128111 kubelet[2112]: E0712 00:17:18.127339 2112 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Jul 12 00:17:18.128111 kubelet[2112]: I0712 00:17:18.127967 2112 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Jul 12 00:17:18.128111 kubelet[2112]: I0712 00:17:18.128010 2112 reconciler.go:26] "Reconciler: start to sync state"
Jul 12 00:17:18.128111 kubelet[2112]: E0712 00:17:18.128025 2112 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.81:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.81:6443: connect: connection refused" interval="200ms"
Jul 12 00:17:18.134182 kubelet[2112]: I0712 00:17:18.133471 2112 factory.go:223] Registration of the systemd container factory successfully
Jul 12 00:17:18.134182 kubelet[2112]: I0712 00:17:18.133580 2112 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Jul 12 00:17:18.134182 kubelet[2112]: E0712 00:17:18.134019 2112 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.81:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.81:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Jul 12 00:17:18.134520 kubelet[2112]: E0712 00:17:18.134500 2112 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Jul 12 00:17:18.134836 kubelet[2112]: I0712 00:17:18.134807 2112 factory.go:223] Registration of the containerd container factory successfully
Jul 12 00:17:18.135526 kubelet[2112]: E0712 00:17:18.134454 2112 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.81:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.81:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.185158dd707470c7 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-07-12 00:17:18.121529543 +0000 UTC m=+1.081455571,LastTimestamp:2025-07-12 00:17:18.121529543 +0000 UTC m=+1.081455571,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Jul 12 00:17:18.144346 kubelet[2112]: I0712 00:17:18.144302 2112 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4"
Jul 12 00:17:18.145497 kubelet[2112]: I0712 00:17:18.145469 2112 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6"
Jul 12 00:17:18.145497 kubelet[2112]: I0712 00:17:18.145495 2112 status_manager.go:230] "Starting to sync pod status with apiserver"
Jul 12 00:17:18.145607 kubelet[2112]: I0712 00:17:18.145515 2112 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Jul 12 00:17:18.145607 kubelet[2112]: I0712 00:17:18.145522 2112 kubelet.go:2436] "Starting kubelet main sync loop" Jul 12 00:17:18.145607 kubelet[2112]: E0712 00:17:18.145561 2112 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 12 00:17:18.149466 kubelet[2112]: E0712 00:17:18.149428 2112 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.81:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.81:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jul 12 00:17:18.150469 kubelet[2112]: I0712 00:17:18.150433 2112 cpu_manager.go:221] "Starting CPU manager" policy="none" Jul 12 00:17:18.150469 kubelet[2112]: I0712 00:17:18.150459 2112 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jul 12 00:17:18.152793 kubelet[2112]: I0712 00:17:18.152765 2112 state_mem.go:36] "Initialized new in-memory state store" Jul 12 00:17:18.224293 kubelet[2112]: I0712 00:17:18.224260 2112 policy_none.go:49] "None policy: Start" Jul 12 00:17:18.224293 kubelet[2112]: I0712 00:17:18.224295 2112 memory_manager.go:186] "Starting memorymanager" policy="None" Jul 12 00:17:18.224293 kubelet[2112]: I0712 00:17:18.224307 2112 state_mem.go:35] "Initializing new in-memory state store" Jul 12 00:17:18.227676 kubelet[2112]: E0712 00:17:18.227615 2112 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 12 00:17:18.231633 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jul 12 00:17:18.246125 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. 
Jul 12 00:17:18.246353 kubelet[2112]: E0712 00:17:18.246111 2112 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jul 12 00:17:18.249005 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jul 12 00:17:18.264121 kubelet[2112]: E0712 00:17:18.264070 2112 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jul 12 00:17:18.264613 kubelet[2112]: I0712 00:17:18.264290 2112 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 12 00:17:18.264613 kubelet[2112]: I0712 00:17:18.264305 2112 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 12 00:17:18.264695 kubelet[2112]: I0712 00:17:18.264640 2112 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 12 00:17:18.265518 kubelet[2112]: E0712 00:17:18.265489 2112 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jul 12 00:17:18.265574 kubelet[2112]: E0712 00:17:18.265526 2112 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jul 12 00:17:18.328853 kubelet[2112]: E0712 00:17:18.328790 2112 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.81:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.81:6443: connect: connection refused" interval="400ms" Jul 12 00:17:18.367413 kubelet[2112]: I0712 00:17:18.367358 2112 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jul 12 00:17:18.367823 kubelet[2112]: E0712 00:17:18.367781 2112 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.81:6443/api/v1/nodes\": dial tcp 10.0.0.81:6443: connect: connection refused" node="localhost" Jul 12 00:17:18.455596 systemd[1]: Created slice kubepods-burstable-pod5505ba027364220217378afd332d02a3.slice - libcontainer container kubepods-burstable-pod5505ba027364220217378afd332d02a3.slice. Jul 12 00:17:18.465122 kubelet[2112]: E0712 00:17:18.464900 2112 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 12 00:17:18.468840 systemd[1]: Created slice kubepods-burstable-pod84b858ec27c8b2738b1d9ff9927e0dcb.slice - libcontainer container kubepods-burstable-pod84b858ec27c8b2738b1d9ff9927e0dcb.slice. Jul 12 00:17:18.486719 kubelet[2112]: E0712 00:17:18.486612 2112 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 12 00:17:18.491667 systemd[1]: Created slice kubepods-burstable-pod834ee54f1daa06092e339273649eb5ea.slice - libcontainer container kubepods-burstable-pod834ee54f1daa06092e339273649eb5ea.slice. 
Jul 12 00:17:18.493248 kubelet[2112]: E0712 00:17:18.493189 2112 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 12 00:17:18.529433 kubelet[2112]: I0712 00:17:18.529395 2112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5505ba027364220217378afd332d02a3-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"5505ba027364220217378afd332d02a3\") " pod="kube-system/kube-apiserver-localhost" Jul 12 00:17:18.529433 kubelet[2112]: I0712 00:17:18.529438 2112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/84b858ec27c8b2738b1d9ff9927e0dcb-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"84b858ec27c8b2738b1d9ff9927e0dcb\") " pod="kube-system/kube-controller-manager-localhost" Jul 12 00:17:18.529627 kubelet[2112]: I0712 00:17:18.529456 2112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/84b858ec27c8b2738b1d9ff9927e0dcb-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"84b858ec27c8b2738b1d9ff9927e0dcb\") " pod="kube-system/kube-controller-manager-localhost" Jul 12 00:17:18.529627 kubelet[2112]: I0712 00:17:18.529502 2112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5505ba027364220217378afd332d02a3-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"5505ba027364220217378afd332d02a3\") " pod="kube-system/kube-apiserver-localhost" Jul 12 00:17:18.529627 kubelet[2112]: I0712 00:17:18.529537 2112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: 
\"kubernetes.io/host-path/5505ba027364220217378afd332d02a3-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"5505ba027364220217378afd332d02a3\") " pod="kube-system/kube-apiserver-localhost" Jul 12 00:17:18.529627 kubelet[2112]: I0712 00:17:18.529558 2112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/84b858ec27c8b2738b1d9ff9927e0dcb-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"84b858ec27c8b2738b1d9ff9927e0dcb\") " pod="kube-system/kube-controller-manager-localhost" Jul 12 00:17:18.529627 kubelet[2112]: I0712 00:17:18.529574 2112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/84b858ec27c8b2738b1d9ff9927e0dcb-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"84b858ec27c8b2738b1d9ff9927e0dcb\") " pod="kube-system/kube-controller-manager-localhost" Jul 12 00:17:18.529775 kubelet[2112]: I0712 00:17:18.529589 2112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/84b858ec27c8b2738b1d9ff9927e0dcb-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"84b858ec27c8b2738b1d9ff9927e0dcb\") " pod="kube-system/kube-controller-manager-localhost" Jul 12 00:17:18.529775 kubelet[2112]: I0712 00:17:18.529608 2112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/834ee54f1daa06092e339273649eb5ea-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"834ee54f1daa06092e339273649eb5ea\") " pod="kube-system/kube-scheduler-localhost" Jul 12 00:17:18.569640 kubelet[2112]: I0712 00:17:18.569614 2112 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jul 12 00:17:18.569994 kubelet[2112]: 
E0712 00:17:18.569967 2112 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.81:6443/api/v1/nodes\": dial tcp 10.0.0.81:6443: connect: connection refused" node="localhost" Jul 12 00:17:18.730027 kubelet[2112]: E0712 00:17:18.729902 2112 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.81:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.81:6443: connect: connection refused" interval="800ms" Jul 12 00:17:18.766382 kubelet[2112]: E0712 00:17:18.766341 2112 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:17:18.767040 containerd[1434]: time="2025-07-12T00:17:18.767001069Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:5505ba027364220217378afd332d02a3,Namespace:kube-system,Attempt:0,}" Jul 12 00:17:18.787739 kubelet[2112]: E0712 00:17:18.787701 2112 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:17:18.788174 containerd[1434]: time="2025-07-12T00:17:18.788138555Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:84b858ec27c8b2738b1d9ff9927e0dcb,Namespace:kube-system,Attempt:0,}" Jul 12 00:17:18.794711 kubelet[2112]: E0712 00:17:18.794470 2112 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:17:18.798278 containerd[1434]: time="2025-07-12T00:17:18.798148791Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:834ee54f1daa06092e339273649eb5ea,Namespace:kube-system,Attempt:0,}" Jul 12 00:17:18.971139 
kubelet[2112]: I0712 00:17:18.971082 2112 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jul 12 00:17:18.971446 kubelet[2112]: E0712 00:17:18.971425 2112 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.81:6443/api/v1/nodes\": dial tcp 10.0.0.81:6443: connect: connection refused" node="localhost" Jul 12 00:17:19.040474 kubelet[2112]: E0712 00:17:19.040353 2112 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.81:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.81:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jul 12 00:17:19.290282 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount644809980.mount: Deactivated successfully. Jul 12 00:17:19.296154 containerd[1434]: time="2025-07-12T00:17:19.296051017Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 12 00:17:19.297798 containerd[1434]: time="2025-07-12T00:17:19.297567267Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 12 00:17:19.298229 containerd[1434]: time="2025-07-12T00:17:19.298192441Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jul 12 00:17:19.299133 containerd[1434]: time="2025-07-12T00:17:19.299066459Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 12 00:17:19.299819 containerd[1434]: time="2025-07-12T00:17:19.299681904Z" level=info msg="stop pulling image 
registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jul 12 00:17:19.300635 containerd[1434]: time="2025-07-12T00:17:19.300607894Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 12 00:17:19.301138 containerd[1434]: time="2025-07-12T00:17:19.301111548Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269175" Jul 12 00:17:19.303115 containerd[1434]: time="2025-07-12T00:17:19.303046089Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 12 00:17:19.307301 containerd[1434]: time="2025-07-12T00:17:19.307245854Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 519.028853ms" Jul 12 00:17:19.308125 containerd[1434]: time="2025-07-12T00:17:19.308066420Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 509.812076ms" Jul 12 00:17:19.310369 containerd[1434]: time="2025-07-12T00:17:19.310308022Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest 
\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 543.224542ms" Jul 12 00:17:19.488117 containerd[1434]: time="2025-07-12T00:17:19.487988278Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 12 00:17:19.488426 containerd[1434]: time="2025-07-12T00:17:19.488204730Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 12 00:17:19.488426 containerd[1434]: time="2025-07-12T00:17:19.488244329Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 12 00:17:19.488426 containerd[1434]: time="2025-07-12T00:17:19.488255180Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 12 00:17:19.488426 containerd[1434]: time="2025-07-12T00:17:19.488330174Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 12 00:17:19.488426 containerd[1434]: time="2025-07-12T00:17:19.488107555Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 12 00:17:19.488426 containerd[1434]: time="2025-07-12T00:17:19.488123451Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 12 00:17:19.488426 containerd[1434]: time="2025-07-12T00:17:19.488233999Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 12 00:17:19.489322 containerd[1434]: time="2025-07-12T00:17:19.489242830Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 12 00:17:19.490253 containerd[1434]: time="2025-07-12T00:17:19.489987722Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 12 00:17:19.490253 containerd[1434]: time="2025-07-12T00:17:19.490022396Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 12 00:17:19.490253 containerd[1434]: time="2025-07-12T00:17:19.490157649Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 12 00:17:19.507280 systemd[1]: Started cri-containerd-076804885f17845ffa6d769d83db29420e5977686399c45742681be168ff4fa7.scope - libcontainer container 076804885f17845ffa6d769d83db29420e5977686399c45742681be168ff4fa7. Jul 12 00:17:19.511113 systemd[1]: Started cri-containerd-27d3081fe7ea59065d762fdb26e4ed15adeaae69e4f7bf925f0534e6b616d896.scope - libcontainer container 27d3081fe7ea59065d762fdb26e4ed15adeaae69e4f7bf925f0534e6b616d896. Jul 12 00:17:19.512306 systemd[1]: Started cri-containerd-b7ae64e249dd6622b6fd2bbd8ab1ba4df799b15123ff1dbc3431443a52d764df.scope - libcontainer container b7ae64e249dd6622b6fd2bbd8ab1ba4df799b15123ff1dbc3431443a52d764df. 
Jul 12 00:17:19.531228 kubelet[2112]: E0712 00:17:19.531163 2112 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.81:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.81:6443: connect: connection refused" interval="1.6s" Jul 12 00:17:19.541717 containerd[1434]: time="2025-07-12T00:17:19.541671251Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:5505ba027364220217378afd332d02a3,Namespace:kube-system,Attempt:0,} returns sandbox id \"076804885f17845ffa6d769d83db29420e5977686399c45742681be168ff4fa7\"" Jul 12 00:17:19.543946 kubelet[2112]: E0712 00:17:19.543867 2112 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:17:19.548490 containerd[1434]: time="2025-07-12T00:17:19.548401342Z" level=info msg="CreateContainer within sandbox \"076804885f17845ffa6d769d83db29420e5977686399c45742681be168ff4fa7\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jul 12 00:17:19.549601 containerd[1434]: time="2025-07-12T00:17:19.549566646Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:84b858ec27c8b2738b1d9ff9927e0dcb,Namespace:kube-system,Attempt:0,} returns sandbox id \"b7ae64e249dd6622b6fd2bbd8ab1ba4df799b15123ff1dbc3431443a52d764df\"" Jul 12 00:17:19.550436 kubelet[2112]: E0712 00:17:19.550407 2112 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:17:19.554651 containerd[1434]: time="2025-07-12T00:17:19.554617007Z" level=info msg="CreateContainer within sandbox \"b7ae64e249dd6622b6fd2bbd8ab1ba4df799b15123ff1dbc3431443a52d764df\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jul 12 
00:17:19.556127 containerd[1434]: time="2025-07-12T00:17:19.556092457Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:834ee54f1daa06092e339273649eb5ea,Namespace:kube-system,Attempt:0,} returns sandbox id \"27d3081fe7ea59065d762fdb26e4ed15adeaae69e4f7bf925f0534e6b616d896\"" Jul 12 00:17:19.557157 kubelet[2112]: E0712 00:17:19.557111 2112 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:17:19.560627 containerd[1434]: time="2025-07-12T00:17:19.560561046Z" level=info msg="CreateContainer within sandbox \"27d3081fe7ea59065d762fdb26e4ed15adeaae69e4f7bf925f0534e6b616d896\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jul 12 00:17:19.564743 containerd[1434]: time="2025-07-12T00:17:19.564681894Z" level=info msg="CreateContainer within sandbox \"076804885f17845ffa6d769d83db29420e5977686399c45742681be168ff4fa7\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"d7aea5647c55d3bbacfbf8510eb772a6bd31584464d6e5f3589953287e75ae08\"" Jul 12 00:17:19.565339 containerd[1434]: time="2025-07-12T00:17:19.565313034Z" level=info msg="StartContainer for \"d7aea5647c55d3bbacfbf8510eb772a6bd31584464d6e5f3589953287e75ae08\"" Jul 12 00:17:19.569609 containerd[1434]: time="2025-07-12T00:17:19.569567974Z" level=info msg="CreateContainer within sandbox \"b7ae64e249dd6622b6fd2bbd8ab1ba4df799b15123ff1dbc3431443a52d764df\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"b7e870c708efa513de4e095391b6b4954156aa6ee4aae0f89405b087c8a327ed\"" Jul 12 00:17:19.570097 containerd[1434]: time="2025-07-12T00:17:19.570064101Z" level=info msg="StartContainer for \"b7e870c708efa513de4e095391b6b4954156aa6ee4aae0f89405b087c8a327ed\"" Jul 12 00:17:19.578191 containerd[1434]: time="2025-07-12T00:17:19.578148002Z" level=info msg="CreateContainer within sandbox 
\"27d3081fe7ea59065d762fdb26e4ed15adeaae69e4f7bf925f0534e6b616d896\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"f8d47a6f50fa346039d29e214e78b6a501caa2dad2bc9e83b0c248269a61af49\"" Jul 12 00:17:19.579469 containerd[1434]: time="2025-07-12T00:17:19.578925606Z" level=info msg="StartContainer for \"f8d47a6f50fa346039d29e214e78b6a501caa2dad2bc9e83b0c248269a61af49\"" Jul 12 00:17:19.590284 systemd[1]: Started cri-containerd-d7aea5647c55d3bbacfbf8510eb772a6bd31584464d6e5f3589953287e75ae08.scope - libcontainer container d7aea5647c55d3bbacfbf8510eb772a6bd31584464d6e5f3589953287e75ae08. Jul 12 00:17:19.593037 systemd[1]: Started cri-containerd-b7e870c708efa513de4e095391b6b4954156aa6ee4aae0f89405b087c8a327ed.scope - libcontainer container b7e870c708efa513de4e095391b6b4954156aa6ee4aae0f89405b087c8a327ed. Jul 12 00:17:19.606074 kubelet[2112]: E0712 00:17:19.606022 2112 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.81:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.81:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jul 12 00:17:19.612281 systemd[1]: Started cri-containerd-f8d47a6f50fa346039d29e214e78b6a501caa2dad2bc9e83b0c248269a61af49.scope - libcontainer container f8d47a6f50fa346039d29e214e78b6a501caa2dad2bc9e83b0c248269a61af49. 
Jul 12 00:17:19.641449 containerd[1434]: time="2025-07-12T00:17:19.641406501Z" level=info msg="StartContainer for \"d7aea5647c55d3bbacfbf8510eb772a6bd31584464d6e5f3589953287e75ae08\" returns successfully" Jul 12 00:17:19.662467 containerd[1434]: time="2025-07-12T00:17:19.662281447Z" level=info msg="StartContainer for \"f8d47a6f50fa346039d29e214e78b6a501caa2dad2bc9e83b0c248269a61af49\" returns successfully" Jul 12 00:17:19.662467 containerd[1434]: time="2025-07-12T00:17:19.662393116Z" level=info msg="StartContainer for \"b7e870c708efa513de4e095391b6b4954156aa6ee4aae0f89405b087c8a327ed\" returns successfully" Jul 12 00:17:19.703865 kubelet[2112]: E0712 00:17:19.703814 2112 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.81:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.81:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jul 12 00:17:19.710732 kubelet[2112]: E0712 00:17:19.710692 2112 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.81:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.81:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jul 12 00:17:19.773434 kubelet[2112]: I0712 00:17:19.773393 2112 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jul 12 00:17:19.773725 kubelet[2112]: E0712 00:17:19.773700 2112 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.81:6443/api/v1/nodes\": dial tcp 10.0.0.81:6443: connect: connection refused" node="localhost" Jul 12 00:17:20.155970 kubelet[2112]: E0712 00:17:20.155939 2112 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" 
node="localhost" Jul 12 00:17:20.156127 kubelet[2112]: E0712 00:17:20.156065 2112 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:17:20.157972 kubelet[2112]: E0712 00:17:20.157787 2112 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 12 00:17:20.157972 kubelet[2112]: E0712 00:17:20.157896 2112 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:17:20.165354 kubelet[2112]: E0712 00:17:20.165330 2112 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 12 00:17:20.165601 kubelet[2112]: E0712 00:17:20.165552 2112 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:17:21.165671 kubelet[2112]: E0712 00:17:21.165631 2112 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 12 00:17:21.166242 kubelet[2112]: E0712 00:17:21.165757 2112 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:17:21.166242 kubelet[2112]: E0712 00:17:21.166163 2112 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 12 00:17:21.166296 kubelet[2112]: E0712 00:17:21.166268 2112 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been 
omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:17:21.375930 kubelet[2112]: I0712 00:17:21.375895 2112 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jul 12 00:17:21.762677 kubelet[2112]: I0712 00:17:21.762634 2112 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Jul 12 00:17:21.762677 kubelet[2112]: E0712 00:17:21.762679 2112 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Jul 12 00:17:21.790208 kubelet[2112]: E0712 00:17:21.790165 2112 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 12 00:17:21.891196 kubelet[2112]: E0712 00:17:21.891133 2112 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 12 00:17:21.991633 kubelet[2112]: E0712 00:17:21.991594 2112 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 12 00:17:22.092459 kubelet[2112]: E0712 00:17:22.092421 2112 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 12 00:17:22.166599 kubelet[2112]: E0712 00:17:22.166569 2112 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 12 00:17:22.166930 kubelet[2112]: E0712 00:17:22.166698 2112 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:17:22.192751 kubelet[2112]: E0712 00:17:22.192717 2112 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 12 00:17:22.328717 kubelet[2112]: I0712 00:17:22.328550 2112 kubelet.go:3309] "Creating a mirror pod for static pod" 
pod="kube-system/kube-controller-manager-localhost" Jul 12 00:17:22.333548 kubelet[2112]: E0712 00:17:22.333505 2112 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Jul 12 00:17:22.333548 kubelet[2112]: I0712 00:17:22.333537 2112 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jul 12 00:17:22.335096 kubelet[2112]: E0712 00:17:22.335060 2112 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Jul 12 00:17:22.335162 kubelet[2112]: I0712 00:17:22.335104 2112 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jul 12 00:17:22.336610 kubelet[2112]: E0712 00:17:22.336588 2112 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Jul 12 00:17:23.115145 kubelet[2112]: I0712 00:17:23.115111 2112 apiserver.go:52] "Watching apiserver" Jul 12 00:17:23.128964 kubelet[2112]: I0712 00:17:23.128925 2112 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jul 12 00:17:24.019211 systemd[1]: Reloading requested from client PID 2403 ('systemctl') (unit session-7.scope)... Jul 12 00:17:24.020156 systemd[1]: Reloading... Jul 12 00:17:24.112120 zram_generator::config[2445]: No configuration found. Jul 12 00:17:24.224782 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
Jul 12 00:17:24.293057 systemd[1]: Reloading finished in 272 ms. Jul 12 00:17:24.329530 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jul 12 00:17:24.333602 kubelet[2112]: I0712 00:17:24.329506 2112 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 12 00:17:24.344148 systemd[1]: kubelet.service: Deactivated successfully. Jul 12 00:17:24.344426 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 12 00:17:24.344489 systemd[1]: kubelet.service: Consumed 1.514s CPU time, 130.7M memory peak, 0B memory swap peak. Jul 12 00:17:24.356557 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 12 00:17:24.478271 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 12 00:17:24.483320 (kubelet)[2484]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 12 00:17:24.547226 kubelet[2484]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 12 00:17:24.547226 kubelet[2484]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jul 12 00:17:24.547226 kubelet[2484]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jul 12 00:17:24.547226 kubelet[2484]: I0712 00:17:24.547183 2484 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 12 00:17:24.558227 kubelet[2484]: I0712 00:17:24.558138 2484 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Jul 12 00:17:24.558227 kubelet[2484]: I0712 00:17:24.558172 2484 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 12 00:17:24.561281 kubelet[2484]: I0712 00:17:24.558863 2484 server.go:956] "Client rotation is on, will bootstrap in background" Jul 12 00:17:24.567584 kubelet[2484]: I0712 00:17:24.567185 2484 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Jul 12 00:17:24.570121 kubelet[2484]: I0712 00:17:24.570064 2484 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 12 00:17:24.573886 kubelet[2484]: E0712 00:17:24.573808 2484 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jul 12 00:17:24.573886 kubelet[2484]: I0712 00:17:24.573855 2484 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jul 12 00:17:24.577598 kubelet[2484]: I0712 00:17:24.577228 2484 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jul 12 00:17:24.577598 kubelet[2484]: I0712 00:17:24.577482 2484 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 12 00:17:24.577746 kubelet[2484]: I0712 00:17:24.577504 2484 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jul 12 00:17:24.577829 kubelet[2484]: I0712 00:17:24.577812 2484 topology_manager.go:138] "Creating topology manager with none policy" Jul 12 00:17:24.577860 
kubelet[2484]: I0712 00:17:24.577835 2484 container_manager_linux.go:303] "Creating device plugin manager" Jul 12 00:17:24.578010 kubelet[2484]: I0712 00:17:24.577906 2484 state_mem.go:36] "Initialized new in-memory state store" Jul 12 00:17:24.578077 kubelet[2484]: I0712 00:17:24.578062 2484 kubelet.go:480] "Attempting to sync node with API server" Jul 12 00:17:24.578133 kubelet[2484]: I0712 00:17:24.578077 2484 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 12 00:17:24.578156 kubelet[2484]: I0712 00:17:24.578151 2484 kubelet.go:386] "Adding apiserver pod source" Jul 12 00:17:24.578190 kubelet[2484]: I0712 00:17:24.578168 2484 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 12 00:17:24.579664 kubelet[2484]: I0712 00:17:24.579569 2484 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jul 12 00:17:24.580441 kubelet[2484]: I0712 00:17:24.580302 2484 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Jul 12 00:17:24.585037 kubelet[2484]: I0712 00:17:24.584059 2484 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jul 12 00:17:24.585037 kubelet[2484]: I0712 00:17:24.584125 2484 server.go:1289] "Started kubelet" Jul 12 00:17:24.585037 kubelet[2484]: I0712 00:17:24.584260 2484 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Jul 12 00:17:24.585037 kubelet[2484]: I0712 00:17:24.584536 2484 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 12 00:17:24.585037 kubelet[2484]: I0712 00:17:24.584864 2484 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 12 00:17:24.587597 kubelet[2484]: I0712 00:17:24.587573 2484 server.go:317] "Adding debug handlers to kubelet server" Jul 12 00:17:24.588575 
kubelet[2484]: I0712 00:17:24.588547 2484 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 12 00:17:24.588949 kubelet[2484]: I0712 00:17:24.588916 2484 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 12 00:17:24.591587 kubelet[2484]: I0712 00:17:24.591558 2484 volume_manager.go:297] "Starting Kubelet Volume Manager" Jul 12 00:17:24.591725 kubelet[2484]: E0712 00:17:24.591703 2484 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 12 00:17:24.600100 kubelet[2484]: I0712 00:17:24.594007 2484 reconciler.go:26] "Reconciler: start to sync state" Jul 12 00:17:24.600384 kubelet[2484]: I0712 00:17:24.600361 2484 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jul 12 00:17:24.600708 kubelet[2484]: E0712 00:17:24.600674 2484 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 12 00:17:24.604822 kubelet[2484]: I0712 00:17:24.603674 2484 factory.go:223] Registration of the containerd container factory successfully Jul 12 00:17:24.604822 kubelet[2484]: I0712 00:17:24.603705 2484 factory.go:223] Registration of the systemd container factory successfully Jul 12 00:17:24.604822 kubelet[2484]: I0712 00:17:24.603810 2484 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 12 00:17:24.617281 kubelet[2484]: I0712 00:17:24.617232 2484 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Jul 12 00:17:24.629760 kubelet[2484]: I0712 00:17:24.629720 2484 kubelet_network_linux.go:49] "Initialized iptables rules." 
protocol="IPv4" Jul 12 00:17:24.629760 kubelet[2484]: I0712 00:17:24.629754 2484 status_manager.go:230] "Starting to sync pod status with apiserver" Jul 12 00:17:24.629915 kubelet[2484]: I0712 00:17:24.629773 2484 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jul 12 00:17:24.629915 kubelet[2484]: I0712 00:17:24.629781 2484 kubelet.go:2436] "Starting kubelet main sync loop" Jul 12 00:17:24.629915 kubelet[2484]: E0712 00:17:24.629838 2484 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 12 00:17:24.653524 kubelet[2484]: I0712 00:17:24.652596 2484 cpu_manager.go:221] "Starting CPU manager" policy="none" Jul 12 00:17:24.653524 kubelet[2484]: I0712 00:17:24.652616 2484 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jul 12 00:17:24.653524 kubelet[2484]: I0712 00:17:24.652637 2484 state_mem.go:36] "Initialized new in-memory state store" Jul 12 00:17:24.653524 kubelet[2484]: I0712 00:17:24.652783 2484 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jul 12 00:17:24.653524 kubelet[2484]: I0712 00:17:24.652795 2484 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jul 12 00:17:24.653524 kubelet[2484]: I0712 00:17:24.652813 2484 policy_none.go:49] "None policy: Start" Jul 12 00:17:24.653524 kubelet[2484]: I0712 00:17:24.652821 2484 memory_manager.go:186] "Starting memorymanager" policy="None" Jul 12 00:17:24.653524 kubelet[2484]: I0712 00:17:24.652830 2484 state_mem.go:35] "Initializing new in-memory state store" Jul 12 00:17:24.653524 kubelet[2484]: I0712 00:17:24.652916 2484 state_mem.go:75] "Updated machine memory state" Jul 12 00:17:24.657111 kubelet[2484]: E0712 00:17:24.657068 2484 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jul 12 00:17:24.657794 kubelet[2484]: I0712 
00:17:24.657772 2484 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 12 00:17:24.657850 kubelet[2484]: I0712 00:17:24.657790 2484 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 12 00:17:24.658651 kubelet[2484]: I0712 00:17:24.658619 2484 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 12 00:17:24.658867 kubelet[2484]: E0712 00:17:24.658846 2484 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jul 12 00:17:24.731076 kubelet[2484]: I0712 00:17:24.731028 2484 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jul 12 00:17:24.731220 kubelet[2484]: I0712 00:17:24.731200 2484 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jul 12 00:17:24.731322 kubelet[2484]: I0712 00:17:24.731306 2484 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Jul 12 00:17:24.764931 kubelet[2484]: I0712 00:17:24.764884 2484 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jul 12 00:17:24.775469 kubelet[2484]: I0712 00:17:24.775428 2484 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Jul 12 00:17:24.775562 kubelet[2484]: I0712 00:17:24.775523 2484 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Jul 12 00:17:24.802958 kubelet[2484]: I0712 00:17:24.801507 2484 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/84b858ec27c8b2738b1d9ff9927e0dcb-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"84b858ec27c8b2738b1d9ff9927e0dcb\") " pod="kube-system/kube-controller-manager-localhost" Jul 12 00:17:24.802958 kubelet[2484]: 
I0712 00:17:24.801551 2484 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5505ba027364220217378afd332d02a3-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"5505ba027364220217378afd332d02a3\") " pod="kube-system/kube-apiserver-localhost" Jul 12 00:17:24.802958 kubelet[2484]: I0712 00:17:24.801572 2484 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5505ba027364220217378afd332d02a3-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"5505ba027364220217378afd332d02a3\") " pod="kube-system/kube-apiserver-localhost" Jul 12 00:17:24.802958 kubelet[2484]: I0712 00:17:24.801588 2484 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5505ba027364220217378afd332d02a3-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"5505ba027364220217378afd332d02a3\") " pod="kube-system/kube-apiserver-localhost" Jul 12 00:17:24.802958 kubelet[2484]: I0712 00:17:24.801604 2484 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/834ee54f1daa06092e339273649eb5ea-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"834ee54f1daa06092e339273649eb5ea\") " pod="kube-system/kube-scheduler-localhost" Jul 12 00:17:24.803186 kubelet[2484]: I0712 00:17:24.801618 2484 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/84b858ec27c8b2738b1d9ff9927e0dcb-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"84b858ec27c8b2738b1d9ff9927e0dcb\") " pod="kube-system/kube-controller-manager-localhost" Jul 12 00:17:24.803186 kubelet[2484]: I0712 00:17:24.801633 2484 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/84b858ec27c8b2738b1d9ff9927e0dcb-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"84b858ec27c8b2738b1d9ff9927e0dcb\") " pod="kube-system/kube-controller-manager-localhost" Jul 12 00:17:24.803186 kubelet[2484]: I0712 00:17:24.801652 2484 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/84b858ec27c8b2738b1d9ff9927e0dcb-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"84b858ec27c8b2738b1d9ff9927e0dcb\") " pod="kube-system/kube-controller-manager-localhost" Jul 12 00:17:24.803186 kubelet[2484]: I0712 00:17:24.801692 2484 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/84b858ec27c8b2738b1d9ff9927e0dcb-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"84b858ec27c8b2738b1d9ff9927e0dcb\") " pod="kube-system/kube-controller-manager-localhost" Jul 12 00:17:25.030867 sudo[2526]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Jul 12 00:17:25.031209 sudo[2526]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Jul 12 00:17:25.038012 kubelet[2484]: E0712 00:17:25.037982 2484 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:17:25.038185 kubelet[2484]: E0712 00:17:25.038166 2484 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:17:25.038305 kubelet[2484]: E0712 00:17:25.038285 2484 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were 
exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:17:25.467242 sudo[2526]: pam_unix(sudo:session): session closed for user root Jul 12 00:17:25.578995 kubelet[2484]: I0712 00:17:25.578946 2484 apiserver.go:52] "Watching apiserver" Jul 12 00:17:25.601542 kubelet[2484]: I0712 00:17:25.601495 2484 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jul 12 00:17:25.641813 kubelet[2484]: I0712 00:17:25.641780 2484 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Jul 12 00:17:25.641984 kubelet[2484]: I0712 00:17:25.641966 2484 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jul 12 00:17:25.642182 kubelet[2484]: I0712 00:17:25.642165 2484 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jul 12 00:17:25.649580 kubelet[2484]: E0712 00:17:25.648613 2484 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Jul 12 00:17:25.649580 kubelet[2484]: E0712 00:17:25.648789 2484 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:17:25.650537 kubelet[2484]: E0712 00:17:25.650511 2484 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Jul 12 00:17:25.650577 kubelet[2484]: E0712 00:17:25.650556 2484 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Jul 12 00:17:25.650654 kubelet[2484]: E0712 00:17:25.650630 2484 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some 
nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:17:25.650694 kubelet[2484]: E0712 00:17:25.650681 2484 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:17:25.669285 kubelet[2484]: I0712 00:17:25.669203 2484 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.669145755 podStartE2EDuration="1.669145755s" podCreationTimestamp="2025-07-12 00:17:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-12 00:17:25.667898074 +0000 UTC m=+1.179863769" watchObservedRunningTime="2025-07-12 00:17:25.669145755 +0000 UTC m=+1.181111450" Jul 12 00:17:25.693668 kubelet[2484]: I0712 00:17:25.693614 2484 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.693598226 podStartE2EDuration="1.693598226s" podCreationTimestamp="2025-07-12 00:17:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-12 00:17:25.686503152 +0000 UTC m=+1.198468807" watchObservedRunningTime="2025-07-12 00:17:25.693598226 +0000 UTC m=+1.205563921" Jul 12 00:17:25.693814 kubelet[2484]: I0712 00:17:25.693740 2484 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.6937359810000001 podStartE2EDuration="1.693735981s" podCreationTimestamp="2025-07-12 00:17:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-12 00:17:25.693437778 +0000 UTC m=+1.205403513" watchObservedRunningTime="2025-07-12 00:17:25.693735981 +0000 
UTC m=+1.205701676" Jul 12 00:17:26.643731 kubelet[2484]: E0712 00:17:26.643694 2484 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:17:26.643731 kubelet[2484]: E0712 00:17:26.643658 2484 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:17:26.644210 kubelet[2484]: E0712 00:17:26.643700 2484 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:17:27.646471 kubelet[2484]: E0712 00:17:27.646200 2484 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:17:27.646471 kubelet[2484]: E0712 00:17:27.646358 2484 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:17:27.646471 kubelet[2484]: E0712 00:17:27.646410 2484 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:17:27.649019 sudo[1614]: pam_unix(sudo:session): session closed for user root Jul 12 00:17:27.652389 sshd[1611]: pam_unix(sshd:session): session closed for user core Jul 12 00:17:27.656497 systemd-logind[1417]: Session 7 logged out. Waiting for processes to exit. Jul 12 00:17:27.656717 systemd[1]: sshd@6-10.0.0.81:22-10.0.0.1:50974.service: Deactivated successfully. Jul 12 00:17:27.659622 systemd[1]: session-7.scope: Deactivated successfully. 
Jul 12 00:17:27.660810 systemd[1]: session-7.scope: Consumed 8.866s CPU time, 156.8M memory peak, 0B memory swap peak. Jul 12 00:17:27.661519 systemd-logind[1417]: Removed session 7. Jul 12 00:17:28.648253 kubelet[2484]: E0712 00:17:28.648226 2484 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:17:29.122738 kubelet[2484]: E0712 00:17:29.122679 2484 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:17:29.288448 kubelet[2484]: I0712 00:17:29.288413 2484 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jul 12 00:17:29.289332 containerd[1434]: time="2025-07-12T00:17:29.289281257Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jul 12 00:17:29.289604 kubelet[2484]: I0712 00:17:29.289444 2484 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jul 12 00:17:29.650346 kubelet[2484]: E0712 00:17:29.650312 2484 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:17:30.022342 systemd[1]: Created slice kubepods-besteffort-pod2f50a58e_9ed5_4e2d_ac50_606eab35f545.slice - libcontainer container kubepods-besteffort-pod2f50a58e_9ed5_4e2d_ac50_606eab35f545.slice. 
Jul 12 00:17:30.035498 kubelet[2484]: I0712 00:17:30.034951 2484 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/19b20408-1012-434a-b121-a0c59391be23-xtables-lock\") pod \"cilium-5swtd\" (UID: \"19b20408-1012-434a-b121-a0c59391be23\") " pod="kube-system/cilium-5swtd" Jul 12 00:17:30.035498 kubelet[2484]: I0712 00:17:30.034993 2484 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/19b20408-1012-434a-b121-a0c59391be23-host-proc-sys-kernel\") pod \"cilium-5swtd\" (UID: \"19b20408-1012-434a-b121-a0c59391be23\") " pod="kube-system/cilium-5swtd" Jul 12 00:17:30.035498 kubelet[2484]: I0712 00:17:30.035012 2484 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/19b20408-1012-434a-b121-a0c59391be23-hubble-tls\") pod \"cilium-5swtd\" (UID: \"19b20408-1012-434a-b121-a0c59391be23\") " pod="kube-system/cilium-5swtd" Jul 12 00:17:30.035498 kubelet[2484]: I0712 00:17:30.035028 2484 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sbn7v\" (UniqueName: \"kubernetes.io/projected/19b20408-1012-434a-b121-a0c59391be23-kube-api-access-sbn7v\") pod \"cilium-5swtd\" (UID: \"19b20408-1012-434a-b121-a0c59391be23\") " pod="kube-system/cilium-5swtd" Jul 12 00:17:30.035498 kubelet[2484]: I0712 00:17:30.035047 2484 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2f50a58e-9ed5-4e2d-ac50-606eab35f545-xtables-lock\") pod \"kube-proxy-t5bmm\" (UID: \"2f50a58e-9ed5-4e2d-ac50-606eab35f545\") " pod="kube-system/kube-proxy-t5bmm" Jul 12 00:17:30.035749 kubelet[2484]: I0712 00:17:30.035062 2484 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2f50a58e-9ed5-4e2d-ac50-606eab35f545-lib-modules\") pod \"kube-proxy-t5bmm\" (UID: \"2f50a58e-9ed5-4e2d-ac50-606eab35f545\") " pod="kube-system/kube-proxy-t5bmm" Jul 12 00:17:30.035749 kubelet[2484]: I0712 00:17:30.035077 2484 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/19b20408-1012-434a-b121-a0c59391be23-hostproc\") pod \"cilium-5swtd\" (UID: \"19b20408-1012-434a-b121-a0c59391be23\") " pod="kube-system/cilium-5swtd" Jul 12 00:17:30.035749 kubelet[2484]: I0712 00:17:30.035104 2484 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/19b20408-1012-434a-b121-a0c59391be23-etc-cni-netd\") pod \"cilium-5swtd\" (UID: \"19b20408-1012-434a-b121-a0c59391be23\") " pod="kube-system/cilium-5swtd" Jul 12 00:17:30.035749 kubelet[2484]: I0712 00:17:30.035119 2484 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/19b20408-1012-434a-b121-a0c59391be23-cilium-run\") pod \"cilium-5swtd\" (UID: \"19b20408-1012-434a-b121-a0c59391be23\") " pod="kube-system/cilium-5swtd" Jul 12 00:17:30.035749 kubelet[2484]: I0712 00:17:30.035133 2484 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/19b20408-1012-434a-b121-a0c59391be23-bpf-maps\") pod \"cilium-5swtd\" (UID: \"19b20408-1012-434a-b121-a0c59391be23\") " pod="kube-system/cilium-5swtd" Jul 12 00:17:30.035749 kubelet[2484]: I0712 00:17:30.035146 2484 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: 
\"kubernetes.io/host-path/19b20408-1012-434a-b121-a0c59391be23-lib-modules\") pod \"cilium-5swtd\" (UID: \"19b20408-1012-434a-b121-a0c59391be23\") " pod="kube-system/cilium-5swtd" Jul 12 00:17:30.035886 kubelet[2484]: I0712 00:17:30.035159 2484 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/19b20408-1012-434a-b121-a0c59391be23-clustermesh-secrets\") pod \"cilium-5swtd\" (UID: \"19b20408-1012-434a-b121-a0c59391be23\") " pod="kube-system/cilium-5swtd" Jul 12 00:17:30.035886 kubelet[2484]: I0712 00:17:30.035176 2484 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/19b20408-1012-434a-b121-a0c59391be23-cilium-config-path\") pod \"cilium-5swtd\" (UID: \"19b20408-1012-434a-b121-a0c59391be23\") " pod="kube-system/cilium-5swtd" Jul 12 00:17:30.035886 kubelet[2484]: I0712 00:17:30.035190 2484 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/19b20408-1012-434a-b121-a0c59391be23-host-proc-sys-net\") pod \"cilium-5swtd\" (UID: \"19b20408-1012-434a-b121-a0c59391be23\") " pod="kube-system/cilium-5swtd" Jul 12 00:17:30.035886 kubelet[2484]: I0712 00:17:30.035206 2484 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/19b20408-1012-434a-b121-a0c59391be23-cilium-cgroup\") pod \"cilium-5swtd\" (UID: \"19b20408-1012-434a-b121-a0c59391be23\") " pod="kube-system/cilium-5swtd" Jul 12 00:17:30.035886 kubelet[2484]: I0712 00:17:30.035229 2484 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/19b20408-1012-434a-b121-a0c59391be23-cni-path\") pod \"cilium-5swtd\" (UID: 
\"19b20408-1012-434a-b121-a0c59391be23\") " pod="kube-system/cilium-5swtd" Jul 12 00:17:30.035886 kubelet[2484]: I0712 00:17:30.035246 2484 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/2f50a58e-9ed5-4e2d-ac50-606eab35f545-kube-proxy\") pod \"kube-proxy-t5bmm\" (UID: \"2f50a58e-9ed5-4e2d-ac50-606eab35f545\") " pod="kube-system/kube-proxy-t5bmm" Jul 12 00:17:30.036111 kubelet[2484]: I0712 00:17:30.035261 2484 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qvwmh\" (UniqueName: \"kubernetes.io/projected/2f50a58e-9ed5-4e2d-ac50-606eab35f545-kube-api-access-qvwmh\") pod \"kube-proxy-t5bmm\" (UID: \"2f50a58e-9ed5-4e2d-ac50-606eab35f545\") " pod="kube-system/kube-proxy-t5bmm" Jul 12 00:17:30.042957 systemd[1]: Created slice kubepods-burstable-pod19b20408_1012_434a_b121_a0c59391be23.slice - libcontainer container kubepods-burstable-pod19b20408_1012_434a_b121_a0c59391be23.slice. 
Jul 12 00:17:30.341523 kubelet[2484]: E0712 00:17:30.341480 2484 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:17:30.342419 containerd[1434]: time="2025-07-12T00:17:30.342323620Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-t5bmm,Uid:2f50a58e-9ed5-4e2d-ac50-606eab35f545,Namespace:kube-system,Attempt:0,}" Jul 12 00:17:30.345075 kubelet[2484]: E0712 00:17:30.344905 2484 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:17:30.345315 containerd[1434]: time="2025-07-12T00:17:30.345280158Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-5swtd,Uid:19b20408-1012-434a-b121-a0c59391be23,Namespace:kube-system,Attempt:0,}" Jul 12 00:17:30.385879 containerd[1434]: time="2025-07-12T00:17:30.385683572Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 12 00:17:30.388696 containerd[1434]: time="2025-07-12T00:17:30.385728551Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 12 00:17:30.388696 containerd[1434]: time="2025-07-12T00:17:30.388532947Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 12 00:17:30.388696 containerd[1434]: time="2025-07-12T00:17:30.388634268Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 12 00:17:30.392669 containerd[1434]: time="2025-07-12T00:17:30.392099016Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 12 00:17:30.392669 containerd[1434]: time="2025-07-12T00:17:30.392591179Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 12 00:17:30.392969 containerd[1434]: time="2025-07-12T00:17:30.392640560Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 12 00:17:30.392969 containerd[1434]: time="2025-07-12T00:17:30.392810550Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 12 00:17:30.419330 systemd[1]: Started cri-containerd-0044f170a57ab341cff9b2bd5191a3fb9f86c59fe413dc421e8f59af0a4a060e.scope - libcontainer container 0044f170a57ab341cff9b2bd5191a3fb9f86c59fe413dc421e8f59af0a4a060e. Jul 12 00:17:30.421376 systemd[1]: Started cri-containerd-e600e7cec7165f1439bc7cea42bd857ac3e161b2274d0be912c301feca6f3de7.scope - libcontainer container e600e7cec7165f1439bc7cea42bd857ac3e161b2274d0be912c301feca6f3de7. 
Jul 12 00:17:30.447724 containerd[1434]: time="2025-07-12T00:17:30.447680606Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-t5bmm,Uid:2f50a58e-9ed5-4e2d-ac50-606eab35f545,Namespace:kube-system,Attempt:0,} returns sandbox id \"0044f170a57ab341cff9b2bd5191a3fb9f86c59fe413dc421e8f59af0a4a060e\"" Jul 12 00:17:30.447844 containerd[1434]: time="2025-07-12T00:17:30.447768883Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-5swtd,Uid:19b20408-1012-434a-b121-a0c59391be23,Namespace:kube-system,Attempt:0,} returns sandbox id \"e600e7cec7165f1439bc7cea42bd857ac3e161b2274d0be912c301feca6f3de7\"" Jul 12 00:17:30.454141 kubelet[2484]: E0712 00:17:30.454048 2484 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:17:30.454634 kubelet[2484]: E0712 00:17:30.454541 2484 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:17:30.461518 containerd[1434]: time="2025-07-12T00:17:30.461471171Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Jul 12 00:17:30.465529 containerd[1434]: time="2025-07-12T00:17:30.465474741Z" level=info msg="CreateContainer within sandbox \"0044f170a57ab341cff9b2bd5191a3fb9f86c59fe413dc421e8f59af0a4a060e\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jul 12 00:17:30.489710 containerd[1434]: time="2025-07-12T00:17:30.489656508Z" level=info msg="CreateContainer within sandbox \"0044f170a57ab341cff9b2bd5191a3fb9f86c59fe413dc421e8f59af0a4a060e\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"e1157c9e737fb8dfcf6dc34f37bac4cf2004d79e14bfd22138ccea21d2d96a65\"" Jul 12 00:17:30.490799 containerd[1434]: time="2025-07-12T00:17:30.490758563Z" 
level=info msg="StartContainer for \"e1157c9e737fb8dfcf6dc34f37bac4cf2004d79e14bfd22138ccea21d2d96a65\"" Jul 12 00:17:30.491961 systemd[1]: Created slice kubepods-besteffort-pod0ecf27f2_b974_47ef_8ac0_99087ea46031.slice - libcontainer container kubepods-besteffort-pod0ecf27f2_b974_47ef_8ac0_99087ea46031.slice. Jul 12 00:17:30.526270 systemd[1]: Started cri-containerd-e1157c9e737fb8dfcf6dc34f37bac4cf2004d79e14bfd22138ccea21d2d96a65.scope - libcontainer container e1157c9e737fb8dfcf6dc34f37bac4cf2004d79e14bfd22138ccea21d2d96a65. Jul 12 00:17:30.539609 kubelet[2484]: I0712 00:17:30.539534 2484 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wdmfl\" (UniqueName: \"kubernetes.io/projected/0ecf27f2-b974-47ef-8ac0-99087ea46031-kube-api-access-wdmfl\") pod \"cilium-operator-6c4d7847fc-j5kvc\" (UID: \"0ecf27f2-b974-47ef-8ac0-99087ea46031\") " pod="kube-system/cilium-operator-6c4d7847fc-j5kvc" Jul 12 00:17:30.539825 kubelet[2484]: I0712 00:17:30.539767 2484 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/0ecf27f2-b974-47ef-8ac0-99087ea46031-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-j5kvc\" (UID: \"0ecf27f2-b974-47ef-8ac0-99087ea46031\") " pod="kube-system/cilium-operator-6c4d7847fc-j5kvc" Jul 12 00:17:30.555212 containerd[1434]: time="2025-07-12T00:17:30.555067870Z" level=info msg="StartContainer for \"e1157c9e737fb8dfcf6dc34f37bac4cf2004d79e14bfd22138ccea21d2d96a65\" returns successfully" Jul 12 00:17:30.655137 kubelet[2484]: E0712 00:17:30.653676 2484 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:17:30.655137 kubelet[2484]: E0712 00:17:30.654201 2484 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been 
omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:17:30.669496 kubelet[2484]: I0712 00:17:30.669439 2484 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-t5bmm" podStartSLOduration=1.669423846 podStartE2EDuration="1.669423846s" podCreationTimestamp="2025-07-12 00:17:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-12 00:17:30.668527757 +0000 UTC m=+6.180493452" watchObservedRunningTime="2025-07-12 00:17:30.669423846 +0000 UTC m=+6.181389501" Jul 12 00:17:30.794423 kubelet[2484]: E0712 00:17:30.794386 2484 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:17:30.796010 containerd[1434]: time="2025-07-12T00:17:30.795975129Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-j5kvc,Uid:0ecf27f2-b974-47ef-8ac0-99087ea46031,Namespace:kube-system,Attempt:0,}" Jul 12 00:17:30.831039 containerd[1434]: time="2025-07-12T00:17:30.830845222Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 12 00:17:30.831039 containerd[1434]: time="2025-07-12T00:17:30.830899964Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 12 00:17:30.831039 containerd[1434]: time="2025-07-12T00:17:30.830911089Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 12 00:17:30.831039 containerd[1434]: time="2025-07-12T00:17:30.830997044Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 12 00:17:30.851261 systemd[1]: Started cri-containerd-b20922aa8c43468872fa6f4484bdeb748fe08b6b300bfb7475e7120db578eccb.scope - libcontainer container b20922aa8c43468872fa6f4484bdeb748fe08b6b300bfb7475e7120db578eccb. Jul 12 00:17:30.877325 containerd[1434]: time="2025-07-12T00:17:30.877277240Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-j5kvc,Uid:0ecf27f2-b974-47ef-8ac0-99087ea46031,Namespace:kube-system,Attempt:0,} returns sandbox id \"b20922aa8c43468872fa6f4484bdeb748fe08b6b300bfb7475e7120db578eccb\"" Jul 12 00:17:30.877967 kubelet[2484]: E0712 00:17:30.877922 2484 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:17:35.774560 kubelet[2484]: E0712 00:17:35.774480 2484 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:17:38.067233 update_engine[1423]: I20250712 00:17:38.067148 1423 update_attempter.cc:509] Updating boot flags... 
Jul 12 00:17:38.149162 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (2868) Jul 12 00:17:38.168135 kubelet[2484]: E0712 00:17:38.167866 2484 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:17:38.208109 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (2869) Jul 12 00:17:38.670858 kubelet[2484]: E0712 00:17:38.670828 2484 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:17:40.843656 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount892672692.mount: Deactivated successfully. Jul 12 00:17:43.404295 containerd[1434]: time="2025-07-12T00:17:43.403660326Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:17:43.404295 containerd[1434]: time="2025-07-12T00:17:43.404256813Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157646710" Jul 12 00:17:43.405032 containerd[1434]: time="2025-07-12T00:17:43.404990449Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:17:43.414973 containerd[1434]: time="2025-07-12T00:17:43.414923481Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest 
\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 12.953402773s" Jul 12 00:17:43.414973 containerd[1434]: time="2025-07-12T00:17:43.414965690Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" Jul 12 00:17:43.415980 containerd[1434]: time="2025-07-12T00:17:43.415942378Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Jul 12 00:17:43.421217 containerd[1434]: time="2025-07-12T00:17:43.421170809Z" level=info msg="CreateContainer within sandbox \"e600e7cec7165f1439bc7cea42bd857ac3e161b2274d0be912c301feca6f3de7\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jul 12 00:17:43.441301 containerd[1434]: time="2025-07-12T00:17:43.441244077Z" level=info msg="CreateContainer within sandbox \"e600e7cec7165f1439bc7cea42bd857ac3e161b2274d0be912c301feca6f3de7\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"b5c77ace7465f0397f861637281286d895317fa808f96b8c5420cf272bd3daf4\"" Jul 12 00:17:43.442624 containerd[1434]: time="2025-07-12T00:17:43.441785593Z" level=info msg="StartContainer for \"b5c77ace7465f0397f861637281286d895317fa808f96b8c5420cf272bd3daf4\"" Jul 12 00:17:43.477293 systemd[1]: Started cri-containerd-b5c77ace7465f0397f861637281286d895317fa808f96b8c5420cf272bd3daf4.scope - libcontainer container b5c77ace7465f0397f861637281286d895317fa808f96b8c5420cf272bd3daf4. 
Jul 12 00:17:43.501904 containerd[1434]: time="2025-07-12T00:17:43.501849363Z" level=info msg="StartContainer for \"b5c77ace7465f0397f861637281286d895317fa808f96b8c5420cf272bd3daf4\" returns successfully" Jul 12 00:17:43.544045 systemd[1]: cri-containerd-b5c77ace7465f0397f861637281286d895317fa808f96b8c5420cf272bd3daf4.scope: Deactivated successfully. Jul 12 00:17:43.695915 kubelet[2484]: E0712 00:17:43.695799 2484 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:17:43.723346 containerd[1434]: time="2025-07-12T00:17:43.708726629Z" level=info msg="shim disconnected" id=b5c77ace7465f0397f861637281286d895317fa808f96b8c5420cf272bd3daf4 namespace=k8s.io Jul 12 00:17:43.723346 containerd[1434]: time="2025-07-12T00:17:43.723332935Z" level=warning msg="cleaning up after shim disconnected" id=b5c77ace7465f0397f861637281286d895317fa808f96b8c5420cf272bd3daf4 namespace=k8s.io Jul 12 00:17:43.723346 containerd[1434]: time="2025-07-12T00:17:43.723351299Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 12 00:17:44.436524 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b5c77ace7465f0397f861637281286d895317fa808f96b8c5420cf272bd3daf4-rootfs.mount: Deactivated successfully. 
Jul 12 00:17:44.698481 kubelet[2484]: E0712 00:17:44.698264 2484 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:17:44.707701 containerd[1434]: time="2025-07-12T00:17:44.707657237Z" level=info msg="CreateContainer within sandbox \"e600e7cec7165f1439bc7cea42bd857ac3e161b2274d0be912c301feca6f3de7\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jul 12 00:17:44.733996 containerd[1434]: time="2025-07-12T00:17:44.733944016Z" level=info msg="CreateContainer within sandbox \"e600e7cec7165f1439bc7cea42bd857ac3e161b2274d0be912c301feca6f3de7\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"45bf8f159672e03c4d83654cc40991611813dae76db35780db7c72f41ea8a8de\"" Jul 12 00:17:44.734585 containerd[1434]: time="2025-07-12T00:17:44.734559621Z" level=info msg="StartContainer for \"45bf8f159672e03c4d83654cc40991611813dae76db35780db7c72f41ea8a8de\"" Jul 12 00:17:44.772390 systemd[1]: Started cri-containerd-45bf8f159672e03c4d83654cc40991611813dae76db35780db7c72f41ea8a8de.scope - libcontainer container 45bf8f159672e03c4d83654cc40991611813dae76db35780db7c72f41ea8a8de. Jul 12 00:17:44.839286 containerd[1434]: time="2025-07-12T00:17:44.837240718Z" level=info msg="StartContainer for \"45bf8f159672e03c4d83654cc40991611813dae76db35780db7c72f41ea8a8de\" returns successfully" Jul 12 00:17:44.844990 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jul 12 00:17:44.845240 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jul 12 00:17:44.845308 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Jul 12 00:17:44.855717 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jul 12 00:17:44.856002 systemd[1]: cri-containerd-45bf8f159672e03c4d83654cc40991611813dae76db35780db7c72f41ea8a8de.scope: Deactivated successfully. 
Jul 12 00:17:44.905231 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jul 12 00:17:44.915263 containerd[1434]: time="2025-07-12T00:17:44.915184031Z" level=info msg="shim disconnected" id=45bf8f159672e03c4d83654cc40991611813dae76db35780db7c72f41ea8a8de namespace=k8s.io Jul 12 00:17:44.915263 containerd[1434]: time="2025-07-12T00:17:44.915254365Z" level=warning msg="cleaning up after shim disconnected" id=45bf8f159672e03c4d83654cc40991611813dae76db35780db7c72f41ea8a8de namespace=k8s.io Jul 12 00:17:44.915263 containerd[1434]: time="2025-07-12T00:17:44.915263687Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 12 00:17:44.937055 containerd[1434]: time="2025-07-12T00:17:44.937000102Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:17:44.937527 containerd[1434]: time="2025-07-12T00:17:44.937482880Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17135306" Jul 12 00:17:44.938281 containerd[1434]: time="2025-07-12T00:17:44.938253477Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:17:44.939752 containerd[1434]: time="2025-07-12T00:17:44.939709412Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 1.523727626s" Jul 12 00:17:44.939791 containerd[1434]: 
time="2025-07-12T00:17:44.939754061Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" Jul 12 00:17:44.944230 containerd[1434]: time="2025-07-12T00:17:44.944056615Z" level=info msg="CreateContainer within sandbox \"b20922aa8c43468872fa6f4484bdeb748fe08b6b300bfb7475e7120db578eccb\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Jul 12 00:17:44.956209 containerd[1434]: time="2025-07-12T00:17:44.956107623Z" level=info msg="CreateContainer within sandbox \"b20922aa8c43468872fa6f4484bdeb748fe08b6b300bfb7475e7120db578eccb\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"b99a30e831a3fc1c1e7faadaa3dbbc0f54edf95bd4fe2df39a42f2a4dcc59147\"" Jul 12 00:17:44.957696 containerd[1434]: time="2025-07-12T00:17:44.956800084Z" level=info msg="StartContainer for \"b99a30e831a3fc1c1e7faadaa3dbbc0f54edf95bd4fe2df39a42f2a4dcc59147\"" Jul 12 00:17:44.987304 systemd[1]: Started cri-containerd-b99a30e831a3fc1c1e7faadaa3dbbc0f54edf95bd4fe2df39a42f2a4dcc59147.scope - libcontainer container b99a30e831a3fc1c1e7faadaa3dbbc0f54edf95bd4fe2df39a42f2a4dcc59147. Jul 12 00:17:45.014830 containerd[1434]: time="2025-07-12T00:17:45.013802070Z" level=info msg="StartContainer for \"b99a30e831a3fc1c1e7faadaa3dbbc0f54edf95bd4fe2df39a42f2a4dcc59147\" returns successfully" Jul 12 00:17:45.437711 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-45bf8f159672e03c4d83654cc40991611813dae76db35780db7c72f41ea8a8de-rootfs.mount: Deactivated successfully. 
Jul 12 00:17:45.703501 kubelet[2484]: E0712 00:17:45.701448 2484 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:17:45.708367 kubelet[2484]: E0712 00:17:45.707455 2484 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:17:45.720699 containerd[1434]: time="2025-07-12T00:17:45.720343858Z" level=info msg="CreateContainer within sandbox \"e600e7cec7165f1439bc7cea42bd857ac3e161b2274d0be912c301feca6f3de7\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jul 12 00:17:45.747029 containerd[1434]: time="2025-07-12T00:17:45.746749267Z" level=info msg="CreateContainer within sandbox \"e600e7cec7165f1439bc7cea42bd857ac3e161b2274d0be912c301feca6f3de7\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"93696ad5beee87ddc9b390d4cbd368f16a7ba06e22c486c17da0c515c35e6d74\"" Jul 12 00:17:45.747238 containerd[1434]: time="2025-07-12T00:17:45.747200114Z" level=info msg="StartContainer for \"93696ad5beee87ddc9b390d4cbd368f16a7ba06e22c486c17da0c515c35e6d74\"" Jul 12 00:17:45.766110 kubelet[2484]: I0712 00:17:45.764067 2484 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-j5kvc" podStartSLOduration=1.702323084 podStartE2EDuration="15.764049507s" podCreationTimestamp="2025-07-12 00:17:30 +0000 UTC" firstStartedPulling="2025-07-12 00:17:30.878833202 +0000 UTC m=+6.390798897" lastFinishedPulling="2025-07-12 00:17:44.940559625 +0000 UTC m=+20.452525320" observedRunningTime="2025-07-12 00:17:45.720526494 +0000 UTC m=+21.232492189" watchObservedRunningTime="2025-07-12 00:17:45.764049507 +0000 UTC m=+21.276015202" Jul 12 00:17:45.809325 systemd[1]: Started 
cri-containerd-93696ad5beee87ddc9b390d4cbd368f16a7ba06e22c486c17da0c515c35e6d74.scope - libcontainer container 93696ad5beee87ddc9b390d4cbd368f16a7ba06e22c486c17da0c515c35e6d74. Jul 12 00:17:45.837220 containerd[1434]: time="2025-07-12T00:17:45.837067329Z" level=info msg="StartContainer for \"93696ad5beee87ddc9b390d4cbd368f16a7ba06e22c486c17da0c515c35e6d74\" returns successfully" Jul 12 00:17:45.863349 systemd[1]: cri-containerd-93696ad5beee87ddc9b390d4cbd368f16a7ba06e22c486c17da0c515c35e6d74.scope: Deactivated successfully. Jul 12 00:17:45.886227 containerd[1434]: time="2025-07-12T00:17:45.886155183Z" level=info msg="shim disconnected" id=93696ad5beee87ddc9b390d4cbd368f16a7ba06e22c486c17da0c515c35e6d74 namespace=k8s.io Jul 12 00:17:45.886227 containerd[1434]: time="2025-07-12T00:17:45.886221476Z" level=warning msg="cleaning up after shim disconnected" id=93696ad5beee87ddc9b390d4cbd368f16a7ba06e22c486c17da0c515c35e6d74 namespace=k8s.io Jul 12 00:17:45.886557 containerd[1434]: time="2025-07-12T00:17:45.886230997Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 12 00:17:46.444582 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-93696ad5beee87ddc9b390d4cbd368f16a7ba06e22c486c17da0c515c35e6d74-rootfs.mount: Deactivated successfully. 
Jul 12 00:17:46.719028 kubelet[2484]: E0712 00:17:46.718890 2484 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:17:46.719422 kubelet[2484]: E0712 00:17:46.719103 2484 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:17:46.750138 containerd[1434]: time="2025-07-12T00:17:46.750071814Z" level=info msg="CreateContainer within sandbox \"e600e7cec7165f1439bc7cea42bd857ac3e161b2274d0be912c301feca6f3de7\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jul 12 00:17:46.766028 containerd[1434]: time="2025-07-12T00:17:46.765393422Z" level=info msg="CreateContainer within sandbox \"e600e7cec7165f1439bc7cea42bd857ac3e161b2274d0be912c301feca6f3de7\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"e51c0a8bff14a28fd37c4c230cffc2322f2be84bfdc1186982376ad7b1406c0d\"" Jul 12 00:17:46.767700 containerd[1434]: time="2025-07-12T00:17:46.767310018Z" level=info msg="StartContainer for \"e51c0a8bff14a28fd37c4c230cffc2322f2be84bfdc1186982376ad7b1406c0d\"" Jul 12 00:17:46.801290 systemd[1]: Started cri-containerd-e51c0a8bff14a28fd37c4c230cffc2322f2be84bfdc1186982376ad7b1406c0d.scope - libcontainer container e51c0a8bff14a28fd37c4c230cffc2322f2be84bfdc1186982376ad7b1406c0d. Jul 12 00:17:46.821067 systemd[1]: cri-containerd-e51c0a8bff14a28fd37c4c230cffc2322f2be84bfdc1186982376ad7b1406c0d.scope: Deactivated successfully. 
Jul 12 00:17:46.822993 containerd[1434]: time="2025-07-12T00:17:46.822390937Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod19b20408_1012_434a_b121_a0c59391be23.slice/cri-containerd-e51c0a8bff14a28fd37c4c230cffc2322f2be84bfdc1186982376ad7b1406c0d.scope/memory.events\": no such file or directory" Jul 12 00:17:46.825322 containerd[1434]: time="2025-07-12T00:17:46.825268951Z" level=info msg="StartContainer for \"e51c0a8bff14a28fd37c4c230cffc2322f2be84bfdc1186982376ad7b1406c0d\" returns successfully" Jul 12 00:17:46.856928 containerd[1434]: time="2025-07-12T00:17:46.856858263Z" level=info msg="shim disconnected" id=e51c0a8bff14a28fd37c4c230cffc2322f2be84bfdc1186982376ad7b1406c0d namespace=k8s.io Jul 12 00:17:46.856928 containerd[1434]: time="2025-07-12T00:17:46.856922075Z" level=warning msg="cleaning up after shim disconnected" id=e51c0a8bff14a28fd37c4c230cffc2322f2be84bfdc1186982376ad7b1406c0d namespace=k8s.io Jul 12 00:17:46.856928 containerd[1434]: time="2025-07-12T00:17:46.856931677Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 12 00:17:47.440646 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e51c0a8bff14a28fd37c4c230cffc2322f2be84bfdc1186982376ad7b1406c0d-rootfs.mount: Deactivated successfully. 
Jul 12 00:17:47.723591 kubelet[2484]: E0712 00:17:47.723255 2484 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:17:47.732123 containerd[1434]: time="2025-07-12T00:17:47.732011662Z" level=info msg="CreateContainer within sandbox \"e600e7cec7165f1439bc7cea42bd857ac3e161b2274d0be912c301feca6f3de7\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jul 12 00:17:47.785638 containerd[1434]: time="2025-07-12T00:17:47.785568278Z" level=info msg="CreateContainer within sandbox \"e600e7cec7165f1439bc7cea42bd857ac3e161b2274d0be912c301feca6f3de7\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"c5027e16e99c0edc8514b72c12719c69c02761061b9080540c493474e7822a6f\"" Jul 12 00:17:47.786474 containerd[1434]: time="2025-07-12T00:17:47.786424150Z" level=info msg="StartContainer for \"c5027e16e99c0edc8514b72c12719c69c02761061b9080540c493474e7822a6f\"" Jul 12 00:17:47.811406 systemd[1]: Started cri-containerd-c5027e16e99c0edc8514b72c12719c69c02761061b9080540c493474e7822a6f.scope - libcontainer container c5027e16e99c0edc8514b72c12719c69c02761061b9080540c493474e7822a6f. Jul 12 00:17:47.842618 containerd[1434]: time="2025-07-12T00:17:47.842561546Z" level=info msg="StartContainer for \"c5027e16e99c0edc8514b72c12719c69c02761061b9080540c493474e7822a6f\" returns successfully" Jul 12 00:17:48.016073 kubelet[2484]: I0712 00:17:48.015506 2484 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Jul 12 00:17:48.098663 systemd[1]: Created slice kubepods-burstable-pode5033730_08f8_471d_9877_660bf2e69899.slice - libcontainer container kubepods-burstable-pode5033730_08f8_471d_9877_660bf2e69899.slice. Jul 12 00:17:48.105977 systemd[1]: Created slice kubepods-burstable-pod9d8cc96b_3cf8_4e23_8cb0_52af2874f625.slice - libcontainer container kubepods-burstable-pod9d8cc96b_3cf8_4e23_8cb0_52af2874f625.slice. 
Jul 12 00:17:48.194198 kubelet[2484]: I0712 00:17:48.194153 2484 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f4jpq\" (UniqueName: \"kubernetes.io/projected/9d8cc96b-3cf8-4e23-8cb0-52af2874f625-kube-api-access-f4jpq\") pod \"coredns-674b8bbfcf-kfgxt\" (UID: \"9d8cc96b-3cf8-4e23-8cb0-52af2874f625\") " pod="kube-system/coredns-674b8bbfcf-kfgxt" Jul 12 00:17:48.194198 kubelet[2484]: I0712 00:17:48.194203 2484 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-btn9v\" (UniqueName: \"kubernetes.io/projected/e5033730-08f8-471d-9877-660bf2e69899-kube-api-access-btn9v\") pod \"coredns-674b8bbfcf-8sknb\" (UID: \"e5033730-08f8-471d-9877-660bf2e69899\") " pod="kube-system/coredns-674b8bbfcf-8sknb" Jul 12 00:17:48.194403 kubelet[2484]: I0712 00:17:48.194225 2484 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9d8cc96b-3cf8-4e23-8cb0-52af2874f625-config-volume\") pod \"coredns-674b8bbfcf-kfgxt\" (UID: \"9d8cc96b-3cf8-4e23-8cb0-52af2874f625\") " pod="kube-system/coredns-674b8bbfcf-kfgxt" Jul 12 00:17:48.194403 kubelet[2484]: I0712 00:17:48.194243 2484 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e5033730-08f8-471d-9877-660bf2e69899-config-volume\") pod \"coredns-674b8bbfcf-8sknb\" (UID: \"e5033730-08f8-471d-9877-660bf2e69899\") " pod="kube-system/coredns-674b8bbfcf-8sknb" Jul 12 00:17:48.403269 kubelet[2484]: E0712 00:17:48.403227 2484 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:17:48.404639 containerd[1434]: time="2025-07-12T00:17:48.404592620Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-674b8bbfcf-8sknb,Uid:e5033730-08f8-471d-9877-660bf2e69899,Namespace:kube-system,Attempt:0,}" Jul 12 00:17:48.411183 kubelet[2484]: E0712 00:17:48.411149 2484 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:17:48.413058 containerd[1434]: time="2025-07-12T00:17:48.411762524Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-kfgxt,Uid:9d8cc96b-3cf8-4e23-8cb0-52af2874f625,Namespace:kube-system,Attempt:0,}" Jul 12 00:17:48.729045 kubelet[2484]: E0712 00:17:48.727818 2484 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:17:49.730116 kubelet[2484]: E0712 00:17:49.729856 2484 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:17:50.223050 systemd-networkd[1373]: cilium_host: Link UP Jul 12 00:17:50.223750 systemd-networkd[1373]: cilium_net: Link UP Jul 12 00:17:50.223756 systemd-networkd[1373]: cilium_net: Gained carrier Jul 12 00:17:50.223944 systemd-networkd[1373]: cilium_host: Gained carrier Jul 12 00:17:50.226239 systemd-networkd[1373]: cilium_host: Gained IPv6LL Jul 12 00:17:50.327192 systemd-networkd[1373]: cilium_vxlan: Link UP Jul 12 00:17:50.327198 systemd-networkd[1373]: cilium_vxlan: Gained carrier Jul 12 00:17:50.342260 systemd-networkd[1373]: cilium_net: Gained IPv6LL Jul 12 00:17:50.685124 kernel: NET: Registered PF_ALG protocol family Jul 12 00:17:50.732026 kubelet[2484]: E0712 00:17:50.731901 2484 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:17:51.288533 
systemd-networkd[1373]: lxc_health: Link UP
Jul 12 00:17:51.293256 systemd-networkd[1373]: lxc_health: Gained carrier
Jul 12 00:17:51.602685 systemd-networkd[1373]: lxc748738e48b3f: Link UP
Jul 12 00:17:51.609140 kernel: eth0: renamed from tmp67632
Jul 12 00:17:51.621275 systemd-networkd[1373]: lxc84155beb4a92: Link UP
Jul 12 00:17:51.630145 kernel: eth0: renamed from tmpb052b
Jul 12 00:17:51.639893 systemd-networkd[1373]: lxc748738e48b3f: Gained carrier
Jul 12 00:17:51.640936 systemd-networkd[1373]: lxc84155beb4a92: Gained carrier
Jul 12 00:17:52.294835 systemd-networkd[1373]: cilium_vxlan: Gained IPv6LL
Jul 12 00:17:52.357042 kubelet[2484]: E0712 00:17:52.356472 2484 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 12 00:17:52.375178 kubelet[2484]: I0712 00:17:52.374632 2484 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-5swtd" podStartSLOduration=10.419599224 podStartE2EDuration="23.3746172s" podCreationTimestamp="2025-07-12 00:17:29 +0000 UTC" firstStartedPulling="2025-07-12 00:17:30.460803375 +0000 UTC m=+5.972769071" lastFinishedPulling="2025-07-12 00:17:43.415821352 +0000 UTC m=+18.927787047" observedRunningTime="2025-07-12 00:17:48.744530615 +0000 UTC m=+24.256496310" watchObservedRunningTime="2025-07-12 00:17:52.3746172 +0000 UTC m=+27.886582895"
Jul 12 00:17:52.552603 systemd-networkd[1373]: lxc_health: Gained IPv6LL
Jul 12 00:17:53.254377 systemd-networkd[1373]: lxc84155beb4a92: Gained IPv6LL
Jul 12 00:17:53.318563 systemd-networkd[1373]: lxc748738e48b3f: Gained IPv6LL
Jul 12 00:17:54.585601 kubelet[2484]: I0712 00:17:54.585318 2484 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jul 12 00:17:54.585963 kubelet[2484]: E0712 00:17:54.585762 2484 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 12 00:17:54.739646 kubelet[2484]: E0712 00:17:54.739605 2484 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 12 00:17:55.432366 systemd[1]: Started sshd@7-10.0.0.81:22-10.0.0.1:44058.service - OpenSSH per-connection server daemon (10.0.0.1:44058).
Jul 12 00:17:55.454072 containerd[1434]: time="2025-07-12T00:17:55.453772947Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 12 00:17:55.454072 containerd[1434]: time="2025-07-12T00:17:55.453832955Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 12 00:17:55.454072 containerd[1434]: time="2025-07-12T00:17:55.453852478Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 12 00:17:55.454072 containerd[1434]: time="2025-07-12T00:17:55.453930608Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 12 00:17:55.454072 containerd[1434]: time="2025-07-12T00:17:55.453654732Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 12 00:17:55.454072 containerd[1434]: time="2025-07-12T00:17:55.453708139Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 12 00:17:55.454072 containerd[1434]: time="2025-07-12T00:17:55.453725021Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 12 00:17:55.454072 containerd[1434]: time="2025-07-12T00:17:55.453808792Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 12 00:17:55.480290 systemd[1]: Started cri-containerd-676321814c1777e504366dfe268e09b33408941f7845b1bc06193cdbb2081b5a.scope - libcontainer container 676321814c1777e504366dfe268e09b33408941f7845b1bc06193cdbb2081b5a.
Jul 12 00:17:55.481480 systemd[1]: Started cri-containerd-b052b69e6954cfa55cdb354ea9b7297a0ad9d10762516cab44f0bdbade969364.scope - libcontainer container b052b69e6954cfa55cdb354ea9b7297a0ad9d10762516cab44f0bdbade969364.
Jul 12 00:17:55.488164 sshd[3738]: Accepted publickey for core from 10.0.0.1 port 44058 ssh2: RSA SHA256:OQQn8rJodojt66LnxxVdgk0ZA2OqvboSVT5RIqmX+EU
Jul 12 00:17:55.489318 sshd[3738]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 12 00:17:55.495295 systemd-resolved[1309]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Jul 12 00:17:55.495730 systemd-logind[1417]: New session 8 of user core.
Jul 12 00:17:55.498642 systemd-resolved[1309]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Jul 12 00:17:55.504275 systemd[1]: Started session-8.scope - Session 8 of User core.
Jul 12 00:17:55.519894 containerd[1434]: time="2025-07-12T00:17:55.519853946Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-8sknb,Uid:e5033730-08f8-471d-9877-660bf2e69899,Namespace:kube-system,Attempt:0,} returns sandbox id \"676321814c1777e504366dfe268e09b33408941f7845b1bc06193cdbb2081b5a\""
Jul 12 00:17:55.520850 containerd[1434]: time="2025-07-12T00:17:55.520823913Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-kfgxt,Uid:9d8cc96b-3cf8-4e23-8cb0-52af2874f625,Namespace:kube-system,Attempt:0,} returns sandbox id \"b052b69e6954cfa55cdb354ea9b7297a0ad9d10762516cab44f0bdbade969364\""
Jul 12 00:17:55.521406 kubelet[2484]: E0712 00:17:55.521377 2484 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 12 00:17:55.522949 kubelet[2484]: E0712 00:17:55.522922 2484 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 12 00:17:55.530128 containerd[1434]: time="2025-07-12T00:17:55.530052439Z" level=info msg="CreateContainer within sandbox \"b052b69e6954cfa55cdb354ea9b7297a0ad9d10762516cab44f0bdbade969364\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Jul 12 00:17:55.531193 containerd[1434]: time="2025-07-12T00:17:55.530962638Z" level=info msg="CreateContainer within sandbox \"676321814c1777e504366dfe268e09b33408941f7845b1bc06193cdbb2081b5a\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Jul 12 00:17:55.548577 containerd[1434]: time="2025-07-12T00:17:55.548523774Z" level=info msg="CreateContainer within sandbox \"b052b69e6954cfa55cdb354ea9b7297a0ad9d10762516cab44f0bdbade969364\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"f8338878e0f2b178d17a0b5fa3783bcc7bfbf38bef44f64d24bced1e5a3363f7\""
Jul 12 00:17:55.549474 containerd[1434]: time="2025-07-12T00:17:55.549437174Z" level=info msg="StartContainer for \"f8338878e0f2b178d17a0b5fa3783bcc7bfbf38bef44f64d24bced1e5a3363f7\""
Jul 12 00:17:55.550838 containerd[1434]: time="2025-07-12T00:17:55.550804912Z" level=info msg="CreateContainer within sandbox \"676321814c1777e504366dfe268e09b33408941f7845b1bc06193cdbb2081b5a\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"dff41d09e518370f1b35de227546c55fbe22df0a9893e5fdeb4eadf6f72bb94e\""
Jul 12 00:17:55.551408 containerd[1434]: time="2025-07-12T00:17:55.551272134Z" level=info msg="StartContainer for \"dff41d09e518370f1b35de227546c55fbe22df0a9893e5fdeb4eadf6f72bb94e\""
Jul 12 00:17:55.582314 systemd[1]: Started cri-containerd-f8338878e0f2b178d17a0b5fa3783bcc7bfbf38bef44f64d24bced1e5a3363f7.scope - libcontainer container f8338878e0f2b178d17a0b5fa3783bcc7bfbf38bef44f64d24bced1e5a3363f7.
Jul 12 00:17:55.585783 systemd[1]: Started cri-containerd-dff41d09e518370f1b35de227546c55fbe22df0a9893e5fdeb4eadf6f72bb94e.scope - libcontainer container dff41d09e518370f1b35de227546c55fbe22df0a9893e5fdeb4eadf6f72bb94e.
Jul 12 00:17:55.621974 containerd[1434]: time="2025-07-12T00:17:55.621907248Z" level=info msg="StartContainer for \"f8338878e0f2b178d17a0b5fa3783bcc7bfbf38bef44f64d24bced1e5a3363f7\" returns successfully"
Jul 12 00:17:55.627547 containerd[1434]: time="2025-07-12T00:17:55.627505460Z" level=info msg="StartContainer for \"dff41d09e518370f1b35de227546c55fbe22df0a9893e5fdeb4eadf6f72bb94e\" returns successfully"
Jul 12 00:17:55.690399 sshd[3738]: pam_unix(sshd:session): session closed for user core
Jul 12 00:17:55.696536 systemd[1]: sshd@7-10.0.0.81:22-10.0.0.1:44058.service: Deactivated successfully.
Jul 12 00:17:55.698896 systemd[1]: session-8.scope: Deactivated successfully.
Jul 12 00:17:55.704846 systemd-logind[1417]: Session 8 logged out. Waiting for processes to exit.
Jul 12 00:17:55.707145 systemd-logind[1417]: Removed session 8.
Jul 12 00:17:55.742698 kubelet[2484]: E0712 00:17:55.742665 2484 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 12 00:17:55.759187 kubelet[2484]: E0712 00:17:55.756625 2484 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 12 00:17:55.779105 kubelet[2484]: I0712 00:17:55.778649 2484 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-kfgxt" podStartSLOduration=25.778633737 podStartE2EDuration="25.778633737s" podCreationTimestamp="2025-07-12 00:17:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-12 00:17:55.778067382 +0000 UTC m=+31.290033077" watchObservedRunningTime="2025-07-12 00:17:55.778633737 +0000 UTC m=+31.290599432"
Jul 12 00:17:55.795170 kubelet[2484]: I0712 00:17:55.795071 2484 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-8sknb" podStartSLOduration=25.795053883 podStartE2EDuration="25.795053883s" podCreationTimestamp="2025-07-12 00:17:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-12 00:17:55.794029789 +0000 UTC m=+31.305995484" watchObservedRunningTime="2025-07-12 00:17:55.795053883 +0000 UTC m=+31.307019538"
Jul 12 00:17:56.754810 kubelet[2484]: E0712 00:17:56.754411 2484 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 12 00:17:56.755748 kubelet[2484]: E0712 00:17:56.755708 2484 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 12 00:17:57.756329 kubelet[2484]: E0712 00:17:57.756288 2484 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 12 00:17:57.766191 kubelet[2484]: E0712 00:17:57.766150 2484 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 12 00:18:00.708149 systemd[1]: Started sshd@8-10.0.0.81:22-10.0.0.1:44068.service - OpenSSH per-connection server daemon (10.0.0.1:44068).
Jul 12 00:18:00.752277 sshd[3917]: Accepted publickey for core from 10.0.0.1 port 44068 ssh2: RSA SHA256:OQQn8rJodojt66LnxxVdgk0ZA2OqvboSVT5RIqmX+EU
Jul 12 00:18:00.754328 sshd[3917]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 12 00:18:00.759785 systemd-logind[1417]: New session 9 of user core.
Jul 12 00:18:00.767285 systemd[1]: Started session-9.scope - Session 9 of User core.
Jul 12 00:18:00.909659 sshd[3917]: pam_unix(sshd:session): session closed for user core
Jul 12 00:18:00.917237 systemd[1]: sshd@8-10.0.0.81:22-10.0.0.1:44068.service: Deactivated successfully.
Jul 12 00:18:00.919757 systemd[1]: session-9.scope: Deactivated successfully.
Jul 12 00:18:00.923568 systemd-logind[1417]: Session 9 logged out. Waiting for processes to exit.
Jul 12 00:18:00.925528 systemd-logind[1417]: Removed session 9.
Jul 12 00:18:05.929421 systemd[1]: Started sshd@9-10.0.0.81:22-10.0.0.1:53838.service - OpenSSH per-connection server daemon (10.0.0.1:53838).
Jul 12 00:18:05.961757 sshd[3937]: Accepted publickey for core from 10.0.0.1 port 53838 ssh2: RSA SHA256:OQQn8rJodojt66LnxxVdgk0ZA2OqvboSVT5RIqmX+EU
Jul 12 00:18:05.963286 sshd[3937]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 12 00:18:05.967313 systemd-logind[1417]: New session 10 of user core.
Jul 12 00:18:05.978333 systemd[1]: Started session-10.scope - Session 10 of User core.
Jul 12 00:18:06.098266 sshd[3937]: pam_unix(sshd:session): session closed for user core
Jul 12 00:18:06.107298 systemd[1]: sshd@9-10.0.0.81:22-10.0.0.1:53838.service: Deactivated successfully.
Jul 12 00:18:06.110343 systemd[1]: session-10.scope: Deactivated successfully.
Jul 12 00:18:06.111739 systemd-logind[1417]: Session 10 logged out. Waiting for processes to exit.
Jul 12 00:18:06.119369 systemd[1]: Started sshd@10-10.0.0.81:22-10.0.0.1:53846.service - OpenSSH per-connection server daemon (10.0.0.1:53846).
Jul 12 00:18:06.120403 systemd-logind[1417]: Removed session 10.
Jul 12 00:18:06.152612 sshd[3952]: Accepted publickey for core from 10.0.0.1 port 53846 ssh2: RSA SHA256:OQQn8rJodojt66LnxxVdgk0ZA2OqvboSVT5RIqmX+EU
Jul 12 00:18:06.154121 sshd[3952]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 12 00:18:06.158660 systemd-logind[1417]: New session 11 of user core.
Jul 12 00:18:06.169529 systemd[1]: Started session-11.scope - Session 11 of User core.
Jul 12 00:18:06.328449 sshd[3952]: pam_unix(sshd:session): session closed for user core
Jul 12 00:18:06.339045 systemd[1]: sshd@10-10.0.0.81:22-10.0.0.1:53846.service: Deactivated successfully.
Jul 12 00:18:06.342366 systemd[1]: session-11.scope: Deactivated successfully.
Jul 12 00:18:06.346873 systemd-logind[1417]: Session 11 logged out. Waiting for processes to exit.
Jul 12 00:18:06.352475 systemd[1]: Started sshd@11-10.0.0.81:22-10.0.0.1:53854.service - OpenSSH per-connection server daemon (10.0.0.1:53854).
Jul 12 00:18:06.356143 systemd-logind[1417]: Removed session 11.
Jul 12 00:18:06.396395 sshd[3966]: Accepted publickey for core from 10.0.0.1 port 53854 ssh2: RSA SHA256:OQQn8rJodojt66LnxxVdgk0ZA2OqvboSVT5RIqmX+EU
Jul 12 00:18:06.399062 sshd[3966]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 12 00:18:06.405912 systemd-logind[1417]: New session 12 of user core.
Jul 12 00:18:06.413286 systemd[1]: Started session-12.scope - Session 12 of User core.
Jul 12 00:18:06.527788 sshd[3966]: pam_unix(sshd:session): session closed for user core
Jul 12 00:18:06.531069 systemd-logind[1417]: Session 12 logged out. Waiting for processes to exit.
Jul 12 00:18:06.531421 systemd[1]: sshd@11-10.0.0.81:22-10.0.0.1:53854.service: Deactivated successfully.
Jul 12 00:18:06.533203 systemd[1]: session-12.scope: Deactivated successfully.
Jul 12 00:18:06.533970 systemd-logind[1417]: Removed session 12.
Jul 12 00:18:11.541953 systemd[1]: Started sshd@12-10.0.0.81:22-10.0.0.1:53862.service - OpenSSH per-connection server daemon (10.0.0.1:53862).
Jul 12 00:18:11.578138 sshd[3981]: Accepted publickey for core from 10.0.0.1 port 53862 ssh2: RSA SHA256:OQQn8rJodojt66LnxxVdgk0ZA2OqvboSVT5RIqmX+EU
Jul 12 00:18:11.578852 sshd[3981]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 12 00:18:11.582855 systemd-logind[1417]: New session 13 of user core.
Jul 12 00:18:11.593284 systemd[1]: Started session-13.scope - Session 13 of User core.
Jul 12 00:18:11.715898 sshd[3981]: pam_unix(sshd:session): session closed for user core
Jul 12 00:18:11.719331 systemd-logind[1417]: Session 13 logged out. Waiting for processes to exit.
Jul 12 00:18:11.719877 systemd[1]: sshd@12-10.0.0.81:22-10.0.0.1:53862.service: Deactivated successfully.
Jul 12 00:18:11.721858 systemd[1]: session-13.scope: Deactivated successfully.
Jul 12 00:18:11.724329 systemd-logind[1417]: Removed session 13.
Jul 12 00:18:16.727497 systemd[1]: Started sshd@13-10.0.0.81:22-10.0.0.1:56410.service - OpenSSH per-connection server daemon (10.0.0.1:56410).
Jul 12 00:18:16.771618 sshd[3996]: Accepted publickey for core from 10.0.0.1 port 56410 ssh2: RSA SHA256:OQQn8rJodojt66LnxxVdgk0ZA2OqvboSVT5RIqmX+EU
Jul 12 00:18:16.772626 sshd[3996]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 12 00:18:16.776920 systemd-logind[1417]: New session 14 of user core.
Jul 12 00:18:16.788284 systemd[1]: Started session-14.scope - Session 14 of User core.
Jul 12 00:18:16.906108 sshd[3996]: pam_unix(sshd:session): session closed for user core
Jul 12 00:18:16.923811 systemd[1]: sshd@13-10.0.0.81:22-10.0.0.1:56410.service: Deactivated successfully.
Jul 12 00:18:16.930019 systemd[1]: session-14.scope: Deactivated successfully.
Jul 12 00:18:16.932491 systemd-logind[1417]: Session 14 logged out. Waiting for processes to exit.
Jul 12 00:18:16.934804 systemd[1]: Started sshd@14-10.0.0.81:22-10.0.0.1:56422.service - OpenSSH per-connection server daemon (10.0.0.1:56422).
Jul 12 00:18:16.937255 systemd-logind[1417]: Removed session 14.
Jul 12 00:18:16.970446 sshd[4011]: Accepted publickey for core from 10.0.0.1 port 56422 ssh2: RSA SHA256:OQQn8rJodojt66LnxxVdgk0ZA2OqvboSVT5RIqmX+EU
Jul 12 00:18:16.971883 sshd[4011]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 12 00:18:16.976299 systemd-logind[1417]: New session 15 of user core.
Jul 12 00:18:16.986486 systemd[1]: Started session-15.scope - Session 15 of User core.
Jul 12 00:18:17.198272 sshd[4011]: pam_unix(sshd:session): session closed for user core
Jul 12 00:18:17.210620 systemd[1]: sshd@14-10.0.0.81:22-10.0.0.1:56422.service: Deactivated successfully.
Jul 12 00:18:17.213285 systemd[1]: session-15.scope: Deactivated successfully.
Jul 12 00:18:17.217659 systemd-logind[1417]: Session 15 logged out. Waiting for processes to exit.
Jul 12 00:18:17.231761 systemd[1]: Started sshd@15-10.0.0.81:22-10.0.0.1:56424.service - OpenSSH per-connection server daemon (10.0.0.1:56424).
Jul 12 00:18:17.233859 systemd-logind[1417]: Removed session 15.
Jul 12 00:18:17.273732 sshd[4023]: Accepted publickey for core from 10.0.0.1 port 56424 ssh2: RSA SHA256:OQQn8rJodojt66LnxxVdgk0ZA2OqvboSVT5RIqmX+EU
Jul 12 00:18:17.275696 sshd[4023]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 12 00:18:17.281162 systemd-logind[1417]: New session 16 of user core.
Jul 12 00:18:17.290323 systemd[1]: Started session-16.scope - Session 16 of User core.
Jul 12 00:18:18.110985 sshd[4023]: pam_unix(sshd:session): session closed for user core
Jul 12 00:18:18.116766 systemd[1]: sshd@15-10.0.0.81:22-10.0.0.1:56424.service: Deactivated successfully.
Jul 12 00:18:18.119681 systemd[1]: session-16.scope: Deactivated successfully.
Jul 12 00:18:18.122219 systemd-logind[1417]: Session 16 logged out. Waiting for processes to exit.
Jul 12 00:18:18.130555 systemd[1]: Started sshd@16-10.0.0.81:22-10.0.0.1:56430.service - OpenSSH per-connection server daemon (10.0.0.1:56430).
Jul 12 00:18:18.135180 systemd-logind[1417]: Removed session 16.
Jul 12 00:18:18.172554 sshd[4041]: Accepted publickey for core from 10.0.0.1 port 56430 ssh2: RSA SHA256:OQQn8rJodojt66LnxxVdgk0ZA2OqvboSVT5RIqmX+EU
Jul 12 00:18:18.174152 sshd[4041]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 12 00:18:18.177900 systemd-logind[1417]: New session 17 of user core.
Jul 12 00:18:18.190342 systemd[1]: Started session-17.scope - Session 17 of User core.
Jul 12 00:18:18.456487 sshd[4041]: pam_unix(sshd:session): session closed for user core
Jul 12 00:18:18.469080 systemd[1]: sshd@16-10.0.0.81:22-10.0.0.1:56430.service: Deactivated successfully.
Jul 12 00:18:18.471614 systemd[1]: session-17.scope: Deactivated successfully.
Jul 12 00:18:18.474996 systemd-logind[1417]: Session 17 logged out. Waiting for processes to exit.
Jul 12 00:18:18.484543 systemd[1]: Started sshd@17-10.0.0.81:22-10.0.0.1:56432.service - OpenSSH per-connection server daemon (10.0.0.1:56432).
Jul 12 00:18:18.485482 systemd-logind[1417]: Removed session 17.
Jul 12 00:18:18.524503 sshd[4054]: Accepted publickey for core from 10.0.0.1 port 56432 ssh2: RSA SHA256:OQQn8rJodojt66LnxxVdgk0ZA2OqvboSVT5RIqmX+EU
Jul 12 00:18:18.527662 sshd[4054]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 12 00:18:18.536247 systemd-logind[1417]: New session 18 of user core.
Jul 12 00:18:18.548364 systemd[1]: Started session-18.scope - Session 18 of User core.
Jul 12 00:18:18.694263 sshd[4054]: pam_unix(sshd:session): session closed for user core
Jul 12 00:18:18.697864 systemd[1]: sshd@17-10.0.0.81:22-10.0.0.1:56432.service: Deactivated successfully.
Jul 12 00:18:18.700079 systemd[1]: session-18.scope: Deactivated successfully.
Jul 12 00:18:18.702627 systemd-logind[1417]: Session 18 logged out. Waiting for processes to exit.
Jul 12 00:18:18.703587 systemd-logind[1417]: Removed session 18.
Jul 12 00:18:23.705805 systemd[1]: Started sshd@18-10.0.0.81:22-10.0.0.1:48390.service - OpenSSH per-connection server daemon (10.0.0.1:48390).
Jul 12 00:18:23.741660 sshd[4070]: Accepted publickey for core from 10.0.0.1 port 48390 ssh2: RSA SHA256:OQQn8rJodojt66LnxxVdgk0ZA2OqvboSVT5RIqmX+EU
Jul 12 00:18:23.743104 sshd[4070]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 12 00:18:23.748061 systemd-logind[1417]: New session 19 of user core.
Jul 12 00:18:23.755284 systemd[1]: Started session-19.scope - Session 19 of User core.
Jul 12 00:18:23.871425 sshd[4070]: pam_unix(sshd:session): session closed for user core
Jul 12 00:18:23.874697 systemd[1]: sshd@18-10.0.0.81:22-10.0.0.1:48390.service: Deactivated successfully.
Jul 12 00:18:23.876613 systemd[1]: session-19.scope: Deactivated successfully.
Jul 12 00:18:23.877465 systemd-logind[1417]: Session 19 logged out. Waiting for processes to exit.
Jul 12 00:18:23.878535 systemd-logind[1417]: Removed session 19.
Jul 12 00:18:28.881944 systemd[1]: Started sshd@19-10.0.0.81:22-10.0.0.1:48404.service - OpenSSH per-connection server daemon (10.0.0.1:48404).
Jul 12 00:18:28.921820 sshd[4086]: Accepted publickey for core from 10.0.0.1 port 48404 ssh2: RSA SHA256:OQQn8rJodojt66LnxxVdgk0ZA2OqvboSVT5RIqmX+EU
Jul 12 00:18:28.923846 sshd[4086]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 12 00:18:28.928151 systemd-logind[1417]: New session 20 of user core.
Jul 12 00:18:28.947352 systemd[1]: Started session-20.scope - Session 20 of User core.
Jul 12 00:18:29.062913 sshd[4086]: pam_unix(sshd:session): session closed for user core
Jul 12 00:18:29.075686 systemd[1]: sshd@19-10.0.0.81:22-10.0.0.1:48404.service: Deactivated successfully.
Jul 12 00:18:29.079239 systemd[1]: session-20.scope: Deactivated successfully.
Jul 12 00:18:29.080701 systemd-logind[1417]: Session 20 logged out. Waiting for processes to exit.
Jul 12 00:18:29.082012 systemd[1]: Started sshd@20-10.0.0.81:22-10.0.0.1:48406.service - OpenSSH per-connection server daemon (10.0.0.1:48406).
Jul 12 00:18:29.085580 systemd-logind[1417]: Removed session 20.
Jul 12 00:18:29.118145 sshd[4101]: Accepted publickey for core from 10.0.0.1 port 48406 ssh2: RSA SHA256:OQQn8rJodojt66LnxxVdgk0ZA2OqvboSVT5RIqmX+EU
Jul 12 00:18:29.119256 sshd[4101]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 12 00:18:29.124411 systemd-logind[1417]: New session 21 of user core.
Jul 12 00:18:29.130289 systemd[1]: Started session-21.scope - Session 21 of User core.
Jul 12 00:18:31.143135 containerd[1434]: time="2025-07-12T00:18:31.143033552Z" level=info msg="StopContainer for \"b99a30e831a3fc1c1e7faadaa3dbbc0f54edf95bd4fe2df39a42f2a4dcc59147\" with timeout 30 (s)"
Jul 12 00:18:31.145412 containerd[1434]: time="2025-07-12T00:18:31.145304648Z" level=info msg="Stop container \"b99a30e831a3fc1c1e7faadaa3dbbc0f54edf95bd4fe2df39a42f2a4dcc59147\" with signal terminated"
Jul 12 00:18:31.160520 systemd[1]: cri-containerd-b99a30e831a3fc1c1e7faadaa3dbbc0f54edf95bd4fe2df39a42f2a4dcc59147.scope: Deactivated successfully.
Jul 12 00:18:31.170554 containerd[1434]: time="2025-07-12T00:18:31.170155912Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jul 12 00:18:31.176928 containerd[1434]: time="2025-07-12T00:18:31.176889085Z" level=info msg="StopContainer for \"c5027e16e99c0edc8514b72c12719c69c02761061b9080540c493474e7822a6f\" with timeout 2 (s)"
Jul 12 00:18:31.177167 containerd[1434]: time="2025-07-12T00:18:31.177116350Z" level=info msg="Stop container \"c5027e16e99c0edc8514b72c12719c69c02761061b9080540c493474e7822a6f\" with signal terminated"
Jul 12 00:18:31.182877 systemd-networkd[1373]: lxc_health: Link DOWN
Jul 12 00:18:31.182882 systemd-networkd[1373]: lxc_health: Lost carrier
Jul 12 00:18:31.184637 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b99a30e831a3fc1c1e7faadaa3dbbc0f54edf95bd4fe2df39a42f2a4dcc59147-rootfs.mount: Deactivated successfully.
Jul 12 00:18:31.193432 containerd[1434]: time="2025-07-12T00:18:31.193312443Z" level=info msg="shim disconnected" id=b99a30e831a3fc1c1e7faadaa3dbbc0f54edf95bd4fe2df39a42f2a4dcc59147 namespace=k8s.io
Jul 12 00:18:31.193625 containerd[1434]: time="2025-07-12T00:18:31.193606824Z" level=warning msg="cleaning up after shim disconnected" id=b99a30e831a3fc1c1e7faadaa3dbbc0f54edf95bd4fe2df39a42f2a4dcc59147 namespace=k8s.io
Jul 12 00:18:31.193689 containerd[1434]: time="2025-07-12T00:18:31.193676940Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 12 00:18:31.202818 systemd[1]: cri-containerd-c5027e16e99c0edc8514b72c12719c69c02761061b9080540c493474e7822a6f.scope: Deactivated successfully.
Jul 12 00:18:31.203096 systemd[1]: cri-containerd-c5027e16e99c0edc8514b72c12719c69c02761061b9080540c493474e7822a6f.scope: Consumed 6.915s CPU time.
Jul 12 00:18:31.220902 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c5027e16e99c0edc8514b72c12719c69c02761061b9080540c493474e7822a6f-rootfs.mount: Deactivated successfully.
Jul 12 00:18:31.227178 containerd[1434]: time="2025-07-12T00:18:31.227114379Z" level=info msg="shim disconnected" id=c5027e16e99c0edc8514b72c12719c69c02761061b9080540c493474e7822a6f namespace=k8s.io
Jul 12 00:18:31.227465 containerd[1434]: time="2025-07-12T00:18:31.227356404Z" level=warning msg="cleaning up after shim disconnected" id=c5027e16e99c0edc8514b72c12719c69c02761061b9080540c493474e7822a6f namespace=k8s.io
Jul 12 00:18:31.227465 containerd[1434]: time="2025-07-12T00:18:31.227371843Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 12 00:18:31.242613 containerd[1434]: time="2025-07-12T00:18:31.242565959Z" level=info msg="StopContainer for \"b99a30e831a3fc1c1e7faadaa3dbbc0f54edf95bd4fe2df39a42f2a4dcc59147\" returns successfully"
Jul 12 00:18:31.244195 containerd[1434]: time="2025-07-12T00:18:31.243207399Z" level=info msg="StopPodSandbox for \"b20922aa8c43468872fa6f4484bdeb748fe08b6b300bfb7475e7120db578eccb\""
Jul 12 00:18:31.244195 containerd[1434]: time="2025-07-12T00:18:31.243283634Z" level=info msg="Container to stop \"b99a30e831a3fc1c1e7faadaa3dbbc0f54edf95bd4fe2df39a42f2a4dcc59147\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jul 12 00:18:31.245160 containerd[1434]: time="2025-07-12T00:18:31.244642588Z" level=info msg="StopContainer for \"c5027e16e99c0edc8514b72c12719c69c02761061b9080540c493474e7822a6f\" returns successfully"
Jul 12 00:18:31.244930 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-b20922aa8c43468872fa6f4484bdeb748fe08b6b300bfb7475e7120db578eccb-shm.mount: Deactivated successfully.
Jul 12 00:18:31.247005 containerd[1434]: time="2025-07-12T00:18:31.246712536Z" level=info msg="StopPodSandbox for \"e600e7cec7165f1439bc7cea42bd857ac3e161b2274d0be912c301feca6f3de7\""
Jul 12 00:18:31.247005 containerd[1434]: time="2025-07-12T00:18:31.246751254Z" level=info msg="Container to stop \"45bf8f159672e03c4d83654cc40991611813dae76db35780db7c72f41ea8a8de\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jul 12 00:18:31.247005 containerd[1434]: time="2025-07-12T00:18:31.246769173Z" level=info msg="Container to stop \"e51c0a8bff14a28fd37c4c230cffc2322f2be84bfdc1186982376ad7b1406c0d\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jul 12 00:18:31.247005 containerd[1434]: time="2025-07-12T00:18:31.246778412Z" level=info msg="Container to stop \"c5027e16e99c0edc8514b72c12719c69c02761061b9080540c493474e7822a6f\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jul 12 00:18:31.247005 containerd[1434]: time="2025-07-12T00:18:31.246788052Z" level=info msg="Container to stop \"b5c77ace7465f0397f861637281286d895317fa808f96b8c5420cf272bd3daf4\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jul 12 00:18:31.247005 containerd[1434]: time="2025-07-12T00:18:31.246796971Z" level=info msg="Container to stop \"93696ad5beee87ddc9b390d4cbd368f16a7ba06e22c486c17da0c515c35e6d74\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jul 12 00:18:31.248273 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-e600e7cec7165f1439bc7cea42bd857ac3e161b2274d0be912c301feca6f3de7-shm.mount: Deactivated successfully.
Jul 12 00:18:31.253652 systemd[1]: cri-containerd-e600e7cec7165f1439bc7cea42bd857ac3e161b2274d0be912c301feca6f3de7.scope: Deactivated successfully.
Jul 12 00:18:31.254617 systemd[1]: cri-containerd-b20922aa8c43468872fa6f4484bdeb748fe08b6b300bfb7475e7120db578eccb.scope: Deactivated successfully.
Jul 12 00:18:31.274937 containerd[1434]: time="2025-07-12T00:18:31.274711281Z" level=info msg="shim disconnected" id=e600e7cec7165f1439bc7cea42bd857ac3e161b2274d0be912c301feca6f3de7 namespace=k8s.io
Jul 12 00:18:31.275176 containerd[1434]: time="2025-07-12T00:18:31.275107376Z" level=warning msg="cleaning up after shim disconnected" id=e600e7cec7165f1439bc7cea42bd857ac3e161b2274d0be912c301feca6f3de7 namespace=k8s.io
Jul 12 00:18:31.275176 containerd[1434]: time="2025-07-12T00:18:31.275126174Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 12 00:18:31.286004 containerd[1434]: time="2025-07-12T00:18:31.285947648Z" level=warning msg="cleanup warnings time=\"2025-07-12T00:18:31Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Jul 12 00:18:31.287030 containerd[1434]: time="2025-07-12T00:18:31.287004661Z" level=info msg="TearDown network for sandbox \"e600e7cec7165f1439bc7cea42bd857ac3e161b2274d0be912c301feca6f3de7\" successfully"
Jul 12 00:18:31.287094 containerd[1434]: time="2025-07-12T00:18:31.287032299Z" level=info msg="StopPodSandbox for \"e600e7cec7165f1439bc7cea42bd857ac3e161b2274d0be912c301feca6f3de7\" returns successfully"
Jul 12 00:18:31.299761 containerd[1434]: time="2025-07-12T00:18:31.299533147Z" level=info msg="shim disconnected" id=b20922aa8c43468872fa6f4484bdeb748fe08b6b300bfb7475e7120db578eccb namespace=k8s.io
Jul 12 00:18:31.299761 containerd[1434]: time="2025-07-12T00:18:31.299597262Z" level=warning msg="cleaning up after shim disconnected" id=b20922aa8c43468872fa6f4484bdeb748fe08b6b300bfb7475e7120db578eccb namespace=k8s.io
Jul 12 00:18:31.299761 containerd[1434]: time="2025-07-12T00:18:31.299607102Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 12 00:18:31.315560 containerd[1434]: time="2025-07-12T00:18:31.315435218Z" level=info msg="TearDown network for sandbox \"b20922aa8c43468872fa6f4484bdeb748fe08b6b300bfb7475e7120db578eccb\" successfully"
Jul 12 00:18:31.315560 containerd[1434]: time="2025-07-12T00:18:31.315472576Z" level=info msg="StopPodSandbox for \"b20922aa8c43468872fa6f4484bdeb748fe08b6b300bfb7475e7120db578eccb\" returns successfully"
Jul 12 00:18:31.393047 kubelet[2484]: I0712 00:18:31.393007 2484 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sbn7v\" (UniqueName: \"kubernetes.io/projected/19b20408-1012-434a-b121-a0c59391be23-kube-api-access-sbn7v\") pod \"19b20408-1012-434a-b121-a0c59391be23\" (UID: \"19b20408-1012-434a-b121-a0c59391be23\") "
Jul 12 00:18:31.393047 kubelet[2484]: I0712 00:18:31.393048 2484 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/19b20408-1012-434a-b121-a0c59391be23-etc-cni-netd\") pod \"19b20408-1012-434a-b121-a0c59391be23\" (UID: \"19b20408-1012-434a-b121-a0c59391be23\") "
Jul 12 00:18:31.394219 kubelet[2484]: I0712 00:18:31.393074 2484 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/19b20408-1012-434a-b121-a0c59391be23-cilium-config-path\") pod \"19b20408-1012-434a-b121-a0c59391be23\" (UID: \"19b20408-1012-434a-b121-a0c59391be23\") "
Jul 12 00:18:31.394219 kubelet[2484]: I0712 00:18:31.393100 2484 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/19b20408-1012-434a-b121-a0c59391be23-host-proc-sys-net\") pod \"19b20408-1012-434a-b121-a0c59391be23\" (UID: \"19b20408-1012-434a-b121-a0c59391be23\") "
Jul 12 00:18:31.394219 kubelet[2484]: I0712 00:18:31.393118 2484 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/19b20408-1012-434a-b121-a0c59391be23-xtables-lock\") pod \"19b20408-1012-434a-b121-a0c59391be23\" (UID: \"19b20408-1012-434a-b121-a0c59391be23\") "
Jul 12 00:18:31.394219 kubelet[2484]: I0712 00:18:31.393132 2484 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/19b20408-1012-434a-b121-a0c59391be23-bpf-maps\") pod \"19b20408-1012-434a-b121-a0c59391be23\" (UID: \"19b20408-1012-434a-b121-a0c59391be23\") "
Jul 12 00:18:31.394219 kubelet[2484]: I0712 00:18:31.393145 2484 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/19b20408-1012-434a-b121-a0c59391be23-cilium-cgroup\") pod \"19b20408-1012-434a-b121-a0c59391be23\" (UID: \"19b20408-1012-434a-b121-a0c59391be23\") "
Jul 12 00:18:31.394219 kubelet[2484]: I0712 00:18:31.393161 2484 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/0ecf27f2-b974-47ef-8ac0-99087ea46031-cilium-config-path\") pod \"0ecf27f2-b974-47ef-8ac0-99087ea46031\" (UID: \"0ecf27f2-b974-47ef-8ac0-99087ea46031\") "
Jul 12 00:18:31.394613 kubelet[2484]: I0712 00:18:31.393198 2484 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/19b20408-1012-434a-b121-a0c59391be23-host-proc-sys-kernel\") pod \"19b20408-1012-434a-b121-a0c59391be23\" (UID: \"19b20408-1012-434a-b121-a0c59391be23\") "
Jul 12 00:18:31.394613 kubelet[2484]: I0712 00:18:31.393225 2484 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/19b20408-1012-434a-b121-a0c59391be23-hubble-tls\") pod \"19b20408-1012-434a-b121-a0c59391be23\" (UID: \"19b20408-1012-434a-b121-a0c59391be23\") "
Jul 12 00:18:31.394613 kubelet[2484]: I0712 00:18:31.393239 2484 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\"
(UniqueName: \"kubernetes.io/host-path/19b20408-1012-434a-b121-a0c59391be23-hostproc\") pod \"19b20408-1012-434a-b121-a0c59391be23\" (UID: \"19b20408-1012-434a-b121-a0c59391be23\") " Jul 12 00:18:31.394613 kubelet[2484]: I0712 00:18:31.393291 2484 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/19b20408-1012-434a-b121-a0c59391be23-lib-modules\") pod \"19b20408-1012-434a-b121-a0c59391be23\" (UID: \"19b20408-1012-434a-b121-a0c59391be23\") " Jul 12 00:18:31.394613 kubelet[2484]: I0712 00:18:31.393324 2484 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/19b20408-1012-434a-b121-a0c59391be23-cni-path\") pod \"19b20408-1012-434a-b121-a0c59391be23\" (UID: \"19b20408-1012-434a-b121-a0c59391be23\") " Jul 12 00:18:31.394613 kubelet[2484]: I0712 00:18:31.393346 2484 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wdmfl\" (UniqueName: \"kubernetes.io/projected/0ecf27f2-b974-47ef-8ac0-99087ea46031-kube-api-access-wdmfl\") pod \"0ecf27f2-b974-47ef-8ac0-99087ea46031\" (UID: \"0ecf27f2-b974-47ef-8ac0-99087ea46031\") " Jul 12 00:18:31.394786 kubelet[2484]: I0712 00:18:31.393370 2484 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/19b20408-1012-434a-b121-a0c59391be23-clustermesh-secrets\") pod \"19b20408-1012-434a-b121-a0c59391be23\" (UID: \"19b20408-1012-434a-b121-a0c59391be23\") " Jul 12 00:18:31.394786 kubelet[2484]: I0712 00:18:31.393388 2484 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/19b20408-1012-434a-b121-a0c59391be23-cilium-run\") pod \"19b20408-1012-434a-b121-a0c59391be23\" (UID: \"19b20408-1012-434a-b121-a0c59391be23\") " Jul 12 00:18:31.397607 kubelet[2484]: I0712 00:18:31.397565 2484 
operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/19b20408-1012-434a-b121-a0c59391be23-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "19b20408-1012-434a-b121-a0c59391be23" (UID: "19b20408-1012-434a-b121-a0c59391be23"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 12 00:18:31.397875 kubelet[2484]: I0712 00:18:31.397633 2484 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/19b20408-1012-434a-b121-a0c59391be23-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "19b20408-1012-434a-b121-a0c59391be23" (UID: "19b20408-1012-434a-b121-a0c59391be23"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 12 00:18:31.397875 kubelet[2484]: I0712 00:18:31.397649 2484 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/19b20408-1012-434a-b121-a0c59391be23-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "19b20408-1012-434a-b121-a0c59391be23" (UID: "19b20408-1012-434a-b121-a0c59391be23"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 12 00:18:31.397875 kubelet[2484]: I0712 00:18:31.397664 2484 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/19b20408-1012-434a-b121-a0c59391be23-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "19b20408-1012-434a-b121-a0c59391be23" (UID: "19b20408-1012-434a-b121-a0c59391be23"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 12 00:18:31.397875 kubelet[2484]: I0712 00:18:31.397678 2484 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/19b20408-1012-434a-b121-a0c59391be23-cni-path" (OuterVolumeSpecName: "cni-path") pod "19b20408-1012-434a-b121-a0c59391be23" (UID: "19b20408-1012-434a-b121-a0c59391be23"). 
InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 12 00:18:31.397875 kubelet[2484]: I0712 00:18:31.397697 2484 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/19b20408-1012-434a-b121-a0c59391be23-hostproc" (OuterVolumeSpecName: "hostproc") pod "19b20408-1012-434a-b121-a0c59391be23" (UID: "19b20408-1012-434a-b121-a0c59391be23"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 12 00:18:31.398000 kubelet[2484]: I0712 00:18:31.397713 2484 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/19b20408-1012-434a-b121-a0c59391be23-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "19b20408-1012-434a-b121-a0c59391be23" (UID: "19b20408-1012-434a-b121-a0c59391be23"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 12 00:18:31.398194 kubelet[2484]: I0712 00:18:31.398118 2484 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/19b20408-1012-434a-b121-a0c59391be23-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "19b20408-1012-434a-b121-a0c59391be23" (UID: "19b20408-1012-434a-b121-a0c59391be23"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 12 00:18:31.398499 kubelet[2484]: I0712 00:18:31.398320 2484 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/19b20408-1012-434a-b121-a0c59391be23-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "19b20408-1012-434a-b121-a0c59391be23" (UID: "19b20408-1012-434a-b121-a0c59391be23"). InnerVolumeSpecName "cilium-run". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 12 00:18:31.399984 kubelet[2484]: I0712 00:18:31.399951 2484 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/19b20408-1012-434a-b121-a0c59391be23-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "19b20408-1012-434a-b121-a0c59391be23" (UID: "19b20408-1012-434a-b121-a0c59391be23"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jul 12 00:18:31.400134 kubelet[2484]: I0712 00:18:31.399968 2484 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/19b20408-1012-434a-b121-a0c59391be23-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "19b20408-1012-434a-b121-a0c59391be23" (UID: "19b20408-1012-434a-b121-a0c59391be23"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 12 00:18:31.400240 kubelet[2484]: I0712 00:18:31.400199 2484 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/19b20408-1012-434a-b121-a0c59391be23-kube-api-access-sbn7v" (OuterVolumeSpecName: "kube-api-access-sbn7v") pod "19b20408-1012-434a-b121-a0c59391be23" (UID: "19b20408-1012-434a-b121-a0c59391be23"). InnerVolumeSpecName "kube-api-access-sbn7v". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jul 12 00:18:31.400824 kubelet[2484]: I0712 00:18:31.400790 2484 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/19b20408-1012-434a-b121-a0c59391be23-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "19b20408-1012-434a-b121-a0c59391be23" (UID: "19b20408-1012-434a-b121-a0c59391be23"). InnerVolumeSpecName "clustermesh-secrets". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jul 12 00:18:31.401293 kubelet[2484]: I0712 00:18:31.401252 2484 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/19b20408-1012-434a-b121-a0c59391be23-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "19b20408-1012-434a-b121-a0c59391be23" (UID: "19b20408-1012-434a-b121-a0c59391be23"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jul 12 00:18:31.403363 kubelet[2484]: I0712 00:18:31.403339 2484 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0ecf27f2-b974-47ef-8ac0-99087ea46031-kube-api-access-wdmfl" (OuterVolumeSpecName: "kube-api-access-wdmfl") pod "0ecf27f2-b974-47ef-8ac0-99087ea46031" (UID: "0ecf27f2-b974-47ef-8ac0-99087ea46031"). InnerVolumeSpecName "kube-api-access-wdmfl". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jul 12 00:18:31.403426 kubelet[2484]: I0712 00:18:31.403077 2484 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0ecf27f2-b974-47ef-8ac0-99087ea46031-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "0ecf27f2-b974-47ef-8ac0-99087ea46031" (UID: "0ecf27f2-b974-47ef-8ac0-99087ea46031"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jul 12 00:18:31.494148 kubelet[2484]: I0712 00:18:31.494061 2484 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/19b20408-1012-434a-b121-a0c59391be23-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Jul 12 00:18:31.494148 kubelet[2484]: I0712 00:18:31.494149 2484 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/19b20408-1012-434a-b121-a0c59391be23-cilium-run\") on node \"localhost\" DevicePath \"\"" Jul 12 00:18:31.494148 kubelet[2484]: I0712 00:18:31.494160 2484 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-sbn7v\" (UniqueName: \"kubernetes.io/projected/19b20408-1012-434a-b121-a0c59391be23-kube-api-access-sbn7v\") on node \"localhost\" DevicePath \"\"" Jul 12 00:18:31.494327 kubelet[2484]: I0712 00:18:31.494169 2484 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/19b20408-1012-434a-b121-a0c59391be23-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Jul 12 00:18:31.494327 kubelet[2484]: I0712 00:18:31.494179 2484 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/19b20408-1012-434a-b121-a0c59391be23-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Jul 12 00:18:31.494327 kubelet[2484]: I0712 00:18:31.494187 2484 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/19b20408-1012-434a-b121-a0c59391be23-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Jul 12 00:18:31.494327 kubelet[2484]: I0712 00:18:31.494195 2484 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/19b20408-1012-434a-b121-a0c59391be23-xtables-lock\") on node \"localhost\" DevicePath \"\"" Jul 12 00:18:31.494327 
kubelet[2484]: I0712 00:18:31.494203 2484 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/19b20408-1012-434a-b121-a0c59391be23-bpf-maps\") on node \"localhost\" DevicePath \"\"" Jul 12 00:18:31.494327 kubelet[2484]: I0712 00:18:31.494214 2484 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/19b20408-1012-434a-b121-a0c59391be23-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Jul 12 00:18:31.494327 kubelet[2484]: I0712 00:18:31.494223 2484 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/0ecf27f2-b974-47ef-8ac0-99087ea46031-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Jul 12 00:18:31.494327 kubelet[2484]: I0712 00:18:31.494231 2484 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/19b20408-1012-434a-b121-a0c59391be23-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Jul 12 00:18:31.494520 kubelet[2484]: I0712 00:18:31.494238 2484 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/19b20408-1012-434a-b121-a0c59391be23-hubble-tls\") on node \"localhost\" DevicePath \"\"" Jul 12 00:18:31.494520 kubelet[2484]: I0712 00:18:31.494247 2484 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/19b20408-1012-434a-b121-a0c59391be23-hostproc\") on node \"localhost\" DevicePath \"\"" Jul 12 00:18:31.494520 kubelet[2484]: I0712 00:18:31.494254 2484 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/19b20408-1012-434a-b121-a0c59391be23-lib-modules\") on node \"localhost\" DevicePath \"\"" Jul 12 00:18:31.494520 kubelet[2484]: I0712 00:18:31.494261 2484 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: 
\"kubernetes.io/host-path/19b20408-1012-434a-b121-a0c59391be23-cni-path\") on node \"localhost\" DevicePath \"\"" Jul 12 00:18:31.494520 kubelet[2484]: I0712 00:18:31.494269 2484 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-wdmfl\" (UniqueName: \"kubernetes.io/projected/0ecf27f2-b974-47ef-8ac0-99087ea46031-kube-api-access-wdmfl\") on node \"localhost\" DevicePath \"\"" Jul 12 00:18:31.830439 kubelet[2484]: I0712 00:18:31.830404 2484 scope.go:117] "RemoveContainer" containerID="c5027e16e99c0edc8514b72c12719c69c02761061b9080540c493474e7822a6f" Jul 12 00:18:31.835969 containerd[1434]: time="2025-07-12T00:18:31.835920369Z" level=info msg="RemoveContainer for \"c5027e16e99c0edc8514b72c12719c69c02761061b9080540c493474e7822a6f\"" Jul 12 00:18:31.841280 containerd[1434]: time="2025-07-12T00:18:31.841242831Z" level=info msg="RemoveContainer for \"c5027e16e99c0edc8514b72c12719c69c02761061b9080540c493474e7822a6f\" returns successfully" Jul 12 00:18:31.841849 kubelet[2484]: I0712 00:18:31.841818 2484 scope.go:117] "RemoveContainer" containerID="e51c0a8bff14a28fd37c4c230cffc2322f2be84bfdc1186982376ad7b1406c0d" Jul 12 00:18:31.843585 systemd[1]: Removed slice kubepods-burstable-pod19b20408_1012_434a_b121_a0c59391be23.slice - libcontainer container kubepods-burstable-pod19b20408_1012_434a_b121_a0c59391be23.slice. Jul 12 00:18:31.843915 systemd[1]: kubepods-burstable-pod19b20408_1012_434a_b121_a0c59391be23.slice: Consumed 7.080s CPU time. Jul 12 00:18:31.844797 containerd[1434]: time="2025-07-12T00:18:31.844728450Z" level=info msg="RemoveContainer for \"e51c0a8bff14a28fd37c4c230cffc2322f2be84bfdc1186982376ad7b1406c0d\"" Jul 12 00:18:31.848412 systemd[1]: Removed slice kubepods-besteffort-pod0ecf27f2_b974_47ef_8ac0_99087ea46031.slice - libcontainer container kubepods-besteffort-pod0ecf27f2_b974_47ef_8ac0_99087ea46031.slice. 
Jul 12 00:18:31.851360 containerd[1434]: time="2025-07-12T00:18:31.851326112Z" level=info msg="RemoveContainer for \"e51c0a8bff14a28fd37c4c230cffc2322f2be84bfdc1186982376ad7b1406c0d\" returns successfully" Jul 12 00:18:31.851831 kubelet[2484]: I0712 00:18:31.851806 2484 scope.go:117] "RemoveContainer" containerID="93696ad5beee87ddc9b390d4cbd368f16a7ba06e22c486c17da0c515c35e6d74" Jul 12 00:18:31.853789 containerd[1434]: time="2025-07-12T00:18:31.853760277Z" level=info msg="RemoveContainer for \"93696ad5beee87ddc9b390d4cbd368f16a7ba06e22c486c17da0c515c35e6d74\"" Jul 12 00:18:31.868720 containerd[1434]: time="2025-07-12T00:18:31.868583617Z" level=info msg="RemoveContainer for \"93696ad5beee87ddc9b390d4cbd368f16a7ba06e22c486c17da0c515c35e6d74\" returns successfully" Jul 12 00:18:31.869432 kubelet[2484]: I0712 00:18:31.869374 2484 scope.go:117] "RemoveContainer" containerID="45bf8f159672e03c4d83654cc40991611813dae76db35780db7c72f41ea8a8de" Jul 12 00:18:31.872855 containerd[1434]: time="2025-07-12T00:18:31.872811389Z" level=info msg="RemoveContainer for \"45bf8f159672e03c4d83654cc40991611813dae76db35780db7c72f41ea8a8de\"" Jul 12 00:18:31.878107 containerd[1434]: time="2025-07-12T00:18:31.876590110Z" level=info msg="RemoveContainer for \"45bf8f159672e03c4d83654cc40991611813dae76db35780db7c72f41ea8a8de\" returns successfully" Jul 12 00:18:31.878189 kubelet[2484]: I0712 00:18:31.876789 2484 scope.go:117] "RemoveContainer" containerID="b5c77ace7465f0397f861637281286d895317fa808f96b8c5420cf272bd3daf4" Jul 12 00:18:31.879198 containerd[1434]: time="2025-07-12T00:18:31.878516587Z" level=info msg="RemoveContainer for \"b5c77ace7465f0397f861637281286d895317fa808f96b8c5420cf272bd3daf4\"" Jul 12 00:18:31.887321 containerd[1434]: time="2025-07-12T00:18:31.887188837Z" level=info msg="RemoveContainer for \"b5c77ace7465f0397f861637281286d895317fa808f96b8c5420cf272bd3daf4\" returns successfully" Jul 12 00:18:31.887535 kubelet[2484]: I0712 00:18:31.887456 2484 scope.go:117] 
"RemoveContainer" containerID="c5027e16e99c0edc8514b72c12719c69c02761061b9080540c493474e7822a6f" Jul 12 00:18:31.888174 containerd[1434]: time="2025-07-12T00:18:31.887725363Z" level=error msg="ContainerStatus for \"c5027e16e99c0edc8514b72c12719c69c02761061b9080540c493474e7822a6f\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"c5027e16e99c0edc8514b72c12719c69c02761061b9080540c493474e7822a6f\": not found" Jul 12 00:18:31.901359 kubelet[2484]: E0712 00:18:31.900883 2484 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"c5027e16e99c0edc8514b72c12719c69c02761061b9080540c493474e7822a6f\": not found" containerID="c5027e16e99c0edc8514b72c12719c69c02761061b9080540c493474e7822a6f" Jul 12 00:18:31.901359 kubelet[2484]: I0712 00:18:31.900923 2484 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"c5027e16e99c0edc8514b72c12719c69c02761061b9080540c493474e7822a6f"} err="failed to get container status \"c5027e16e99c0edc8514b72c12719c69c02761061b9080540c493474e7822a6f\": rpc error: code = NotFound desc = an error occurred when try to find container \"c5027e16e99c0edc8514b72c12719c69c02761061b9080540c493474e7822a6f\": not found" Jul 12 00:18:31.901359 kubelet[2484]: I0712 00:18:31.900965 2484 scope.go:117] "RemoveContainer" containerID="e51c0a8bff14a28fd37c4c230cffc2322f2be84bfdc1186982376ad7b1406c0d" Jul 12 00:18:31.901359 kubelet[2484]: E0712 00:18:31.901361 2484 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"e51c0a8bff14a28fd37c4c230cffc2322f2be84bfdc1186982376ad7b1406c0d\": not found" containerID="e51c0a8bff14a28fd37c4c230cffc2322f2be84bfdc1186982376ad7b1406c0d" Jul 12 00:18:31.901574 containerd[1434]: time="2025-07-12T00:18:31.901230427Z" level=error msg="ContainerStatus for 
\"e51c0a8bff14a28fd37c4c230cffc2322f2be84bfdc1186982376ad7b1406c0d\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"e51c0a8bff14a28fd37c4c230cffc2322f2be84bfdc1186982376ad7b1406c0d\": not found" Jul 12 00:18:31.901607 kubelet[2484]: I0712 00:18:31.901386 2484 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"e51c0a8bff14a28fd37c4c230cffc2322f2be84bfdc1186982376ad7b1406c0d"} err="failed to get container status \"e51c0a8bff14a28fd37c4c230cffc2322f2be84bfdc1186982376ad7b1406c0d\": rpc error: code = NotFound desc = an error occurred when try to find container \"e51c0a8bff14a28fd37c4c230cffc2322f2be84bfdc1186982376ad7b1406c0d\": not found" Jul 12 00:18:31.901607 kubelet[2484]: I0712 00:18:31.901404 2484 scope.go:117] "RemoveContainer" containerID="93696ad5beee87ddc9b390d4cbd368f16a7ba06e22c486c17da0c515c35e6d74" Jul 12 00:18:31.902281 containerd[1434]: time="2025-07-12T00:18:31.902245043Z" level=error msg="ContainerStatus for \"93696ad5beee87ddc9b390d4cbd368f16a7ba06e22c486c17da0c515c35e6d74\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"93696ad5beee87ddc9b390d4cbd368f16a7ba06e22c486c17da0c515c35e6d74\": not found" Jul 12 00:18:31.902453 kubelet[2484]: E0712 00:18:31.902354 2484 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"93696ad5beee87ddc9b390d4cbd368f16a7ba06e22c486c17da0c515c35e6d74\": not found" containerID="93696ad5beee87ddc9b390d4cbd368f16a7ba06e22c486c17da0c515c35e6d74" Jul 12 00:18:31.902453 kubelet[2484]: I0712 00:18:31.902376 2484 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"93696ad5beee87ddc9b390d4cbd368f16a7ba06e22c486c17da0c515c35e6d74"} err="failed to get container status \"93696ad5beee87ddc9b390d4cbd368f16a7ba06e22c486c17da0c515c35e6d74\": rpc error: code 
= NotFound desc = an error occurred when try to find container \"93696ad5beee87ddc9b390d4cbd368f16a7ba06e22c486c17da0c515c35e6d74\": not found" Jul 12 00:18:31.902453 kubelet[2484]: I0712 00:18:31.902400 2484 scope.go:117] "RemoveContainer" containerID="45bf8f159672e03c4d83654cc40991611813dae76db35780db7c72f41ea8a8de" Jul 12 00:18:31.902589 containerd[1434]: time="2025-07-12T00:18:31.902543104Z" level=error msg="ContainerStatus for \"45bf8f159672e03c4d83654cc40991611813dae76db35780db7c72f41ea8a8de\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"45bf8f159672e03c4d83654cc40991611813dae76db35780db7c72f41ea8a8de\": not found" Jul 12 00:18:31.904325 kubelet[2484]: E0712 00:18:31.902639 2484 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"45bf8f159672e03c4d83654cc40991611813dae76db35780db7c72f41ea8a8de\": not found" containerID="45bf8f159672e03c4d83654cc40991611813dae76db35780db7c72f41ea8a8de" Jul 12 00:18:31.904325 kubelet[2484]: I0712 00:18:31.902666 2484 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"45bf8f159672e03c4d83654cc40991611813dae76db35780db7c72f41ea8a8de"} err="failed to get container status \"45bf8f159672e03c4d83654cc40991611813dae76db35780db7c72f41ea8a8de\": rpc error: code = NotFound desc = an error occurred when try to find container \"45bf8f159672e03c4d83654cc40991611813dae76db35780db7c72f41ea8a8de\": not found" Jul 12 00:18:31.904325 kubelet[2484]: I0712 00:18:31.902724 2484 scope.go:117] "RemoveContainer" containerID="b5c77ace7465f0397f861637281286d895317fa808f96b8c5420cf272bd3daf4" Jul 12 00:18:31.904325 kubelet[2484]: E0712 00:18:31.902963 2484 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"b5c77ace7465f0397f861637281286d895317fa808f96b8c5420cf272bd3daf4\": 
not found" containerID="b5c77ace7465f0397f861637281286d895317fa808f96b8c5420cf272bd3daf4" Jul 12 00:18:31.904325 kubelet[2484]: I0712 00:18:31.902982 2484 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"b5c77ace7465f0397f861637281286d895317fa808f96b8c5420cf272bd3daf4"} err="failed to get container status \"b5c77ace7465f0397f861637281286d895317fa808f96b8c5420cf272bd3daf4\": rpc error: code = NotFound desc = an error occurred when try to find container \"b5c77ace7465f0397f861637281286d895317fa808f96b8c5420cf272bd3daf4\": not found" Jul 12 00:18:31.904325 kubelet[2484]: I0712 00:18:31.902996 2484 scope.go:117] "RemoveContainer" containerID="b99a30e831a3fc1c1e7faadaa3dbbc0f54edf95bd4fe2df39a42f2a4dcc59147" Jul 12 00:18:31.904545 containerd[1434]: time="2025-07-12T00:18:31.902876523Z" level=error msg="ContainerStatus for \"b5c77ace7465f0397f861637281286d895317fa808f96b8c5420cf272bd3daf4\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"b5c77ace7465f0397f861637281286d895317fa808f96b8c5420cf272bd3daf4\": not found" Jul 12 00:18:31.906192 containerd[1434]: time="2025-07-12T00:18:31.906149915Z" level=info msg="RemoveContainer for \"b99a30e831a3fc1c1e7faadaa3dbbc0f54edf95bd4fe2df39a42f2a4dcc59147\"" Jul 12 00:18:31.912898 containerd[1434]: time="2025-07-12T00:18:31.912849890Z" level=info msg="RemoveContainer for \"b99a30e831a3fc1c1e7faadaa3dbbc0f54edf95bd4fe2df39a42f2a4dcc59147\" returns successfully" Jul 12 00:18:31.915301 kubelet[2484]: I0712 00:18:31.915261 2484 scope.go:117] "RemoveContainer" containerID="b99a30e831a3fc1c1e7faadaa3dbbc0f54edf95bd4fe2df39a42f2a4dcc59147" Jul 12 00:18:31.915551 containerd[1434]: time="2025-07-12T00:18:31.915505642Z" level=error msg="ContainerStatus for \"b99a30e831a3fc1c1e7faadaa3dbbc0f54edf95bd4fe2df39a42f2a4dcc59147\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container 
\"b99a30e831a3fc1c1e7faadaa3dbbc0f54edf95bd4fe2df39a42f2a4dcc59147\": not found" Jul 12 00:18:31.915694 kubelet[2484]: E0712 00:18:31.915663 2484 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"b99a30e831a3fc1c1e7faadaa3dbbc0f54edf95bd4fe2df39a42f2a4dcc59147\": not found" containerID="b99a30e831a3fc1c1e7faadaa3dbbc0f54edf95bd4fe2df39a42f2a4dcc59147" Jul 12 00:18:31.915730 kubelet[2484]: I0712 00:18:31.915701 2484 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"b99a30e831a3fc1c1e7faadaa3dbbc0f54edf95bd4fe2df39a42f2a4dcc59147"} err="failed to get container status \"b99a30e831a3fc1c1e7faadaa3dbbc0f54edf95bd4fe2df39a42f2a4dcc59147\": rpc error: code = NotFound desc = an error occurred when try to find container \"b99a30e831a3fc1c1e7faadaa3dbbc0f54edf95bd4fe2df39a42f2a4dcc59147\": not found" Jul 12 00:18:32.153104 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b20922aa8c43468872fa6f4484bdeb748fe08b6b300bfb7475e7120db578eccb-rootfs.mount: Deactivated successfully. Jul 12 00:18:32.153198 systemd[1]: var-lib-kubelet-pods-0ecf27f2\x2db974\x2d47ef\x2d8ac0\x2d99087ea46031-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dwdmfl.mount: Deactivated successfully. Jul 12 00:18:32.153257 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e600e7cec7165f1439bc7cea42bd857ac3e161b2274d0be912c301feca6f3de7-rootfs.mount: Deactivated successfully. Jul 12 00:18:32.153307 systemd[1]: var-lib-kubelet-pods-19b20408\x2d1012\x2d434a\x2db121\x2da0c59391be23-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dsbn7v.mount: Deactivated successfully. Jul 12 00:18:32.153359 systemd[1]: var-lib-kubelet-pods-19b20408\x2d1012\x2d434a\x2db121\x2da0c59391be23-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. 
Jul 12 00:18:32.153409 systemd[1]: var-lib-kubelet-pods-19b20408\x2d1012\x2d434a\x2db121\x2da0c59391be23-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Jul 12 00:18:32.635430 kubelet[2484]: I0712 00:18:32.635379 2484 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0ecf27f2-b974-47ef-8ac0-99087ea46031" path="/var/lib/kubelet/pods/0ecf27f2-b974-47ef-8ac0-99087ea46031/volumes" Jul 12 00:18:32.635780 kubelet[2484]: I0712 00:18:32.635766 2484 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="19b20408-1012-434a-b121-a0c59391be23" path="/var/lib/kubelet/pods/19b20408-1012-434a-b121-a0c59391be23/volumes" Jul 12 00:18:33.109464 sshd[4101]: pam_unix(sshd:session): session closed for user core Jul 12 00:18:33.118982 systemd[1]: sshd@20-10.0.0.81:22-10.0.0.1:48406.service: Deactivated successfully. Jul 12 00:18:33.120979 systemd[1]: session-21.scope: Deactivated successfully. Jul 12 00:18:33.121234 systemd[1]: session-21.scope: Consumed 1.336s CPU time. Jul 12 00:18:33.122871 systemd-logind[1417]: Session 21 logged out. Waiting for processes to exit. Jul 12 00:18:33.132489 systemd[1]: Started sshd@21-10.0.0.81:22-10.0.0.1:59320.service - OpenSSH per-connection server daemon (10.0.0.1:59320). Jul 12 00:18:33.137965 systemd-logind[1417]: Removed session 21. Jul 12 00:18:33.173450 sshd[4264]: Accepted publickey for core from 10.0.0.1 port 59320 ssh2: RSA SHA256:OQQn8rJodojt66LnxxVdgk0ZA2OqvboSVT5RIqmX+EU Jul 12 00:18:33.175004 sshd[4264]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 12 00:18:33.179394 systemd-logind[1417]: New session 22 of user core. Jul 12 00:18:33.195308 systemd[1]: Started session-22.scope - Session 22 of User core. 
Jul 12 00:18:34.667296 sshd[4264]: pam_unix(sshd:session): session closed for user core Jul 12 00:18:34.675506 kubelet[2484]: E0712 00:18:34.672682 2484 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jul 12 00:18:34.677210 systemd[1]: sshd@21-10.0.0.81:22-10.0.0.1:59320.service: Deactivated successfully. Jul 12 00:18:34.679293 systemd[1]: session-22.scope: Deactivated successfully. Jul 12 00:18:34.679452 systemd[1]: session-22.scope: Consumed 1.365s CPU time. Jul 12 00:18:34.683423 systemd-logind[1417]: Session 22 logged out. Waiting for processes to exit. Jul 12 00:18:34.699542 systemd[1]: Started sshd@22-10.0.0.81:22-10.0.0.1:59336.service - OpenSSH per-connection server daemon (10.0.0.1:59336). Jul 12 00:18:34.703904 systemd-logind[1417]: Removed session 22. Jul 12 00:18:34.711417 systemd[1]: Created slice kubepods-burstable-pod4bd1b420_6735_4e00_9fce_6b2e09e3ad28.slice - libcontainer container kubepods-burstable-pod4bd1b420_6735_4e00_9fce_6b2e09e3ad28.slice. Jul 12 00:18:34.734063 sshd[4278]: Accepted publickey for core from 10.0.0.1 port 59336 ssh2: RSA SHA256:OQQn8rJodojt66LnxxVdgk0ZA2OqvboSVT5RIqmX+EU Jul 12 00:18:34.735392 sshd[4278]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 12 00:18:34.739396 systemd-logind[1417]: New session 23 of user core. Jul 12 00:18:34.750228 systemd[1]: Started session-23.scope - Session 23 of User core. Jul 12 00:18:34.799427 sshd[4278]: pam_unix(sshd:session): session closed for user core Jul 12 00:18:34.809659 systemd[1]: sshd@22-10.0.0.81:22-10.0.0.1:59336.service: Deactivated successfully. Jul 12 00:18:34.811507 systemd[1]: session-23.scope: Deactivated successfully. Jul 12 00:18:34.813486 systemd-logind[1417]: Session 23 logged out. Waiting for processes to exit. 
Jul 12 00:18:34.813862 kubelet[2484]: I0712 00:18:34.813580 2484 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4bd1b420-6735-4e00-9fce-6b2e09e3ad28-xtables-lock\") pod \"cilium-4wx9k\" (UID: \"4bd1b420-6735-4e00-9fce-6b2e09e3ad28\") " pod="kube-system/cilium-4wx9k"
Jul 12 00:18:34.813862 kubelet[2484]: I0712 00:18:34.813648 2484 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/4bd1b420-6735-4e00-9fce-6b2e09e3ad28-host-proc-sys-kernel\") pod \"cilium-4wx9k\" (UID: \"4bd1b420-6735-4e00-9fce-6b2e09e3ad28\") " pod="kube-system/cilium-4wx9k"
Jul 12 00:18:34.813862 kubelet[2484]: I0712 00:18:34.813666 2484 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/4bd1b420-6735-4e00-9fce-6b2e09e3ad28-hubble-tls\") pod \"cilium-4wx9k\" (UID: \"4bd1b420-6735-4e00-9fce-6b2e09e3ad28\") " pod="kube-system/cilium-4wx9k"
Jul 12 00:18:34.813862 kubelet[2484]: I0712 00:18:34.813706 2484 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sngxz\" (UniqueName: \"kubernetes.io/projected/4bd1b420-6735-4e00-9fce-6b2e09e3ad28-kube-api-access-sngxz\") pod \"cilium-4wx9k\" (UID: \"4bd1b420-6735-4e00-9fce-6b2e09e3ad28\") " pod="kube-system/cilium-4wx9k"
Jul 12 00:18:34.813862 kubelet[2484]: I0712 00:18:34.813722 2484 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/4bd1b420-6735-4e00-9fce-6b2e09e3ad28-cni-path\") pod \"cilium-4wx9k\" (UID: \"4bd1b420-6735-4e00-9fce-6b2e09e3ad28\") " pod="kube-system/cilium-4wx9k"
Jul 12 00:18:34.813862 kubelet[2484]: I0712 00:18:34.813737 2484 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/4bd1b420-6735-4e00-9fce-6b2e09e3ad28-etc-cni-netd\") pod \"cilium-4wx9k\" (UID: \"4bd1b420-6735-4e00-9fce-6b2e09e3ad28\") " pod="kube-system/cilium-4wx9k"
Jul 12 00:18:34.814024 kubelet[2484]: I0712 00:18:34.813763 2484 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/4bd1b420-6735-4e00-9fce-6b2e09e3ad28-cilium-ipsec-secrets\") pod \"cilium-4wx9k\" (UID: \"4bd1b420-6735-4e00-9fce-6b2e09e3ad28\") " pod="kube-system/cilium-4wx9k"
Jul 12 00:18:34.814024 kubelet[2484]: I0712 00:18:34.813778 2484 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/4bd1b420-6735-4e00-9fce-6b2e09e3ad28-host-proc-sys-net\") pod \"cilium-4wx9k\" (UID: \"4bd1b420-6735-4e00-9fce-6b2e09e3ad28\") " pod="kube-system/cilium-4wx9k"
Jul 12 00:18:34.814024 kubelet[2484]: I0712 00:18:34.813797 2484 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/4bd1b420-6735-4e00-9fce-6b2e09e3ad28-bpf-maps\") pod \"cilium-4wx9k\" (UID: \"4bd1b420-6735-4e00-9fce-6b2e09e3ad28\") " pod="kube-system/cilium-4wx9k"
Jul 12 00:18:34.814024 kubelet[2484]: I0712 00:18:34.813811 2484 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/4bd1b420-6735-4e00-9fce-6b2e09e3ad28-cilium-config-path\") pod \"cilium-4wx9k\" (UID: \"4bd1b420-6735-4e00-9fce-6b2e09e3ad28\") " pod="kube-system/cilium-4wx9k"
Jul 12 00:18:34.814024 kubelet[2484]: I0712 00:18:34.813825 2484 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/4bd1b420-6735-4e00-9fce-6b2e09e3ad28-cilium-run\") pod \"cilium-4wx9k\" (UID: \"4bd1b420-6735-4e00-9fce-6b2e09e3ad28\") " pod="kube-system/cilium-4wx9k"
Jul 12 00:18:34.814024 kubelet[2484]: I0712 00:18:34.813842 2484 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/4bd1b420-6735-4e00-9fce-6b2e09e3ad28-hostproc\") pod \"cilium-4wx9k\" (UID: \"4bd1b420-6735-4e00-9fce-6b2e09e3ad28\") " pod="kube-system/cilium-4wx9k"
Jul 12 00:18:34.814200 kubelet[2484]: I0712 00:18:34.813867 2484 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/4bd1b420-6735-4e00-9fce-6b2e09e3ad28-cilium-cgroup\") pod \"cilium-4wx9k\" (UID: \"4bd1b420-6735-4e00-9fce-6b2e09e3ad28\") " pod="kube-system/cilium-4wx9k"
Jul 12 00:18:34.814200 kubelet[2484]: I0712 00:18:34.813885 2484 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4bd1b420-6735-4e00-9fce-6b2e09e3ad28-lib-modules\") pod \"cilium-4wx9k\" (UID: \"4bd1b420-6735-4e00-9fce-6b2e09e3ad28\") " pod="kube-system/cilium-4wx9k"
Jul 12 00:18:34.814200 kubelet[2484]: I0712 00:18:34.813920 2484 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/4bd1b420-6735-4e00-9fce-6b2e09e3ad28-clustermesh-secrets\") pod \"cilium-4wx9k\" (UID: \"4bd1b420-6735-4e00-9fce-6b2e09e3ad28\") " pod="kube-system/cilium-4wx9k"
Jul 12 00:18:34.822786 systemd[1]: Started sshd@23-10.0.0.81:22-10.0.0.1:59340.service - OpenSSH per-connection server daemon (10.0.0.1:59340).
Jul 12 00:18:34.824672 systemd-logind[1417]: Removed session 23.
Jul 12 00:18:34.855905 sshd[4286]: Accepted publickey for core from 10.0.0.1 port 59340 ssh2: RSA SHA256:OQQn8rJodojt66LnxxVdgk0ZA2OqvboSVT5RIqmX+EU
Jul 12 00:18:34.857372 sshd[4286]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 12 00:18:34.861761 systemd-logind[1417]: New session 24 of user core.
Jul 12 00:18:34.873264 systemd[1]: Started session-24.scope - Session 24 of User core.
Jul 12 00:18:35.018934 kubelet[2484]: E0712 00:18:35.018705 2484 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 12 00:18:35.019715 containerd[1434]: time="2025-07-12T00:18:35.019673899Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-4wx9k,Uid:4bd1b420-6735-4e00-9fce-6b2e09e3ad28,Namespace:kube-system,Attempt:0,}"
Jul 12 00:18:35.036875 containerd[1434]: time="2025-07-12T00:18:35.036790717Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 12 00:18:35.037001 containerd[1434]: time="2025-07-12T00:18:35.036888352Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 12 00:18:35.037001 containerd[1434]: time="2025-07-12T00:18:35.036922990Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 12 00:18:35.037052 containerd[1434]: time="2025-07-12T00:18:35.037025505Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 12 00:18:35.057280 systemd[1]: Started cri-containerd-9330f6ce89a0687d77c8794a7af233a121c7a9f29495aa39b4601b236ca28ebc.scope - libcontainer container 9330f6ce89a0687d77c8794a7af233a121c7a9f29495aa39b4601b236ca28ebc.
Jul 12 00:18:35.078254 containerd[1434]: time="2025-07-12T00:18:35.078212231Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-4wx9k,Uid:4bd1b420-6735-4e00-9fce-6b2e09e3ad28,Namespace:kube-system,Attempt:0,} returns sandbox id \"9330f6ce89a0687d77c8794a7af233a121c7a9f29495aa39b4601b236ca28ebc\""
Jul 12 00:18:35.078998 kubelet[2484]: E0712 00:18:35.078976 2484 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 12 00:18:35.084170 containerd[1434]: time="2025-07-12T00:18:35.084103295Z" level=info msg="CreateContainer within sandbox \"9330f6ce89a0687d77c8794a7af233a121c7a9f29495aa39b4601b236ca28ebc\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Jul 12 00:18:35.093137 containerd[1434]: time="2025-07-12T00:18:35.093072283Z" level=info msg="CreateContainer within sandbox \"9330f6ce89a0687d77c8794a7af233a121c7a9f29495aa39b4601b236ca28ebc\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"dc1c72ef47dd145087578d0308cbda213a508908bc0794d063be1d02804636a1\""
Jul 12 00:18:35.093641 containerd[1434]: time="2025-07-12T00:18:35.093619695Z" level=info msg="StartContainer for \"dc1c72ef47dd145087578d0308cbda213a508908bc0794d063be1d02804636a1\""
Jul 12 00:18:35.135289 systemd[1]: Started cri-containerd-dc1c72ef47dd145087578d0308cbda213a508908bc0794d063be1d02804636a1.scope - libcontainer container dc1c72ef47dd145087578d0308cbda213a508908bc0794d063be1d02804636a1.
Jul 12 00:18:35.160984 containerd[1434]: time="2025-07-12T00:18:35.160925706Z" level=info msg="StartContainer for \"dc1c72ef47dd145087578d0308cbda213a508908bc0794d063be1d02804636a1\" returns successfully"
Jul 12 00:18:35.178061 systemd[1]: cri-containerd-dc1c72ef47dd145087578d0308cbda213a508908bc0794d063be1d02804636a1.scope: Deactivated successfully.
Jul 12 00:18:35.207512 containerd[1434]: time="2025-07-12T00:18:35.207447403Z" level=info msg="shim disconnected" id=dc1c72ef47dd145087578d0308cbda213a508908bc0794d063be1d02804636a1 namespace=k8s.io
Jul 12 00:18:35.207512 containerd[1434]: time="2025-07-12T00:18:35.207507680Z" level=warning msg="cleaning up after shim disconnected" id=dc1c72ef47dd145087578d0308cbda213a508908bc0794d063be1d02804636a1 namespace=k8s.io
Jul 12 00:18:35.207512 containerd[1434]: time="2025-07-12T00:18:35.207516440Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 12 00:18:35.853650 kubelet[2484]: E0712 00:18:35.853427 2484 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 12 00:18:35.862666 containerd[1434]: time="2025-07-12T00:18:35.862610852Z" level=info msg="CreateContainer within sandbox \"9330f6ce89a0687d77c8794a7af233a121c7a9f29495aa39b4601b236ca28ebc\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Jul 12 00:18:35.876312 containerd[1434]: time="2025-07-12T00:18:35.876257645Z" level=info msg="CreateContainer within sandbox \"9330f6ce89a0687d77c8794a7af233a121c7a9f29495aa39b4601b236ca28ebc\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"32219160b4b94922d1d7c4a54dbb9a94701d742303857da2dcfb4fe0de29955a\""
Jul 12 00:18:35.880426 containerd[1434]: time="2025-07-12T00:18:35.877764489Z" level=info msg="StartContainer for \"32219160b4b94922d1d7c4a54dbb9a94701d742303857da2dcfb4fe0de29955a\""
Jul 12 00:18:35.917457 systemd[1]: Started cri-containerd-32219160b4b94922d1d7c4a54dbb9a94701d742303857da2dcfb4fe0de29955a.scope - libcontainer container 32219160b4b94922d1d7c4a54dbb9a94701d742303857da2dcfb4fe0de29955a.
Jul 12 00:18:35.944402 containerd[1434]: time="2025-07-12T00:18:35.944348456Z" level=info msg="StartContainer for \"32219160b4b94922d1d7c4a54dbb9a94701d742303857da2dcfb4fe0de29955a\" returns successfully"
Jul 12 00:18:35.953726 systemd[1]: cri-containerd-32219160b4b94922d1d7c4a54dbb9a94701d742303857da2dcfb4fe0de29955a.scope: Deactivated successfully.
Jul 12 00:18:35.969544 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-32219160b4b94922d1d7c4a54dbb9a94701d742303857da2dcfb4fe0de29955a-rootfs.mount: Deactivated successfully.
Jul 12 00:18:35.974100 containerd[1434]: time="2025-07-12T00:18:35.974029401Z" level=info msg="shim disconnected" id=32219160b4b94922d1d7c4a54dbb9a94701d742303857da2dcfb4fe0de29955a namespace=k8s.io
Jul 12 00:18:35.974100 containerd[1434]: time="2025-07-12T00:18:35.974083159Z" level=warning msg="cleaning up after shim disconnected" id=32219160b4b94922d1d7c4a54dbb9a94701d742303857da2dcfb4fe0de29955a namespace=k8s.io
Jul 12 00:18:35.974100 containerd[1434]: time="2025-07-12T00:18:35.974101678Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 12 00:18:36.835624 kubelet[2484]: I0712 00:18:36.835556 2484 setters.go:618] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-07-12T00:18:36Z","lastTransitionTime":"2025-07-12T00:18:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Jul 12 00:18:36.855598 kubelet[2484]: E0712 00:18:36.855562 2484 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 12 00:18:36.866717 containerd[1434]: time="2025-07-12T00:18:36.866665661Z" level=info msg="CreateContainer within sandbox \"9330f6ce89a0687d77c8794a7af233a121c7a9f29495aa39b4601b236ca28ebc\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Jul 12 00:18:36.886803 containerd[1434]: time="2025-07-12T00:18:36.886757230Z" level=info msg="CreateContainer within sandbox \"9330f6ce89a0687d77c8794a7af233a121c7a9f29495aa39b4601b236ca28ebc\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"b34deee70dfc816cfcdd1eb914e6d41a779b7546df283237da7c040e162af440\""
Jul 12 00:18:36.887762 containerd[1434]: time="2025-07-12T00:18:36.887719384Z" level=info msg="StartContainer for \"b34deee70dfc816cfcdd1eb914e6d41a779b7546df283237da7c040e162af440\""
Jul 12 00:18:36.922278 systemd[1]: Started cri-containerd-b34deee70dfc816cfcdd1eb914e6d41a779b7546df283237da7c040e162af440.scope - libcontainer container b34deee70dfc816cfcdd1eb914e6d41a779b7546df283237da7c040e162af440.
Jul 12 00:18:36.946074 containerd[1434]: time="2025-07-12T00:18:36.945890270Z" level=info msg="StartContainer for \"b34deee70dfc816cfcdd1eb914e6d41a779b7546df283237da7c040e162af440\" returns successfully"
Jul 12 00:18:36.948155 systemd[1]: cri-containerd-b34deee70dfc816cfcdd1eb914e6d41a779b7546df283237da7c040e162af440.scope: Deactivated successfully.
Jul 12 00:18:36.966543 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b34deee70dfc816cfcdd1eb914e6d41a779b7546df283237da7c040e162af440-rootfs.mount: Deactivated successfully.
Jul 12 00:18:36.971985 containerd[1434]: time="2025-07-12T00:18:36.971925118Z" level=info msg="shim disconnected" id=b34deee70dfc816cfcdd1eb914e6d41a779b7546df283237da7c040e162af440 namespace=k8s.io
Jul 12 00:18:36.971985 containerd[1434]: time="2025-07-12T00:18:36.971980035Z" level=warning msg="cleaning up after shim disconnected" id=b34deee70dfc816cfcdd1eb914e6d41a779b7546df283237da7c040e162af440 namespace=k8s.io
Jul 12 00:18:36.971985 containerd[1434]: time="2025-07-12T00:18:36.971988755Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 12 00:18:37.866466 kubelet[2484]: E0712 00:18:37.865966 2484 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 12 00:18:37.882863 containerd[1434]: time="2025-07-12T00:18:37.882797171Z" level=info msg="CreateContainer within sandbox \"9330f6ce89a0687d77c8794a7af233a121c7a9f29495aa39b4601b236ca28ebc\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Jul 12 00:18:37.909163 containerd[1434]: time="2025-07-12T00:18:37.908987887Z" level=info msg="CreateContainer within sandbox \"9330f6ce89a0687d77c8794a7af233a121c7a9f29495aa39b4601b236ca28ebc\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"98717fde0ce4b3ca230770dfce0546b1ba6162cd532464b43259e29c0b64f7a4\""
Jul 12 00:18:37.909992 containerd[1434]: time="2025-07-12T00:18:37.909921326Z" level=info msg="StartContainer for \"98717fde0ce4b3ca230770dfce0546b1ba6162cd532464b43259e29c0b64f7a4\""
Jul 12 00:18:37.939853 systemd[1]: Started cri-containerd-98717fde0ce4b3ca230770dfce0546b1ba6162cd532464b43259e29c0b64f7a4.scope - libcontainer container 98717fde0ce4b3ca230770dfce0546b1ba6162cd532464b43259e29c0b64f7a4.
Jul 12 00:18:37.961460 systemd[1]: cri-containerd-98717fde0ce4b3ca230770dfce0546b1ba6162cd532464b43259e29c0b64f7a4.scope: Deactivated successfully.
Jul 12 00:18:37.963787 containerd[1434]: time="2025-07-12T00:18:37.963746655Z" level=info msg="StartContainer for \"98717fde0ce4b3ca230770dfce0546b1ba6162cd532464b43259e29c0b64f7a4\" returns successfully"
Jul 12 00:18:37.981021 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-98717fde0ce4b3ca230770dfce0546b1ba6162cd532464b43259e29c0b64f7a4-rootfs.mount: Deactivated successfully.
Jul 12 00:18:37.986437 containerd[1434]: time="2025-07-12T00:18:37.986353931Z" level=info msg="shim disconnected" id=98717fde0ce4b3ca230770dfce0546b1ba6162cd532464b43259e29c0b64f7a4 namespace=k8s.io
Jul 12 00:18:37.986437 containerd[1434]: time="2025-07-12T00:18:37.986418008Z" level=warning msg="cleaning up after shim disconnected" id=98717fde0ce4b3ca230770dfce0546b1ba6162cd532464b43259e29c0b64f7a4 namespace=k8s.io
Jul 12 00:18:37.986437 containerd[1434]: time="2025-07-12T00:18:37.986427127Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 12 00:18:37.996625 containerd[1434]: time="2025-07-12T00:18:37.996348767Z" level=warning msg="cleanup warnings time=\"2025-07-12T00:18:37Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Jul 12 00:18:38.871361 kubelet[2484]: E0712 00:18:38.871327 2484 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 12 00:18:38.885721 containerd[1434]: time="2025-07-12T00:18:38.885669125Z" level=info msg="CreateContainer within sandbox \"9330f6ce89a0687d77c8794a7af233a121c7a9f29495aa39b4601b236ca28ebc\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Jul 12 00:18:38.935722 containerd[1434]: time="2025-07-12T00:18:38.935668365Z" level=info msg="CreateContainer within sandbox \"9330f6ce89a0687d77c8794a7af233a121c7a9f29495aa39b4601b236ca28ebc\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"a02894b532b89e717362bb91f3ee4595c2bd8ace774d3c9de31aa6178d7e0446\""
Jul 12 00:18:38.936553 containerd[1434]: time="2025-07-12T00:18:38.936526410Z" level=info msg="StartContainer for \"a02894b532b89e717362bb91f3ee4595c2bd8ace774d3c9de31aa6178d7e0446\""
Jul 12 00:18:38.973354 systemd[1]: Started cri-containerd-a02894b532b89e717362bb91f3ee4595c2bd8ace774d3c9de31aa6178d7e0446.scope - libcontainer container a02894b532b89e717362bb91f3ee4595c2bd8ace774d3c9de31aa6178d7e0446.
Jul 12 00:18:39.002125 containerd[1434]: time="2025-07-12T00:18:39.002048487Z" level=info msg="StartContainer for \"a02894b532b89e717362bb91f3ee4595c2bd8ace774d3c9de31aa6178d7e0446\" returns successfully"
Jul 12 00:18:39.299135 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce))
Jul 12 00:18:39.878304 kubelet[2484]: E0712 00:18:39.878232 2484 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 12 00:18:39.912591 kubelet[2484]: I0712 00:18:39.910596 2484 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-4wx9k" podStartSLOduration=5.91058007 podStartE2EDuration="5.91058007s" podCreationTimestamp="2025-07-12 00:18:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-12 00:18:39.90027895 +0000 UTC m=+75.412244645" watchObservedRunningTime="2025-07-12 00:18:39.91058007 +0000 UTC m=+75.422545725"
Jul 12 00:18:41.019746 kubelet[2484]: E0712 00:18:41.019697 2484 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 12 00:18:42.287192 systemd-networkd[1373]: lxc_health: Link UP
Jul 12 00:18:42.307656 systemd-networkd[1373]: lxc_health: Gained carrier
Jul 12 00:18:43.021260 kubelet[2484]: E0712 00:18:43.020601 2484 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 12 00:18:43.750547 systemd-networkd[1373]: lxc_health: Gained IPv6LL
Jul 12 00:18:43.888690 kubelet[2484]: E0712 00:18:43.888659 2484 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 12 00:18:44.891017 kubelet[2484]: E0712 00:18:44.890622 2484 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 12 00:18:45.490104 systemd[1]: run-containerd-runc-k8s.io-a02894b532b89e717362bb91f3ee4595c2bd8ace774d3c9de31aa6178d7e0446-runc.V0n9iS.mount: Deactivated successfully.
Jul 12 00:18:47.589099 systemd[1]: run-containerd-runc-k8s.io-a02894b532b89e717362bb91f3ee4595c2bd8ace774d3c9de31aa6178d7e0446-runc.e6hd15.mount: Deactivated successfully.
Jul 12 00:18:47.630826 kubelet[2484]: E0712 00:18:47.630782 2484 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 12 00:18:47.644054 sshd[4286]: pam_unix(sshd:session): session closed for user core
Jul 12 00:18:47.647571 systemd[1]: sshd@23-10.0.0.81:22-10.0.0.1:59340.service: Deactivated successfully.
Jul 12 00:18:47.649239 systemd[1]: session-24.scope: Deactivated successfully.
Jul 12 00:18:47.649869 systemd-logind[1417]: Session 24 logged out. Waiting for processes to exit.
Jul 12 00:18:47.650798 systemd-logind[1417]: Removed session 24.