Jul 10 00:28:30.919345 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Jul 10 00:28:30.919366 kernel: Linux version 6.6.95-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Wed Jul 9 22:54:34 -00 2025
Jul 10 00:28:30.919376 kernel: KASLR enabled
Jul 10 00:28:30.919382 kernel: efi: EFI v2.7 by EDK II
Jul 10 00:28:30.919388 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdba86018 ACPI 2.0=0xd9710018 RNG=0xd971e498 MEMRESERVE=0xd9b43d18
Jul 10 00:28:30.919393 kernel: random: crng init done
Jul 10 00:28:30.919401 kernel: ACPI: Early table checksum verification disabled
Jul 10 00:28:30.919407 kernel: ACPI: RSDP 0x00000000D9710018 000024 (v02 BOCHS )
Jul 10 00:28:30.919413 kernel: ACPI: XSDT 0x00000000D971FE98 000064 (v01 BOCHS BXPC 00000001 01000013)
Jul 10 00:28:30.919420 kernel: ACPI: FACP 0x00000000D971FA98 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Jul 10 00:28:30.919426 kernel: ACPI: DSDT 0x00000000D9717518 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jul 10 00:28:30.919432 kernel: ACPI: APIC 0x00000000D971FC18 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Jul 10 00:28:30.919438 kernel: ACPI: PPTT 0x00000000D971D898 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jul 10 00:28:30.919444 kernel: ACPI: GTDT 0x00000000D971E818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jul 10 00:28:30.919452 kernel: ACPI: MCFG 0x00000000D971E918 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 10 00:28:30.919460 kernel: ACPI: SPCR 0x00000000D971FF98 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jul 10 00:28:30.919466 kernel: ACPI: DBG2 0x00000000D971E418 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Jul 10 00:28:30.919472 kernel: ACPI: IORT 0x00000000D971E718 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jul 10 00:28:30.919479 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
Jul 10 00:28:30.919485 kernel: NUMA: Failed to initialise from firmware
Jul 10 00:28:30.919492 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
Jul 10 00:28:30.919498 kernel: NUMA: NODE_DATA [mem 0xdc958800-0xdc95dfff]
Jul 10 00:28:30.919504 kernel: Zone ranges:
Jul 10 00:28:30.919511 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff]
Jul 10 00:28:30.919517 kernel: DMA32 empty
Jul 10 00:28:30.919525 kernel: Normal empty
Jul 10 00:28:30.919531 kernel: Movable zone start for each node
Jul 10 00:28:30.919537 kernel: Early memory node ranges
Jul 10 00:28:30.919544 kernel: node 0: [mem 0x0000000040000000-0x00000000d976ffff]
Jul 10 00:28:30.919550 kernel: node 0: [mem 0x00000000d9770000-0x00000000d9b3ffff]
Jul 10 00:28:30.919556 kernel: node 0: [mem 0x00000000d9b40000-0x00000000dce1ffff]
Jul 10 00:28:30.919563 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff]
Jul 10 00:28:30.919569 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff]
Jul 10 00:28:30.919575 kernel: node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff]
Jul 10 00:28:30.919581 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff]
Jul 10 00:28:30.919588 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
Jul 10 00:28:30.919594 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
Jul 10 00:28:30.919602 kernel: psci: probing for conduit method from ACPI.
Jul 10 00:28:30.919608 kernel: psci: PSCIv1.1 detected in firmware.
Jul 10 00:28:30.919614 kernel: psci: Using standard PSCI v0.2 function IDs
Jul 10 00:28:30.919623 kernel: psci: Trusted OS migration not required
Jul 10 00:28:30.919630 kernel: psci: SMC Calling Convention v1.1
Jul 10 00:28:30.919637 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Jul 10 00:28:30.919645 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976
Jul 10 00:28:30.919651 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096
Jul 10 00:28:30.919659 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
Jul 10 00:28:30.919673 kernel: Detected PIPT I-cache on CPU0
Jul 10 00:28:30.919680 kernel: CPU features: detected: GIC system register CPU interface
Jul 10 00:28:30.919686 kernel: CPU features: detected: Hardware dirty bit management
Jul 10 00:28:30.919693 kernel: CPU features: detected: Spectre-v4
Jul 10 00:28:30.919700 kernel: CPU features: detected: Spectre-BHB
Jul 10 00:28:30.919706 kernel: CPU features: kernel page table isolation forced ON by KASLR
Jul 10 00:28:30.919713 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Jul 10 00:28:30.919722 kernel: CPU features: detected: ARM erratum 1418040
Jul 10 00:28:30.919729 kernel: CPU features: detected: SSBS not fully self-synchronizing
Jul 10 00:28:30.919735 kernel: alternatives: applying boot alternatives
Jul 10 00:28:30.919743 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=2fac453dfc912247542078b0007b053dde4e47cc6ef808508492c36b6016a78f
Jul 10 00:28:30.919750 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jul 10 00:28:30.919757 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jul 10 00:28:30.919764 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jul 10 00:28:30.919770 kernel: Fallback order for Node 0: 0
Jul 10 00:28:30.919777 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024
Jul 10 00:28:30.919784 kernel: Policy zone: DMA
Jul 10 00:28:30.919790 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jul 10 00:28:30.919799 kernel: software IO TLB: area num 4.
Jul 10 00:28:30.919806 kernel: software IO TLB: mapped [mem 0x00000000d2e00000-0x00000000d6e00000] (64MB)
Jul 10 00:28:30.919813 kernel: Memory: 2386404K/2572288K available (10304K kernel code, 2186K rwdata, 8108K rodata, 39424K init, 897K bss, 185884K reserved, 0K cma-reserved)
Jul 10 00:28:30.919820 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Jul 10 00:28:30.919826 kernel: rcu: Preemptible hierarchical RCU implementation.
Jul 10 00:28:30.919833 kernel: rcu: RCU event tracing is enabled.
Jul 10 00:28:30.919840 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Jul 10 00:28:30.919847 kernel: Trampoline variant of Tasks RCU enabled.
Jul 10 00:28:30.919854 kernel: Tracing variant of Tasks RCU enabled.
Jul 10 00:28:30.919860 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jul 10 00:28:30.919867 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Jul 10 00:28:30.919874 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Jul 10 00:28:30.919882 kernel: GICv3: 256 SPIs implemented
Jul 10 00:28:30.919889 kernel: GICv3: 0 Extended SPIs implemented
Jul 10 00:28:30.919895 kernel: Root IRQ handler: gic_handle_irq
Jul 10 00:28:30.919902 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Jul 10 00:28:30.919909 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Jul 10 00:28:30.919915 kernel: ITS [mem 0x08080000-0x0809ffff]
Jul 10 00:28:30.919922 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400c0000 (indirect, esz 8, psz 64K, shr 1)
Jul 10 00:28:30.919929 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400d0000 (flat, esz 8, psz 64K, shr 1)
Jul 10 00:28:30.919936 kernel: GICv3: using LPI property table @0x00000000400f0000
Jul 10 00:28:30.919943 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040100000
Jul 10 00:28:30.919950 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jul 10 00:28:30.919958 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 10 00:28:30.919964 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Jul 10 00:28:30.919971 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Jul 10 00:28:30.919978 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Jul 10 00:28:30.919985 kernel: arm-pv: using stolen time PV
Jul 10 00:28:30.919992 kernel: Console: colour dummy device 80x25
Jul 10 00:28:30.919999 kernel: ACPI: Core revision 20230628
Jul 10 00:28:30.920006 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Jul 10 00:28:30.920013 kernel: pid_max: default: 32768 minimum: 301
Jul 10 00:28:30.920020 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jul 10 00:28:30.920028 kernel: landlock: Up and running.
Jul 10 00:28:30.920035 kernel: SELinux: Initializing.
Jul 10 00:28:30.920042 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jul 10 00:28:30.920049 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jul 10 00:28:30.920056 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jul 10 00:28:30.920063 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jul 10 00:28:30.920070 kernel: rcu: Hierarchical SRCU implementation.
Jul 10 00:28:30.920077 kernel: rcu: Max phase no-delay instances is 400.
Jul 10 00:28:30.920084 kernel: Platform MSI: ITS@0x8080000 domain created
Jul 10 00:28:30.920093 kernel: PCI/MSI: ITS@0x8080000 domain created
Jul 10 00:28:30.920100 kernel: Remapping and enabling EFI services.
Jul 10 00:28:30.920107 kernel: smp: Bringing up secondary CPUs ...
Jul 10 00:28:30.920114 kernel: Detected PIPT I-cache on CPU1
Jul 10 00:28:30.920121 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Jul 10 00:28:30.920129 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040110000
Jul 10 00:28:30.920136 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 10 00:28:30.920143 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Jul 10 00:28:30.920150 kernel: Detected PIPT I-cache on CPU2
Jul 10 00:28:30.920157 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
Jul 10 00:28:30.920166 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040120000
Jul 10 00:28:30.920173 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 10 00:28:30.920185 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
Jul 10 00:28:30.920194 kernel: Detected PIPT I-cache on CPU3
Jul 10 00:28:30.920222 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
Jul 10 00:28:30.920229 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040130000
Jul 10 00:28:30.920237 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 10 00:28:30.920244 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
Jul 10 00:28:30.920251 kernel: smp: Brought up 1 node, 4 CPUs
Jul 10 00:28:30.920260 kernel: SMP: Total of 4 processors activated.
Jul 10 00:28:30.920268 kernel: CPU features: detected: 32-bit EL0 Support
Jul 10 00:28:30.920275 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Jul 10 00:28:30.920282 kernel: CPU features: detected: Common not Private translations
Jul 10 00:28:30.920289 kernel: CPU features: detected: CRC32 instructions
Jul 10 00:28:30.920297 kernel: CPU features: detected: Enhanced Virtualization Traps
Jul 10 00:28:30.920304 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Jul 10 00:28:30.920311 kernel: CPU features: detected: LSE atomic instructions
Jul 10 00:28:30.920320 kernel: CPU features: detected: Privileged Access Never
Jul 10 00:28:30.920327 kernel: CPU features: detected: RAS Extension Support
Jul 10 00:28:30.920334 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Jul 10 00:28:30.920342 kernel: CPU: All CPU(s) started at EL1
Jul 10 00:28:30.920349 kernel: alternatives: applying system-wide alternatives
Jul 10 00:28:30.920357 kernel: devtmpfs: initialized
Jul 10 00:28:30.920364 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jul 10 00:28:30.920371 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Jul 10 00:28:30.920379 kernel: pinctrl core: initialized pinctrl subsystem
Jul 10 00:28:30.920387 kernel: SMBIOS 3.0.0 present.
Jul 10 00:28:30.920395 kernel: DMI: QEMU KVM Virtual Machine, BIOS edk2-20230524-3.fc38 05/24/2023
Jul 10 00:28:30.920402 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jul 10 00:28:30.920410 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Jul 10 00:28:30.920417 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Jul 10 00:28:30.920424 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Jul 10 00:28:30.920432 kernel: audit: initializing netlink subsys (disabled)
Jul 10 00:28:30.920439 kernel: audit: type=2000 audit(0.023:1): state=initialized audit_enabled=0 res=1
Jul 10 00:28:30.920446 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jul 10 00:28:30.920455 kernel: cpuidle: using governor menu
Jul 10 00:28:30.920463 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Jul 10 00:28:30.920470 kernel: ASID allocator initialised with 32768 entries
Jul 10 00:28:30.920477 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jul 10 00:28:30.920485 kernel: Serial: AMBA PL011 UART driver
Jul 10 00:28:30.920492 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Jul 10 00:28:30.920499 kernel: Modules: 0 pages in range for non-PLT usage
Jul 10 00:28:30.920507 kernel: Modules: 509008 pages in range for PLT usage
Jul 10 00:28:30.920514 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jul 10 00:28:30.920523 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Jul 10 00:28:30.920530 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Jul 10 00:28:30.920537 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Jul 10 00:28:30.920545 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jul 10 00:28:30.920552 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Jul 10 00:28:30.920559 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Jul 10 00:28:30.920567 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Jul 10 00:28:30.920574 kernel: ACPI: Added _OSI(Module Device)
Jul 10 00:28:30.920581 kernel: ACPI: Added _OSI(Processor Device)
Jul 10 00:28:30.920590 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jul 10 00:28:30.920597 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jul 10 00:28:30.920604 kernel: ACPI: Interpreter enabled
Jul 10 00:28:30.920612 kernel: ACPI: Using GIC for interrupt routing
Jul 10 00:28:30.920619 kernel: ACPI: MCFG table detected, 1 entries
Jul 10 00:28:30.920626 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Jul 10 00:28:30.920634 kernel: printk: console [ttyAMA0] enabled
Jul 10 00:28:30.920641 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jul 10 00:28:30.920788 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jul 10 00:28:30.920862 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Jul 10 00:28:30.920926 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Jul 10 00:28:30.920990 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Jul 10 00:28:30.921050 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Jul 10 00:28:30.921060 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Jul 10 00:28:30.921068 kernel: PCI host bridge to bus 0000:00
Jul 10 00:28:30.921135 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Jul 10 00:28:30.921196 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Jul 10 00:28:30.921277 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Jul 10 00:28:30.921334 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jul 10 00:28:30.921418 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000
Jul 10 00:28:30.921492 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00
Jul 10 00:28:30.921561 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f]
Jul 10 00:28:30.921630 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff]
Jul 10 00:28:30.921709 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref]
Jul 10 00:28:30.921778 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref]
Jul 10 00:28:30.921842 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff]
Jul 10 00:28:30.921910 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f]
Jul 10 00:28:30.921969 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Jul 10 00:28:30.922027 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Jul 10 00:28:30.922091 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Jul 10 00:28:30.922101 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Jul 10 00:28:30.922108 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Jul 10 00:28:30.922116 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Jul 10 00:28:30.922124 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Jul 10 00:28:30.922131 kernel: iommu: Default domain type: Translated
Jul 10 00:28:30.922139 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Jul 10 00:28:30.922146 kernel: efivars: Registered efivars operations
Jul 10 00:28:30.922155 kernel: vgaarb: loaded
Jul 10 00:28:30.922163 kernel: clocksource: Switched to clocksource arch_sys_counter
Jul 10 00:28:30.922170 kernel: VFS: Disk quotas dquot_6.6.0
Jul 10 00:28:30.922178 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jul 10 00:28:30.922186 kernel: pnp: PnP ACPI init
Jul 10 00:28:30.922289 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Jul 10 00:28:30.922301 kernel: pnp: PnP ACPI: found 1 devices
Jul 10 00:28:30.922309 kernel: NET: Registered PF_INET protocol family
Jul 10 00:28:30.922317 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jul 10 00:28:30.922327 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jul 10 00:28:30.922335 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jul 10 00:28:30.922342 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jul 10 00:28:30.922350 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jul 10 00:28:30.922357 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jul 10 00:28:30.922364 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jul 10 00:28:30.922372 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jul 10 00:28:30.922379 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jul 10 00:28:30.922388 kernel: PCI: CLS 0 bytes, default 64
Jul 10 00:28:30.922396 kernel: kvm [1]: HYP mode not available
Jul 10 00:28:30.922403 kernel: Initialise system trusted keyrings
Jul 10 00:28:30.922410 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jul 10 00:28:30.922417 kernel: Key type asymmetric registered
Jul 10 00:28:30.922425 kernel: Asymmetric key parser 'x509' registered
Jul 10 00:28:30.922432 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Jul 10 00:28:30.922440 kernel: io scheduler mq-deadline registered
Jul 10 00:28:30.922447 kernel: io scheduler kyber registered
Jul 10 00:28:30.922454 kernel: io scheduler bfq registered
Jul 10 00:28:30.922464 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Jul 10 00:28:30.922471 kernel: ACPI: button: Power Button [PWRB]
Jul 10 00:28:30.922479 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Jul 10 00:28:30.922547 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
Jul 10 00:28:30.922557 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jul 10 00:28:30.922565 kernel: thunder_xcv, ver 1.0
Jul 10 00:28:30.922572 kernel: thunder_bgx, ver 1.0
Jul 10 00:28:30.922580 kernel: nicpf, ver 1.0
Jul 10 00:28:30.922587 kernel: nicvf, ver 1.0
Jul 10 00:28:30.922663 kernel: rtc-efi rtc-efi.0: registered as rtc0
Jul 10 00:28:30.922738 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-07-10T00:28:30 UTC (1752107310)
Jul 10 00:28:30.922749 kernel: hid: raw HID events driver (C) Jiri Kosina
Jul 10 00:28:30.922757 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available
Jul 10 00:28:30.922764 kernel: watchdog: Delayed init of the lockup detector failed: -19
Jul 10 00:28:30.922772 kernel: watchdog: Hard watchdog permanently disabled
Jul 10 00:28:30.922779 kernel: NET: Registered PF_INET6 protocol family
Jul 10 00:28:30.922786 kernel: Segment Routing with IPv6
Jul 10 00:28:30.922797 kernel: In-situ OAM (IOAM) with IPv6
Jul 10 00:28:30.922804 kernel: NET: Registered PF_PACKET protocol family
Jul 10 00:28:30.922811 kernel: Key type dns_resolver registered
Jul 10 00:28:30.922819 kernel: registered taskstats version 1
Jul 10 00:28:30.922826 kernel: Loading compiled-in X.509 certificates
Jul 10 00:28:30.922834 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.95-flatcar: 9cbc45ab00feb4acb0fa362a962909c99fb6ef52'
Jul 10 00:28:30.922841 kernel: Key type .fscrypt registered
Jul 10 00:28:30.922848 kernel: Key type fscrypt-provisioning registered
Jul 10 00:28:30.922856 kernel: ima: No TPM chip found, activating TPM-bypass!
Jul 10 00:28:30.922864 kernel: ima: Allocated hash algorithm: sha1
Jul 10 00:28:30.922872 kernel: ima: No architecture policies found
Jul 10 00:28:30.922880 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Jul 10 00:28:30.922887 kernel: clk: Disabling unused clocks
Jul 10 00:28:30.922894 kernel: Freeing unused kernel memory: 39424K
Jul 10 00:28:30.922901 kernel: Run /init as init process
Jul 10 00:28:30.922909 kernel: with arguments:
Jul 10 00:28:30.922916 kernel: /init
Jul 10 00:28:30.922923 kernel: with environment:
Jul 10 00:28:30.922931 kernel: HOME=/
Jul 10 00:28:30.922939 kernel: TERM=linux
Jul 10 00:28:30.922946 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jul 10 00:28:30.922955 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jul 10 00:28:30.922964 systemd[1]: Detected virtualization kvm.
Jul 10 00:28:30.922972 systemd[1]: Detected architecture arm64.
Jul 10 00:28:30.922980 systemd[1]: Running in initrd.
Jul 10 00:28:30.922989 systemd[1]: No hostname configured, using default hostname.
Jul 10 00:28:30.922997 systemd[1]: Hostname set to .
Jul 10 00:28:30.923005 systemd[1]: Initializing machine ID from VM UUID.
Jul 10 00:28:30.923013 systemd[1]: Queued start job for default target initrd.target.
Jul 10 00:28:30.923021 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 10 00:28:30.923030 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 10 00:28:30.923038 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jul 10 00:28:30.923046 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jul 10 00:28:30.923056 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jul 10 00:28:30.923064 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jul 10 00:28:30.923073 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jul 10 00:28:30.923082 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jul 10 00:28:30.923090 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 10 00:28:30.923098 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jul 10 00:28:30.923106 systemd[1]: Reached target paths.target - Path Units.
Jul 10 00:28:30.923116 systemd[1]: Reached target slices.target - Slice Units.
Jul 10 00:28:30.923124 systemd[1]: Reached target swap.target - Swaps.
Jul 10 00:28:30.923132 systemd[1]: Reached target timers.target - Timer Units.
Jul 10 00:28:30.923139 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jul 10 00:28:30.923148 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jul 10 00:28:30.923156 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jul 10 00:28:30.923164 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jul 10 00:28:30.923173 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jul 10 00:28:30.923181 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jul 10 00:28:30.923191 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 10 00:28:30.923215 systemd[1]: Reached target sockets.target - Socket Units.
Jul 10 00:28:30.923224 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jul 10 00:28:30.923232 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jul 10 00:28:30.923240 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jul 10 00:28:30.923248 systemd[1]: Starting systemd-fsck-usr.service...
Jul 10 00:28:30.923256 systemd[1]: Starting systemd-journald.service - Journal Service...
Jul 10 00:28:30.923264 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jul 10 00:28:30.923274 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 10 00:28:30.923282 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jul 10 00:28:30.923290 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 10 00:28:30.923298 systemd[1]: Finished systemd-fsck-usr.service.
Jul 10 00:28:30.923307 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jul 10 00:28:30.923317 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 10 00:28:30.923325 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jul 10 00:28:30.923353 systemd-journald[239]: Collecting audit messages is disabled.
Jul 10 00:28:30.923374 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jul 10 00:28:30.923384 systemd-journald[239]: Journal started
Jul 10 00:28:30.923402 systemd-journald[239]: Runtime Journal (/run/log/journal/3435a7bc91bc43b283b08a1e8c5d0dff) is 5.9M, max 47.3M, 41.4M free.
Jul 10 00:28:30.923439 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jul 10 00:28:30.914528 systemd-modules-load[240]: Inserted module 'overlay'
Jul 10 00:28:30.928504 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jul 10 00:28:30.928545 systemd[1]: Started systemd-journald.service - Journal Service.
Jul 10 00:28:30.929262 systemd-modules-load[240]: Inserted module 'br_netfilter'
Jul 10 00:28:30.930246 kernel: Bridge firewalling registered
Jul 10 00:28:30.930779 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jul 10 00:28:30.933794 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jul 10 00:28:30.935891 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jul 10 00:28:30.938547 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 10 00:28:30.943117 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 10 00:28:30.945332 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jul 10 00:28:30.948692 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jul 10 00:28:30.950165 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jul 10 00:28:30.958452 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jul 10 00:28:30.966937 dracut-cmdline[273]: dracut-dracut-053
Jul 10 00:28:30.969384 dracut-cmdline[273]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=2fac453dfc912247542078b0007b053dde4e47cc6ef808508492c36b6016a78f
Jul 10 00:28:30.984973 systemd-resolved[277]: Positive Trust Anchors:
Jul 10 00:28:30.984989 systemd-resolved[277]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jul 10 00:28:30.985021 systemd-resolved[277]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jul 10 00:28:30.991069 systemd-resolved[277]: Defaulting to hostname 'linux'.
Jul 10 00:28:30.992382 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jul 10 00:28:30.993265 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jul 10 00:28:31.042229 kernel: SCSI subsystem initialized
Jul 10 00:28:31.046211 kernel: Loading iSCSI transport class v2.0-870.
Jul 10 00:28:31.054239 kernel: iscsi: registered transport (tcp)
Jul 10 00:28:31.068255 kernel: iscsi: registered transport (qla4xxx)
Jul 10 00:28:31.068285 kernel: QLogic iSCSI HBA Driver
Jul 10 00:28:31.113628 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jul 10 00:28:31.125376 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jul 10 00:28:31.140245 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jul 10 00:28:31.140292 kernel: device-mapper: uevent: version 1.0.3
Jul 10 00:28:31.141218 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Jul 10 00:28:31.194234 kernel: raid6: neonx8 gen() 15786 MB/s
Jul 10 00:28:31.208216 kernel: raid6: neonx4 gen() 15624 MB/s
Jul 10 00:28:31.225212 kernel: raid6: neonx2 gen() 13249 MB/s
Jul 10 00:28:31.242216 kernel: raid6: neonx1 gen() 10476 MB/s
Jul 10 00:28:31.259214 kernel: raid6: int64x8 gen() 6949 MB/s
Jul 10 00:28:31.276211 kernel: raid6: int64x4 gen() 7341 MB/s
Jul 10 00:28:31.293213 kernel: raid6: int64x2 gen() 6121 MB/s
Jul 10 00:28:31.310210 kernel: raid6: int64x1 gen() 5055 MB/s
Jul 10 00:28:31.310229 kernel: raid6: using algorithm neonx8 gen() 15786 MB/s
Jul 10 00:28:31.327220 kernel: raid6: .... xor() 11887 MB/s, rmw enabled
Jul 10 00:28:31.327235 kernel: raid6: using neon recovery algorithm
Jul 10 00:28:31.332215 kernel: xor: measuring software checksum speed
Jul 10 00:28:31.332233 kernel: 8regs : 19272 MB/sec
Jul 10 00:28:31.333679 kernel: 32regs : 17665 MB/sec
Jul 10 00:28:31.333693 kernel: arm64_neon : 27123 MB/sec
Jul 10 00:28:31.333702 kernel: xor: using function: arm64_neon (27123 MB/sec)
Jul 10 00:28:31.383375 kernel: Btrfs loaded, zoned=no, fsverity=no
Jul 10 00:28:31.394089 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jul 10 00:28:31.399367 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 10 00:28:31.411737 systemd-udevd[460]: Using default interface naming scheme 'v255'.
Jul 10 00:28:31.414839 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 10 00:28:31.417128 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jul 10 00:28:31.431868 dracut-pre-trigger[467]: rd.md=0: removing MD RAID activation
Jul 10 00:28:31.458782 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jul 10 00:28:31.471347 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jul 10 00:28:31.510957 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 10 00:28:31.516444 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jul 10 00:28:31.531051 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jul 10 00:28:31.531972 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jul 10 00:28:31.534232 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 10 00:28:31.535985 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jul 10 00:28:31.542366 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jul 10 00:28:31.553749 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jul 10 00:28:31.565598 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jul 10 00:28:31.568173 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues
Jul 10 00:28:31.568381 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Jul 10 00:28:31.565732 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 10 00:28:31.568236 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jul 10 00:28:31.573362 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jul 10 00:28:31.573380 kernel: GPT:9289727 != 19775487
Jul 10 00:28:31.573390 kernel: GPT:Alternate GPT header not at the end of the disk.
Jul 10 00:28:31.573399 kernel: GPT:9289727 != 19775487
Jul 10 00:28:31.569303 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jul 10 00:28:31.575935 kernel: GPT: Use GNU Parted to correct GPT errors.
Jul 10 00:28:31.575954 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jul 10 00:28:31.569434 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jul 10 00:28:31.571969 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jul 10 00:28:31.584994 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 10 00:28:31.593250 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by (udev-worker) (514)
Jul 10 00:28:31.593295 kernel: BTRFS: device fsid e18a5201-bc0c-484b-ba1b-be3c0a720c32 devid 1 transid 40 /dev/vda3 scanned by (udev-worker) (518)
Jul 10 00:28:31.597793 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Jul 10 00:28:31.599912 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 10 00:28:31.610551 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Jul 10 00:28:31.617387 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jul 10 00:28:31.620929 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Jul 10 00:28:31.621812 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Jul 10 00:28:31.640693 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jul 10 00:28:31.642229 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jul 10 00:28:31.646268 disk-uuid[549]: Primary Header is updated.
Jul 10 00:28:31.646268 disk-uuid[549]: Secondary Entries is updated.
Jul 10 00:28:31.646268 disk-uuid[549]: Secondary Header is updated.
Jul 10 00:28:31.649226 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jul 10 00:28:31.665644 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 10 00:28:32.670234 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jul 10 00:28:32.670287 disk-uuid[551]: The operation has completed successfully.
Jul 10 00:28:32.694855 systemd[1]: disk-uuid.service: Deactivated successfully.
Jul 10 00:28:32.694974 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jul 10 00:28:32.719386 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jul 10 00:28:32.722558 sh[571]: Success
Jul 10 00:28:32.737801 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Jul 10 00:28:32.776190 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jul 10 00:28:32.789674 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jul 10 00:28:32.791223 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jul 10 00:28:32.808252 kernel: BTRFS info (device dm-0): first mount of filesystem e18a5201-bc0c-484b-ba1b-be3c0a720c32
Jul 10 00:28:32.808302 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Jul 10 00:28:32.808313 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Jul 10 00:28:32.809558 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jul 10 00:28:32.809572 kernel: BTRFS info (device dm-0): using free space tree
Jul 10 00:28:32.814829 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jul 10 00:28:32.816272 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jul 10 00:28:32.827376 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jul 10 00:28:32.830576 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jul 10 00:28:32.841774 kernel: BTRFS info (device vda6): first mount of filesystem 8ce7827a-be35-4e5a-9c5c-f9bfd6370ac0
Jul 10 00:28:32.841817 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Jul 10 00:28:32.841828 kernel: BTRFS info (device vda6): using free space tree
Jul 10 00:28:32.844255 kernel: BTRFS info (device vda6): auto enabling async discard
Jul 10 00:28:32.853455 systemd[1]: mnt-oem.mount: Deactivated successfully.
Jul 10 00:28:32.854815 kernel: BTRFS info (device vda6): last unmount of filesystem 8ce7827a-be35-4e5a-9c5c-f9bfd6370ac0
Jul 10 00:28:32.862936 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jul 10 00:28:32.870385 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jul 10 00:28:32.933245 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jul 10 00:28:32.945837 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jul 10 00:28:32.973631 systemd-networkd[761]: lo: Link UP
Jul 10 00:28:32.973642 systemd-networkd[761]: lo: Gained carrier
Jul 10 00:28:32.974348 systemd-networkd[761]: Enumeration completed
Jul 10 00:28:32.974784 ignition[676]: Ignition 2.19.0
Jul 10 00:28:32.974637 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jul 10 00:28:32.974791 ignition[676]: Stage: fetch-offline
Jul 10 00:28:32.974814 systemd-networkd[761]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 10 00:28:32.974825 ignition[676]: no configs at "/usr/lib/ignition/base.d"
Jul 10 00:28:32.974817 systemd-networkd[761]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jul 10 00:28:32.974833 ignition[676]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 10 00:28:32.975564 systemd-networkd[761]: eth0: Link UP
Jul 10 00:28:32.975022 ignition[676]: parsed url from cmdline: ""
Jul 10 00:28:32.975567 systemd-networkd[761]: eth0: Gained carrier
Jul 10 00:28:32.975025 ignition[676]: no config URL provided
Jul 10 00:28:32.975573 systemd-networkd[761]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 10 00:28:32.975030 ignition[676]: reading system config file "/usr/lib/ignition/user.ign"
Jul 10 00:28:32.975930 systemd[1]: Reached target network.target - Network.
Jul 10 00:28:32.975037 ignition[676]: no config at "/usr/lib/ignition/user.ign"
Jul 10 00:28:32.975059 ignition[676]: op(1): [started] loading QEMU firmware config module
Jul 10 00:28:32.975064 ignition[676]: op(1): executing: "modprobe" "qemu_fw_cfg"
Jul 10 00:28:32.988981 ignition[676]: op(1): [finished] loading QEMU firmware config module
Jul 10 00:28:32.989002 ignition[676]: QEMU firmware config was not found. Ignoring...
Jul 10 00:28:33.007328 systemd-networkd[761]: eth0: DHCPv4 address 10.0.0.64/16, gateway 10.0.0.1 acquired from 10.0.0.1
Jul 10 00:28:33.031079 ignition[676]: parsing config with SHA512: a981a7c11ce1eaa2eeda6939847786b4705e594bd9ee786ff68b21e32d14f7953fa4f3ea6adc7c909ef64b251ae078fb52565c6b3fe7de428245aba4f7e5edde
Jul 10 00:28:33.035334 unknown[676]: fetched base config from "system"
Jul 10 00:28:33.035346 unknown[676]: fetched user config from "qemu"
Jul 10 00:28:33.035835 ignition[676]: fetch-offline: fetch-offline passed
Jul 10 00:28:33.035902 ignition[676]: Ignition finished successfully
Jul 10 00:28:33.038263 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jul 10 00:28:33.039308 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Jul 10 00:28:33.045384 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jul 10 00:28:33.056245 ignition[769]: Ignition 2.19.0
Jul 10 00:28:33.056257 ignition[769]: Stage: kargs
Jul 10 00:28:33.056452 ignition[769]: no configs at "/usr/lib/ignition/base.d"
Jul 10 00:28:33.056462 ignition[769]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 10 00:28:33.057432 ignition[769]: kargs: kargs passed
Jul 10 00:28:33.057482 ignition[769]: Ignition finished successfully
Jul 10 00:28:33.061055 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jul 10 00:28:33.071424 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jul 10 00:28:33.081954 ignition[777]: Ignition 2.19.0
Jul 10 00:28:33.081966 ignition[777]: Stage: disks
Jul 10 00:28:33.082126 ignition[777]: no configs at "/usr/lib/ignition/base.d"
Jul 10 00:28:33.082135 ignition[777]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 10 00:28:33.083156 ignition[777]: disks: disks passed
Jul 10 00:28:33.083220 ignition[777]: Ignition finished successfully
Jul 10 00:28:33.089308 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jul 10 00:28:33.090418 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jul 10 00:28:33.091546 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jul 10 00:28:33.093122 systemd[1]: Reached target local-fs.target - Local File Systems.
Jul 10 00:28:33.094618 systemd[1]: Reached target sysinit.target - System Initialization.
Jul 10 00:28:33.096036 systemd[1]: Reached target basic.target - Basic System.
Jul 10 00:28:33.109410 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jul 10 00:28:33.121123 systemd-fsck[788]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Jul 10 00:28:33.126889 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jul 10 00:28:33.140338 systemd[1]: Mounting sysroot.mount - /sysroot...
Jul 10 00:28:33.192223 kernel: EXT4-fs (vda9): mounted filesystem c566fdd5-af6f-4008-858c-a2aed765f9b4 r/w with ordered data mode. Quota mode: none.
Jul 10 00:28:33.192595 systemd[1]: Mounted sysroot.mount - /sysroot.
Jul 10 00:28:33.193635 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jul 10 00:28:33.205290 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jul 10 00:28:33.207087 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jul 10 00:28:33.207878 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Jul 10 00:28:33.207919 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jul 10 00:28:33.207941 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jul 10 00:28:33.213766 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jul 10 00:28:33.215829 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jul 10 00:28:33.221124 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by mount (796)
Jul 10 00:28:33.221160 kernel: BTRFS info (device vda6): first mount of filesystem 8ce7827a-be35-4e5a-9c5c-f9bfd6370ac0
Jul 10 00:28:33.221171 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Jul 10 00:28:33.221614 kernel: BTRFS info (device vda6): using free space tree
Jul 10 00:28:33.225211 kernel: BTRFS info (device vda6): auto enabling async discard
Jul 10 00:28:33.226495 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jul 10 00:28:33.271779 initrd-setup-root[821]: cut: /sysroot/etc/passwd: No such file or directory
Jul 10 00:28:33.275287 initrd-setup-root[828]: cut: /sysroot/etc/group: No such file or directory
Jul 10 00:28:33.278881 initrd-setup-root[835]: cut: /sysroot/etc/shadow: No such file or directory
Jul 10 00:28:33.282728 initrd-setup-root[842]: cut: /sysroot/etc/gshadow: No such file or directory
Jul 10 00:28:33.352858 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jul 10 00:28:33.365405 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jul 10 00:28:33.366970 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jul 10 00:28:33.372244 kernel: BTRFS info (device vda6): last unmount of filesystem 8ce7827a-be35-4e5a-9c5c-f9bfd6370ac0
Jul 10 00:28:33.390687 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jul 10 00:28:33.399317 ignition[910]: INFO : Ignition 2.19.0
Jul 10 00:28:33.399317 ignition[910]: INFO : Stage: mount
Jul 10 00:28:33.400625 ignition[910]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 10 00:28:33.400625 ignition[910]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 10 00:28:33.400625 ignition[910]: INFO : mount: mount passed
Jul 10 00:28:33.400625 ignition[910]: INFO : Ignition finished successfully
Jul 10 00:28:33.403861 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jul 10 00:28:33.412322 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jul 10 00:28:33.807299 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jul 10 00:28:33.821382 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jul 10 00:28:33.827970 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by mount (923)
Jul 10 00:28:33.828008 kernel: BTRFS info (device vda6): first mount of filesystem 8ce7827a-be35-4e5a-9c5c-f9bfd6370ac0
Jul 10 00:28:33.828019 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Jul 10 00:28:33.828640 kernel: BTRFS info (device vda6): using free space tree
Jul 10 00:28:33.831218 kernel: BTRFS info (device vda6): auto enabling async discard
Jul 10 00:28:33.832225 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jul 10 00:28:33.849722 ignition[940]: INFO : Ignition 2.19.0
Jul 10 00:28:33.849722 ignition[940]: INFO : Stage: files
Jul 10 00:28:33.850970 ignition[940]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 10 00:28:33.850970 ignition[940]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 10 00:28:33.852708 ignition[940]: DEBUG : files: compiled without relabeling support, skipping
Jul 10 00:28:33.852708 ignition[940]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jul 10 00:28:33.852708 ignition[940]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jul 10 00:28:33.855758 ignition[940]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jul 10 00:28:33.855758 ignition[940]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jul 10 00:28:33.855758 ignition[940]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jul 10 00:28:33.855758 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1"
Jul 10 00:28:33.854610 unknown[940]: wrote ssh authorized keys file for user: core
Jul 10 00:28:33.860585 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1"
Jul 10 00:28:33.860585 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Jul 10 00:28:33.860585 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1
Jul 10 00:28:33.982767 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Jul 10 00:28:34.097338 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Jul 10 00:28:34.097338 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Jul 10 00:28:34.100243 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1
Jul 10 00:28:34.453522 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET result: OK
Jul 10 00:28:34.602375 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Jul 10 00:28:34.603769 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/install.sh"
Jul 10 00:28:34.603769 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/install.sh"
Jul 10 00:28:34.603769 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nginx.yaml"
Jul 10 00:28:34.603769 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jul 10 00:28:34.603769 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jul 10 00:28:34.603769 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jul 10 00:28:34.603769 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jul 10 00:28:34.603769 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jul 10 00:28:34.603769 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jul 10 00:28:34.603769 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jul 10 00:28:34.603769 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw"
Jul 10 00:28:34.603769 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw"
Jul 10 00:28:34.603769 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(c): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw"
Jul 10 00:28:34.603769 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(c): GET https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-arm64.raw: attempt #1
Jul 10 00:28:35.018637 systemd-networkd[761]: eth0: Gained IPv6LL
Jul 10 00:28:35.095851 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(c): GET result: OK
Jul 10 00:28:35.390841 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(c): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw"
Jul 10 00:28:35.390841 ignition[940]: INFO : files: op(d): [started] processing unit "containerd.service"
Jul 10 00:28:35.393727 ignition[940]: INFO : files: op(d): op(e): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Jul 10 00:28:35.393727 ignition[940]: INFO : files: op(d): op(e): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Jul 10 00:28:35.393727 ignition[940]: INFO : files: op(d): [finished] processing unit "containerd.service"
Jul 10 00:28:35.393727 ignition[940]: INFO : files: op(f): [started] processing unit "prepare-helm.service"
Jul 10 00:28:35.393727 ignition[940]: INFO : files: op(f): op(10): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jul 10 00:28:35.393727 ignition[940]: INFO : files: op(f): op(10): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jul 10 00:28:35.393727 ignition[940]: INFO : files: op(f): [finished] processing unit "prepare-helm.service"
Jul 10 00:28:35.393727 ignition[940]: INFO : files: op(11): [started] processing unit "coreos-metadata.service"
Jul 10 00:28:35.393727 ignition[940]: INFO : files: op(11): op(12): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Jul 10 00:28:35.393727 ignition[940]: INFO : files: op(11): op(12): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Jul 10 00:28:35.393727 ignition[940]: INFO : files: op(11): [finished] processing unit "coreos-metadata.service"
Jul 10 00:28:35.393727 ignition[940]: INFO : files: op(13): [started] setting preset to disabled for "coreos-metadata.service"
Jul 10 00:28:35.426245 ignition[940]: INFO : files: op(13): op(14): [started] removing enablement symlink(s) for "coreos-metadata.service"
Jul 10 00:28:35.430211 ignition[940]: INFO : files: op(13): op(14): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Jul 10 00:28:35.431303 ignition[940]: INFO : files: op(13): [finished] setting preset to disabled for "coreos-metadata.service"
Jul 10 00:28:35.431303 ignition[940]: INFO : files: op(15): [started] setting preset to enabled for "prepare-helm.service"
Jul 10 00:28:35.431303 ignition[940]: INFO : files: op(15): [finished] setting preset to enabled for "prepare-helm.service"
Jul 10 00:28:35.431303 ignition[940]: INFO : files: createResultFile: createFiles: op(16): [started] writing file "/sysroot/etc/.ignition-result.json"
Jul 10 00:28:35.431303 ignition[940]: INFO : files: createResultFile: createFiles: op(16): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jul 10 00:28:35.431303 ignition[940]: INFO : files: files passed
Jul 10 00:28:35.431303 ignition[940]: INFO : Ignition finished successfully
Jul 10 00:28:35.432460 systemd[1]: Finished ignition-files.service - Ignition (files).
Jul 10 00:28:35.444379 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jul 10 00:28:35.448388 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jul 10 00:28:35.450522 systemd[1]: ignition-quench.service: Deactivated successfully.
Jul 10 00:28:35.450607 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jul 10 00:28:35.456090 initrd-setup-root-after-ignition[969]: grep: /sysroot/oem/oem-release: No such file or directory
Jul 10 00:28:35.459323 initrd-setup-root-after-ignition[971]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jul 10 00:28:35.459323 initrd-setup-root-after-ignition[971]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jul 10 00:28:35.462334 initrd-setup-root-after-ignition[975]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jul 10 00:28:35.462398 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jul 10 00:28:35.465242 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jul 10 00:28:35.473421 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jul 10 00:28:35.495013 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jul 10 00:28:35.495135 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jul 10 00:28:35.496786 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jul 10 00:28:35.498150 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jul 10 00:28:35.499510 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jul 10 00:28:35.500503 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jul 10 00:28:35.515398 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jul 10 00:28:35.531528 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jul 10 00:28:35.541179 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jul 10 00:28:35.542473 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 10 00:28:35.544068 systemd[1]: Stopped target timers.target - Timer Units.
Jul 10 00:28:35.545399 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jul 10 00:28:35.545527 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jul 10 00:28:35.547311 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jul 10 00:28:35.548727 systemd[1]: Stopped target basic.target - Basic System.
Jul 10 00:28:35.549902 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jul 10 00:28:35.551173 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jul 10 00:28:35.552669 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jul 10 00:28:35.554129 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jul 10 00:28:35.555448 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jul 10 00:28:35.556867 systemd[1]: Stopped target sysinit.target - System Initialization.
Jul 10 00:28:35.558305 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jul 10 00:28:35.559605 systemd[1]: Stopped target swap.target - Swaps.
Jul 10 00:28:35.560735 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jul 10 00:28:35.560864 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jul 10 00:28:35.562576 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jul 10 00:28:35.564081 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 10 00:28:35.565472 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jul 10 00:28:35.565616 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 10 00:28:35.567028 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jul 10 00:28:35.567237 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jul 10 00:28:35.569151 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jul 10 00:28:35.569279 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jul 10 00:28:35.570739 systemd[1]: Stopped target paths.target - Path Units.
Jul 10 00:28:35.571861 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jul 10 00:28:35.575307 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 10 00:28:35.576266 systemd[1]: Stopped target slices.target - Slice Units.
Jul 10 00:28:35.577835 systemd[1]: Stopped target sockets.target - Socket Units.
Jul 10 00:28:35.578993 systemd[1]: iscsid.socket: Deactivated successfully.
Jul 10 00:28:35.579083 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jul 10 00:28:35.580182 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jul 10 00:28:35.580275 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jul 10 00:28:35.581436 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jul 10 00:28:35.581542 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jul 10 00:28:35.582841 systemd[1]: ignition-files.service: Deactivated successfully.
Jul 10 00:28:35.582939 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jul 10 00:28:35.593508 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jul 10 00:28:35.594187 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jul 10 00:28:35.594340 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 10 00:28:35.599442 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jul 10 00:28:35.600138 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jul 10 00:28:35.600288 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 10 00:28:35.601626 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jul 10 00:28:35.601784 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jul 10 00:28:35.607033 ignition[996]: INFO : Ignition 2.19.0
Jul 10 00:28:35.607033 ignition[996]: INFO : Stage: umount
Jul 10 00:28:35.607033 ignition[996]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 10 00:28:35.607033 ignition[996]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 10 00:28:35.609835 ignition[996]: INFO : umount: umount passed
Jul 10 00:28:35.609835 ignition[996]: INFO : Ignition finished successfully
Jul 10 00:28:35.609436 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jul 10 00:28:35.609522 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jul 10 00:28:35.611190 systemd[1]: ignition-mount.service: Deactivated successfully.
Jul 10 00:28:35.611379 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jul 10 00:28:35.614969 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jul 10 00:28:35.615404 systemd[1]: Stopped target network.target - Network.
Jul 10 00:28:35.618746 systemd[1]: ignition-disks.service: Deactivated successfully.
Jul 10 00:28:35.618805 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jul 10 00:28:35.620156 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jul 10 00:28:35.620219 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jul 10 00:28:35.621572 systemd[1]: ignition-setup.service: Deactivated successfully.
Jul 10 00:28:35.621615 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Jul 10 00:28:35.622798 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Jul 10 00:28:35.622840 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Jul 10 00:28:35.626113 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Jul 10 00:28:35.627322 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Jul 10 00:28:35.629006 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jul 10 00:28:35.629097 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Jul 10 00:28:35.630381 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jul 10 00:28:35.630465 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Jul 10 00:28:35.636279 systemd-networkd[761]: eth0: DHCPv6 lease lost
Jul 10 00:28:35.637616 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jul 10 00:28:35.638834 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Jul 10 00:28:35.640562 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jul 10 00:28:35.641926 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Jul 10 00:28:35.643969 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jul 10 00:28:35.644023 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Jul 10 00:28:35.653341 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Jul 10 00:28:35.653995 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jul 10 00:28:35.654048 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jul 10 00:28:35.655518 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jul 10 00:28:35.655556 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jul 10 00:28:35.657154 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jul 10 00:28:35.657220 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Jul 10 00:28:35.659309 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jul 10 00:28:35.659355 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Jul 10 00:28:35.663972 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 10 00:28:35.686462 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jul 10 00:28:35.686615 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 10 00:28:35.688507 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jul 10 00:28:35.688546 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Jul 10 00:28:35.690069 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jul 10 00:28:35.690101 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 10 00:28:35.691690 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jul 10 00:28:35.691738 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Jul 10 00:28:35.693978 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jul 10 00:28:35.694024 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Jul 10 00:28:35.696318 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jul 10 00:28:35.696365 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 10 00:28:35.708401 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Jul 10 00:28:35.709229 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jul 10 00:28:35.709285 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 10 00:28:35.711019 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jul 10 00:28:35.711064 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jul 10 00:28:35.713816 systemd[1]: network-cleanup.service: Deactivated successfully.
Jul 10 00:28:35.713927 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Jul 10 00:28:35.715308 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jul 10 00:28:35.715387 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Jul 10 00:28:35.718124 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Jul 10 00:28:35.720877 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Jul 10 00:28:35.731576 systemd[1]: Switching root.
Jul 10 00:28:35.765817 systemd-journald[239]: Journal stopped
Jul 10 00:28:36.623509 systemd-journald[239]: Received SIGTERM from PID 1 (systemd).
Jul 10 00:28:36.623575 kernel: SELinux: policy capability network_peer_controls=1
Jul 10 00:28:36.623587 kernel: SELinux: policy capability open_perms=1
Jul 10 00:28:36.623597 kernel: SELinux: policy capability extended_socket_class=1
Jul 10 00:28:36.623607 kernel: SELinux: policy capability always_check_network=0
Jul 10 00:28:36.623617 kernel: SELinux: policy capability cgroup_seclabel=1
Jul 10 00:28:36.623626 kernel: SELinux: policy capability nnp_nosuid_transition=1
Jul 10 00:28:36.623636 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Jul 10 00:28:36.623654 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Jul 10 00:28:36.623674 kernel: audit: type=1403 audit(1752107316.069:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Jul 10 00:28:36.623685 systemd[1]: Successfully loaded SELinux policy in 39.965ms.
Jul 10 00:28:36.623705 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 9.507ms.
Jul 10 00:28:36.623717 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jul 10 00:28:36.623728 systemd[1]: Detected virtualization kvm.
Jul 10 00:28:36.623738 systemd[1]: Detected architecture arm64.
Jul 10 00:28:36.623749 systemd[1]: Detected first boot.
Jul 10 00:28:36.623759 systemd[1]: Initializing machine ID from VM UUID.
Jul 10 00:28:36.623769 zram_generator::config[1058]: No configuration found.
Jul 10 00:28:36.623783 systemd[1]: Populated /etc with preset unit settings.
Jul 10 00:28:36.623795 systemd[1]: Queued start job for default target multi-user.target.
Jul 10 00:28:36.623805 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Jul 10 00:28:36.623817 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Jul 10 00:28:36.623827 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Jul 10 00:28:36.623842 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Jul 10 00:28:36.623852 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Jul 10 00:28:36.623863 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Jul 10 00:28:36.623876 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Jul 10 00:28:36.623886 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Jul 10 00:28:36.623897 systemd[1]: Created slice user.slice - User and Session Slice.
Jul 10 00:28:36.623908 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 10 00:28:36.623918 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 10 00:28:36.623929 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Jul 10 00:28:36.623940 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Jul 10 00:28:36.623950 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Jul 10 00:28:36.623962 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jul 10 00:28:36.623973 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0...
Jul 10 00:28:36.623983 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 10 00:28:36.623994 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Jul 10 00:28:36.624004 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 10 00:28:36.624015 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jul 10 00:28:36.624026 systemd[1]: Reached target slices.target - Slice Units.
Jul 10 00:28:36.624037 systemd[1]: Reached target swap.target - Swaps.
Jul 10 00:28:36.624048 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Jul 10 00:28:36.624060 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Jul 10 00:28:36.624070 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jul 10 00:28:36.624081 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jul 10 00:28:36.624091 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jul 10 00:28:36.624102 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jul 10 00:28:36.624113 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 10 00:28:36.624123 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Jul 10 00:28:36.624134 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Jul 10 00:28:36.624145 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Jul 10 00:28:36.624156 systemd[1]: Mounting media.mount - External Media Directory...
Jul 10 00:28:36.624167 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Jul 10 00:28:36.624177 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Jul 10 00:28:36.624187 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Jul 10 00:28:36.624209 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Jul 10 00:28:36.624228 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 10 00:28:36.624239 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jul 10 00:28:36.624250 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Jul 10 00:28:36.624261 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jul 10 00:28:36.624275 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jul 10 00:28:36.624285 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jul 10 00:28:36.624296 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Jul 10 00:28:36.624306 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jul 10 00:28:36.624317 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Jul 10 00:28:36.624329 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
Jul 10 00:28:36.624340 systemd[1]: systemd-journald.service: (This warning is only shown for the first unit using IP firewalling.)
Jul 10 00:28:36.624350 kernel: fuse: init (API version 7.39)
Jul 10 00:28:36.624362 systemd[1]: Starting systemd-journald.service - Journal Service...
Jul 10 00:28:36.624372 kernel: ACPI: bus type drm_connector registered
Jul 10 00:28:36.624382 kernel: loop: module loaded
Jul 10 00:28:36.624391 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jul 10 00:28:36.624402 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jul 10 00:28:36.624412 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Jul 10 00:28:36.624423 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jul 10 00:28:36.624455 systemd-journald[1144]: Collecting audit messages is disabled.
Jul 10 00:28:36.624479 systemd-journald[1144]: Journal started
Jul 10 00:28:36.624500 systemd-journald[1144]: Runtime Journal (/run/log/journal/3435a7bc91bc43b283b08a1e8c5d0dff) is 5.9M, max 47.3M, 41.4M free.
Jul 10 00:28:36.626462 systemd[1]: Started systemd-journald.service - Journal Service.
Jul 10 00:28:36.627601 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Jul 10 00:28:36.628448 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Jul 10 00:28:36.629474 systemd[1]: Mounted media.mount - External Media Directory.
Jul 10 00:28:36.630359 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Jul 10 00:28:36.631233 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Jul 10 00:28:36.632173 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Jul 10 00:28:36.633283 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Jul 10 00:28:36.634526 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 10 00:28:36.635803 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jul 10 00:28:36.635968 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Jul 10 00:28:36.637068 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 10 00:28:36.637239 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jul 10 00:28:36.638341 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jul 10 00:28:36.638490 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jul 10 00:28:36.639470 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 10 00:28:36.639622 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jul 10 00:28:36.640833 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jul 10 00:28:36.641007 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Jul 10 00:28:36.642560 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 10 00:28:36.642800 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jul 10 00:28:36.644020 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jul 10 00:28:36.645308 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jul 10 00:28:36.646694 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Jul 10 00:28:36.658420 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jul 10 00:28:36.669345 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Jul 10 00:28:36.671154 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Jul 10 00:28:36.672019 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Jul 10 00:28:36.675454 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Jul 10 00:28:36.679375 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Jul 10 00:28:36.680344 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jul 10 00:28:36.681398 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Jul 10 00:28:36.682336 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jul 10 00:28:36.683403 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jul 10 00:28:36.685813 systemd-journald[1144]: Time spent on flushing to /var/log/journal/3435a7bc91bc43b283b08a1e8c5d0dff is 21.801ms for 846 entries.
Jul 10 00:28:36.685813 systemd-journald[1144]: System Journal (/var/log/journal/3435a7bc91bc43b283b08a1e8c5d0dff) is 8.0M, max 195.6M, 187.6M free.
Jul 10 00:28:36.714293 systemd-journald[1144]: Received client request to flush runtime journal.
Jul 10 00:28:36.688365 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jul 10 00:28:36.692028 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 10 00:28:36.694551 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Jul 10 00:28:36.697385 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Jul 10 00:28:36.698700 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Jul 10 00:28:36.701686 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Jul 10 00:28:36.717413 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Jul 10 00:28:36.721732 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Jul 10 00:28:36.723052 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jul 10 00:28:36.724073 systemd-tmpfiles[1193]: ACLs are not supported, ignoring.
Jul 10 00:28:36.724086 systemd-tmpfiles[1193]: ACLs are not supported, ignoring.
Jul 10 00:28:36.728138 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jul 10 00:28:36.746570 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Jul 10 00:28:36.747570 udevadm[1201]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Jul 10 00:28:36.767482 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Jul 10 00:28:36.775479 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jul 10 00:28:36.786791 systemd-tmpfiles[1215]: ACLs are not supported, ignoring.
Jul 10 00:28:36.786812 systemd-tmpfiles[1215]: ACLs are not supported, ignoring.
Jul 10 00:28:36.790791 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 10 00:28:37.106248 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Jul 10 00:28:37.114451 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 10 00:28:37.133927 systemd-udevd[1221]: Using default interface naming scheme 'v255'.
Jul 10 00:28:37.154297 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 10 00:28:37.167507 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jul 10 00:28:37.189361 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Jul 10 00:28:37.191829 systemd[1]: Found device dev-ttyAMA0.device - /dev/ttyAMA0.
Jul 10 00:28:37.215247 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (1231)
Jul 10 00:28:37.239032 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Jul 10 00:28:37.243591 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jul 10 00:28:37.300443 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 10 00:28:37.307999 systemd-networkd[1226]: lo: Link UP
Jul 10 00:28:37.308008 systemd-networkd[1226]: lo: Gained carrier
Jul 10 00:28:37.308712 systemd-networkd[1226]: Enumeration completed
Jul 10 00:28:37.308880 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jul 10 00:28:37.309553 systemd-networkd[1226]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 10 00:28:37.309557 systemd-networkd[1226]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jul 10 00:28:37.311154 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Jul 10 00:28:37.311978 systemd-networkd[1226]: eth0: Link UP
Jul 10 00:28:37.311987 systemd-networkd[1226]: eth0: Gained carrier
Jul 10 00:28:37.312000 systemd-networkd[1226]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 10 00:28:37.313626 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Jul 10 00:28:37.319564 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Jul 10 00:28:37.330597 systemd-networkd[1226]: eth0: DHCPv4 address 10.0.0.64/16, gateway 10.0.0.1 acquired from 10.0.0.1
Jul 10 00:28:37.338363 lvm[1259]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jul 10 00:28:37.358859 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 10 00:28:37.360086 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Jul 10 00:28:37.361750 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jul 10 00:28:37.370352 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Jul 10 00:28:37.374689 lvm[1267]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jul 10 00:28:37.412559 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Jul 10 00:28:37.413621 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jul 10 00:28:37.414516 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jul 10 00:28:37.414548 systemd[1]: Reached target local-fs.target - Local File Systems.
Jul 10 00:28:37.415245 systemd[1]: Reached target machines.target - Containers.
Jul 10 00:28:37.416892 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Jul 10 00:28:37.427388 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Jul 10 00:28:37.429271 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Jul 10 00:28:37.430126 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 10 00:28:37.430969 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Jul 10 00:28:37.433328 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Jul 10 00:28:37.437365 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Jul 10 00:28:37.440366 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Jul 10 00:28:37.445291 kernel: loop0: detected capacity change from 0 to 203944
Jul 10 00:28:37.446380 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Jul 10 00:28:37.452920 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Jul 10 00:28:37.454637 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Jul 10 00:28:37.458248 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Jul 10 00:28:37.489223 kernel: loop1: detected capacity change from 0 to 114328
Jul 10 00:28:37.523222 kernel: loop2: detected capacity change from 0 to 114432
Jul 10 00:28:37.566224 kernel: loop3: detected capacity change from 0 to 203944
Jul 10 00:28:37.577240 kernel: loop4: detected capacity change from 0 to 114328
Jul 10 00:28:37.584225 kernel: loop5: detected capacity change from 0 to 114432
Jul 10 00:28:37.589038 (sd-merge)[1291]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Jul 10 00:28:37.590007 (sd-merge)[1291]: Merged extensions into '/usr'.
Jul 10 00:28:37.593622 systemd[1]: Reloading requested from client PID 1275 ('systemd-sysext') (unit systemd-sysext.service)...
Jul 10 00:28:37.593640 systemd[1]: Reloading...
Jul 10 00:28:37.654222 zram_generator::config[1325]: No configuration found.
Jul 10 00:28:37.672408 ldconfig[1271]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Jul 10 00:28:37.742044 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 10 00:28:37.788235 systemd[1]: Reloading finished in 194 ms.
Jul 10 00:28:37.801957 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Jul 10 00:28:37.803423 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Jul 10 00:28:37.821382 systemd[1]: Starting ensure-sysext.service...
Jul 10 00:28:37.823533 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jul 10 00:28:37.827131 systemd[1]: Reloading requested from client PID 1361 ('systemctl') (unit ensure-sysext.service)...
Jul 10 00:28:37.827148 systemd[1]: Reloading...
Jul 10 00:28:37.846612 systemd-tmpfiles[1362]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Jul 10 00:28:37.847320 systemd-tmpfiles[1362]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Jul 10 00:28:37.848095 systemd-tmpfiles[1362]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Jul 10 00:28:37.848367 systemd-tmpfiles[1362]: ACLs are not supported, ignoring.
Jul 10 00:28:37.848422 systemd-tmpfiles[1362]: ACLs are not supported, ignoring.
Jul 10 00:28:37.850841 systemd-tmpfiles[1362]: Detected autofs mount point /boot during canonicalization of boot.
Jul 10 00:28:37.850852 systemd-tmpfiles[1362]: Skipping /boot
Jul 10 00:28:37.860818 systemd-tmpfiles[1362]: Detected autofs mount point /boot during canonicalization of boot.
Jul 10 00:28:37.860830 systemd-tmpfiles[1362]: Skipping /boot
Jul 10 00:28:37.877221 zram_generator::config[1386]: No configuration found.
Jul 10 00:28:37.966169 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 10 00:28:38.011585 systemd[1]: Reloading finished in 184 ms.
Jul 10 00:28:38.027840 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jul 10 00:28:38.043970 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Jul 10 00:28:38.046284 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Jul 10 00:28:38.048421 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Jul 10 00:28:38.053354 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jul 10 00:28:38.055172 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Jul 10 00:28:38.061104 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 10 00:28:38.065729 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jul 10 00:28:38.069145 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jul 10 00:28:38.072141 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jul 10 00:28:38.072982 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 10 00:28:38.073657 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 10 00:28:38.073799 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jul 10 00:28:38.077276 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 10 00:28:38.077513 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jul 10 00:28:38.083761 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 10 00:28:38.084325 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jul 10 00:28:38.089462 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Jul 10 00:28:38.091631 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 10 00:28:38.099165 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jul 10 00:28:38.104806 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jul 10 00:28:38.108863 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jul 10 00:28:38.110237 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 10 00:28:38.113976 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Jul 10 00:28:38.117398 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Jul 10 00:28:38.118839 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 10 00:28:38.118981 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jul 10 00:28:38.120748 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 10 00:28:38.127692 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jul 10 00:28:38.129180 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Jul 10 00:28:38.130543 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 10 00:28:38.133358 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jul 10 00:28:38.135404 augenrules[1473]: No rules
Jul 10 00:28:38.136765 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Jul 10 00:28:38.141468 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Jul 10 00:28:38.145881 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 10 00:28:38.147608 systemd-resolved[1436]: Positive Trust Anchors:
Jul 10 00:28:38.147626 systemd-resolved[1436]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jul 10 00:28:38.147667 systemd-resolved[1436]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jul 10 00:28:38.152337 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jul 10 00:28:38.153450 systemd-resolved[1436]: Defaulting to hostname 'linux'.
Jul 10 00:28:38.154015 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jul 10 00:28:38.158352 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jul 10 00:28:38.160152 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jul 10 00:28:38.161027 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 10 00:28:38.161082 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jul 10 00:28:38.161445 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jul 10 00:28:38.162858 systemd[1]: Finished ensure-sysext.service.
Jul 10 00:28:38.163772 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 10 00:28:38.179562 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jul 10 00:28:38.180780 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jul 10 00:28:38.180921 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jul 10 00:28:38.181957 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 10 00:28:38.182098 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 10 00:28:38.183276 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 10 00:28:38.183465 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 10 00:28:38.188793 systemd[1]: Reached target network.target - Network. Jul 10 00:28:38.189478 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jul 10 00:28:38.190327 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 10 00:28:38.190397 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jul 10 00:28:38.192271 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jul 10 00:28:38.237714 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jul 10 00:28:38.238429 systemd-timesyncd[1504]: Contacted time server 10.0.0.1:123 (10.0.0.1). Jul 10 00:28:38.238480 systemd-timesyncd[1504]: Initial clock synchronization to Thu 2025-07-10 00:28:38.217200 UTC. Jul 10 00:28:38.238947 systemd[1]: Reached target sysinit.target - System Initialization. Jul 10 00:28:38.239793 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jul 10 00:28:38.240680 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jul 10 00:28:38.241545 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. 
Jul 10 00:28:38.242401 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jul 10 00:28:38.242432 systemd[1]: Reached target paths.target - Path Units. Jul 10 00:28:38.243050 systemd[1]: Reached target time-set.target - System Time Set. Jul 10 00:28:38.243911 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jul 10 00:28:38.244768 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jul 10 00:28:38.245640 systemd[1]: Reached target timers.target - Timer Units. Jul 10 00:28:38.246874 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jul 10 00:28:38.248944 systemd[1]: Starting docker.socket - Docker Socket for the API... Jul 10 00:28:38.250687 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jul 10 00:28:38.254028 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jul 10 00:28:38.254829 systemd[1]: Reached target sockets.target - Socket Units. Jul 10 00:28:38.255608 systemd[1]: Reached target basic.target - Basic System. Jul 10 00:28:38.256403 systemd[1]: System is tainted: cgroupsv1 Jul 10 00:28:38.256444 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jul 10 00:28:38.256465 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jul 10 00:28:38.257452 systemd[1]: Starting containerd.service - containerd container runtime... Jul 10 00:28:38.259119 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jul 10 00:28:38.260776 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jul 10 00:28:38.263363 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... 
Jul 10 00:28:38.264080 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jul 10 00:28:38.267115 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jul 10 00:28:38.271808 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jul 10 00:28:38.274932 jq[1510]: false Jul 10 00:28:38.276452 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jul 10 00:28:38.279545 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jul 10 00:28:38.283384 systemd[1]: Starting systemd-logind.service - User Login Management... Jul 10 00:28:38.291805 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jul 10 00:28:38.293342 systemd[1]: Starting update-engine.service - Update Engine... Jul 10 00:28:38.295987 dbus-daemon[1509]: [system] SELinux support is enabled Jul 10 00:28:38.300255 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jul 10 00:28:38.303878 extend-filesystems[1512]: Found loop3 Jul 10 00:28:38.303878 extend-filesystems[1512]: Found loop4 Jul 10 00:28:38.303878 extend-filesystems[1512]: Found loop5 Jul 10 00:28:38.302910 systemd[1]: Started dbus.service - D-Bus System Message Bus. 
Jul 10 00:28:38.315395 jq[1531]: true Jul 10 00:28:38.315656 extend-filesystems[1512]: Found vda Jul 10 00:28:38.315656 extend-filesystems[1512]: Found vda1 Jul 10 00:28:38.315656 extend-filesystems[1512]: Found vda2 Jul 10 00:28:38.315656 extend-filesystems[1512]: Found vda3 Jul 10 00:28:38.315656 extend-filesystems[1512]: Found usr Jul 10 00:28:38.315656 extend-filesystems[1512]: Found vda4 Jul 10 00:28:38.315656 extend-filesystems[1512]: Found vda6 Jul 10 00:28:38.315656 extend-filesystems[1512]: Found vda7 Jul 10 00:28:38.315656 extend-filesystems[1512]: Found vda9 Jul 10 00:28:38.315656 extend-filesystems[1512]: Checking size of /dev/vda9 Jul 10 00:28:38.314610 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jul 10 00:28:38.314948 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jul 10 00:28:38.315333 systemd[1]: motdgen.service: Deactivated successfully. Jul 10 00:28:38.315546 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jul 10 00:28:38.317341 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jul 10 00:28:38.317555 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jul 10 00:28:38.328896 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jul 10 00:28:38.328937 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jul 10 00:28:38.332016 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jul 10 00:28:38.332032 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. 
Jul 10 00:28:38.336005 (ntainerd)[1542]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jul 10 00:28:38.343989 extend-filesystems[1512]: Resized partition /dev/vda9 Jul 10 00:28:38.346092 jq[1539]: true Jul 10 00:28:38.347267 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (1231) Jul 10 00:28:38.349322 extend-filesystems[1550]: resize2fs 1.47.1 (20-May-2024) Jul 10 00:28:38.351931 tar[1537]: linux-arm64/helm Jul 10 00:28:38.353220 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Jul 10 00:28:38.407095 systemd-logind[1522]: Watching system buttons on /dev/input/event0 (Power Button) Jul 10 00:28:38.407367 systemd-logind[1522]: New seat seat0. Jul 10 00:28:38.407984 systemd[1]: Started systemd-logind.service - User Login Management. Jul 10 00:28:38.420754 update_engine[1529]: I20250710 00:28:38.420408 1529 main.cc:92] Flatcar Update Engine starting Jul 10 00:28:38.425330 systemd[1]: Started update-engine.service - Update Engine. Jul 10 00:28:38.427750 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jul 10 00:28:38.428040 update_engine[1529]: I20250710 00:28:38.427982 1529 update_check_scheduler.cc:74] Next update check in 9m21s Jul 10 00:28:38.448569 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Jul 10 00:28:38.441898 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jul 10 00:28:38.452742 extend-filesystems[1550]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jul 10 00:28:38.452742 extend-filesystems[1550]: old_desc_blocks = 1, new_desc_blocks = 1 Jul 10 00:28:38.452742 extend-filesystems[1550]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. 
Jul 10 00:28:38.456130 extend-filesystems[1512]: Resized filesystem in /dev/vda9 Jul 10 00:28:38.453262 systemd[1]: extend-filesystems.service: Deactivated successfully. Jul 10 00:28:38.453512 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jul 10 00:28:38.463912 bash[1569]: Updated "/home/core/.ssh/authorized_keys" Jul 10 00:28:38.469859 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jul 10 00:28:38.471615 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Jul 10 00:28:38.494437 locksmithd[1575]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jul 10 00:28:38.569800 containerd[1542]: time="2025-07-10T00:28:38.569708320Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Jul 10 00:28:38.599042 containerd[1542]: time="2025-07-10T00:28:38.598997360Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jul 10 00:28:38.600599 containerd[1542]: time="2025-07-10T00:28:38.600557800Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.95-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jul 10 00:28:38.600758 containerd[1542]: time="2025-07-10T00:28:38.600739920Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jul 10 00:28:38.600868 containerd[1542]: time="2025-07-10T00:28:38.600851520Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jul 10 00:28:38.601127 containerd[1542]: time="2025-07-10T00:28:38.601106800Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." 
type=io.containerd.warning.v1 Jul 10 00:28:38.601195 containerd[1542]: time="2025-07-10T00:28:38.601182240Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jul 10 00:28:38.601392 containerd[1542]: time="2025-07-10T00:28:38.601370320Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jul 10 00:28:38.601515 containerd[1542]: time="2025-07-10T00:28:38.601498360Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jul 10 00:28:38.602804 containerd[1542]: time="2025-07-10T00:28:38.602767200Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jul 10 00:28:38.602864 containerd[1542]: time="2025-07-10T00:28:38.602803880Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jul 10 00:28:38.602864 containerd[1542]: time="2025-07-10T00:28:38.602827840Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jul 10 00:28:38.602864 containerd[1542]: time="2025-07-10T00:28:38.602843560Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jul 10 00:28:38.602997 containerd[1542]: time="2025-07-10T00:28:38.602945440Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jul 10 00:28:38.603212 containerd[1542]: time="2025-07-10T00:28:38.603176120Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." 
type=io.containerd.snapshotter.v1 Jul 10 00:28:38.603626 containerd[1542]: time="2025-07-10T00:28:38.603565480Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jul 10 00:28:38.603626 containerd[1542]: time="2025-07-10T00:28:38.603592760Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jul 10 00:28:38.603709 containerd[1542]: time="2025-07-10T00:28:38.603691080Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jul 10 00:28:38.603749 containerd[1542]: time="2025-07-10T00:28:38.603732120Z" level=info msg="metadata content store policy set" policy=shared Jul 10 00:28:38.607524 containerd[1542]: time="2025-07-10T00:28:38.607495560Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jul 10 00:28:38.607574 containerd[1542]: time="2025-07-10T00:28:38.607541960Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jul 10 00:28:38.607574 containerd[1542]: time="2025-07-10T00:28:38.607557400Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jul 10 00:28:38.607574 containerd[1542]: time="2025-07-10T00:28:38.607572080Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jul 10 00:28:38.607639 containerd[1542]: time="2025-07-10T00:28:38.607587320Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jul 10 00:28:38.607745 containerd[1542]: time="2025-07-10T00:28:38.607725200Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." 
type=io.containerd.monitor.v1 Jul 10 00:28:38.608031 containerd[1542]: time="2025-07-10T00:28:38.608011920Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jul 10 00:28:38.608129 containerd[1542]: time="2025-07-10T00:28:38.608112000Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jul 10 00:28:38.608162 containerd[1542]: time="2025-07-10T00:28:38.608133000Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jul 10 00:28:38.608162 containerd[1542]: time="2025-07-10T00:28:38.608148160Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jul 10 00:28:38.608267 containerd[1542]: time="2025-07-10T00:28:38.608161920Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jul 10 00:28:38.608267 containerd[1542]: time="2025-07-10T00:28:38.608179400Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jul 10 00:28:38.608267 containerd[1542]: time="2025-07-10T00:28:38.608191800Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jul 10 00:28:38.608267 containerd[1542]: time="2025-07-10T00:28:38.608226600Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jul 10 00:28:38.608267 containerd[1542]: time="2025-07-10T00:28:38.608242360Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jul 10 00:28:38.608267 containerd[1542]: time="2025-07-10T00:28:38.608259960Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." 
type=io.containerd.service.v1 Jul 10 00:28:38.608371 containerd[1542]: time="2025-07-10T00:28:38.608273440Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jul 10 00:28:38.608371 containerd[1542]: time="2025-07-10T00:28:38.608285600Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jul 10 00:28:38.608371 containerd[1542]: time="2025-07-10T00:28:38.608305720Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jul 10 00:28:38.608371 containerd[1542]: time="2025-07-10T00:28:38.608319840Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jul 10 00:28:38.608371 containerd[1542]: time="2025-07-10T00:28:38.608331640Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jul 10 00:28:38.608371 containerd[1542]: time="2025-07-10T00:28:38.608343120Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jul 10 00:28:38.608371 containerd[1542]: time="2025-07-10T00:28:38.608357960Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jul 10 00:28:38.608371 containerd[1542]: time="2025-07-10T00:28:38.608370840Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jul 10 00:28:38.608525 containerd[1542]: time="2025-07-10T00:28:38.608382880Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jul 10 00:28:38.608525 containerd[1542]: time="2025-07-10T00:28:38.608395360Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jul 10 00:28:38.608525 containerd[1542]: time="2025-07-10T00:28:38.608408280Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." 
type=io.containerd.grpc.v1 Jul 10 00:28:38.608525 containerd[1542]: time="2025-07-10T00:28:38.608422200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jul 10 00:28:38.608525 containerd[1542]: time="2025-07-10T00:28:38.608438960Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jul 10 00:28:38.608525 containerd[1542]: time="2025-07-10T00:28:38.608450600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jul 10 00:28:38.608525 containerd[1542]: time="2025-07-10T00:28:38.608462560Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jul 10 00:28:38.608525 containerd[1542]: time="2025-07-10T00:28:38.608478800Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jul 10 00:28:38.608525 containerd[1542]: time="2025-07-10T00:28:38.608498000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jul 10 00:28:38.608525 containerd[1542]: time="2025-07-10T00:28:38.608509920Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jul 10 00:28:38.608525 containerd[1542]: time="2025-07-10T00:28:38.608520800Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jul 10 00:28:38.608718 containerd[1542]: time="2025-07-10T00:28:38.608620600Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jul 10 00:28:38.608718 containerd[1542]: time="2025-07-10T00:28:38.608636520Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." 
error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jul 10 00:28:38.608718 containerd[1542]: time="2025-07-10T00:28:38.608658600Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jul 10 00:28:38.608718 containerd[1542]: time="2025-07-10T00:28:38.608671960Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jul 10 00:28:38.608718 containerd[1542]: time="2025-07-10T00:28:38.608681440Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jul 10 00:28:38.608718 containerd[1542]: time="2025-07-10T00:28:38.608693440Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jul 10 00:28:38.608718 containerd[1542]: time="2025-07-10T00:28:38.608703120Z" level=info msg="NRI interface is disabled by configuration." Jul 10 00:28:38.608718 containerd[1542]: time="2025-07-10T00:28:38.608716760Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Jul 10 00:28:38.609091 containerd[1542]: time="2025-07-10T00:28:38.609028920Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 
DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jul 10 00:28:38.609091 containerd[1542]: time="2025-07-10T00:28:38.609090800Z" level=info msg="Connect containerd service" Jul 10 00:28:38.609252 containerd[1542]: time="2025-07-10T00:28:38.609184120Z" level=info msg="using legacy CRI server" Jul 10 00:28:38.609252 containerd[1542]: time="2025-07-10T00:28:38.609190800Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jul 10 00:28:38.609335 containerd[1542]: time="2025-07-10T00:28:38.609290360Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jul 10 00:28:38.609863 containerd[1542]: time="2025-07-10T00:28:38.609836520Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 10 00:28:38.610371 containerd[1542]: time="2025-07-10T00:28:38.610334600Z" level=info msg="Start subscribing containerd event" Jul 10 00:28:38.610410 containerd[1542]: time="2025-07-10T00:28:38.610380600Z" level=info msg="Start recovering state" Jul 10 00:28:38.610590 containerd[1542]: time="2025-07-10T00:28:38.610573840Z" level=info msg="Start event monitor" Jul 10 00:28:38.610590 containerd[1542]: time="2025-07-10T00:28:38.610594360Z" 
level=info msg="Start snapshots syncer" Jul 10 00:28:38.610649 containerd[1542]: time="2025-07-10T00:28:38.610603440Z" level=info msg="Start cni network conf syncer for default" Jul 10 00:28:38.610649 containerd[1542]: time="2025-07-10T00:28:38.610611600Z" level=info msg="Start streaming server" Jul 10 00:28:38.610826 containerd[1542]: time="2025-07-10T00:28:38.610805240Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jul 10 00:28:38.610870 containerd[1542]: time="2025-07-10T00:28:38.610858160Z" level=info msg=serving... address=/run/containerd/containerd.sock Jul 10 00:28:38.612239 containerd[1542]: time="2025-07-10T00:28:38.610946480Z" level=info msg="containerd successfully booted in 0.042596s" Jul 10 00:28:38.613334 systemd[1]: Started containerd.service - containerd container runtime. Jul 10 00:28:38.714461 sshd_keygen[1530]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jul 10 00:28:38.733097 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jul 10 00:28:38.742502 systemd[1]: Starting issuegen.service - Generate /run/issue... Jul 10 00:28:38.747235 tar[1537]: linux-arm64/LICENSE Jul 10 00:28:38.747295 tar[1537]: linux-arm64/README.md Jul 10 00:28:38.749080 systemd[1]: issuegen.service: Deactivated successfully. Jul 10 00:28:38.749357 systemd[1]: Finished issuegen.service - Generate /run/issue. Jul 10 00:28:38.752546 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jul 10 00:28:38.757087 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jul 10 00:28:38.765714 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jul 10 00:28:38.779627 systemd[1]: Started getty@tty1.service - Getty on tty1. Jul 10 00:28:38.781632 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Jul 10 00:28:38.782616 systemd[1]: Reached target getty.target - Login Prompts. 
Jul 10 00:28:38.922323 systemd-networkd[1226]: eth0: Gained IPv6LL Jul 10 00:28:38.927839 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jul 10 00:28:38.929387 systemd[1]: Reached target network-online.target - Network is Online. Jul 10 00:28:38.942445 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Jul 10 00:28:38.944462 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 10 00:28:38.946369 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jul 10 00:28:38.960913 systemd[1]: coreos-metadata.service: Deactivated successfully. Jul 10 00:28:38.961172 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Jul 10 00:28:38.962927 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jul 10 00:28:38.972632 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jul 10 00:28:39.544443 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 10 00:28:39.545739 systemd[1]: Reached target multi-user.target - Multi-User System. Jul 10 00:28:39.548626 (kubelet)[1645]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 10 00:28:39.551282 systemd[1]: Startup finished in 5.904s (kernel) + 3.524s (userspace) = 9.429s. 
Jul 10 00:28:40.063229 kubelet[1645]: E0710 00:28:40.063168 1645 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 10 00:28:40.065600 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 10 00:28:40.065777 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 10 00:28:43.171921 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jul 10 00:28:43.181457 systemd[1]: Started sshd@0-10.0.0.64:22-10.0.0.1:58234.service - OpenSSH per-connection server daemon (10.0.0.1:58234). Jul 10 00:28:43.230832 sshd[1658]: Accepted publickey for core from 10.0.0.1 port 58234 ssh2: RSA SHA256:6ip12YJRxsd1pENU5FRdLlGNHGMqkKNn+D5B7RGN6xs Jul 10 00:28:43.232674 sshd[1658]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 00:28:43.244590 systemd-logind[1522]: New session 1 of user core. Jul 10 00:28:43.245576 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jul 10 00:28:43.253462 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jul 10 00:28:43.263350 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jul 10 00:28:43.265516 systemd[1]: Starting user@500.service - User Manager for UID 500... Jul 10 00:28:43.271876 (systemd)[1664]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jul 10 00:28:43.352459 systemd[1664]: Queued start job for default target default.target. Jul 10 00:28:43.352833 systemd[1664]: Created slice app.slice - User Application Slice. Jul 10 00:28:43.352855 systemd[1664]: Reached target paths.target - Paths. Jul 10 00:28:43.352867 systemd[1664]: Reached target timers.target - Timers. 
Jul 10 00:28:43.367310 systemd[1664]: Starting dbus.socket - D-Bus User Message Bus Socket... Jul 10 00:28:43.373612 systemd[1664]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jul 10 00:28:43.373673 systemd[1664]: Reached target sockets.target - Sockets. Jul 10 00:28:43.373685 systemd[1664]: Reached target basic.target - Basic System. Jul 10 00:28:43.373723 systemd[1664]: Reached target default.target - Main User Target. Jul 10 00:28:43.373748 systemd[1664]: Startup finished in 96ms. Jul 10 00:28:43.374188 systemd[1]: Started user@500.service - User Manager for UID 500. Jul 10 00:28:43.376395 systemd[1]: Started session-1.scope - Session 1 of User core. Jul 10 00:28:43.432584 systemd[1]: Started sshd@1-10.0.0.64:22-10.0.0.1:58236.service - OpenSSH per-connection server daemon (10.0.0.1:58236). Jul 10 00:28:43.464831 sshd[1676]: Accepted publickey for core from 10.0.0.1 port 58236 ssh2: RSA SHA256:6ip12YJRxsd1pENU5FRdLlGNHGMqkKNn+D5B7RGN6xs Jul 10 00:28:43.466085 sshd[1676]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 00:28:43.470268 systemd-logind[1522]: New session 2 of user core. Jul 10 00:28:43.481434 systemd[1]: Started session-2.scope - Session 2 of User core. Jul 10 00:28:43.533225 sshd[1676]: pam_unix(sshd:session): session closed for user core Jul 10 00:28:43.549490 systemd[1]: Started sshd@2-10.0.0.64:22-10.0.0.1:58240.service - OpenSSH per-connection server daemon (10.0.0.1:58240). Jul 10 00:28:43.549957 systemd[1]: sshd@1-10.0.0.64:22-10.0.0.1:58236.service: Deactivated successfully. Jul 10 00:28:43.551629 systemd[1]: session-2.scope: Deactivated successfully. Jul 10 00:28:43.552235 systemd-logind[1522]: Session 2 logged out. Waiting for processes to exit. Jul 10 00:28:43.553619 systemd-logind[1522]: Removed session 2. 
Jul 10 00:28:43.580145 sshd[1681]: Accepted publickey for core from 10.0.0.1 port 58240 ssh2: RSA SHA256:6ip12YJRxsd1pENU5FRdLlGNHGMqkKNn+D5B7RGN6xs Jul 10 00:28:43.581346 sshd[1681]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 00:28:43.585455 systemd-logind[1522]: New session 3 of user core. Jul 10 00:28:43.595461 systemd[1]: Started session-3.scope - Session 3 of User core. Jul 10 00:28:43.644410 sshd[1681]: pam_unix(sshd:session): session closed for user core Jul 10 00:28:43.659463 systemd[1]: Started sshd@3-10.0.0.64:22-10.0.0.1:58254.service - OpenSSH per-connection server daemon (10.0.0.1:58254). Jul 10 00:28:43.659831 systemd[1]: sshd@2-10.0.0.64:22-10.0.0.1:58240.service: Deactivated successfully. Jul 10 00:28:43.661452 systemd-logind[1522]: Session 3 logged out. Waiting for processes to exit. Jul 10 00:28:43.662098 systemd[1]: session-3.scope: Deactivated successfully. Jul 10 00:28:43.663547 systemd-logind[1522]: Removed session 3. Jul 10 00:28:43.689574 sshd[1689]: Accepted publickey for core from 10.0.0.1 port 58254 ssh2: RSA SHA256:6ip12YJRxsd1pENU5FRdLlGNHGMqkKNn+D5B7RGN6xs Jul 10 00:28:43.691138 sshd[1689]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 00:28:43.696062 systemd-logind[1522]: New session 4 of user core. Jul 10 00:28:43.710452 systemd[1]: Started session-4.scope - Session 4 of User core. Jul 10 00:28:43.761438 sshd[1689]: pam_unix(sshd:session): session closed for user core Jul 10 00:28:43.771531 systemd[1]: Started sshd@4-10.0.0.64:22-10.0.0.1:58262.service - OpenSSH per-connection server daemon (10.0.0.1:58262). Jul 10 00:28:43.772257 systemd[1]: sshd@3-10.0.0.64:22-10.0.0.1:58254.service: Deactivated successfully. Jul 10 00:28:43.773811 systemd[1]: session-4.scope: Deactivated successfully. Jul 10 00:28:43.774420 systemd-logind[1522]: Session 4 logged out. Waiting for processes to exit. Jul 10 00:28:43.775637 systemd-logind[1522]: Removed session 4. 
Jul 10 00:28:43.801283 sshd[1697]: Accepted publickey for core from 10.0.0.1 port 58262 ssh2: RSA SHA256:6ip12YJRxsd1pENU5FRdLlGNHGMqkKNn+D5B7RGN6xs Jul 10 00:28:43.802549 sshd[1697]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 00:28:43.806242 systemd-logind[1522]: New session 5 of user core. Jul 10 00:28:43.824496 systemd[1]: Started session-5.scope - Session 5 of User core. Jul 10 00:28:43.880242 sudo[1704]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jul 10 00:28:43.880517 sudo[1704]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 10 00:28:43.898036 sudo[1704]: pam_unix(sudo:session): session closed for user root Jul 10 00:28:43.899630 sshd[1697]: pam_unix(sshd:session): session closed for user core Jul 10 00:28:43.916507 systemd[1]: Started sshd@5-10.0.0.64:22-10.0.0.1:58270.service - OpenSSH per-connection server daemon (10.0.0.1:58270). Jul 10 00:28:43.916896 systemd[1]: sshd@4-10.0.0.64:22-10.0.0.1:58262.service: Deactivated successfully. Jul 10 00:28:43.918788 systemd-logind[1522]: Session 5 logged out. Waiting for processes to exit. Jul 10 00:28:43.919367 systemd[1]: session-5.scope: Deactivated successfully. Jul 10 00:28:43.920835 systemd-logind[1522]: Removed session 5. Jul 10 00:28:43.946929 sshd[1706]: Accepted publickey for core from 10.0.0.1 port 58270 ssh2: RSA SHA256:6ip12YJRxsd1pENU5FRdLlGNHGMqkKNn+D5B7RGN6xs Jul 10 00:28:43.948081 sshd[1706]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 00:28:43.951967 systemd-logind[1522]: New session 6 of user core. Jul 10 00:28:43.967444 systemd[1]: Started session-6.scope - Session 6 of User core. 
Jul 10 00:28:44.018593 sudo[1714]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jul 10 00:28:44.018894 sudo[1714]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 10 00:28:44.022006 sudo[1714]: pam_unix(sudo:session): session closed for user root Jul 10 00:28:44.026439 sudo[1713]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Jul 10 00:28:44.026699 sudo[1713]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 10 00:28:44.047764 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Jul 10 00:28:44.048769 auditctl[1717]: No rules Jul 10 00:28:44.049659 systemd[1]: audit-rules.service: Deactivated successfully. Jul 10 00:28:44.049930 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Jul 10 00:28:44.052229 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jul 10 00:28:44.075670 augenrules[1736]: No rules Jul 10 00:28:44.077112 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jul 10 00:28:44.078270 sudo[1713]: pam_unix(sudo:session): session closed for user root Jul 10 00:28:44.079759 sshd[1706]: pam_unix(sshd:session): session closed for user core Jul 10 00:28:44.088542 systemd[1]: Started sshd@6-10.0.0.64:22-10.0.0.1:58276.service - OpenSSH per-connection server daemon (10.0.0.1:58276). Jul 10 00:28:44.088921 systemd[1]: sshd@5-10.0.0.64:22-10.0.0.1:58270.service: Deactivated successfully. Jul 10 00:28:44.090845 systemd-logind[1522]: Session 6 logged out. Waiting for processes to exit. Jul 10 00:28:44.091432 systemd[1]: session-6.scope: Deactivated successfully. Jul 10 00:28:44.092901 systemd-logind[1522]: Removed session 6. 
Jul 10 00:28:44.119269 sshd[1742]: Accepted publickey for core from 10.0.0.1 port 58276 ssh2: RSA SHA256:6ip12YJRxsd1pENU5FRdLlGNHGMqkKNn+D5B7RGN6xs Jul 10 00:28:44.120493 sshd[1742]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 00:28:44.124727 systemd-logind[1522]: New session 7 of user core. Jul 10 00:28:44.134506 systemd[1]: Started session-7.scope - Session 7 of User core. Jul 10 00:28:44.185822 sudo[1749]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jul 10 00:28:44.186095 sudo[1749]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 10 00:28:44.537462 systemd[1]: Starting docker.service - Docker Application Container Engine... Jul 10 00:28:44.537682 (dockerd)[1767]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jul 10 00:28:44.797905 dockerd[1767]: time="2025-07-10T00:28:44.797422820Z" level=info msg="Starting up" Jul 10 00:28:45.063029 dockerd[1767]: time="2025-07-10T00:28:45.062862184Z" level=info msg="Loading containers: start." Jul 10 00:28:45.142228 kernel: Initializing XFRM netlink socket Jul 10 00:28:45.219995 systemd-networkd[1226]: docker0: Link UP Jul 10 00:28:45.237613 dockerd[1767]: time="2025-07-10T00:28:45.237555865Z" level=info msg="Loading containers: done." Jul 10 00:28:45.250482 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1444097236-merged.mount: Deactivated successfully. 
Jul 10 00:28:45.256277 dockerd[1767]: time="2025-07-10T00:28:45.256224420Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jul 10 00:28:45.256428 dockerd[1767]: time="2025-07-10T00:28:45.256335960Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Jul 10 00:28:45.256494 dockerd[1767]: time="2025-07-10T00:28:45.256461487Z" level=info msg="Daemon has completed initialization" Jul 10 00:28:45.287540 dockerd[1767]: time="2025-07-10T00:28:45.287408656Z" level=info msg="API listen on /run/docker.sock" Jul 10 00:28:45.287684 systemd[1]: Started docker.service - Docker Application Container Engine. Jul 10 00:28:46.048842 containerd[1542]: time="2025-07-10T00:28:46.048795464Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.10\"" Jul 10 00:28:46.713969 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2983453650.mount: Deactivated successfully. 
Jul 10 00:28:47.637260 containerd[1542]: time="2025-07-10T00:28:47.636589008Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:28:47.638064 containerd[1542]: time="2025-07-10T00:28:47.638028672Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.10: active requests=0, bytes read=25651795" Jul 10 00:28:47.639110 containerd[1542]: time="2025-07-10T00:28:47.639080122Z" level=info msg="ImageCreate event name:\"sha256:8907c2d36348551c1038e24ef688f6830681069380376707e55518007a20a86c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:28:47.642774 containerd[1542]: time="2025-07-10T00:28:47.642738835Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:083d7d64af31cd090f870eb49fb815e6bb42c175fc602ee9dae2f28f082bd4dc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:28:47.644601 containerd[1542]: time="2025-07-10T00:28:47.644569550Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.10\" with image id \"sha256:8907c2d36348551c1038e24ef688f6830681069380376707e55518007a20a86c\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.10\", repo digest \"registry.k8s.io/kube-apiserver@sha256:083d7d64af31cd090f870eb49fb815e6bb42c175fc602ee9dae2f28f082bd4dc\", size \"25648593\" in 1.595731961s" Jul 10 00:28:47.644892 containerd[1542]: time="2025-07-10T00:28:47.644686338Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.10\" returns image reference \"sha256:8907c2d36348551c1038e24ef688f6830681069380376707e55518007a20a86c\"" Jul 10 00:28:47.647786 containerd[1542]: time="2025-07-10T00:28:47.647755596Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.10\"" Jul 10 00:28:48.774858 containerd[1542]: time="2025-07-10T00:28:48.774792052Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.10\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:28:48.775531 containerd[1542]: time="2025-07-10T00:28:48.775489856Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.10: active requests=0, bytes read=22459679" Jul 10 00:28:48.776334 containerd[1542]: time="2025-07-10T00:28:48.776295300Z" level=info msg="ImageCreate event name:\"sha256:0f640d6889416d515a0ac4de1c26f4d80134c47641ff464abc831560a951175f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:28:48.780306 containerd[1542]: time="2025-07-10T00:28:48.780254251Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:3c67387d023c6114879f1e817669fd641797d30f117230682faf3930ecaaf0fe\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:28:48.781765 containerd[1542]: time="2025-07-10T00:28:48.781673481Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.10\" with image id \"sha256:0f640d6889416d515a0ac4de1c26f4d80134c47641ff464abc831560a951175f\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.10\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:3c67387d023c6114879f1e817669fd641797d30f117230682faf3930ecaaf0fe\", size \"23995467\" in 1.133881993s" Jul 10 00:28:48.781765 containerd[1542]: time="2025-07-10T00:28:48.781720446Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.10\" returns image reference \"sha256:0f640d6889416d515a0ac4de1c26f4d80134c47641ff464abc831560a951175f\"" Jul 10 00:28:48.782492 containerd[1542]: time="2025-07-10T00:28:48.782263564Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.10\"" Jul 10 00:28:49.992008 containerd[1542]: time="2025-07-10T00:28:49.991943247Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:28:49.992923 containerd[1542]: 
time="2025-07-10T00:28:49.992878159Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.10: active requests=0, bytes read=17125068" Jul 10 00:28:49.994911 containerd[1542]: time="2025-07-10T00:28:49.994868858Z" level=info msg="ImageCreate event name:\"sha256:23d79b83d912e2633bcb4f9f7b8b46024893e11d492a4249d8f1f8c9a26b7b2c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:28:49.998260 containerd[1542]: time="2025-07-10T00:28:49.998223891Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:284dc2a5cf6afc9b76e39ad4b79c680c23d289488517643b28784a06d0141272\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:28:49.999403 containerd[1542]: time="2025-07-10T00:28:49.999362662Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.10\" with image id \"sha256:23d79b83d912e2633bcb4f9f7b8b46024893e11d492a4249d8f1f8c9a26b7b2c\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.10\", repo digest \"registry.k8s.io/kube-scheduler@sha256:284dc2a5cf6afc9b76e39ad4b79c680c23d289488517643b28784a06d0141272\", size \"18660874\" in 1.21706736s" Jul 10 00:28:49.999444 containerd[1542]: time="2025-07-10T00:28:49.999402234Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.10\" returns image reference \"sha256:23d79b83d912e2633bcb4f9f7b8b46024893e11d492a4249d8f1f8c9a26b7b2c\"" Jul 10 00:28:50.000318 containerd[1542]: time="2025-07-10T00:28:50.000273190Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.10\"" Jul 10 00:28:50.315994 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jul 10 00:28:50.327424 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 10 00:28:50.436692 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jul 10 00:28:50.441249 (kubelet)[1989]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 10 00:28:50.477908 kubelet[1989]: E0710 00:28:50.477851 1989 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 10 00:28:50.481157 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 10 00:28:50.481464 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 10 00:28:51.061231 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4166412936.mount: Deactivated successfully. Jul 10 00:28:51.397469 containerd[1542]: time="2025-07-10T00:28:51.397341785Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:28:51.398107 containerd[1542]: time="2025-07-10T00:28:51.398073939Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.10: active requests=0, bytes read=26915959" Jul 10 00:28:51.399176 containerd[1542]: time="2025-07-10T00:28:51.399152322Z" level=info msg="ImageCreate event name:\"sha256:dde5ff0da443b455e81aefc7bf6a216fdd659d1cbe13b8e8ac8129c3ecd27f89\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:28:51.401438 containerd[1542]: time="2025-07-10T00:28:51.401394475Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:bcbb293812bdf587b28ea98369a8c347ca84884160046296761acdf12b27029d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:28:51.402050 containerd[1542]: time="2025-07-10T00:28:51.402008381Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.10\" with image id 
\"sha256:dde5ff0da443b455e81aefc7bf6a216fdd659d1cbe13b8e8ac8129c3ecd27f89\", repo tag \"registry.k8s.io/kube-proxy:v1.31.10\", repo digest \"registry.k8s.io/kube-proxy@sha256:bcbb293812bdf587b28ea98369a8c347ca84884160046296761acdf12b27029d\", size \"26914976\" in 1.401697496s" Jul 10 00:28:51.402050 containerd[1542]: time="2025-07-10T00:28:51.402046038Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.10\" returns image reference \"sha256:dde5ff0da443b455e81aefc7bf6a216fdd659d1cbe13b8e8ac8129c3ecd27f89\"" Jul 10 00:28:51.402687 containerd[1542]: time="2025-07-10T00:28:51.402539937Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Jul 10 00:28:51.997555 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2550086784.mount: Deactivated successfully. Jul 10 00:28:52.707545 containerd[1542]: time="2025-07-10T00:28:52.707486943Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:28:52.708072 containerd[1542]: time="2025-07-10T00:28:52.708035349Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=16951624" Jul 10 00:28:52.711222 containerd[1542]: time="2025-07-10T00:28:52.710368176Z" level=info msg="ImageCreate event name:\"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:28:52.715142 containerd[1542]: time="2025-07-10T00:28:52.715111026Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:28:52.716386 containerd[1542]: time="2025-07-10T00:28:52.716354356Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id 
\"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"16948420\" in 1.313777801s" Jul 10 00:28:52.716479 containerd[1542]: time="2025-07-10T00:28:52.716463973Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\"" Jul 10 00:28:52.716938 containerd[1542]: time="2025-07-10T00:28:52.716916675Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jul 10 00:28:53.139429 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1082023491.mount: Deactivated successfully. Jul 10 00:28:53.143730 containerd[1542]: time="2025-07-10T00:28:53.143687107Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:28:53.144662 containerd[1542]: time="2025-07-10T00:28:53.144628882Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268705" Jul 10 00:28:53.145589 containerd[1542]: time="2025-07-10T00:28:53.145512929Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:28:53.148251 containerd[1542]: time="2025-07-10T00:28:53.147414270Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:28:53.148867 containerd[1542]: time="2025-07-10T00:28:53.148828113Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo 
digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 431.881655ms" Jul 10 00:28:53.148867 containerd[1542]: time="2025-07-10T00:28:53.148858696Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Jul 10 00:28:53.149288 containerd[1542]: time="2025-07-10T00:28:53.149256723Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" Jul 10 00:28:53.602906 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount61406089.mount: Deactivated successfully. Jul 10 00:28:55.296621 containerd[1542]: time="2025-07-10T00:28:55.296569974Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:28:55.299350 containerd[1542]: time="2025-07-10T00:28:55.299309804Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=66406467" Jul 10 00:28:55.300241 containerd[1542]: time="2025-07-10T00:28:55.300207821Z" level=info msg="ImageCreate event name:\"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:28:55.303446 containerd[1542]: time="2025-07-10T00:28:55.303409794Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:28:55.305188 containerd[1542]: time="2025-07-10T00:28:55.305151974Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"66535646\" in 2.155864227s" Jul 10 
00:28:55.305228 containerd[1542]: time="2025-07-10T00:28:55.305190236Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\"" Jul 10 00:28:59.712972 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 10 00:28:59.723371 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 10 00:28:59.740307 systemd[1]: Reloading requested from client PID 2148 ('systemctl') (unit session-7.scope)... Jul 10 00:28:59.740327 systemd[1]: Reloading... Jul 10 00:28:59.804509 zram_generator::config[2188]: No configuration found. Jul 10 00:28:59.934430 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 10 00:28:59.986573 systemd[1]: Reloading finished in 245 ms. Jul 10 00:29:00.022494 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jul 10 00:29:00.022557 systemd[1]: kubelet.service: Failed with result 'signal'. Jul 10 00:29:00.022809 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 10 00:29:00.024425 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 10 00:29:00.137847 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 10 00:29:00.141024 (kubelet)[2244]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 10 00:29:00.176820 kubelet[2244]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jul 10 00:29:00.176820 kubelet[2244]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jul 10 00:29:00.176820 kubelet[2244]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 10 00:29:00.177179 kubelet[2244]: I0710 00:29:00.176860 2244 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 10 00:29:00.624558 kubelet[2244]: I0710 00:29:00.624512 2244 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Jul 10 00:29:00.624558 kubelet[2244]: I0710 00:29:00.624546 2244 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 10 00:29:00.624839 kubelet[2244]: I0710 00:29:00.624810 2244 server.go:934] "Client rotation is on, will bootstrap in background" Jul 10 00:29:00.657849 kubelet[2244]: E0710 00:29:00.657678 2244 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.64:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.64:6443: connect: connection refused" logger="UnhandledError" Jul 10 00:29:00.659923 kubelet[2244]: I0710 00:29:00.659538 2244 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 10 00:29:00.667284 kubelet[2244]: E0710 00:29:00.667193 2244 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jul 10 00:29:00.667284 kubelet[2244]: I0710 00:29:00.667286 2244 
server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jul 10 00:29:00.670650 kubelet[2244]: I0710 00:29:00.670626 2244 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jul 10 00:29:00.671694 kubelet[2244]: I0710 00:29:00.671659 2244 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Jul 10 00:29:00.671814 kubelet[2244]: I0710 00:29:00.671791 2244 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 10 00:29:00.671977 kubelet[2244]: I0710 00:29:00.671816 2244 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"
CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1} Jul 10 00:29:00.672057 kubelet[2244]: I0710 00:29:00.671980 2244 topology_manager.go:138] "Creating topology manager with none policy" Jul 10 00:29:00.672057 kubelet[2244]: I0710 00:29:00.671988 2244 container_manager_linux.go:300] "Creating device plugin manager" Jul 10 00:29:00.672230 kubelet[2244]: I0710 00:29:00.672218 2244 state_mem.go:36] "Initialized new in-memory state store" Jul 10 00:29:00.677574 kubelet[2244]: I0710 00:29:00.675957 2244 kubelet.go:408] "Attempting to sync node with API server" Jul 10 00:29:00.677574 kubelet[2244]: I0710 00:29:00.675986 2244 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 10 00:29:00.677574 kubelet[2244]: I0710 00:29:00.676006 2244 kubelet.go:314] "Adding apiserver pod source" Jul 10 00:29:00.677574 kubelet[2244]: I0710 00:29:00.676075 2244 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 10 00:29:00.679630 kubelet[2244]: W0710 00:29:00.679445 2244 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.64:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.64:6443: connect: connection refused Jul 10 00:29:00.679630 kubelet[2244]: E0710 00:29:00.679503 2244 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.64:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.64:6443: connect: connection refused" logger="UnhandledError" Jul 10 
00:29:00.679630 kubelet[2244]: W0710 00:29:00.679557 2244 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.64:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.64:6443: connect: connection refused Jul 10 00:29:00.679630 kubelet[2244]: E0710 00:29:00.679596 2244 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.64:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.64:6443: connect: connection refused" logger="UnhandledError" Jul 10 00:29:00.680954 kubelet[2244]: I0710 00:29:00.680931 2244 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jul 10 00:29:00.681755 kubelet[2244]: I0710 00:29:00.681742 2244 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jul 10 00:29:00.681911 kubelet[2244]: W0710 00:29:00.681899 2244 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Jul 10 00:29:00.682974 kubelet[2244]: I0710 00:29:00.682865 2244 server.go:1274] "Started kubelet" Jul 10 00:29:00.684001 kubelet[2244]: I0710 00:29:00.683976 2244 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jul 10 00:29:00.684534 kubelet[2244]: I0710 00:29:00.684484 2244 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 10 00:29:00.684788 kubelet[2244]: I0710 00:29:00.684765 2244 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 10 00:29:00.689729 kubelet[2244]: I0710 00:29:00.686067 2244 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 10 00:29:00.689729 kubelet[2244]: I0710 00:29:00.686494 2244 server.go:449] "Adding debug handlers to kubelet server" Jul 10 00:29:00.689729 kubelet[2244]: I0710 00:29:00.688575 2244 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 10 00:29:00.689729 kubelet[2244]: I0710 00:29:00.688892 2244 volume_manager.go:289] "Starting Kubelet Volume Manager" Jul 10 00:29:00.689729 kubelet[2244]: E0710 00:29:00.689302 2244 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 10 00:29:00.689729 kubelet[2244]: I0710 00:29:00.689349 2244 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Jul 10 00:29:00.690068 kubelet[2244]: I0710 00:29:00.690041 2244 reconciler.go:26] "Reconciler: start to sync state" Jul 10 00:29:00.690265 kubelet[2244]: W0710 00:29:00.690230 2244 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.64:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.64:6443: connect: connection refused Jul 10 00:29:00.690356 kubelet[2244]: E0710 00:29:00.690341 2244 reflector.go:158] "Unhandled 
Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.64:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.64:6443: connect: connection refused" logger="UnhandledError" Jul 10 00:29:00.691699 kubelet[2244]: E0710 00:29:00.690491 2244 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.64:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.64:6443: connect: connection refused" interval="200ms" Jul 10 00:29:00.692762 kubelet[2244]: I0710 00:29:00.692739 2244 factory.go:221] Registration of the systemd container factory successfully Jul 10 00:29:00.692962 kubelet[2244]: I0710 00:29:00.692944 2244 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 10 00:29:00.693496 kubelet[2244]: E0710 00:29:00.691498 2244 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.64:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.64:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1850bc57e1c1569c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-07-10 00:29:00.682843804 +0000 UTC m=+0.538888259,LastTimestamp:2025-07-10 00:29:00.682843804 +0000 UTC m=+0.538888259,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jul 10 00:29:00.694270 kubelet[2244]: E0710 00:29:00.694250 2244 kubelet.go:1478] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 10 00:29:00.694901 kubelet[2244]: I0710 00:29:00.694882 2244 factory.go:221] Registration of the containerd container factory successfully Jul 10 00:29:00.705224 kubelet[2244]: I0710 00:29:00.702171 2244 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jul 10 00:29:00.705224 kubelet[2244]: I0710 00:29:00.703717 2244 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jul 10 00:29:00.705224 kubelet[2244]: I0710 00:29:00.703734 2244 status_manager.go:217] "Starting to sync pod status with apiserver" Jul 10 00:29:00.705224 kubelet[2244]: I0710 00:29:00.703755 2244 kubelet.go:2321] "Starting kubelet main sync loop" Jul 10 00:29:00.705224 kubelet[2244]: E0710 00:29:00.703794 2244 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 10 00:29:00.705224 kubelet[2244]: W0710 00:29:00.704519 2244 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.64:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.64:6443: connect: connection refused Jul 10 00:29:00.705224 kubelet[2244]: E0710 00:29:00.704549 2244 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.64:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.64:6443: connect: connection refused" logger="UnhandledError" Jul 10 00:29:00.712826 kubelet[2244]: I0710 00:29:00.712808 2244 cpu_manager.go:214] "Starting CPU manager" policy="none" Jul 10 00:29:00.712973 kubelet[2244]: I0710 00:29:00.712942 2244 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jul 10 00:29:00.713035 kubelet[2244]: I0710 00:29:00.713022 2244 
state_mem.go:36] "Initialized new in-memory state store" Jul 10 00:29:00.715139 kubelet[2244]: I0710 00:29:00.715120 2244 policy_none.go:49] "None policy: Start" Jul 10 00:29:00.715914 kubelet[2244]: I0710 00:29:00.715894 2244 memory_manager.go:170] "Starting memorymanager" policy="None" Jul 10 00:29:00.715914 kubelet[2244]: I0710 00:29:00.715918 2244 state_mem.go:35] "Initializing new in-memory state store" Jul 10 00:29:00.720613 kubelet[2244]: I0710 00:29:00.720578 2244 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 10 00:29:00.720998 kubelet[2244]: I0710 00:29:00.720967 2244 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 10 00:29:00.721043 kubelet[2244]: I0710 00:29:00.720987 2244 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 10 00:29:00.721749 kubelet[2244]: I0710 00:29:00.721723 2244 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 10 00:29:00.722826 kubelet[2244]: E0710 00:29:00.722807 2244 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jul 10 00:29:00.822614 kubelet[2244]: I0710 00:29:00.822573 2244 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jul 10 00:29:00.823123 kubelet[2244]: E0710 00:29:00.823083 2244 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.64:6443/api/v1/nodes\": dial tcp 10.0.0.64:6443: connect: connection refused" node="localhost" Jul 10 00:29:00.892795 kubelet[2244]: E0710 00:29:00.892688 2244 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.64:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.64:6443: connect: connection refused" interval="400ms" Jul 10 00:29:00.991153 kubelet[2244]: I0710 00:29:00.991079 
2244 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 10 00:29:00.991153 kubelet[2244]: I0710 00:29:00.991137 2244 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b8ea7f213f3541e9e38adcf7476a1ac9-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"b8ea7f213f3541e9e38adcf7476a1ac9\") " pod="kube-system/kube-apiserver-localhost" Jul 10 00:29:00.991153 kubelet[2244]: I0710 00:29:00.991157 2244 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b8ea7f213f3541e9e38adcf7476a1ac9-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"b8ea7f213f3541e9e38adcf7476a1ac9\") " pod="kube-system/kube-apiserver-localhost" Jul 10 00:29:00.991315 kubelet[2244]: I0710 00:29:00.991193 2244 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 10 00:29:00.991315 kubelet[2244]: I0710 00:29:00.991225 2244 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 10 00:29:00.991315 kubelet[2244]: I0710 00:29:00.991241 2244 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b35b56493416c25588cb530e37ffc065-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"b35b56493416c25588cb530e37ffc065\") " pod="kube-system/kube-scheduler-localhost" Jul 10 00:29:00.991315 kubelet[2244]: I0710 00:29:00.991262 2244 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b8ea7f213f3541e9e38adcf7476a1ac9-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"b8ea7f213f3541e9e38adcf7476a1ac9\") " pod="kube-system/kube-apiserver-localhost" Jul 10 00:29:00.991315 kubelet[2244]: I0710 00:29:00.991278 2244 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 10 00:29:00.991422 kubelet[2244]: I0710 00:29:00.991294 2244 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 10 00:29:01.025204 kubelet[2244]: I0710 00:29:01.025170 2244 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jul 10 00:29:01.025566 kubelet[2244]: E0710 00:29:01.025521 2244 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.64:6443/api/v1/nodes\": dial tcp 10.0.0.64:6443: connect: connection refused" node="localhost" Jul 10 00:29:01.110220 kubelet[2244]: E0710 00:29:01.110171 2244 
dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:29:01.110283 kubelet[2244]: E0710 00:29:01.110256 2244 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:29:01.110899 containerd[1542]: time="2025-07-10T00:29:01.110861609Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:b8ea7f213f3541e9e38adcf7476a1ac9,Namespace:kube-system,Attempt:0,}" Jul 10 00:29:01.111178 kubelet[2244]: E0710 00:29:01.111040 2244 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:29:01.111241 containerd[1542]: time="2025-07-10T00:29:01.110936865Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:b35b56493416c25588cb530e37ffc065,Namespace:kube-system,Attempt:0,}" Jul 10 00:29:01.111597 containerd[1542]: time="2025-07-10T00:29:01.111458178Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:3f04709fe51ae4ab5abd58e8da771b74,Namespace:kube-system,Attempt:0,}" Jul 10 00:29:01.294289 kubelet[2244]: E0710 00:29:01.294127 2244 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.64:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.64:6443: connect: connection refused" interval="800ms" Jul 10 00:29:01.426727 kubelet[2244]: I0710 00:29:01.426690 2244 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jul 10 00:29:01.427089 kubelet[2244]: E0710 00:29:01.427025 2244 kubelet_node_status.go:95] "Unable to register node with API server" err="Post 
\"https://10.0.0.64:6443/api/v1/nodes\": dial tcp 10.0.0.64:6443: connect: connection refused" node="localhost" Jul 10 00:29:01.628284 kubelet[2244]: W0710 00:29:01.628143 2244 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.64:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.64:6443: connect: connection refused Jul 10 00:29:01.628284 kubelet[2244]: E0710 00:29:01.628235 2244 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.64:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.64:6443: connect: connection refused" logger="UnhandledError" Jul 10 00:29:01.661584 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3378156246.mount: Deactivated successfully. Jul 10 00:29:01.667845 containerd[1542]: time="2025-07-10T00:29:01.667605286Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 10 00:29:01.668380 containerd[1542]: time="2025-07-10T00:29:01.668348128Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jul 10 00:29:01.669171 containerd[1542]: time="2025-07-10T00:29:01.669139435Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 10 00:29:01.669943 containerd[1542]: time="2025-07-10T00:29:01.669919346Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 10 00:29:01.670474 containerd[1542]: time="2025-07-10T00:29:01.670445778Z" level=info 
msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269175" Jul 10 00:29:01.671212 containerd[1542]: time="2025-07-10T00:29:01.671111125Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 10 00:29:01.671839 kubelet[2244]: W0710 00:29:01.671554 2244 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.64:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.64:6443: connect: connection refused Jul 10 00:29:01.671839 kubelet[2244]: E0710 00:29:01.671648 2244 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.64:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.64:6443: connect: connection refused" logger="UnhandledError" Jul 10 00:29:01.671948 containerd[1542]: time="2025-07-10T00:29:01.671709214Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jul 10 00:29:01.673823 containerd[1542]: time="2025-07-10T00:29:01.673760119Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 10 00:29:01.677535 containerd[1542]: time="2025-07-10T00:29:01.677301107Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 
565.77583ms" Jul 10 00:29:01.679724 containerd[1542]: time="2025-07-10T00:29:01.679632082Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 568.685581ms" Jul 10 00:29:01.682172 containerd[1542]: time="2025-07-10T00:29:01.682137122Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 570.897473ms" Jul 10 00:29:01.812093 containerd[1542]: time="2025-07-10T00:29:01.811989744Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 10 00:29:01.812093 containerd[1542]: time="2025-07-10T00:29:01.812055003Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 10 00:29:01.812093 containerd[1542]: time="2025-07-10T00:29:01.812069398Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 10 00:29:01.812310 containerd[1542]: time="2025-07-10T00:29:01.812214152Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 10 00:29:01.812758 containerd[1542]: time="2025-07-10T00:29:01.812701956Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 10 00:29:01.812895 containerd[1542]: time="2025-07-10T00:29:01.812859786Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 10 00:29:01.812980 containerd[1542]: time="2025-07-10T00:29:01.812934722Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 10 00:29:01.813159 containerd[1542]: time="2025-07-10T00:29:01.813126900Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 10 00:29:01.814110 containerd[1542]: time="2025-07-10T00:29:01.814050245Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 10 00:29:01.814188 containerd[1542]: time="2025-07-10T00:29:01.814104748Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 10 00:29:01.814188 containerd[1542]: time="2025-07-10T00:29:01.814124581Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 10 00:29:01.814296 containerd[1542]: time="2025-07-10T00:29:01.814213353Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 10 00:29:01.849611 kubelet[2244]: W0710 00:29:01.849555 2244 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.64:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.64:6443: connect: connection refused Jul 10 00:29:01.849790 kubelet[2244]: E0710 00:29:01.849752 2244 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.64:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.64:6443: connect: connection refused" logger="UnhandledError" Jul 10 00:29:01.861429 containerd[1542]: time="2025-07-10T00:29:01.861388237Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:b8ea7f213f3541e9e38adcf7476a1ac9,Namespace:kube-system,Attempt:0,} returns sandbox id \"97233cb89258ab0b349627630f44611af7762f90ae6f1a0f7d2cbab2c21ecae4\"" Jul 10 00:29:01.862841 containerd[1542]: time="2025-07-10T00:29:01.862789989Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:3f04709fe51ae4ab5abd58e8da771b74,Namespace:kube-system,Attempt:0,} returns sandbox id \"8749a975e34c3541cc4378847f832ef281b25f26df0c9403b4e588af04fb5971\"" Jul 10 00:29:01.862998 kubelet[2244]: E0710 00:29:01.862975 2244 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:29:01.863371 kubelet[2244]: E0710 00:29:01.863349 2244 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:29:01.864832 containerd[1542]: time="2025-07-10T00:29:01.864478849Z" level=info 
msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:b35b56493416c25588cb530e37ffc065,Namespace:kube-system,Attempt:0,} returns sandbox id \"e397a9c959104362c7d5db3576c8b63b20de6af742146c099a26692b30d59466\"" Jul 10 00:29:01.865089 containerd[1542]: time="2025-07-10T00:29:01.865061103Z" level=info msg="CreateContainer within sandbox \"8749a975e34c3541cc4378847f832ef281b25f26df0c9403b4e588af04fb5971\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jul 10 00:29:01.865129 containerd[1542]: time="2025-07-10T00:29:01.865112567Z" level=info msg="CreateContainer within sandbox \"97233cb89258ab0b349627630f44611af7762f90ae6f1a0f7d2cbab2c21ecae4\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jul 10 00:29:01.865547 kubelet[2244]: E0710 00:29:01.865516 2244 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:29:01.867722 containerd[1542]: time="2025-07-10T00:29:01.867581098Z" level=info msg="CreateContainer within sandbox \"e397a9c959104362c7d5db3576c8b63b20de6af742146c099a26692b30d59466\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jul 10 00:29:01.882494 containerd[1542]: time="2025-07-10T00:29:01.882340981Z" level=info msg="CreateContainer within sandbox \"97233cb89258ab0b349627630f44611af7762f90ae6f1a0f7d2cbab2c21ecae4\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"04aa1dbe8605577080a7750303a354a40f59dc1068d87c1a8d2d1fca60a24e49\"" Jul 10 00:29:01.883097 containerd[1542]: time="2025-07-10T00:29:01.883042757Z" level=info msg="StartContainer for \"04aa1dbe8605577080a7750303a354a40f59dc1068d87c1a8d2d1fca60a24e49\"" Jul 10 00:29:01.887038 containerd[1542]: time="2025-07-10T00:29:01.886966143Z" level=info msg="CreateContainer within sandbox \"e397a9c959104362c7d5db3576c8b63b20de6af742146c099a26692b30d59466\" for 
&ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"5f25b1feab98f5b902a8abbe7f7dc0595105001e9c56f30d75c855c41fb41a91\"" Jul 10 00:29:01.888363 containerd[1542]: time="2025-07-10T00:29:01.887269806Z" level=info msg="CreateContainer within sandbox \"8749a975e34c3541cc4378847f832ef281b25f26df0c9403b4e588af04fb5971\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"b97de6096c65f14c928e14b2ef05d785077af98bf53d7c8feebc5df9c13f2305\"" Jul 10 00:29:01.888363 containerd[1542]: time="2025-07-10T00:29:01.887427795Z" level=info msg="StartContainer for \"5f25b1feab98f5b902a8abbe7f7dc0595105001e9c56f30d75c855c41fb41a91\"" Jul 10 00:29:01.888363 containerd[1542]: time="2025-07-10T00:29:01.887670558Z" level=info msg="StartContainer for \"b97de6096c65f14c928e14b2ef05d785077af98bf53d7c8feebc5df9c13f2305\"" Jul 10 00:29:01.951479 kubelet[2244]: W0710 00:29:01.946027 2244 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.64:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.64:6443: connect: connection refused Jul 10 00:29:01.951479 kubelet[2244]: E0710 00:29:01.946102 2244 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.64:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.64:6443: connect: connection refused" logger="UnhandledError" Jul 10 00:29:01.958988 containerd[1542]: time="2025-07-10T00:29:01.958935863Z" level=info msg="StartContainer for \"b97de6096c65f14c928e14b2ef05d785077af98bf53d7c8feebc5df9c13f2305\" returns successfully" Jul 10 00:29:01.959100 containerd[1542]: time="2025-07-10T00:29:01.959074179Z" level=info msg="StartContainer for \"5f25b1feab98f5b902a8abbe7f7dc0595105001e9c56f30d75c855c41fb41a91\" returns successfully" Jul 10 
00:29:01.959132 containerd[1542]: time="2025-07-10T00:29:01.959103009Z" level=info msg="StartContainer for \"04aa1dbe8605577080a7750303a354a40f59dc1068d87c1a8d2d1fca60a24e49\" returns successfully" Jul 10 00:29:02.096337 kubelet[2244]: E0710 00:29:02.096292 2244 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.64:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.64:6443: connect: connection refused" interval="1.6s" Jul 10 00:29:02.229279 kubelet[2244]: I0710 00:29:02.229166 2244 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jul 10 00:29:02.717271 kubelet[2244]: E0710 00:29:02.715563 2244 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:29:02.717271 kubelet[2244]: E0710 00:29:02.716101 2244 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:29:02.718840 kubelet[2244]: E0710 00:29:02.718819 2244 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:29:03.699797 kubelet[2244]: E0710 00:29:03.699750 2244 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Jul 10 00:29:03.721568 kubelet[2244]: E0710 00:29:03.721364 2244 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:29:03.727980 kubelet[2244]: I0710 00:29:03.727941 2244 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Jul 10 00:29:03.727980 kubelet[2244]: E0710 
00:29:03.727976 2244 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Jul 10 00:29:03.735548 kubelet[2244]: E0710 00:29:03.735504 2244 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 10 00:29:03.836479 kubelet[2244]: E0710 00:29:03.836443 2244 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 10 00:29:03.937191 kubelet[2244]: E0710 00:29:03.937158 2244 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 10 00:29:04.038354 kubelet[2244]: E0710 00:29:04.038215 2244 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 10 00:29:04.437462 kubelet[2244]: E0710 00:29:04.437357 2244 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Jul 10 00:29:04.437572 kubelet[2244]: E0710 00:29:04.437558 2244 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:29:04.679411 kubelet[2244]: I0710 00:29:04.679338 2244 apiserver.go:52] "Watching apiserver" Jul 10 00:29:04.690278 kubelet[2244]: I0710 00:29:04.690141 2244 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Jul 10 00:29:04.726945 kubelet[2244]: E0710 00:29:04.726903 2244 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:29:05.611672 systemd[1]: Reloading requested from client PID 2522 ('systemctl') (unit session-7.scope)... 
Jul 10 00:29:05.611688 systemd[1]: Reloading... Jul 10 00:29:05.678254 zram_generator::config[2561]: No configuration found. Jul 10 00:29:05.723886 kubelet[2244]: E0710 00:29:05.723842 2244 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:29:05.766708 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 10 00:29:05.829497 systemd[1]: Reloading finished in 217 ms. Jul 10 00:29:05.854736 kubelet[2244]: I0710 00:29:05.854663 2244 dynamic_cafile_content.go:174] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 10 00:29:05.854855 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jul 10 00:29:05.868691 systemd[1]: kubelet.service: Deactivated successfully. Jul 10 00:29:05.868961 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 10 00:29:05.882607 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 10 00:29:05.986811 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 10 00:29:05.992256 (kubelet)[2613]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 10 00:29:06.038031 kubelet[2613]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 10 00:29:06.038031 kubelet[2613]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. 
Jul 10 00:29:06.038031 kubelet[2613]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jul 10 00:29:06.038031 kubelet[2613]: I0710 00:29:06.037786 2613 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jul 10 00:29:06.046097 kubelet[2613]: I0710 00:29:06.042954 2613 server.go:491] "Kubelet version" kubeletVersion="v1.31.8"
Jul 10 00:29:06.046097 kubelet[2613]: I0710 00:29:06.042980 2613 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jul 10 00:29:06.046097 kubelet[2613]: I0710 00:29:06.043554 2613 server.go:934] "Client rotation is on, will bootstrap in background"
Jul 10 00:29:06.046388 kubelet[2613]: I0710 00:29:06.046354 2613 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Jul 10 00:29:06.048986 kubelet[2613]: I0710 00:29:06.048953 2613 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jul 10 00:29:06.051868 kubelet[2613]: E0710 00:29:06.051837 2613 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Jul 10 00:29:06.051868 kubelet[2613]: I0710 00:29:06.051870 2613 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Jul 10 00:29:06.054424 kubelet[2613]: I0710 00:29:06.054392 2613 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Jul 10 00:29:06.054749 kubelet[2613]: I0710 00:29:06.054713 2613 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Jul 10 00:29:06.054840 kubelet[2613]: I0710 00:29:06.054810 2613 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jul 10 00:29:06.055000 kubelet[2613]: I0710 00:29:06.054831 2613 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1}
Jul 10 00:29:06.055000 kubelet[2613]: I0710 00:29:06.054987 2613 topology_manager.go:138] "Creating topology manager with none policy"
Jul 10 00:29:06.055000 kubelet[2613]: I0710 00:29:06.054994 2613 container_manager_linux.go:300] "Creating device plugin manager"
Jul 10 00:29:06.055262 kubelet[2613]: I0710 00:29:06.055024 2613 state_mem.go:36] "Initialized new in-memory state store"
Jul 10 00:29:06.055262 kubelet[2613]: I0710 00:29:06.055118 2613 kubelet.go:408] "Attempting to sync node with API server"
Jul 10 00:29:06.055262 kubelet[2613]: I0710 00:29:06.055130 2613 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests"
Jul 10 00:29:06.055262 kubelet[2613]: I0710 00:29:06.055146 2613 kubelet.go:314] "Adding apiserver pod source"
Jul 10 00:29:06.055262 kubelet[2613]: I0710 00:29:06.055158 2613 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jul 10 00:29:06.055840 kubelet[2613]: I0710 00:29:06.055819 2613 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Jul 10 00:29:06.058224 kubelet[2613]: I0710 00:29:06.056263 2613 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Jul 10 00:29:06.058224 kubelet[2613]: I0710 00:29:06.056611 2613 server.go:1274] "Started kubelet"
Jul 10 00:29:06.058224 kubelet[2613]: I0710 00:29:06.056679 2613 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Jul 10 00:29:06.058224 kubelet[2613]: I0710 00:29:06.056883 2613 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jul 10 00:29:06.058224 kubelet[2613]: I0710 00:29:06.057103 2613 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jul 10 00:29:06.058224 kubelet[2613]: I0710 00:29:06.057793 2613 server.go:449] "Adding debug handlers to kubelet server"
Jul 10 00:29:06.060018 kubelet[2613]: I0710 00:29:06.059996 2613 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jul 10 00:29:06.064209 kubelet[2613]: I0710 00:29:06.060872 2613 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Jul 10 00:29:06.065494 kubelet[2613]: I0710 00:29:06.065468 2613 volume_manager.go:289] "Starting Kubelet Volume Manager"
Jul 10 00:29:06.066938 kubelet[2613]: I0710 00:29:06.065893 2613 desired_state_of_world_populator.go:147] "Desired state populator starts to run"
Jul 10 00:29:06.066938 kubelet[2613]: E0710 00:29:06.066741 2613 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Jul 10 00:29:06.066938 kubelet[2613]: I0710 00:29:06.066883 2613 reconciler.go:26] "Reconciler: start to sync state"
Jul 10 00:29:06.068789 kubelet[2613]: E0710 00:29:06.068479 2613 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Jul 10 00:29:06.068789 kubelet[2613]: I0710 00:29:06.068528 2613 factory.go:221] Registration of the systemd container factory successfully
Jul 10 00:29:06.068789 kubelet[2613]: I0710 00:29:06.068616 2613 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Jul 10 00:29:06.070856 kubelet[2613]: I0710 00:29:06.070828 2613 factory.go:221] Registration of the containerd container factory successfully
Jul 10 00:29:06.071514 kubelet[2613]: I0710 00:29:06.071490 2613 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Jul 10 00:29:06.072586 kubelet[2613]: I0710 00:29:06.072568 2613 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Jul 10 00:29:06.072675 kubelet[2613]: I0710 00:29:06.072666 2613 status_manager.go:217] "Starting to sync pod status with apiserver"
Jul 10 00:29:06.072739 kubelet[2613]: I0710 00:29:06.072731 2613 kubelet.go:2321] "Starting kubelet main sync loop"
Jul 10 00:29:06.072839 kubelet[2613]: E0710 00:29:06.072823 2613 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Jul 10 00:29:06.121067 kubelet[2613]: I0710 00:29:06.120967 2613 cpu_manager.go:214] "Starting CPU manager" policy="none"
Jul 10 00:29:06.121067 kubelet[2613]: I0710 00:29:06.120992 2613 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Jul 10 00:29:06.121067 kubelet[2613]: I0710 00:29:06.121013 2613 state_mem.go:36] "Initialized new in-memory state store"
Jul 10 00:29:06.121234 kubelet[2613]: I0710 00:29:06.121157 2613 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Jul 10 00:29:06.121234 kubelet[2613]: I0710 00:29:06.121169 2613 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Jul 10 00:29:06.121234 kubelet[2613]: I0710 00:29:06.121186 2613 policy_none.go:49] "None policy: Start"
Jul 10 00:29:06.121916 kubelet[2613]: I0710 00:29:06.121898 2613 memory_manager.go:170] "Starting memorymanager" policy="None"
Jul 10 00:29:06.121952 kubelet[2613]: I0710 00:29:06.121922 2613 state_mem.go:35] "Initializing new in-memory state store"
Jul 10 00:29:06.122080 kubelet[2613]: I0710 00:29:06.122044 2613 state_mem.go:75] "Updated machine memory state"
Jul 10 00:29:06.123100 kubelet[2613]: I0710 00:29:06.123070 2613 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Jul 10 00:29:06.123277 kubelet[2613]: I0710 00:29:06.123251 2613 eviction_manager.go:189] "Eviction manager: starting control loop"
Jul 10 00:29:06.123314 kubelet[2613]: I0710 00:29:06.123272 2613 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Jul 10 00:29:06.123684 kubelet[2613]: I0710 00:29:06.123657 2613 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Jul 10 00:29:06.181416 kubelet[2613]: E0710 00:29:06.181373 2613 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost"
Jul 10 00:29:06.226781 kubelet[2613]: I0710 00:29:06.226756 2613 kubelet_node_status.go:72] "Attempting to register node" node="localhost"
Jul 10 00:29:06.234775 kubelet[2613]: I0710 00:29:06.234715 2613 kubelet_node_status.go:111] "Node was previously registered" node="localhost"
Jul 10 00:29:06.234983 kubelet[2613]: I0710 00:29:06.234951 2613 kubelet_node_status.go:75] "Successfully registered node" node="localhost"
Jul 10 00:29:06.268043 kubelet[2613]: I0710 00:29:06.267975 2613 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost"
Jul 10 00:29:06.268146 kubelet[2613]: I0710 00:29:06.268045 2613 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b8ea7f213f3541e9e38adcf7476a1ac9-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"b8ea7f213f3541e9e38adcf7476a1ac9\") " pod="kube-system/kube-apiserver-localhost"
Jul 10 00:29:06.268146 kubelet[2613]: I0710 00:29:06.268078 2613 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost"
Jul 10 00:29:06.268146 kubelet[2613]: I0710 00:29:06.268097 2613 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b35b56493416c25588cb530e37ffc065-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"b35b56493416c25588cb530e37ffc065\") " pod="kube-system/kube-scheduler-localhost"
Jul 10 00:29:06.268146 kubelet[2613]: I0710 00:29:06.268113 2613 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b8ea7f213f3541e9e38adcf7476a1ac9-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"b8ea7f213f3541e9e38adcf7476a1ac9\") " pod="kube-system/kube-apiserver-localhost"
Jul 10 00:29:06.268146 kubelet[2613]: I0710 00:29:06.268129 2613 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b8ea7f213f3541e9e38adcf7476a1ac9-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"b8ea7f213f3541e9e38adcf7476a1ac9\") " pod="kube-system/kube-apiserver-localhost"
Jul 10 00:29:06.268300 kubelet[2613]: I0710 00:29:06.268142 2613 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost"
Jul 10 00:29:06.268300 kubelet[2613]: I0710 00:29:06.268157 2613 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost"
Jul 10 00:29:06.268300 kubelet[2613]: I0710 00:29:06.268174 2613 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost"
Jul 10 00:29:06.480188 kubelet[2613]: E0710 00:29:06.480063 2613 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 10 00:29:06.482593 kubelet[2613]: E0710 00:29:06.482523 2613 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 10 00:29:06.482593 kubelet[2613]: E0710 00:29:06.482584 2613 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 10 00:29:06.611702 sudo[2647]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin
Jul 10 00:29:06.611979 sudo[2647]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0)
Jul 10 00:29:07.055693 kubelet[2613]: I0710 00:29:07.055647 2613 apiserver.go:52] "Watching apiserver"
Jul 10 00:29:07.066817 kubelet[2613]: I0710 00:29:07.066790 2613 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world"
Jul 10 00:29:07.069723 sudo[2647]: pam_unix(sudo:session): session closed for user root
Jul 10 00:29:07.141617 kubelet[2613]: E0710 00:29:07.141540 2613 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost"
Jul 10 00:29:07.142666 kubelet[2613]: E0710 00:29:07.142560 2613 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 10 00:29:07.142746 kubelet[2613]: E0710 00:29:07.142703 2613 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost"
Jul 10 00:29:07.142871 kubelet[2613]: E0710 00:29:07.142836 2613 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 10 00:29:07.142982 kubelet[2613]: E0710 00:29:07.142966 2613 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost"
Jul 10 00:29:07.143077 kubelet[2613]: E0710 00:29:07.143063 2613 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 10 00:29:07.151185 kubelet[2613]: I0710 00:29:07.151132 2613 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.151119687 podStartE2EDuration="1.151119687s" podCreationTimestamp="2025-07-10 00:29:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-10 00:29:07.143558647 +0000 UTC m=+1.147827509" watchObservedRunningTime="2025-07-10 00:29:07.151119687 +0000 UTC m=+1.155388509"
Jul 10 00:29:07.158795 kubelet[2613]: I0710 00:29:07.158747 2613 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=3.158732355 podStartE2EDuration="3.158732355s" podCreationTimestamp="2025-07-10 00:29:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-10 00:29:07.151350757 +0000 UTC m=+1.155619619" watchObservedRunningTime="2025-07-10 00:29:07.158732355 +0000 UTC m=+1.163001217"
Jul 10 00:29:07.169129 kubelet[2613]: I0710 00:29:07.169034 2613 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.169020164 podStartE2EDuration="1.169020164s" podCreationTimestamp="2025-07-10 00:29:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-10 00:29:07.159702345 +0000 UTC m=+1.163971207" watchObservedRunningTime="2025-07-10 00:29:07.169020164 +0000 UTC m=+1.173289026"
Jul 10 00:29:08.100822 kubelet[2613]: E0710 00:29:08.100721 2613 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 10 00:29:08.100822 kubelet[2613]: E0710 00:29:08.100726 2613 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 10 00:29:08.101946 kubelet[2613]: E0710 00:29:08.100875 2613 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 10 00:29:09.064578 sudo[1749]: pam_unix(sudo:session): session closed for user root
Jul 10 00:29:09.066014 sshd[1742]: pam_unix(sshd:session): session closed for user core
Jul 10 00:29:09.069139 systemd[1]: sshd@6-10.0.0.64:22-10.0.0.1:58276.service: Deactivated successfully.
Jul 10 00:29:09.070938 systemd-logind[1522]: Session 7 logged out. Waiting for processes to exit.
Jul 10 00:29:09.070994 systemd[1]: session-7.scope: Deactivated successfully.
Jul 10 00:29:09.071933 systemd-logind[1522]: Removed session 7.
Jul 10 00:29:09.102640 kubelet[2613]: E0710 00:29:09.102596 2613 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 10 00:29:09.102996 kubelet[2613]: E0710 00:29:09.102672 2613 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 10 00:29:12.170575 kubelet[2613]: I0710 00:29:12.170502 2613 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Jul 10 00:29:12.171148 containerd[1542]: time="2025-07-10T00:29:12.170807792Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Jul 10 00:29:12.171639 kubelet[2613]: I0710 00:29:12.171382 2613 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Jul 10 00:29:12.393285 kubelet[2613]: E0710 00:29:12.393229 2613 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 10 00:29:13.110216 kubelet[2613]: E0710 00:29:13.110141 2613 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 10 00:29:13.113024 kubelet[2613]: I0710 00:29:13.112988 2613 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/073addf9-45a4-4183-ba3c-13a2309ae575-lib-modules\") pod \"cilium-cn5lp\" (UID: \"073addf9-45a4-4183-ba3c-13a2309ae575\") " pod="kube-system/cilium-cn5lp"
Jul 10 00:29:13.113934 kubelet[2613]: I0710 00:29:13.113910 2613 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/073addf9-45a4-4183-ba3c-13a2309ae575-cni-path\") pod \"cilium-cn5lp\" (UID: \"073addf9-45a4-4183-ba3c-13a2309ae575\") " pod="kube-system/cilium-cn5lp"
Jul 10 00:29:13.114854 kubelet[2613]: I0710 00:29:13.114406 2613 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wbjb6\" (UniqueName: \"kubernetes.io/projected/073addf9-45a4-4183-ba3c-13a2309ae575-kube-api-access-wbjb6\") pod \"cilium-cn5lp\" (UID: \"073addf9-45a4-4183-ba3c-13a2309ae575\") " pod="kube-system/cilium-cn5lp"
Jul 10 00:29:13.114854 kubelet[2613]: I0710 00:29:13.114441 2613 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/073addf9-45a4-4183-ba3c-13a2309ae575-cilium-config-path\") pod \"cilium-cn5lp\" (UID: \"073addf9-45a4-4183-ba3c-13a2309ae575\") " pod="kube-system/cilium-cn5lp"
Jul 10 00:29:13.114854 kubelet[2613]: I0710 00:29:13.114491 2613 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/a2ff5b9e-e4d7-4320-b420-48abad6da55a-kube-proxy\") pod \"kube-proxy-dw27c\" (UID: \"a2ff5b9e-e4d7-4320-b420-48abad6da55a\") " pod="kube-system/kube-proxy-dw27c"
Jul 10 00:29:13.114854 kubelet[2613]: I0710 00:29:13.114507 2613 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/073addf9-45a4-4183-ba3c-13a2309ae575-cilium-run\") pod \"cilium-cn5lp\" (UID: \"073addf9-45a4-4183-ba3c-13a2309ae575\") " pod="kube-system/cilium-cn5lp"
Jul 10 00:29:13.114854 kubelet[2613]: I0710 00:29:13.114521 2613 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/073addf9-45a4-4183-ba3c-13a2309ae575-cilium-cgroup\") pod \"cilium-cn5lp\" (UID: \"073addf9-45a4-4183-ba3c-13a2309ae575\") " pod="kube-system/cilium-cn5lp"
Jul 10 00:29:13.114854 kubelet[2613]: I0710 00:29:13.114540 2613 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a2ff5b9e-e4d7-4320-b420-48abad6da55a-lib-modules\") pod \"kube-proxy-dw27c\" (UID: \"a2ff5b9e-e4d7-4320-b420-48abad6da55a\") " pod="kube-system/kube-proxy-dw27c"
Jul 10 00:29:13.115039 kubelet[2613]: I0710 00:29:13.114556 2613 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/073addf9-45a4-4183-ba3c-13a2309ae575-bpf-maps\") pod \"cilium-cn5lp\" (UID: \"073addf9-45a4-4183-ba3c-13a2309ae575\") " pod="kube-system/cilium-cn5lp"
Jul 10 00:29:13.115039 kubelet[2613]: I0710 00:29:13.114606 2613 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/073addf9-45a4-4183-ba3c-13a2309ae575-xtables-lock\") pod \"cilium-cn5lp\" (UID: \"073addf9-45a4-4183-ba3c-13a2309ae575\") " pod="kube-system/cilium-cn5lp"
Jul 10 00:29:13.115039 kubelet[2613]: I0710 00:29:13.114623 2613 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j6vnb\" (UniqueName: \"kubernetes.io/projected/a2ff5b9e-e4d7-4320-b420-48abad6da55a-kube-api-access-j6vnb\") pod \"kube-proxy-dw27c\" (UID: \"a2ff5b9e-e4d7-4320-b420-48abad6da55a\") " pod="kube-system/kube-proxy-dw27c"
Jul 10 00:29:13.115039 kubelet[2613]: I0710 00:29:13.114640 2613 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/073addf9-45a4-4183-ba3c-13a2309ae575-etc-cni-netd\") pod \"cilium-cn5lp\" (UID: \"073addf9-45a4-4183-ba3c-13a2309ae575\") " pod="kube-system/cilium-cn5lp"
Jul 10 00:29:13.115039 kubelet[2613]: I0710 00:29:13.114656 2613 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/073addf9-45a4-4183-ba3c-13a2309ae575-hubble-tls\") pod \"cilium-cn5lp\" (UID: \"073addf9-45a4-4183-ba3c-13a2309ae575\") " pod="kube-system/cilium-cn5lp"
Jul 10 00:29:13.115039 kubelet[2613]: I0710 00:29:13.114671 2613 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a2ff5b9e-e4d7-4320-b420-48abad6da55a-xtables-lock\") pod \"kube-proxy-dw27c\" (UID: \"a2ff5b9e-e4d7-4320-b420-48abad6da55a\") " pod="kube-system/kube-proxy-dw27c"
Jul 10 00:29:13.115158 kubelet[2613]: I0710 00:29:13.114691 2613 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/073addf9-45a4-4183-ba3c-13a2309ae575-host-proc-sys-kernel\") pod \"cilium-cn5lp\" (UID: \"073addf9-45a4-4183-ba3c-13a2309ae575\") " pod="kube-system/cilium-cn5lp"
Jul 10 00:29:13.115158 kubelet[2613]: I0710 00:29:13.114728 2613 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/073addf9-45a4-4183-ba3c-13a2309ae575-hostproc\") pod \"cilium-cn5lp\" (UID: \"073addf9-45a4-4183-ba3c-13a2309ae575\") " pod="kube-system/cilium-cn5lp"
Jul 10 00:29:13.115158 kubelet[2613]: I0710 00:29:13.114742 2613 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/073addf9-45a4-4183-ba3c-13a2309ae575-clustermesh-secrets\") pod \"cilium-cn5lp\" (UID: \"073addf9-45a4-4183-ba3c-13a2309ae575\") " pod="kube-system/cilium-cn5lp"
Jul 10 00:29:13.115158 kubelet[2613]: I0710 00:29:13.114760 2613 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/073addf9-45a4-4183-ba3c-13a2309ae575-host-proc-sys-net\") pod \"cilium-cn5lp\" (UID: \"073addf9-45a4-4183-ba3c-13a2309ae575\") " pod="kube-system/cilium-cn5lp"
Jul 10 00:29:13.215840 kubelet[2613]: I0710 00:29:13.215794 2613 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/8aadeeaa-ea3e-41ae-a389-2d682a038c74-cilium-config-path\") pod \"cilium-operator-5d85765b45-c45bf\" (UID: \"8aadeeaa-ea3e-41ae-a389-2d682a038c74\") " pod="kube-system/cilium-operator-5d85765b45-c45bf"
Jul 10 00:29:13.215840 kubelet[2613]: I0710 00:29:13.215835 2613 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f58n2\" (UniqueName: \"kubernetes.io/projected/8aadeeaa-ea3e-41ae-a389-2d682a038c74-kube-api-access-f58n2\") pod \"cilium-operator-5d85765b45-c45bf\" (UID: \"8aadeeaa-ea3e-41ae-a389-2d682a038c74\") " pod="kube-system/cilium-operator-5d85765b45-c45bf"
Jul 10 00:29:13.251349 kubelet[2613]: E0710 00:29:13.251311 2613 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 10 00:29:13.252124 containerd[1542]: time="2025-07-10T00:29:13.252075542Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-dw27c,Uid:a2ff5b9e-e4d7-4320-b420-48abad6da55a,Namespace:kube-system,Attempt:0,}"
Jul 10 00:29:13.260420 kubelet[2613]: E0710 00:29:13.260376 2613 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 10 00:29:13.262507 containerd[1542]: time="2025-07-10T00:29:13.261145966Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-cn5lp,Uid:073addf9-45a4-4183-ba3c-13a2309ae575,Namespace:kube-system,Attempt:0,}"
Jul 10 00:29:13.275106 containerd[1542]: time="2025-07-10T00:29:13.275015124Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 10 00:29:13.275106 containerd[1542]: time="2025-07-10T00:29:13.275078114Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 10 00:29:13.275106 containerd[1542]: time="2025-07-10T00:29:13.275089353Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 10 00:29:13.275284 containerd[1542]: time="2025-07-10T00:29:13.275174500Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 10 00:29:13.282329 containerd[1542]: time="2025-07-10T00:29:13.282229541Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 10 00:29:13.282329 containerd[1542]: time="2025-07-10T00:29:13.282303170Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 10 00:29:13.282481 containerd[1542]: time="2025-07-10T00:29:13.282319088Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 10 00:29:13.283121 containerd[1542]: time="2025-07-10T00:29:13.283062818Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 10 00:29:13.314177 containerd[1542]: time="2025-07-10T00:29:13.314129323Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-dw27c,Uid:a2ff5b9e-e4d7-4320-b420-48abad6da55a,Namespace:kube-system,Attempt:0,} returns sandbox id \"e4524758ce7da9de727d447fae35b68146e1596a8cd32feced073a42dbb91d0d\""
Jul 10 00:29:13.314896 kubelet[2613]: E0710 00:29:13.314872 2613 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 10 00:29:13.315158 containerd[1542]: time="2025-07-10T00:29:13.315134975Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-cn5lp,Uid:073addf9-45a4-4183-ba3c-13a2309ae575,Namespace:kube-system,Attempt:0,} returns sandbox id \"8edc2e84da8d416d9737c6129f6960c67f9abff1ffed0e9afc7323e026e69a7b\""
Jul 10 00:29:13.317439 kubelet[2613]: E0710 00:29:13.317325 2613 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 10 00:29:13.319059 containerd[1542]: time="2025-07-10T00:29:13.319011244Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\""
Jul 10 00:29:13.323453 containerd[1542]: time="2025-07-10T00:29:13.323004776Z" level=info msg="CreateContainer within sandbox \"e4524758ce7da9de727d447fae35b68146e1596a8cd32feced073a42dbb91d0d\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Jul 10 00:29:13.351897 containerd[1542]: time="2025-07-10T00:29:13.351834449Z" level=info msg="CreateContainer within sandbox \"e4524758ce7da9de727d447fae35b68146e1596a8cd32feced073a42dbb91d0d\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"2df3a36c58490b1313709c36ae12b79f306fe750f481895744ba680d85af0478\""
Jul 10 00:29:13.352592 containerd[1542]: time="2025-07-10T00:29:13.352559423Z" level=info msg="StartContainer for \"2df3a36c58490b1313709c36ae12b79f306fe750f481895744ba680d85af0478\""
Jul 10 00:29:13.354960 kubelet[2613]: E0710 00:29:13.354921 2613 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 10 00:29:13.355445 containerd[1542]: time="2025-07-10T00:29:13.355362810Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-c45bf,Uid:8aadeeaa-ea3e-41ae-a389-2d682a038c74,Namespace:kube-system,Attempt:0,}"
Jul 10 00:29:13.392192 containerd[1542]: time="2025-07-10T00:29:13.390479838Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 10 00:29:13.392192 containerd[1542]: time="2025-07-10T00:29:13.390611338Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 10 00:29:13.392192 containerd[1542]: time="2025-07-10T00:29:13.390680768Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 10 00:29:13.392192 containerd[1542]: time="2025-07-10T00:29:13.390815508Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 10 00:29:13.415436 containerd[1542]: time="2025-07-10T00:29:13.415343935Z" level=info msg="StartContainer for \"2df3a36c58490b1313709c36ae12b79f306fe750f481895744ba680d85af0478\" returns successfully"
Jul 10 00:29:13.440334 containerd[1542]: time="2025-07-10T00:29:13.440259866Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-c45bf,Uid:8aadeeaa-ea3e-41ae-a389-2d682a038c74,Namespace:kube-system,Attempt:0,} returns sandbox id \"5fd6f244e243b1996c4b8b0583fec73e1a1eb23813cecb5620f3335f1fbc0221\""
Jul 10 00:29:13.440871 kubelet[2613]: E0710 00:29:13.440849 2613 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 10 00:29:14.113492 kubelet[2613]: E0710 00:29:14.113463 2613 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 10 00:29:14.125211 kubelet[2613]: I0710 00:29:14.124915 2613 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-dw27c" podStartSLOduration=2.124897657 podStartE2EDuration="2.124897657s" podCreationTimestamp="2025-07-10 00:29:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-10 00:29:14.124609097 +0000 UTC m=+8.128877959" watchObservedRunningTime="2025-07-10 00:29:14.124897657 +0000 UTC m=+8.129166479"
Jul 10 00:29:16.489138 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1977468679.mount: Deactivated successfully.
Jul 10 00:29:17.778290 containerd[1542]: time="2025-07-10T00:29:17.778241652Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:29:17.778809 containerd[1542]: time="2025-07-10T00:29:17.778767712Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157646710" Jul 10 00:29:17.779652 containerd[1542]: time="2025-07-10T00:29:17.779623375Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:29:17.785782 containerd[1542]: time="2025-07-10T00:29:17.785724400Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 4.466668843s" Jul 10 00:29:17.785858 containerd[1542]: time="2025-07-10T00:29:17.785788233Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" Jul 10 00:29:17.788559 containerd[1542]: time="2025-07-10T00:29:17.788458169Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Jul 10 00:29:17.789167 containerd[1542]: time="2025-07-10T00:29:17.789102936Z" level=info msg="CreateContainer within sandbox \"8edc2e84da8d416d9737c6129f6960c67f9abff1ffed0e9afc7323e026e69a7b\" for container 
&ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jul 10 00:29:17.808899 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount927940317.mount: Deactivated successfully. Jul 10 00:29:17.810309 containerd[1542]: time="2025-07-10T00:29:17.810270288Z" level=info msg="CreateContainer within sandbox \"8edc2e84da8d416d9737c6129f6960c67f9abff1ffed0e9afc7323e026e69a7b\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"7be49fb5efe49145c903465d73f3c01c8514edb65dfae67f9ea7347f693d9992\"" Jul 10 00:29:17.810791 containerd[1542]: time="2025-07-10T00:29:17.810768071Z" level=info msg="StartContainer for \"7be49fb5efe49145c903465d73f3c01c8514edb65dfae67f9ea7347f693d9992\"" Jul 10 00:29:17.854089 containerd[1542]: time="2025-07-10T00:29:17.854044828Z" level=info msg="StartContainer for \"7be49fb5efe49145c903465d73f3c01c8514edb65dfae67f9ea7347f693d9992\" returns successfully" Jul 10 00:29:18.051467 containerd[1542]: time="2025-07-10T00:29:18.040698473Z" level=info msg="shim disconnected" id=7be49fb5efe49145c903465d73f3c01c8514edb65dfae67f9ea7347f693d9992 namespace=k8s.io Jul 10 00:29:18.051467 containerd[1542]: time="2025-07-10T00:29:18.050999374Z" level=warning msg="cleaning up after shim disconnected" id=7be49fb5efe49145c903465d73f3c01c8514edb65dfae67f9ea7347f693d9992 namespace=k8s.io Jul 10 00:29:18.051467 containerd[1542]: time="2025-07-10T00:29:18.051014093Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 10 00:29:18.174344 kubelet[2613]: E0710 00:29:18.174303 2613 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:29:18.177283 containerd[1542]: time="2025-07-10T00:29:18.177230391Z" level=info msg="CreateContainer within sandbox \"8edc2e84da8d416d9737c6129f6960c67f9abff1ffed0e9afc7323e026e69a7b\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jul 10 00:29:18.189930 
containerd[1542]: time="2025-07-10T00:29:18.189885801Z" level=info msg="CreateContainer within sandbox \"8edc2e84da8d416d9737c6129f6960c67f9abff1ffed0e9afc7323e026e69a7b\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"902b2816d9c3a1c1fb845755d52beec5a82c1a27d5e0e6722325fbf813a37077\"" Jul 10 00:29:18.192400 containerd[1542]: time="2025-07-10T00:29:18.191501828Z" level=info msg="StartContainer for \"902b2816d9c3a1c1fb845755d52beec5a82c1a27d5e0e6722325fbf813a37077\"" Jul 10 00:29:18.232445 containerd[1542]: time="2025-07-10T00:29:18.232406586Z" level=info msg="StartContainer for \"902b2816d9c3a1c1fb845755d52beec5a82c1a27d5e0e6722325fbf813a37077\" returns successfully" Jul 10 00:29:18.257560 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jul 10 00:29:18.258231 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jul 10 00:29:18.258309 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Jul 10 00:29:18.265511 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jul 10 00:29:18.279505 containerd[1542]: time="2025-07-10T00:29:18.279451968Z" level=info msg="shim disconnected" id=902b2816d9c3a1c1fb845755d52beec5a82c1a27d5e0e6722325fbf813a37077 namespace=k8s.io Jul 10 00:29:18.279850 containerd[1542]: time="2025-07-10T00:29:18.279829207Z" level=warning msg="cleaning up after shim disconnected" id=902b2816d9c3a1c1fb845755d52beec5a82c1a27d5e0e6722325fbf813a37077 namespace=k8s.io Jul 10 00:29:18.279935 containerd[1542]: time="2025-07-10T00:29:18.279920798Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 10 00:29:18.281677 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
Jul 10 00:29:18.366605 kubelet[2613]: E0710 00:29:18.366475 2613 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:29:18.595093 kubelet[2613]: E0710 00:29:18.594701 2613 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:29:18.806267 systemd[1]: run-containerd-runc-k8s.io-7be49fb5efe49145c903465d73f3c01c8514edb65dfae67f9ea7347f693d9992-runc.o0ZuyK.mount: Deactivated successfully. Jul 10 00:29:18.806398 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7be49fb5efe49145c903465d73f3c01c8514edb65dfae67f9ea7347f693d9992-rootfs.mount: Deactivated successfully. Jul 10 00:29:18.883648 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount201702104.mount: Deactivated successfully. Jul 10 00:29:19.178926 kubelet[2613]: E0710 00:29:19.178791 2613 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:29:19.181756 containerd[1542]: time="2025-07-10T00:29:19.181429082Z" level=info msg="CreateContainer within sandbox \"8edc2e84da8d416d9737c6129f6960c67f9abff1ffed0e9afc7323e026e69a7b\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jul 10 00:29:19.199688 containerd[1542]: time="2025-07-10T00:29:19.199645741Z" level=info msg="CreateContainer within sandbox \"8edc2e84da8d416d9737c6129f6960c67f9abff1ffed0e9afc7323e026e69a7b\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"5f893321818dbd02ca4c2cdc5767488596fa4415d398374dc08f148b58d6fa9f\"" Jul 10 00:29:19.200335 containerd[1542]: time="2025-07-10T00:29:19.200289836Z" level=info msg="StartContainer for \"5f893321818dbd02ca4c2cdc5767488596fa4415d398374dc08f148b58d6fa9f\"" Jul 10 
00:29:19.258318 containerd[1542]: time="2025-07-10T00:29:19.256379708Z" level=info msg="StartContainer for \"5f893321818dbd02ca4c2cdc5767488596fa4415d398374dc08f148b58d6fa9f\" returns successfully" Jul 10 00:29:19.338604 containerd[1542]: time="2025-07-10T00:29:19.338543132Z" level=info msg="shim disconnected" id=5f893321818dbd02ca4c2cdc5767488596fa4415d398374dc08f148b58d6fa9f namespace=k8s.io Jul 10 00:29:19.338604 containerd[1542]: time="2025-07-10T00:29:19.338598927Z" level=warning msg="cleaning up after shim disconnected" id=5f893321818dbd02ca4c2cdc5767488596fa4415d398374dc08f148b58d6fa9f namespace=k8s.io Jul 10 00:29:19.338604 containerd[1542]: time="2025-07-10T00:29:19.338608846Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 10 00:29:19.427681 containerd[1542]: time="2025-07-10T00:29:19.427624065Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:29:19.428054 containerd[1542]: time="2025-07-10T00:29:19.428023185Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17135306" Jul 10 00:29:19.428793 containerd[1542]: time="2025-07-10T00:29:19.428765991Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:29:19.430161 containerd[1542]: time="2025-07-10T00:29:19.430078819Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest 
\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 1.641578894s" Jul 10 00:29:19.430161 containerd[1542]: time="2025-07-10T00:29:19.430115096Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" Jul 10 00:29:19.432406 containerd[1542]: time="2025-07-10T00:29:19.432365951Z" level=info msg="CreateContainer within sandbox \"5fd6f244e243b1996c4b8b0583fec73e1a1eb23813cecb5620f3335f1fbc0221\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Jul 10 00:29:19.439568 containerd[1542]: time="2025-07-10T00:29:19.439526635Z" level=info msg="CreateContainer within sandbox \"5fd6f244e243b1996c4b8b0583fec73e1a1eb23813cecb5620f3335f1fbc0221\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"79d04ab1cc6488f4fc224c23e4998b2ecfa8c8f91d7f612b6efeced108b4dac4\"" Jul 10 00:29:19.441022 containerd[1542]: time="2025-07-10T00:29:19.440216366Z" level=info msg="StartContainer for \"79d04ab1cc6488f4fc224c23e4998b2ecfa8c8f91d7f612b6efeced108b4dac4\"" Jul 10 00:29:19.482235 containerd[1542]: time="2025-07-10T00:29:19.482184569Z" level=info msg="StartContainer for \"79d04ab1cc6488f4fc224c23e4998b2ecfa8c8f91d7f612b6efeced108b4dac4\" returns successfully" Jul 10 00:29:20.185433 kubelet[2613]: E0710 00:29:20.185356 2613 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:29:20.190284 kubelet[2613]: E0710 00:29:20.190100 2613 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:29:20.192381 containerd[1542]: 
time="2025-07-10T00:29:20.192269561Z" level=info msg="CreateContainer within sandbox \"8edc2e84da8d416d9737c6129f6960c67f9abff1ffed0e9afc7323e026e69a7b\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jul 10 00:29:20.213912 kubelet[2613]: I0710 00:29:20.213505 2613 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-5d85765b45-c45bf" podStartSLOduration=1.223848427 podStartE2EDuration="7.213488931s" podCreationTimestamp="2025-07-10 00:29:13 +0000 UTC" firstStartedPulling="2025-07-10 00:29:13.441355304 +0000 UTC m=+7.445624166" lastFinishedPulling="2025-07-10 00:29:19.430995808 +0000 UTC m=+13.435264670" observedRunningTime="2025-07-10 00:29:20.212955781 +0000 UTC m=+14.217224643" watchObservedRunningTime="2025-07-10 00:29:20.213488931 +0000 UTC m=+14.217757793" Jul 10 00:29:20.231294 containerd[1542]: time="2025-07-10T00:29:20.231244507Z" level=info msg="CreateContainer within sandbox \"8edc2e84da8d416d9737c6129f6960c67f9abff1ffed0e9afc7323e026e69a7b\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"35ee7df12f7057c16b7bc3c56484655b1dec2c74c916b838541b97c933a6f4ae\"" Jul 10 00:29:20.233439 containerd[1542]: time="2025-07-10T00:29:20.233392186Z" level=info msg="StartContainer for \"35ee7df12f7057c16b7bc3c56484655b1dec2c74c916b838541b97c933a6f4ae\"" Jul 10 00:29:20.307368 containerd[1542]: time="2025-07-10T00:29:20.307328335Z" level=info msg="StartContainer for \"35ee7df12f7057c16b7bc3c56484655b1dec2c74c916b838541b97c933a6f4ae\" returns successfully" Jul 10 00:29:20.323476 containerd[1542]: time="2025-07-10T00:29:20.323412947Z" level=info msg="shim disconnected" id=35ee7df12f7057c16b7bc3c56484655b1dec2c74c916b838541b97c933a6f4ae namespace=k8s.io Jul 10 00:29:20.323476 containerd[1542]: time="2025-07-10T00:29:20.323474501Z" level=warning msg="cleaning up after shim disconnected" id=35ee7df12f7057c16b7bc3c56484655b1dec2c74c916b838541b97c933a6f4ae namespace=k8s.io 
Jul 10 00:29:20.323476 containerd[1542]: time="2025-07-10T00:29:20.323484540Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 10 00:29:20.805679 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-35ee7df12f7057c16b7bc3c56484655b1dec2c74c916b838541b97c933a6f4ae-rootfs.mount: Deactivated successfully. Jul 10 00:29:21.194418 kubelet[2613]: E0710 00:29:21.194313 2613 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:29:21.195722 kubelet[2613]: E0710 00:29:21.195695 2613 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:29:21.198449 containerd[1542]: time="2025-07-10T00:29:21.198405796Z" level=info msg="CreateContainer within sandbox \"8edc2e84da8d416d9737c6129f6960c67f9abff1ffed0e9afc7323e026e69a7b\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jul 10 00:29:21.229738 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1394990105.mount: Deactivated successfully. 
Jul 10 00:29:21.237083 containerd[1542]: time="2025-07-10T00:29:21.237021043Z" level=info msg="CreateContainer within sandbox \"8edc2e84da8d416d9737c6129f6960c67f9abff1ffed0e9afc7323e026e69a7b\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"122ae575a58e82c2ffa0019a7a6c287d9a2ea27e3dabcf23bf3ab946bebdea41\"" Jul 10 00:29:21.237973 containerd[1542]: time="2025-07-10T00:29:21.237925283Z" level=info msg="StartContainer for \"122ae575a58e82c2ffa0019a7a6c287d9a2ea27e3dabcf23bf3ab946bebdea41\"" Jul 10 00:29:21.294573 containerd[1542]: time="2025-07-10T00:29:21.294532548Z" level=info msg="StartContainer for \"122ae575a58e82c2ffa0019a7a6c287d9a2ea27e3dabcf23bf3ab946bebdea41\" returns successfully" Jul 10 00:29:21.434427 kubelet[2613]: I0710 00:29:21.434391 2613 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Jul 10 00:29:21.680663 kubelet[2613]: I0710 00:29:21.680624 2613 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x56vm\" (UniqueName: \"kubernetes.io/projected/bd1519ff-deee-4c15-9f27-265a20273a64-kube-api-access-x56vm\") pod \"coredns-7c65d6cfc9-bbw56\" (UID: \"bd1519ff-deee-4c15-9f27-265a20273a64\") " pod="kube-system/coredns-7c65d6cfc9-bbw56" Jul 10 00:29:21.680819 kubelet[2613]: I0710 00:29:21.680683 2613 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/951c76aa-d5b6-4d54-838f-06eab69ed33e-config-volume\") pod \"coredns-7c65d6cfc9-sxlnz\" (UID: \"951c76aa-d5b6-4d54-838f-06eab69ed33e\") " pod="kube-system/coredns-7c65d6cfc9-sxlnz" Jul 10 00:29:21.680819 kubelet[2613]: I0710 00:29:21.680707 2613 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x58lw\" (UniqueName: \"kubernetes.io/projected/951c76aa-d5b6-4d54-838f-06eab69ed33e-kube-api-access-x58lw\") pod 
\"coredns-7c65d6cfc9-sxlnz\" (UID: \"951c76aa-d5b6-4d54-838f-06eab69ed33e\") " pod="kube-system/coredns-7c65d6cfc9-sxlnz" Jul 10 00:29:21.680819 kubelet[2613]: I0710 00:29:21.680729 2613 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bd1519ff-deee-4c15-9f27-265a20273a64-config-volume\") pod \"coredns-7c65d6cfc9-bbw56\" (UID: \"bd1519ff-deee-4c15-9f27-265a20273a64\") " pod="kube-system/coredns-7c65d6cfc9-bbw56" Jul 10 00:29:21.824117 kubelet[2613]: E0710 00:29:21.824078 2613 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:29:21.825438 containerd[1542]: time="2025-07-10T00:29:21.824910097Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-bbw56,Uid:bd1519ff-deee-4c15-9f27-265a20273a64,Namespace:kube-system,Attempt:0,}" Jul 10 00:29:21.831637 kubelet[2613]: E0710 00:29:21.831603 2613 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:29:21.832395 containerd[1542]: time="2025-07-10T00:29:21.832346884Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-sxlnz,Uid:951c76aa-d5b6-4d54-838f-06eab69ed33e,Namespace:kube-system,Attempt:0,}" Jul 10 00:29:22.198256 kubelet[2613]: E0710 00:29:22.198224 2613 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:29:22.218646 kubelet[2613]: I0710 00:29:22.218334 2613 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-cn5lp" podStartSLOduration=5.748377808 podStartE2EDuration="10.218314922s" podCreationTimestamp="2025-07-10 00:29:12 +0000 UTC" 
firstStartedPulling="2025-07-10 00:29:13.317941441 +0000 UTC m=+7.322210303" lastFinishedPulling="2025-07-10 00:29:17.787878555 +0000 UTC m=+11.792147417" observedRunningTime="2025-07-10 00:29:22.217668815 +0000 UTC m=+16.221937677" watchObservedRunningTime="2025-07-10 00:29:22.218314922 +0000 UTC m=+16.222583784" Jul 10 00:29:23.200188 kubelet[2613]: E0710 00:29:23.200146 2613 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:29:23.525443 systemd-networkd[1226]: cilium_host: Link UP Jul 10 00:29:23.525577 systemd-networkd[1226]: cilium_net: Link UP Jul 10 00:29:23.525580 systemd-networkd[1226]: cilium_net: Gained carrier Jul 10 00:29:23.525736 systemd-networkd[1226]: cilium_host: Gained carrier Jul 10 00:29:23.607484 systemd-networkd[1226]: cilium_vxlan: Link UP Jul 10 00:29:23.607489 systemd-networkd[1226]: cilium_vxlan: Gained carrier Jul 10 00:29:23.778409 systemd-networkd[1226]: cilium_net: Gained IPv6LL Jul 10 00:29:23.933236 kernel: NET: Registered PF_ALG protocol family Jul 10 00:29:23.978312 systemd-networkd[1226]: cilium_host: Gained IPv6LL Jul 10 00:29:24.137383 update_engine[1529]: I20250710 00:29:24.137248 1529 update_attempter.cc:509] Updating boot flags... 
Jul 10 00:29:24.169473 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (3460) Jul 10 00:29:24.193846 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (3460) Jul 10 00:29:24.205389 kubelet[2613]: E0710 00:29:24.205321 2613 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:29:24.529330 systemd-networkd[1226]: lxc_health: Link UP Jul 10 00:29:24.539096 systemd-networkd[1226]: lxc_health: Gained carrier Jul 10 00:29:24.938338 systemd-networkd[1226]: cilium_vxlan: Gained IPv6LL Jul 10 00:29:24.957303 systemd-networkd[1226]: lxc38fb8ab37fe2: Link UP Jul 10 00:29:24.964312 kernel: eth0: renamed from tmpebc0c Jul 10 00:29:24.975093 systemd-networkd[1226]: lxc017950475f6e: Link UP Jul 10 00:29:24.976245 kernel: eth0: renamed from tmpd5c3d Jul 10 00:29:24.983098 systemd-networkd[1226]: lxc38fb8ab37fe2: Gained carrier Jul 10 00:29:24.983425 systemd-networkd[1226]: lxc017950475f6e: Gained carrier Jul 10 00:29:25.255578 kubelet[2613]: E0710 00:29:25.255548 2613 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:29:26.090334 systemd-networkd[1226]: lxc_health: Gained IPv6LL Jul 10 00:29:26.211102 kubelet[2613]: E0710 00:29:26.211064 2613 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:29:26.282362 systemd-networkd[1226]: lxc38fb8ab37fe2: Gained IPv6LL Jul 10 00:29:26.410443 systemd-networkd[1226]: lxc017950475f6e: Gained IPv6LL Jul 10 00:29:27.213370 kubelet[2613]: E0710 00:29:27.213313 2613 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have 
been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:29:28.650864 containerd[1542]: time="2025-07-10T00:29:28.650604934Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 10 00:29:28.650864 containerd[1542]: time="2025-07-10T00:29:28.650671490Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 10 00:29:28.650864 containerd[1542]: time="2025-07-10T00:29:28.650682650Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 10 00:29:28.650864 containerd[1542]: time="2025-07-10T00:29:28.650779364Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 10 00:29:28.651690 containerd[1542]: time="2025-07-10T00:29:28.651601558Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 10 00:29:28.651690 containerd[1542]: time="2025-07-10T00:29:28.651659755Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 10 00:29:28.651690 containerd[1542]: time="2025-07-10T00:29:28.651674674Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 10 00:29:28.651812 containerd[1542]: time="2025-07-10T00:29:28.651759949Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 10 00:29:28.674363 systemd-resolved[1436]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 10 00:29:28.677713 systemd-resolved[1436]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 10 00:29:28.697239 containerd[1542]: time="2025-07-10T00:29:28.697192368Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-bbw56,Uid:bd1519ff-deee-4c15-9f27-265a20273a64,Namespace:kube-system,Attempt:0,} returns sandbox id \"d5c3d3433aadb1c2e0da0a017dd4fe04197343e04ebe3e019717629200b25a76\"" Jul 10 00:29:28.697546 containerd[1542]: time="2025-07-10T00:29:28.697378838Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-sxlnz,Uid:951c76aa-d5b6-4d54-838f-06eab69ed33e,Namespace:kube-system,Attempt:0,} returns sandbox id \"ebc0cd9f2f1ee6974bf53f3d936449abeabe98e1360405d15f9a94ffe0a0247d\"" Jul 10 00:29:28.698268 kubelet[2613]: E0710 00:29:28.698185 2613 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:29:28.700087 kubelet[2613]: E0710 00:29:28.699966 2613 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:29:28.701440 containerd[1542]: time="2025-07-10T00:29:28.700789207Z" level=info msg="CreateContainer within sandbox \"ebc0cd9f2f1ee6974bf53f3d936449abeabe98e1360405d15f9a94ffe0a0247d\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 10 00:29:28.705235 containerd[1542]: time="2025-07-10T00:29:28.705122844Z" level=info msg="CreateContainer within sandbox \"d5c3d3433aadb1c2e0da0a017dd4fe04197343e04ebe3e019717629200b25a76\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 10 
00:29:28.717632 containerd[1542]: time="2025-07-10T00:29:28.717584747Z" level=info msg="CreateContainer within sandbox \"d5c3d3433aadb1c2e0da0a017dd4fe04197343e04ebe3e019717629200b25a76\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"3ebf13f348ed71bd2735eee429c7c84fd9722c248d03b5d4e0f50ddc09dd433e\"" Jul 10 00:29:28.718246 containerd[1542]: time="2025-07-10T00:29:28.718195393Z" level=info msg="StartContainer for \"3ebf13f348ed71bd2735eee429c7c84fd9722c248d03b5d4e0f50ddc09dd433e\"" Jul 10 00:29:28.722056 containerd[1542]: time="2025-07-10T00:29:28.722018939Z" level=info msg="CreateContainer within sandbox \"ebc0cd9f2f1ee6974bf53f3d936449abeabe98e1360405d15f9a94ffe0a0247d\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"2bf0144f703e5f86393ca4ecbb6e9bbb248e843268d9d714abbaf792b436a340\"" Jul 10 00:29:28.723502 containerd[1542]: time="2025-07-10T00:29:28.722958727Z" level=info msg="StartContainer for \"2bf0144f703e5f86393ca4ecbb6e9bbb248e843268d9d714abbaf792b436a340\"" Jul 10 00:29:28.765788 containerd[1542]: time="2025-07-10T00:29:28.765739574Z" level=info msg="StartContainer for \"2bf0144f703e5f86393ca4ecbb6e9bbb248e843268d9d714abbaf792b436a340\" returns successfully" Jul 10 00:29:28.823914 containerd[1542]: time="2025-07-10T00:29:28.823822365Z" level=info msg="StartContainer for \"3ebf13f348ed71bd2735eee429c7c84fd9722c248d03b5d4e0f50ddc09dd433e\" returns successfully" Jul 10 00:29:29.218560 kubelet[2613]: E0710 00:29:29.218521 2613 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:29:29.222519 kubelet[2613]: E0710 00:29:29.222260 2613 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:29:29.242685 kubelet[2613]: I0710 00:29:29.242535 2613 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-bbw56" podStartSLOduration=16.24251766 podStartE2EDuration="16.24251766s" podCreationTimestamp="2025-07-10 00:29:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-10 00:29:29.230354777 +0000 UTC m=+23.234623839" watchObservedRunningTime="2025-07-10 00:29:29.24251766 +0000 UTC m=+23.246786522" Jul 10 00:29:29.259102 kubelet[2613]: I0710 00:29:29.259036 2613 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-sxlnz" podStartSLOduration=16.259015834 podStartE2EDuration="16.259015834s" podCreationTimestamp="2025-07-10 00:29:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-10 00:29:29.25834343 +0000 UTC m=+23.262612292" watchObservedRunningTime="2025-07-10 00:29:29.259015834 +0000 UTC m=+23.263284696" Jul 10 00:29:29.656363 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3002839097.mount: Deactivated successfully. 
Jul 10 00:29:30.223477 kubelet[2613]: E0710 00:29:30.223430 2613 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:29:31.224548 kubelet[2613]: E0710 00:29:31.224507 2613 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:29:31.833293 kubelet[2613]: E0710 00:29:31.833265 2613 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:29:32.225853 kubelet[2613]: E0710 00:29:32.225831 2613 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:29:37.766466 systemd[1]: Started sshd@7-10.0.0.64:22-10.0.0.1:54670.service - OpenSSH per-connection server daemon (10.0.0.1:54670). Jul 10 00:29:37.799114 sshd[4003]: Accepted publickey for core from 10.0.0.1 port 54670 ssh2: RSA SHA256:6ip12YJRxsd1pENU5FRdLlGNHGMqkKNn+D5B7RGN6xs Jul 10 00:29:37.800619 sshd[4003]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 00:29:37.806516 systemd-logind[1522]: New session 8 of user core. Jul 10 00:29:37.817463 systemd[1]: Started session-8.scope - Session 8 of User core. Jul 10 00:29:37.946251 sshd[4003]: pam_unix(sshd:session): session closed for user core Jul 10 00:29:37.949084 systemd[1]: sshd@7-10.0.0.64:22-10.0.0.1:54670.service: Deactivated successfully. Jul 10 00:29:37.952007 systemd-logind[1522]: Session 8 logged out. Waiting for processes to exit. Jul 10 00:29:37.952454 systemd[1]: session-8.scope: Deactivated successfully. Jul 10 00:29:37.953481 systemd-logind[1522]: Removed session 8. 
Jul 10 00:29:42.961503 systemd[1]: Started sshd@8-10.0.0.64:22-10.0.0.1:43646.service - OpenSSH per-connection server daemon (10.0.0.1:43646).
Jul 10 00:29:42.996723 sshd[4038]: Accepted publickey for core from 10.0.0.1 port 43646 ssh2: RSA SHA256:6ip12YJRxsd1pENU5FRdLlGNHGMqkKNn+D5B7RGN6xs
Jul 10 00:29:42.998230 sshd[4038]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 10 00:29:43.003161 systemd-logind[1522]: New session 9 of user core.
Jul 10 00:29:43.010535 systemd[1]: Started session-9.scope - Session 9 of User core.
Jul 10 00:29:43.125545 sshd[4038]: pam_unix(sshd:session): session closed for user core
Jul 10 00:29:43.128776 systemd[1]: sshd@8-10.0.0.64:22-10.0.0.1:43646.service: Deactivated successfully.
Jul 10 00:29:43.130993 systemd-logind[1522]: Session 9 logged out. Waiting for processes to exit.
Jul 10 00:29:43.131066 systemd[1]: session-9.scope: Deactivated successfully.
Jul 10 00:29:43.132014 systemd-logind[1522]: Removed session 9.
Jul 10 00:29:48.137485 systemd[1]: Started sshd@9-10.0.0.64:22-10.0.0.1:43662.service - OpenSSH per-connection server daemon (10.0.0.1:43662).
Jul 10 00:29:48.167921 sshd[4058]: Accepted publickey for core from 10.0.0.1 port 43662 ssh2: RSA SHA256:6ip12YJRxsd1pENU5FRdLlGNHGMqkKNn+D5B7RGN6xs
Jul 10 00:29:48.169193 sshd[4058]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 10 00:29:48.175265 systemd-logind[1522]: New session 10 of user core.
Jul 10 00:29:48.182537 systemd[1]: Started session-10.scope - Session 10 of User core.
Jul 10 00:29:48.306372 sshd[4058]: pam_unix(sshd:session): session closed for user core
Jul 10 00:29:48.311123 systemd[1]: sshd@9-10.0.0.64:22-10.0.0.1:43662.service: Deactivated successfully.
Jul 10 00:29:48.314135 systemd-logind[1522]: Session 10 logged out. Waiting for processes to exit.
Jul 10 00:29:48.315355 systemd[1]: session-10.scope: Deactivated successfully.
Jul 10 00:29:48.316806 systemd-logind[1522]: Removed session 10.
Jul 10 00:29:53.324473 systemd[1]: Started sshd@10-10.0.0.64:22-10.0.0.1:48340.service - OpenSSH per-connection server daemon (10.0.0.1:48340).
Jul 10 00:29:53.367317 sshd[4074]: Accepted publickey for core from 10.0.0.1 port 48340 ssh2: RSA SHA256:6ip12YJRxsd1pENU5FRdLlGNHGMqkKNn+D5B7RGN6xs
Jul 10 00:29:53.368736 sshd[4074]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 10 00:29:53.372610 systemd-logind[1522]: New session 11 of user core.
Jul 10 00:29:53.384473 systemd[1]: Started session-11.scope - Session 11 of User core.
Jul 10 00:29:53.520476 sshd[4074]: pam_unix(sshd:session): session closed for user core
Jul 10 00:29:53.528459 systemd[1]: Started sshd@11-10.0.0.64:22-10.0.0.1:48346.service - OpenSSH per-connection server daemon (10.0.0.1:48346).
Jul 10 00:29:53.528961 systemd[1]: sshd@10-10.0.0.64:22-10.0.0.1:48340.service: Deactivated successfully.
Jul 10 00:29:53.531627 systemd[1]: session-11.scope: Deactivated successfully.
Jul 10 00:29:53.531836 systemd-logind[1522]: Session 11 logged out. Waiting for processes to exit.
Jul 10 00:29:53.535192 systemd-logind[1522]: Removed session 11.
Jul 10 00:29:53.566967 sshd[4088]: Accepted publickey for core from 10.0.0.1 port 48346 ssh2: RSA SHA256:6ip12YJRxsd1pENU5FRdLlGNHGMqkKNn+D5B7RGN6xs
Jul 10 00:29:53.568468 sshd[4088]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 10 00:29:53.575298 systemd-logind[1522]: New session 12 of user core.
Jul 10 00:29:53.582490 systemd[1]: Started session-12.scope - Session 12 of User core.
Jul 10 00:29:53.741832 sshd[4088]: pam_unix(sshd:session): session closed for user core
Jul 10 00:29:53.754140 systemd[1]: Started sshd@12-10.0.0.64:22-10.0.0.1:48350.service - OpenSSH per-connection server daemon (10.0.0.1:48350).
Jul 10 00:29:53.754739 systemd[1]: sshd@11-10.0.0.64:22-10.0.0.1:48346.service: Deactivated successfully.
Jul 10 00:29:53.759905 systemd-logind[1522]: Session 12 logged out. Waiting for processes to exit.
Jul 10 00:29:53.767738 systemd[1]: session-12.scope: Deactivated successfully.
Jul 10 00:29:53.772297 systemd-logind[1522]: Removed session 12.
Jul 10 00:29:53.803213 sshd[4101]: Accepted publickey for core from 10.0.0.1 port 48350 ssh2: RSA SHA256:6ip12YJRxsd1pENU5FRdLlGNHGMqkKNn+D5B7RGN6xs
Jul 10 00:29:53.804416 sshd[4101]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 10 00:29:53.808538 systemd-logind[1522]: New session 13 of user core.
Jul 10 00:29:53.822547 systemd[1]: Started session-13.scope - Session 13 of User core.
Jul 10 00:29:53.932447 sshd[4101]: pam_unix(sshd:session): session closed for user core
Jul 10 00:29:53.935864 systemd[1]: sshd@12-10.0.0.64:22-10.0.0.1:48350.service: Deactivated successfully.
Jul 10 00:29:53.938580 systemd-logind[1522]: Session 13 logged out. Waiting for processes to exit.
Jul 10 00:29:53.938666 systemd[1]: session-13.scope: Deactivated successfully.
Jul 10 00:29:53.939762 systemd-logind[1522]: Removed session 13.
Jul 10 00:29:58.943454 systemd[1]: Started sshd@13-10.0.0.64:22-10.0.0.1:48354.service - OpenSSH per-connection server daemon (10.0.0.1:48354).
Jul 10 00:29:58.976571 sshd[4121]: Accepted publickey for core from 10.0.0.1 port 48354 ssh2: RSA SHA256:6ip12YJRxsd1pENU5FRdLlGNHGMqkKNn+D5B7RGN6xs
Jul 10 00:29:58.977479 sshd[4121]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 10 00:29:58.981151 systemd-logind[1522]: New session 14 of user core.
Jul 10 00:29:58.987460 systemd[1]: Started session-14.scope - Session 14 of User core.
Jul 10 00:29:59.114625 sshd[4121]: pam_unix(sshd:session): session closed for user core
Jul 10 00:29:59.119500 systemd[1]: sshd@13-10.0.0.64:22-10.0.0.1:48354.service: Deactivated successfully.
Jul 10 00:29:59.121813 systemd-logind[1522]: Session 14 logged out. Waiting for processes to exit.
Jul 10 00:29:59.121924 systemd[1]: session-14.scope: Deactivated successfully.
Jul 10 00:29:59.123009 systemd-logind[1522]: Removed session 14.
Jul 10 00:30:04.126495 systemd[1]: Started sshd@14-10.0.0.64:22-10.0.0.1:47930.service - OpenSSH per-connection server daemon (10.0.0.1:47930).
Jul 10 00:30:04.157006 sshd[4137]: Accepted publickey for core from 10.0.0.1 port 47930 ssh2: RSA SHA256:6ip12YJRxsd1pENU5FRdLlGNHGMqkKNn+D5B7RGN6xs
Jul 10 00:30:04.158327 sshd[4137]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 10 00:30:04.162031 systemd-logind[1522]: New session 15 of user core.
Jul 10 00:30:04.173533 systemd[1]: Started session-15.scope - Session 15 of User core.
Jul 10 00:30:04.289064 sshd[4137]: pam_unix(sshd:session): session closed for user core
Jul 10 00:30:04.299468 systemd[1]: Started sshd@15-10.0.0.64:22-10.0.0.1:47944.service - OpenSSH per-connection server daemon (10.0.0.1:47944).
Jul 10 00:30:04.299864 systemd[1]: sshd@14-10.0.0.64:22-10.0.0.1:47930.service: Deactivated successfully.
Jul 10 00:30:04.304190 systemd[1]: session-15.scope: Deactivated successfully.
Jul 10 00:30:04.305771 systemd-logind[1522]: Session 15 logged out. Waiting for processes to exit.
Jul 10 00:30:04.306812 systemd-logind[1522]: Removed session 15.
Jul 10 00:30:04.330787 sshd[4150]: Accepted publickey for core from 10.0.0.1 port 47944 ssh2: RSA SHA256:6ip12YJRxsd1pENU5FRdLlGNHGMqkKNn+D5B7RGN6xs
Jul 10 00:30:04.332320 sshd[4150]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 10 00:30:04.336750 systemd-logind[1522]: New session 16 of user core.
Jul 10 00:30:04.347517 systemd[1]: Started session-16.scope - Session 16 of User core.
Jul 10 00:30:04.576296 sshd[4150]: pam_unix(sshd:session): session closed for user core
Jul 10 00:30:04.588463 systemd[1]: Started sshd@16-10.0.0.64:22-10.0.0.1:47946.service - OpenSSH per-connection server daemon (10.0.0.1:47946).
Jul 10 00:30:04.588862 systemd[1]: sshd@15-10.0.0.64:22-10.0.0.1:47944.service: Deactivated successfully.
Jul 10 00:30:04.591961 systemd[1]: session-16.scope: Deactivated successfully.
Jul 10 00:30:04.592946 systemd-logind[1522]: Session 16 logged out. Waiting for processes to exit.
Jul 10 00:30:04.593791 systemd-logind[1522]: Removed session 16.
Jul 10 00:30:04.628360 sshd[4164]: Accepted publickey for core from 10.0.0.1 port 47946 ssh2: RSA SHA256:6ip12YJRxsd1pENU5FRdLlGNHGMqkKNn+D5B7RGN6xs
Jul 10 00:30:04.629794 sshd[4164]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 10 00:30:04.633537 systemd-logind[1522]: New session 17 of user core.
Jul 10 00:30:04.645439 systemd[1]: Started session-17.scope - Session 17 of User core.
Jul 10 00:30:05.985160 sshd[4164]: pam_unix(sshd:session): session closed for user core
Jul 10 00:30:05.996500 systemd[1]: Started sshd@17-10.0.0.64:22-10.0.0.1:47950.service - OpenSSH per-connection server daemon (10.0.0.1:47950).
Jul 10 00:30:05.997786 systemd[1]: sshd@16-10.0.0.64:22-10.0.0.1:47946.service: Deactivated successfully.
Jul 10 00:30:06.000679 systemd[1]: session-17.scope: Deactivated successfully.
Jul 10 00:30:06.005488 systemd-logind[1522]: Session 17 logged out. Waiting for processes to exit.
Jul 10 00:30:06.010445 systemd-logind[1522]: Removed session 17.
Jul 10 00:30:06.034555 sshd[4187]: Accepted publickey for core from 10.0.0.1 port 47950 ssh2: RSA SHA256:6ip12YJRxsd1pENU5FRdLlGNHGMqkKNn+D5B7RGN6xs
Jul 10 00:30:06.035841 sshd[4187]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 10 00:30:06.039598 systemd-logind[1522]: New session 18 of user core.
Jul 10 00:30:06.055573 systemd[1]: Started session-18.scope - Session 18 of User core.
Jul 10 00:30:06.284438 sshd[4187]: pam_unix(sshd:session): session closed for user core
Jul 10 00:30:06.291485 systemd[1]: Started sshd@18-10.0.0.64:22-10.0.0.1:47956.service - OpenSSH per-connection server daemon (10.0.0.1:47956).
Jul 10 00:30:06.292036 systemd[1]: sshd@17-10.0.0.64:22-10.0.0.1:47950.service: Deactivated successfully.
Jul 10 00:30:06.296371 systemd[1]: session-18.scope: Deactivated successfully.
Jul 10 00:30:06.296820 systemd-logind[1522]: Session 18 logged out. Waiting for processes to exit.
Jul 10 00:30:06.299415 systemd-logind[1522]: Removed session 18.
Jul 10 00:30:06.330038 sshd[4203]: Accepted publickey for core from 10.0.0.1 port 47956 ssh2: RSA SHA256:6ip12YJRxsd1pENU5FRdLlGNHGMqkKNn+D5B7RGN6xs
Jul 10 00:30:06.331416 sshd[4203]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 10 00:30:06.335981 systemd-logind[1522]: New session 19 of user core.
Jul 10 00:30:06.346476 systemd[1]: Started session-19.scope - Session 19 of User core.
Jul 10 00:30:06.453714 sshd[4203]: pam_unix(sshd:session): session closed for user core
Jul 10 00:30:06.456717 systemd[1]: sshd@18-10.0.0.64:22-10.0.0.1:47956.service: Deactivated successfully.
Jul 10 00:30:06.459022 systemd-logind[1522]: Session 19 logged out. Waiting for processes to exit.
Jul 10 00:30:06.459310 systemd[1]: session-19.scope: Deactivated successfully.
Jul 10 00:30:06.460403 systemd-logind[1522]: Removed session 19.
Jul 10 00:30:11.464506 systemd[1]: Started sshd@19-10.0.0.64:22-10.0.0.1:47960.service - OpenSSH per-connection server daemon (10.0.0.1:47960).
Jul 10 00:30:11.495886 sshd[4225]: Accepted publickey for core from 10.0.0.1 port 47960 ssh2: RSA SHA256:6ip12YJRxsd1pENU5FRdLlGNHGMqkKNn+D5B7RGN6xs
Jul 10 00:30:11.497231 sshd[4225]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 10 00:30:11.500851 systemd-logind[1522]: New session 20 of user core.
Jul 10 00:30:11.509590 systemd[1]: Started session-20.scope - Session 20 of User core.
Jul 10 00:30:11.619798 sshd[4225]: pam_unix(sshd:session): session closed for user core
Jul 10 00:30:11.622813 systemd[1]: sshd@19-10.0.0.64:22-10.0.0.1:47960.service: Deactivated successfully.
Jul 10 00:30:11.624956 systemd-logind[1522]: Session 20 logged out. Waiting for processes to exit.
Jul 10 00:30:11.624972 systemd[1]: session-20.scope: Deactivated successfully.
Jul 10 00:30:11.627629 systemd-logind[1522]: Removed session 20.
Jul 10 00:30:16.630616 systemd[1]: Started sshd@20-10.0.0.64:22-10.0.0.1:39040.service - OpenSSH per-connection server daemon (10.0.0.1:39040).
Jul 10 00:30:16.661292 sshd[4243]: Accepted publickey for core from 10.0.0.1 port 39040 ssh2: RSA SHA256:6ip12YJRxsd1pENU5FRdLlGNHGMqkKNn+D5B7RGN6xs
Jul 10 00:30:16.662495 sshd[4243]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 10 00:30:16.665822 systemd-logind[1522]: New session 21 of user core.
Jul 10 00:30:16.679481 systemd[1]: Started session-21.scope - Session 21 of User core.
Jul 10 00:30:16.788444 sshd[4243]: pam_unix(sshd:session): session closed for user core
Jul 10 00:30:16.792012 systemd[1]: sshd@20-10.0.0.64:22-10.0.0.1:39040.service: Deactivated successfully.
Jul 10 00:30:16.793908 systemd-logind[1522]: Session 21 logged out. Waiting for processes to exit.
Jul 10 00:30:16.793982 systemd[1]: session-21.scope: Deactivated successfully.
Jul 10 00:30:16.794707 systemd-logind[1522]: Removed session 21.
Jul 10 00:30:21.799443 systemd[1]: Started sshd@21-10.0.0.64:22-10.0.0.1:39052.service - OpenSSH per-connection server daemon (10.0.0.1:39052).
Jul 10 00:30:21.835738 sshd[4259]: Accepted publickey for core from 10.0.0.1 port 39052 ssh2: RSA SHA256:6ip12YJRxsd1pENU5FRdLlGNHGMqkKNn+D5B7RGN6xs
Jul 10 00:30:21.837453 sshd[4259]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 10 00:30:21.842361 systemd-logind[1522]: New session 22 of user core.
Jul 10 00:30:21.848512 systemd[1]: Started session-22.scope - Session 22 of User core.
Jul 10 00:30:21.965259 sshd[4259]: pam_unix(sshd:session): session closed for user core
Jul 10 00:30:21.971454 systemd[1]: Started sshd@22-10.0.0.64:22-10.0.0.1:39054.service - OpenSSH per-connection server daemon (10.0.0.1:39054).
Jul 10 00:30:21.971827 systemd[1]: sshd@21-10.0.0.64:22-10.0.0.1:39052.service: Deactivated successfully.
Jul 10 00:30:21.974902 systemd[1]: session-22.scope: Deactivated successfully.
Jul 10 00:30:21.978756 systemd-logind[1522]: Session 22 logged out. Waiting for processes to exit.
Jul 10 00:30:21.979671 systemd-logind[1522]: Removed session 22.
Jul 10 00:30:22.009718 sshd[4271]: Accepted publickey for core from 10.0.0.1 port 39054 ssh2: RSA SHA256:6ip12YJRxsd1pENU5FRdLlGNHGMqkKNn+D5B7RGN6xs
Jul 10 00:30:22.010600 sshd[4271]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 10 00:30:22.015329 systemd-logind[1522]: New session 23 of user core.
Jul 10 00:30:22.028503 systemd[1]: Started session-23.scope - Session 23 of User core.
Jul 10 00:30:24.449026 containerd[1542]: time="2025-07-10T00:30:24.448870435Z" level=info msg="StopContainer for \"79d04ab1cc6488f4fc224c23e4998b2ecfa8c8f91d7f612b6efeced108b4dac4\" with timeout 30 (s)"
Jul 10 00:30:24.450086 containerd[1542]: time="2025-07-10T00:30:24.449646994Z" level=info msg="Stop container \"79d04ab1cc6488f4fc224c23e4998b2ecfa8c8f91d7f612b6efeced108b4dac4\" with signal terminated"
Jul 10 00:30:24.487940 containerd[1542]: time="2025-07-10T00:30:24.487896970Z" level=info msg="StopContainer for \"122ae575a58e82c2ffa0019a7a6c287d9a2ea27e3dabcf23bf3ab946bebdea41\" with timeout 2 (s)"
Jul 10 00:30:24.488756 containerd[1542]: time="2025-07-10T00:30:24.488670489Z" level=info msg="Stop container \"122ae575a58e82c2ffa0019a7a6c287d9a2ea27e3dabcf23bf3ab946bebdea41\" with signal terminated"
Jul 10 00:30:24.491387 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-79d04ab1cc6488f4fc224c23e4998b2ecfa8c8f91d7f612b6efeced108b4dac4-rootfs.mount: Deactivated successfully.
Jul 10 00:30:24.498678 systemd-networkd[1226]: lxc_health: Link DOWN
Jul 10 00:30:24.499256 systemd-networkd[1226]: lxc_health: Lost carrier
Jul 10 00:30:24.503186 containerd[1542]: time="2025-07-10T00:30:24.503138105Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jul 10 00:30:24.505583 containerd[1542]: time="2025-07-10T00:30:24.505383941Z" level=info msg="shim disconnected" id=79d04ab1cc6488f4fc224c23e4998b2ecfa8c8f91d7f612b6efeced108b4dac4 namespace=k8s.io
Jul 10 00:30:24.505583 containerd[1542]: time="2025-07-10T00:30:24.505438461Z" level=warning msg="cleaning up after shim disconnected" id=79d04ab1cc6488f4fc224c23e4998b2ecfa8c8f91d7f612b6efeced108b4dac4 namespace=k8s.io
Jul 10 00:30:24.505583 containerd[1542]: time="2025-07-10T00:30:24.505449021Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 10 00:30:24.523722 containerd[1542]: time="2025-07-10T00:30:24.523678711Z" level=info msg="StopContainer for \"79d04ab1cc6488f4fc224c23e4998b2ecfa8c8f91d7f612b6efeced108b4dac4\" returns successfully"
Jul 10 00:30:24.525813 containerd[1542]: time="2025-07-10T00:30:24.525652908Z" level=info msg="StopPodSandbox for \"5fd6f244e243b1996c4b8b0583fec73e1a1eb23813cecb5620f3335f1fbc0221\""
Jul 10 00:30:24.525813 containerd[1542]: time="2025-07-10T00:30:24.525704268Z" level=info msg="Container to stop \"79d04ab1cc6488f4fc224c23e4998b2ecfa8c8f91d7f612b6efeced108b4dac4\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jul 10 00:30:24.530110 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-5fd6f244e243b1996c4b8b0583fec73e1a1eb23813cecb5620f3335f1fbc0221-shm.mount: Deactivated successfully.
Jul 10 00:30:24.541733 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-122ae575a58e82c2ffa0019a7a6c287d9a2ea27e3dabcf23bf3ab946bebdea41-rootfs.mount: Deactivated successfully.
Jul 10 00:30:24.546493 containerd[1542]: time="2025-07-10T00:30:24.546424273Z" level=info msg="shim disconnected" id=122ae575a58e82c2ffa0019a7a6c287d9a2ea27e3dabcf23bf3ab946bebdea41 namespace=k8s.io
Jul 10 00:30:24.546493 containerd[1542]: time="2025-07-10T00:30:24.546492753Z" level=warning msg="cleaning up after shim disconnected" id=122ae575a58e82c2ffa0019a7a6c287d9a2ea27e3dabcf23bf3ab946bebdea41 namespace=k8s.io
Jul 10 00:30:24.546493 containerd[1542]: time="2025-07-10T00:30:24.546503193Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 10 00:30:24.558709 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5fd6f244e243b1996c4b8b0583fec73e1a1eb23813cecb5620f3335f1fbc0221-rootfs.mount: Deactivated successfully.
Jul 10 00:30:24.575624 containerd[1542]: time="2025-07-10T00:30:24.575581745Z" level=info msg="StopContainer for \"122ae575a58e82c2ffa0019a7a6c287d9a2ea27e3dabcf23bf3ab946bebdea41\" returns successfully"
Jul 10 00:30:24.576517 containerd[1542]: time="2025-07-10T00:30:24.576347624Z" level=info msg="shim disconnected" id=5fd6f244e243b1996c4b8b0583fec73e1a1eb23813cecb5620f3335f1fbc0221 namespace=k8s.io
Jul 10 00:30:24.576517 containerd[1542]: time="2025-07-10T00:30:24.576409864Z" level=warning msg="cleaning up after shim disconnected" id=5fd6f244e243b1996c4b8b0583fec73e1a1eb23813cecb5620f3335f1fbc0221 namespace=k8s.io
Jul 10 00:30:24.576517 containerd[1542]: time="2025-07-10T00:30:24.576442143Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 10 00:30:24.576638 containerd[1542]: time="2025-07-10T00:30:24.576529743Z" level=info msg="StopPodSandbox for \"8edc2e84da8d416d9737c6129f6960c67f9abff1ffed0e9afc7323e026e69a7b\""
Jul 10 00:30:24.576638 containerd[1542]: time="2025-07-10T00:30:24.576561863Z" level=info msg="Container to stop \"7be49fb5efe49145c903465d73f3c01c8514edb65dfae67f9ea7347f693d9992\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jul 10 00:30:24.576638 containerd[1542]: time="2025-07-10T00:30:24.576572783Z" level=info msg="Container to stop \"35ee7df12f7057c16b7bc3c56484655b1dec2c74c916b838541b97c933a6f4ae\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jul 10 00:30:24.576638 containerd[1542]: time="2025-07-10T00:30:24.576583983Z" level=info msg="Container to stop \"122ae575a58e82c2ffa0019a7a6c287d9a2ea27e3dabcf23bf3ab946bebdea41\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jul 10 00:30:24.576638 containerd[1542]: time="2025-07-10T00:30:24.576593423Z" level=info msg="Container to stop \"902b2816d9c3a1c1fb845755d52beec5a82c1a27d5e0e6722325fbf813a37077\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jul 10 00:30:24.576638 containerd[1542]: time="2025-07-10T00:30:24.576602383Z" level=info msg="Container to stop \"5f893321818dbd02ca4c2cdc5767488596fa4415d398374dc08f148b58d6fa9f\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jul 10 00:30:24.591250 containerd[1542]: time="2025-07-10T00:30:24.590484320Z" level=warning msg="cleanup warnings time=\"2025-07-10T00:30:24Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Jul 10 00:30:24.592498 containerd[1542]: time="2025-07-10T00:30:24.592461157Z" level=info msg="TearDown network for sandbox \"5fd6f244e243b1996c4b8b0583fec73e1a1eb23813cecb5620f3335f1fbc0221\" successfully"
Jul 10 00:30:24.592498 containerd[1542]: time="2025-07-10T00:30:24.592492077Z" level=info msg="StopPodSandbox for \"5fd6f244e243b1996c4b8b0583fec73e1a1eb23813cecb5620f3335f1fbc0221\" returns successfully"
Jul 10 00:30:24.620743 containerd[1542]: time="2025-07-10T00:30:24.620681630Z" level=info msg="shim disconnected" id=8edc2e84da8d416d9737c6129f6960c67f9abff1ffed0e9afc7323e026e69a7b namespace=k8s.io
Jul 10 00:30:24.620743 containerd[1542]: time="2025-07-10T00:30:24.620739150Z" level=warning msg="cleaning up after shim disconnected" id=8edc2e84da8d416d9737c6129f6960c67f9abff1ffed0e9afc7323e026e69a7b namespace=k8s.io
Jul 10 00:30:24.620743 containerd[1542]: time="2025-07-10T00:30:24.620749270Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 10 00:30:24.632163 containerd[1542]: time="2025-07-10T00:30:24.632111131Z" level=info msg="TearDown network for sandbox \"8edc2e84da8d416d9737c6129f6960c67f9abff1ffed0e9afc7323e026e69a7b\" successfully"
Jul 10 00:30:24.632163 containerd[1542]: time="2025-07-10T00:30:24.632148011Z" level=info msg="StopPodSandbox for \"8edc2e84da8d416d9737c6129f6960c67f9abff1ffed0e9afc7323e026e69a7b\" returns successfully"
Jul 10 00:30:24.690826 kubelet[2613]: I0710 00:30:24.690767 2613 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wbjb6\" (UniqueName: \"kubernetes.io/projected/073addf9-45a4-4183-ba3c-13a2309ae575-kube-api-access-wbjb6\") pod \"073addf9-45a4-4183-ba3c-13a2309ae575\" (UID: \"073addf9-45a4-4183-ba3c-13a2309ae575\") "
Jul 10 00:30:24.690826 kubelet[2613]: I0710 00:30:24.690812 2613 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/073addf9-45a4-4183-ba3c-13a2309ae575-bpf-maps\") pod \"073addf9-45a4-4183-ba3c-13a2309ae575\" (UID: \"073addf9-45a4-4183-ba3c-13a2309ae575\") "
Jul 10 00:30:24.690826 kubelet[2613]: I0710 00:30:24.690838 2613 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f58n2\" (UniqueName: \"kubernetes.io/projected/8aadeeaa-ea3e-41ae-a389-2d682a038c74-kube-api-access-f58n2\") pod \"8aadeeaa-ea3e-41ae-a389-2d682a038c74\" (UID: \"8aadeeaa-ea3e-41ae-a389-2d682a038c74\") "
Jul 10 00:30:24.691303 kubelet[2613]: I0710 00:30:24.690857 2613 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/073addf9-45a4-4183-ba3c-13a2309ae575-hubble-tls\") pod \"073addf9-45a4-4183-ba3c-13a2309ae575\" (UID: \"073addf9-45a4-4183-ba3c-13a2309ae575\") "
Jul 10 00:30:24.691303 kubelet[2613]: I0710 00:30:24.690875 2613 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/073addf9-45a4-4183-ba3c-13a2309ae575-clustermesh-secrets\") pod \"073addf9-45a4-4183-ba3c-13a2309ae575\" (UID: \"073addf9-45a4-4183-ba3c-13a2309ae575\") "
Jul 10 00:30:24.691303 kubelet[2613]: I0710 00:30:24.690889 2613 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/073addf9-45a4-4183-ba3c-13a2309ae575-host-proc-sys-net\") pod \"073addf9-45a4-4183-ba3c-13a2309ae575\" (UID: \"073addf9-45a4-4183-ba3c-13a2309ae575\") "
Jul 10 00:30:24.691303 kubelet[2613]: I0710 00:30:24.690903 2613 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/073addf9-45a4-4183-ba3c-13a2309ae575-hostproc\") pod \"073addf9-45a4-4183-ba3c-13a2309ae575\" (UID: \"073addf9-45a4-4183-ba3c-13a2309ae575\") "
Jul 10 00:30:24.695289 kubelet[2613]: I0710 00:30:24.694099 2613 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/073addf9-45a4-4183-ba3c-13a2309ae575-host-proc-sys-kernel\") pod \"073addf9-45a4-4183-ba3c-13a2309ae575\" (UID: \"073addf9-45a4-4183-ba3c-13a2309ae575\") "
Jul 10 00:30:24.695289 kubelet[2613]: I0710 00:30:24.694154 2613 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/073addf9-45a4-4183-ba3c-13a2309ae575-cilium-config-path\") pod \"073addf9-45a4-4183-ba3c-13a2309ae575\" (UID: \"073addf9-45a4-4183-ba3c-13a2309ae575\") "
Jul 10 00:30:24.695289 kubelet[2613]: I0710 00:30:24.694172 2613 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/073addf9-45a4-4183-ba3c-13a2309ae575-etc-cni-netd\") pod \"073addf9-45a4-4183-ba3c-13a2309ae575\" (UID: \"073addf9-45a4-4183-ba3c-13a2309ae575\") "
Jul 10 00:30:24.695289 kubelet[2613]: I0710 00:30:24.694380 2613 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/073addf9-45a4-4183-ba3c-13a2309ae575-lib-modules\") pod \"073addf9-45a4-4183-ba3c-13a2309ae575\" (UID: \"073addf9-45a4-4183-ba3c-13a2309ae575\") "
Jul 10 00:30:24.695289 kubelet[2613]: I0710 00:30:24.694401 2613 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/073addf9-45a4-4183-ba3c-13a2309ae575-cni-path\") pod \"073addf9-45a4-4183-ba3c-13a2309ae575\" (UID: \"073addf9-45a4-4183-ba3c-13a2309ae575\") "
Jul 10 00:30:24.695289 kubelet[2613]: I0710 00:30:24.694416 2613 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/073addf9-45a4-4183-ba3c-13a2309ae575-cilium-cgroup\") pod \"073addf9-45a4-4183-ba3c-13a2309ae575\" (UID: \"073addf9-45a4-4183-ba3c-13a2309ae575\") "
Jul 10 00:30:24.695502 kubelet[2613]: I0710 00:30:24.694433 2613 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/8aadeeaa-ea3e-41ae-a389-2d682a038c74-cilium-config-path\") pod \"8aadeeaa-ea3e-41ae-a389-2d682a038c74\" (UID: \"8aadeeaa-ea3e-41ae-a389-2d682a038c74\") "
Jul 10 00:30:24.695502 kubelet[2613]: I0710 00:30:24.694450 2613 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/073addf9-45a4-4183-ba3c-13a2309ae575-cilium-run\") pod \"073addf9-45a4-4183-ba3c-13a2309ae575\" (UID: \"073addf9-45a4-4183-ba3c-13a2309ae575\") "
Jul 10 00:30:24.695502 kubelet[2613]: I0710 00:30:24.694466 2613 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/073addf9-45a4-4183-ba3c-13a2309ae575-xtables-lock\") pod \"073addf9-45a4-4183-ba3c-13a2309ae575\" (UID: \"073addf9-45a4-4183-ba3c-13a2309ae575\") "
Jul 10 00:30:24.696473 kubelet[2613]: I0710 00:30:24.696423 2613 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/073addf9-45a4-4183-ba3c-13a2309ae575-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "073addf9-45a4-4183-ba3c-13a2309ae575" (UID: "073addf9-45a4-4183-ba3c-13a2309ae575"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jul 10 00:30:24.696646 kubelet[2613]: I0710 00:30:24.696622 2613 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/073addf9-45a4-4183-ba3c-13a2309ae575-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "073addf9-45a4-4183-ba3c-13a2309ae575" (UID: "073addf9-45a4-4183-ba3c-13a2309ae575"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jul 10 00:30:24.696931 kubelet[2613]: I0710 00:30:24.696836 2613 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/073addf9-45a4-4183-ba3c-13a2309ae575-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "073addf9-45a4-4183-ba3c-13a2309ae575" (UID: "073addf9-45a4-4183-ba3c-13a2309ae575"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jul 10 00:30:24.696931 kubelet[2613]: I0710 00:30:24.696899 2613 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/073addf9-45a4-4183-ba3c-13a2309ae575-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "073addf9-45a4-4183-ba3c-13a2309ae575" (UID: "073addf9-45a4-4183-ba3c-13a2309ae575"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jul 10 00:30:24.697007 kubelet[2613]: I0710 00:30:24.696970 2613 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/073addf9-45a4-4183-ba3c-13a2309ae575-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "073addf9-45a4-4183-ba3c-13a2309ae575" (UID: "073addf9-45a4-4183-ba3c-13a2309ae575"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jul 10 00:30:24.698120 kubelet[2613]: I0710 00:30:24.698012 2613 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/073addf9-45a4-4183-ba3c-13a2309ae575-kube-api-access-wbjb6" (OuterVolumeSpecName: "kube-api-access-wbjb6") pod "073addf9-45a4-4183-ba3c-13a2309ae575" (UID: "073addf9-45a4-4183-ba3c-13a2309ae575"). InnerVolumeSpecName "kube-api-access-wbjb6". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jul 10 00:30:24.698277 kubelet[2613]: I0710 00:30:24.698123 2613 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/073addf9-45a4-4183-ba3c-13a2309ae575-cni-path" (OuterVolumeSpecName: "cni-path") pod "073addf9-45a4-4183-ba3c-13a2309ae575" (UID: "073addf9-45a4-4183-ba3c-13a2309ae575"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jul 10 00:30:24.698429 kubelet[2613]: I0710 00:30:24.698140 2613 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/073addf9-45a4-4183-ba3c-13a2309ae575-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "073addf9-45a4-4183-ba3c-13a2309ae575" (UID: "073addf9-45a4-4183-ba3c-13a2309ae575"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jul 10 00:30:24.698555 kubelet[2613]: I0710 00:30:24.698539 2613 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/073addf9-45a4-4183-ba3c-13a2309ae575-hostproc" (OuterVolumeSpecName: "hostproc") pod "073addf9-45a4-4183-ba3c-13a2309ae575" (UID: "073addf9-45a4-4183-ba3c-13a2309ae575"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jul 10 00:30:24.700142 kubelet[2613]: I0710 00:30:24.698599 2613 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/073addf9-45a4-4183-ba3c-13a2309ae575-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "073addf9-45a4-4183-ba3c-13a2309ae575" (UID: "073addf9-45a4-4183-ba3c-13a2309ae575"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jul 10 00:30:24.700319 kubelet[2613]: I0710 00:30:24.698624 2613 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/073addf9-45a4-4183-ba3c-13a2309ae575-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "073addf9-45a4-4183-ba3c-13a2309ae575" (UID: "073addf9-45a4-4183-ba3c-13a2309ae575"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jul 10 00:30:24.700319 kubelet[2613]: I0710 00:30:24.698892 2613 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/073addf9-45a4-4183-ba3c-13a2309ae575-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "073addf9-45a4-4183-ba3c-13a2309ae575" (UID: "073addf9-45a4-4183-ba3c-13a2309ae575"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jul 10 00:30:24.700319 kubelet[2613]: I0710 00:30:24.698977 2613 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8aadeeaa-ea3e-41ae-a389-2d682a038c74-kube-api-access-f58n2" (OuterVolumeSpecName: "kube-api-access-f58n2") pod "8aadeeaa-ea3e-41ae-a389-2d682a038c74" (UID: "8aadeeaa-ea3e-41ae-a389-2d682a038c74"). InnerVolumeSpecName "kube-api-access-f58n2". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jul 10 00:30:24.700319 kubelet[2613]: I0710 00:30:24.699295 2613 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/073addf9-45a4-4183-ba3c-13a2309ae575-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "073addf9-45a4-4183-ba3c-13a2309ae575" (UID: "073addf9-45a4-4183-ba3c-13a2309ae575"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jul 10 00:30:24.700444 kubelet[2613]: I0710 00:30:24.699907 2613 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8aadeeaa-ea3e-41ae-a389-2d682a038c74-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "8aadeeaa-ea3e-41ae-a389-2d682a038c74" (UID: "8aadeeaa-ea3e-41ae-a389-2d682a038c74"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jul 10 00:30:24.701466 kubelet[2613]: I0710 00:30:24.701183 2613 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/073addf9-45a4-4183-ba3c-13a2309ae575-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "073addf9-45a4-4183-ba3c-13a2309ae575" (UID: "073addf9-45a4-4183-ba3c-13a2309ae575"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jul 10 00:30:24.794798 kubelet[2613]: I0710 00:30:24.794623 2613 reconciler_common.go:293] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/073addf9-45a4-4183-ba3c-13a2309ae575-cilium-config-path\") on node \"localhost\" DevicePath \"\""
Jul 10 00:30:24.794798 kubelet[2613]: I0710 00:30:24.794657 2613 reconciler_common.go:293] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/073addf9-45a4-4183-ba3c-13a2309ae575-etc-cni-netd\") on node \"localhost\" DevicePath \"\""
Jul 10 00:30:24.794798 kubelet[2613]: I0710 00:30:24.794665 2613 reconciler_common.go:293] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/073addf9-45a4-4183-ba3c-13a2309ae575-lib-modules\") on node \"localhost\" DevicePath \"\""
Jul 10 00:30:24.794798 kubelet[2613]: I0710 00:30:24.794674 2613 reconciler_common.go:293] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/073addf9-45a4-4183-ba3c-13a2309ae575-cni-path\") on node \"localhost\" DevicePath \"\""
Jul 10 00:30:24.794798 kubelet[2613]: I0710 00:30:24.794683 2613 reconciler_common.go:293] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/073addf9-45a4-4183-ba3c-13a2309ae575-cilium-cgroup\") on node \"localhost\" DevicePath \"\""
Jul 10 00:30:24.794798 kubelet[2613]: I0710 00:30:24.794692 2613 reconciler_common.go:293] "Volume detached for volume \"cilium-config-path\" 
(UniqueName: \"kubernetes.io/configmap/8aadeeaa-ea3e-41ae-a389-2d682a038c74-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Jul 10 00:30:24.794798 kubelet[2613]: I0710 00:30:24.794700 2613 reconciler_common.go:293] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/073addf9-45a4-4183-ba3c-13a2309ae575-cilium-run\") on node \"localhost\" DevicePath \"\"" Jul 10 00:30:24.794798 kubelet[2613]: I0710 00:30:24.794708 2613 reconciler_common.go:293] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/073addf9-45a4-4183-ba3c-13a2309ae575-xtables-lock\") on node \"localhost\" DevicePath \"\"" Jul 10 00:30:24.795097 kubelet[2613]: I0710 00:30:24.794716 2613 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wbjb6\" (UniqueName: \"kubernetes.io/projected/073addf9-45a4-4183-ba3c-13a2309ae575-kube-api-access-wbjb6\") on node \"localhost\" DevicePath \"\"" Jul 10 00:30:24.795097 kubelet[2613]: I0710 00:30:24.794725 2613 reconciler_common.go:293] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/073addf9-45a4-4183-ba3c-13a2309ae575-bpf-maps\") on node \"localhost\" DevicePath \"\"" Jul 10 00:30:24.795097 kubelet[2613]: I0710 00:30:24.794733 2613 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f58n2\" (UniqueName: \"kubernetes.io/projected/8aadeeaa-ea3e-41ae-a389-2d682a038c74-kube-api-access-f58n2\") on node \"localhost\" DevicePath \"\"" Jul 10 00:30:24.795097 kubelet[2613]: I0710 00:30:24.794740 2613 reconciler_common.go:293] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/073addf9-45a4-4183-ba3c-13a2309ae575-hubble-tls\") on node \"localhost\" DevicePath \"\"" Jul 10 00:30:24.795097 kubelet[2613]: I0710 00:30:24.794748 2613 reconciler_common.go:293] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/073addf9-45a4-4183-ba3c-13a2309ae575-clustermesh-secrets\") 
on node \"localhost\" DevicePath \"\"" Jul 10 00:30:24.795097 kubelet[2613]: I0710 00:30:24.794757 2613 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/073addf9-45a4-4183-ba3c-13a2309ae575-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Jul 10 00:30:24.795097 kubelet[2613]: I0710 00:30:24.794764 2613 reconciler_common.go:293] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/073addf9-45a4-4183-ba3c-13a2309ae575-hostproc\") on node \"localhost\" DevicePath \"\"" Jul 10 00:30:24.795097 kubelet[2613]: I0710 00:30:24.794772 2613 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/073addf9-45a4-4183-ba3c-13a2309ae575-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Jul 10 00:30:25.339624 kubelet[2613]: I0710 00:30:25.339586 2613 scope.go:117] "RemoveContainer" containerID="122ae575a58e82c2ffa0019a7a6c287d9a2ea27e3dabcf23bf3ab946bebdea41" Jul 10 00:30:25.342540 containerd[1542]: time="2025-07-10T00:30:25.342494644Z" level=info msg="RemoveContainer for \"122ae575a58e82c2ffa0019a7a6c287d9a2ea27e3dabcf23bf3ab946bebdea41\"" Jul 10 00:30:25.347695 containerd[1542]: time="2025-07-10T00:30:25.347636476Z" level=info msg="RemoveContainer for \"122ae575a58e82c2ffa0019a7a6c287d9a2ea27e3dabcf23bf3ab946bebdea41\" returns successfully" Jul 10 00:30:25.348499 kubelet[2613]: I0710 00:30:25.348000 2613 scope.go:117] "RemoveContainer" containerID="35ee7df12f7057c16b7bc3c56484655b1dec2c74c916b838541b97c933a6f4ae" Jul 10 00:30:25.349233 containerd[1542]: time="2025-07-10T00:30:25.349185393Z" level=info msg="RemoveContainer for \"35ee7df12f7057c16b7bc3c56484655b1dec2c74c916b838541b97c933a6f4ae\"" Jul 10 00:30:25.352923 containerd[1542]: time="2025-07-10T00:30:25.352885667Z" level=info msg="RemoveContainer for \"35ee7df12f7057c16b7bc3c56484655b1dec2c74c916b838541b97c933a6f4ae\" returns successfully" Jul 10 
00:30:25.353172 kubelet[2613]: I0710 00:30:25.353140 2613 scope.go:117] "RemoveContainer" containerID="5f893321818dbd02ca4c2cdc5767488596fa4415d398374dc08f148b58d6fa9f" Jul 10 00:30:25.354459 containerd[1542]: time="2025-07-10T00:30:25.354425585Z" level=info msg="RemoveContainer for \"5f893321818dbd02ca4c2cdc5767488596fa4415d398374dc08f148b58d6fa9f\"" Jul 10 00:30:25.356990 containerd[1542]: time="2025-07-10T00:30:25.356955421Z" level=info msg="RemoveContainer for \"5f893321818dbd02ca4c2cdc5767488596fa4415d398374dc08f148b58d6fa9f\" returns successfully" Jul 10 00:30:25.357660 kubelet[2613]: I0710 00:30:25.357629 2613 scope.go:117] "RemoveContainer" containerID="902b2816d9c3a1c1fb845755d52beec5a82c1a27d5e0e6722325fbf813a37077" Jul 10 00:30:25.359440 containerd[1542]: time="2025-07-10T00:30:25.359407537Z" level=info msg="RemoveContainer for \"902b2816d9c3a1c1fb845755d52beec5a82c1a27d5e0e6722325fbf813a37077\"" Jul 10 00:30:25.363549 containerd[1542]: time="2025-07-10T00:30:25.363504170Z" level=info msg="RemoveContainer for \"902b2816d9c3a1c1fb845755d52beec5a82c1a27d5e0e6722325fbf813a37077\" returns successfully" Jul 10 00:30:25.363768 kubelet[2613]: I0710 00:30:25.363721 2613 scope.go:117] "RemoveContainer" containerID="7be49fb5efe49145c903465d73f3c01c8514edb65dfae67f9ea7347f693d9992" Jul 10 00:30:25.368334 containerd[1542]: time="2025-07-10T00:30:25.368292962Z" level=info msg="RemoveContainer for \"7be49fb5efe49145c903465d73f3c01c8514edb65dfae67f9ea7347f693d9992\"" Jul 10 00:30:25.467994 systemd[1]: var-lib-kubelet-pods-8aadeeaa\x2dea3e\x2d41ae\x2da389\x2d2d682a038c74-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2df58n2.mount: Deactivated successfully. Jul 10 00:30:25.468154 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8edc2e84da8d416d9737c6129f6960c67f9abff1ffed0e9afc7323e026e69a7b-rootfs.mount: Deactivated successfully. 
Jul 10 00:30:25.468256 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-8edc2e84da8d416d9737c6129f6960c67f9abff1ffed0e9afc7323e026e69a7b-shm.mount: Deactivated successfully. Jul 10 00:30:25.468358 systemd[1]: var-lib-kubelet-pods-073addf9\x2d45a4\x2d4183\x2dba3c\x2d13a2309ae575-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dwbjb6.mount: Deactivated successfully. Jul 10 00:30:25.468459 systemd[1]: var-lib-kubelet-pods-073addf9\x2d45a4\x2d4183\x2dba3c\x2d13a2309ae575-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Jul 10 00:30:25.468559 systemd[1]: var-lib-kubelet-pods-073addf9\x2d45a4\x2d4183\x2dba3c\x2d13a2309ae575-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Jul 10 00:30:25.479960 containerd[1542]: time="2025-07-10T00:30:25.479847381Z" level=info msg="RemoveContainer for \"7be49fb5efe49145c903465d73f3c01c8514edb65dfae67f9ea7347f693d9992\" returns successfully" Jul 10 00:30:25.480428 kubelet[2613]: I0710 00:30:25.480126 2613 scope.go:117] "RemoveContainer" containerID="122ae575a58e82c2ffa0019a7a6c287d9a2ea27e3dabcf23bf3ab946bebdea41" Jul 10 00:30:25.480507 containerd[1542]: time="2025-07-10T00:30:25.480443820Z" level=error msg="ContainerStatus for \"122ae575a58e82c2ffa0019a7a6c287d9a2ea27e3dabcf23bf3ab946bebdea41\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"122ae575a58e82c2ffa0019a7a6c287d9a2ea27e3dabcf23bf3ab946bebdea41\": not found" Jul 10 00:30:25.489318 kubelet[2613]: E0710 00:30:25.489040 2613 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"122ae575a58e82c2ffa0019a7a6c287d9a2ea27e3dabcf23bf3ab946bebdea41\": not found" containerID="122ae575a58e82c2ffa0019a7a6c287d9a2ea27e3dabcf23bf3ab946bebdea41" Jul 10 00:30:25.489318 kubelet[2613]: I0710 00:30:25.489090 2613 pod_container_deletor.go:53] "DeleteContainer returned 
error" containerID={"Type":"containerd","ID":"122ae575a58e82c2ffa0019a7a6c287d9a2ea27e3dabcf23bf3ab946bebdea41"} err="failed to get container status \"122ae575a58e82c2ffa0019a7a6c287d9a2ea27e3dabcf23bf3ab946bebdea41\": rpc error: code = NotFound desc = an error occurred when try to find container \"122ae575a58e82c2ffa0019a7a6c287d9a2ea27e3dabcf23bf3ab946bebdea41\": not found" Jul 10 00:30:25.489318 kubelet[2613]: I0710 00:30:25.489175 2613 scope.go:117] "RemoveContainer" containerID="35ee7df12f7057c16b7bc3c56484655b1dec2c74c916b838541b97c933a6f4ae" Jul 10 00:30:25.489845 containerd[1542]: time="2025-07-10T00:30:25.489746525Z" level=error msg="ContainerStatus for \"35ee7df12f7057c16b7bc3c56484655b1dec2c74c916b838541b97c933a6f4ae\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"35ee7df12f7057c16b7bc3c56484655b1dec2c74c916b838541b97c933a6f4ae\": not found" Jul 10 00:30:25.489909 kubelet[2613]: E0710 00:30:25.489890 2613 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"35ee7df12f7057c16b7bc3c56484655b1dec2c74c916b838541b97c933a6f4ae\": not found" containerID="35ee7df12f7057c16b7bc3c56484655b1dec2c74c916b838541b97c933a6f4ae" Jul 10 00:30:25.489957 kubelet[2613]: I0710 00:30:25.489939 2613 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"35ee7df12f7057c16b7bc3c56484655b1dec2c74c916b838541b97c933a6f4ae"} err="failed to get container status \"35ee7df12f7057c16b7bc3c56484655b1dec2c74c916b838541b97c933a6f4ae\": rpc error: code = NotFound desc = an error occurred when try to find container \"35ee7df12f7057c16b7bc3c56484655b1dec2c74c916b838541b97c933a6f4ae\": not found" Jul 10 00:30:25.489980 kubelet[2613]: I0710 00:30:25.489959 2613 scope.go:117] "RemoveContainer" containerID="5f893321818dbd02ca4c2cdc5767488596fa4415d398374dc08f148b58d6fa9f" Jul 10 00:30:25.491266 containerd[1542]: 
time="2025-07-10T00:30:25.490152684Z" level=error msg="ContainerStatus for \"5f893321818dbd02ca4c2cdc5767488596fa4415d398374dc08f148b58d6fa9f\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"5f893321818dbd02ca4c2cdc5767488596fa4415d398374dc08f148b58d6fa9f\": not found" Jul 10 00:30:25.491266 containerd[1542]: time="2025-07-10T00:30:25.490497444Z" level=error msg="ContainerStatus for \"902b2816d9c3a1c1fb845755d52beec5a82c1a27d5e0e6722325fbf813a37077\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"902b2816d9c3a1c1fb845755d52beec5a82c1a27d5e0e6722325fbf813a37077\": not found" Jul 10 00:30:25.491266 containerd[1542]: time="2025-07-10T00:30:25.490788483Z" level=error msg="ContainerStatus for \"7be49fb5efe49145c903465d73f3c01c8514edb65dfae67f9ea7347f693d9992\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"7be49fb5efe49145c903465d73f3c01c8514edb65dfae67f9ea7347f693d9992\": not found" Jul 10 00:30:25.491497 kubelet[2613]: E0710 00:30:25.490280 2613 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"5f893321818dbd02ca4c2cdc5767488596fa4415d398374dc08f148b58d6fa9f\": not found" containerID="5f893321818dbd02ca4c2cdc5767488596fa4415d398374dc08f148b58d6fa9f" Jul 10 00:30:25.491497 kubelet[2613]: I0710 00:30:25.490297 2613 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"5f893321818dbd02ca4c2cdc5767488596fa4415d398374dc08f148b58d6fa9f"} err="failed to get container status \"5f893321818dbd02ca4c2cdc5767488596fa4415d398374dc08f148b58d6fa9f\": rpc error: code = NotFound desc = an error occurred when try to find container \"5f893321818dbd02ca4c2cdc5767488596fa4415d398374dc08f148b58d6fa9f\": not found" Jul 10 00:30:25.491497 kubelet[2613]: I0710 00:30:25.490310 2613 scope.go:117] 
"RemoveContainer" containerID="902b2816d9c3a1c1fb845755d52beec5a82c1a27d5e0e6722325fbf813a37077" Jul 10 00:30:25.491497 kubelet[2613]: E0710 00:30:25.490592 2613 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"902b2816d9c3a1c1fb845755d52beec5a82c1a27d5e0e6722325fbf813a37077\": not found" containerID="902b2816d9c3a1c1fb845755d52beec5a82c1a27d5e0e6722325fbf813a37077" Jul 10 00:30:25.491497 kubelet[2613]: I0710 00:30:25.490619 2613 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"902b2816d9c3a1c1fb845755d52beec5a82c1a27d5e0e6722325fbf813a37077"} err="failed to get container status \"902b2816d9c3a1c1fb845755d52beec5a82c1a27d5e0e6722325fbf813a37077\": rpc error: code = NotFound desc = an error occurred when try to find container \"902b2816d9c3a1c1fb845755d52beec5a82c1a27d5e0e6722325fbf813a37077\": not found" Jul 10 00:30:25.491497 kubelet[2613]: I0710 00:30:25.490633 2613 scope.go:117] "RemoveContainer" containerID="7be49fb5efe49145c903465d73f3c01c8514edb65dfae67f9ea7347f693d9992" Jul 10 00:30:25.491750 kubelet[2613]: E0710 00:30:25.490892 2613 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"7be49fb5efe49145c903465d73f3c01c8514edb65dfae67f9ea7347f693d9992\": not found" containerID="7be49fb5efe49145c903465d73f3c01c8514edb65dfae67f9ea7347f693d9992" Jul 10 00:30:25.491750 kubelet[2613]: I0710 00:30:25.490914 2613 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"7be49fb5efe49145c903465d73f3c01c8514edb65dfae67f9ea7347f693d9992"} err="failed to get container status \"7be49fb5efe49145c903465d73f3c01c8514edb65dfae67f9ea7347f693d9992\": rpc error: code = NotFound desc = an error occurred when try to find container \"7be49fb5efe49145c903465d73f3c01c8514edb65dfae67f9ea7347f693d9992\": not 
found" Jul 10 00:30:25.491750 kubelet[2613]: I0710 00:30:25.490940 2613 scope.go:117] "RemoveContainer" containerID="79d04ab1cc6488f4fc224c23e4998b2ecfa8c8f91d7f612b6efeced108b4dac4" Jul 10 00:30:25.493445 containerd[1542]: time="2025-07-10T00:30:25.493392399Z" level=info msg="RemoveContainer for \"79d04ab1cc6488f4fc224c23e4998b2ecfa8c8f91d7f612b6efeced108b4dac4\"" Jul 10 00:30:25.496886 containerd[1542]: time="2025-07-10T00:30:25.496836754Z" level=info msg="RemoveContainer for \"79d04ab1cc6488f4fc224c23e4998b2ecfa8c8f91d7f612b6efeced108b4dac4\" returns successfully" Jul 10 00:30:25.497107 kubelet[2613]: I0710 00:30:25.497077 2613 scope.go:117] "RemoveContainer" containerID="79d04ab1cc6488f4fc224c23e4998b2ecfa8c8f91d7f612b6efeced108b4dac4" Jul 10 00:30:25.497464 containerd[1542]: time="2025-07-10T00:30:25.497369073Z" level=error msg="ContainerStatus for \"79d04ab1cc6488f4fc224c23e4998b2ecfa8c8f91d7f612b6efeced108b4dac4\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"79d04ab1cc6488f4fc224c23e4998b2ecfa8c8f91d7f612b6efeced108b4dac4\": not found" Jul 10 00:30:25.497559 kubelet[2613]: E0710 00:30:25.497497 2613 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"79d04ab1cc6488f4fc224c23e4998b2ecfa8c8f91d7f612b6efeced108b4dac4\": not found" containerID="79d04ab1cc6488f4fc224c23e4998b2ecfa8c8f91d7f612b6efeced108b4dac4" Jul 10 00:30:25.497559 kubelet[2613]: I0710 00:30:25.497521 2613 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"79d04ab1cc6488f4fc224c23e4998b2ecfa8c8f91d7f612b6efeced108b4dac4"} err="failed to get container status \"79d04ab1cc6488f4fc224c23e4998b2ecfa8c8f91d7f612b6efeced108b4dac4\": rpc error: code = NotFound desc = an error occurred when try to find container \"79d04ab1cc6488f4fc224c23e4998b2ecfa8c8f91d7f612b6efeced108b4dac4\": not found" Jul 10 00:30:26.073890 
kubelet[2613]: E0710 00:30:26.073846 2613 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:30:26.076238 kubelet[2613]: I0710 00:30:26.076086 2613 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="073addf9-45a4-4183-ba3c-13a2309ae575" path="/var/lib/kubelet/pods/073addf9-45a4-4183-ba3c-13a2309ae575/volumes" Jul 10 00:30:26.076683 kubelet[2613]: I0710 00:30:26.076645 2613 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8aadeeaa-ea3e-41ae-a389-2d682a038c74" path="/var/lib/kubelet/pods/8aadeeaa-ea3e-41ae-a389-2d682a038c74/volumes" Jul 10 00:30:26.143933 kubelet[2613]: E0710 00:30:26.143884 2613 kubelet.go:2902] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jul 10 00:30:26.387496 sshd[4271]: pam_unix(sshd:session): session closed for user core Jul 10 00:30:26.395527 systemd[1]: Started sshd@23-10.0.0.64:22-10.0.0.1:54720.service - OpenSSH per-connection server daemon (10.0.0.1:54720). Jul 10 00:30:26.395924 systemd[1]: sshd@22-10.0.0.64:22-10.0.0.1:39054.service: Deactivated successfully. Jul 10 00:30:26.399531 systemd-logind[1522]: Session 23 logged out. Waiting for processes to exit. Jul 10 00:30:26.399726 systemd[1]: session-23.scope: Deactivated successfully. Jul 10 00:30:26.401251 systemd-logind[1522]: Removed session 23. Jul 10 00:30:26.426754 sshd[4439]: Accepted publickey for core from 10.0.0.1 port 54720 ssh2: RSA SHA256:6ip12YJRxsd1pENU5FRdLlGNHGMqkKNn+D5B7RGN6xs Jul 10 00:30:26.428293 sshd[4439]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 00:30:26.432297 systemd-logind[1522]: New session 24 of user core. Jul 10 00:30:26.442570 systemd[1]: Started session-24.scope - Session 24 of User core. 
Jul 10 00:30:27.396179 sshd[4439]: pam_unix(sshd:session): session closed for user core Jul 10 00:30:27.405526 systemd[1]: Started sshd@24-10.0.0.64:22-10.0.0.1:54732.service - OpenSSH per-connection server daemon (10.0.0.1:54732). Jul 10 00:30:27.405952 systemd[1]: sshd@23-10.0.0.64:22-10.0.0.1:54720.service: Deactivated successfully. Jul 10 00:30:27.413663 systemd-logind[1522]: Session 24 logged out. Waiting for processes to exit. Jul 10 00:30:27.414109 systemd[1]: session-24.scope: Deactivated successfully. Jul 10 00:30:27.417050 kubelet[2613]: E0710 00:30:27.416249 2613 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="8aadeeaa-ea3e-41ae-a389-2d682a038c74" containerName="cilium-operator" Jul 10 00:30:27.420295 kubelet[2613]: E0710 00:30:27.419081 2613 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="073addf9-45a4-4183-ba3c-13a2309ae575" containerName="clean-cilium-state" Jul 10 00:30:27.420295 kubelet[2613]: E0710 00:30:27.419139 2613 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="073addf9-45a4-4183-ba3c-13a2309ae575" containerName="cilium-agent" Jul 10 00:30:27.420295 kubelet[2613]: E0710 00:30:27.419150 2613 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="073addf9-45a4-4183-ba3c-13a2309ae575" containerName="mount-cgroup" Jul 10 00:30:27.420295 kubelet[2613]: E0710 00:30:27.419156 2613 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="073addf9-45a4-4183-ba3c-13a2309ae575" containerName="apply-sysctl-overwrites" Jul 10 00:30:27.420295 kubelet[2613]: E0710 00:30:27.419162 2613 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="073addf9-45a4-4183-ba3c-13a2309ae575" containerName="mount-bpf-fs" Jul 10 00:30:27.420295 kubelet[2613]: I0710 00:30:27.419276 2613 memory_manager.go:354] "RemoveStaleState removing state" podUID="073addf9-45a4-4183-ba3c-13a2309ae575" containerName="cilium-agent" Jul 10 00:30:27.420295 kubelet[2613]: I0710 00:30:27.419288 2613 
memory_manager.go:354] "RemoveStaleState removing state" podUID="8aadeeaa-ea3e-41ae-a389-2d682a038c74" containerName="cilium-operator" Jul 10 00:30:27.419566 systemd-logind[1522]: Removed session 24. Jul 10 00:30:27.478543 sshd[4453]: Accepted publickey for core from 10.0.0.1 port 54732 ssh2: RSA SHA256:6ip12YJRxsd1pENU5FRdLlGNHGMqkKNn+D5B7RGN6xs Jul 10 00:30:27.479969 sshd[4453]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 00:30:27.484762 systemd-logind[1522]: New session 25 of user core. Jul 10 00:30:27.492564 systemd[1]: Started session-25.scope - Session 25 of User core. Jul 10 00:30:27.514263 kubelet[2613]: I0710 00:30:27.514172 2613 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/0c34d8aa-ba9a-45a7-a9d4-04dd86c6e3c0-cilium-cgroup\") pod \"cilium-z5jjd\" (UID: \"0c34d8aa-ba9a-45a7-a9d4-04dd86c6e3c0\") " pod="kube-system/cilium-z5jjd" Jul 10 00:30:27.514263 kubelet[2613]: I0710 00:30:27.514225 2613 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/0c34d8aa-ba9a-45a7-a9d4-04dd86c6e3c0-cilium-ipsec-secrets\") pod \"cilium-z5jjd\" (UID: \"0c34d8aa-ba9a-45a7-a9d4-04dd86c6e3c0\") " pod="kube-system/cilium-z5jjd" Jul 10 00:30:27.514437 kubelet[2613]: I0710 00:30:27.514288 2613 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/0c34d8aa-ba9a-45a7-a9d4-04dd86c6e3c0-hubble-tls\") pod \"cilium-z5jjd\" (UID: \"0c34d8aa-ba9a-45a7-a9d4-04dd86c6e3c0\") " pod="kube-system/cilium-z5jjd" Jul 10 00:30:27.514437 kubelet[2613]: I0710 00:30:27.514335 2613 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: 
\"kubernetes.io/host-path/0c34d8aa-ba9a-45a7-a9d4-04dd86c6e3c0-host-proc-sys-net\") pod \"cilium-z5jjd\" (UID: \"0c34d8aa-ba9a-45a7-a9d4-04dd86c6e3c0\") " pod="kube-system/cilium-z5jjd" Jul 10 00:30:27.514437 kubelet[2613]: I0710 00:30:27.514375 2613 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/0c34d8aa-ba9a-45a7-a9d4-04dd86c6e3c0-clustermesh-secrets\") pod \"cilium-z5jjd\" (UID: \"0c34d8aa-ba9a-45a7-a9d4-04dd86c6e3c0\") " pod="kube-system/cilium-z5jjd" Jul 10 00:30:27.514437 kubelet[2613]: I0710 00:30:27.514393 2613 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/0c34d8aa-ba9a-45a7-a9d4-04dd86c6e3c0-bpf-maps\") pod \"cilium-z5jjd\" (UID: \"0c34d8aa-ba9a-45a7-a9d4-04dd86c6e3c0\") " pod="kube-system/cilium-z5jjd" Jul 10 00:30:27.514437 kubelet[2613]: I0710 00:30:27.514412 2613 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cbzdd\" (UniqueName: \"kubernetes.io/projected/0c34d8aa-ba9a-45a7-a9d4-04dd86c6e3c0-kube-api-access-cbzdd\") pod \"cilium-z5jjd\" (UID: \"0c34d8aa-ba9a-45a7-a9d4-04dd86c6e3c0\") " pod="kube-system/cilium-z5jjd" Jul 10 00:30:27.514437 kubelet[2613]: I0710 00:30:27.514428 2613 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/0c34d8aa-ba9a-45a7-a9d4-04dd86c6e3c0-cilium-config-path\") pod \"cilium-z5jjd\" (UID: \"0c34d8aa-ba9a-45a7-a9d4-04dd86c6e3c0\") " pod="kube-system/cilium-z5jjd" Jul 10 00:30:27.514571 kubelet[2613]: I0710 00:30:27.514443 2613 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/0c34d8aa-ba9a-45a7-a9d4-04dd86c6e3c0-host-proc-sys-kernel\") pod 
\"cilium-z5jjd\" (UID: \"0c34d8aa-ba9a-45a7-a9d4-04dd86c6e3c0\") " pod="kube-system/cilium-z5jjd" Jul 10 00:30:27.514571 kubelet[2613]: I0710 00:30:27.514460 2613 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/0c34d8aa-ba9a-45a7-a9d4-04dd86c6e3c0-hostproc\") pod \"cilium-z5jjd\" (UID: \"0c34d8aa-ba9a-45a7-a9d4-04dd86c6e3c0\") " pod="kube-system/cilium-z5jjd" Jul 10 00:30:27.514571 kubelet[2613]: I0710 00:30:27.514475 2613 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0c34d8aa-ba9a-45a7-a9d4-04dd86c6e3c0-lib-modules\") pod \"cilium-z5jjd\" (UID: \"0c34d8aa-ba9a-45a7-a9d4-04dd86c6e3c0\") " pod="kube-system/cilium-z5jjd" Jul 10 00:30:27.514571 kubelet[2613]: I0710 00:30:27.514490 2613 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0c34d8aa-ba9a-45a7-a9d4-04dd86c6e3c0-xtables-lock\") pod \"cilium-z5jjd\" (UID: \"0c34d8aa-ba9a-45a7-a9d4-04dd86c6e3c0\") " pod="kube-system/cilium-z5jjd" Jul 10 00:30:27.514571 kubelet[2613]: I0710 00:30:27.514507 2613 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/0c34d8aa-ba9a-45a7-a9d4-04dd86c6e3c0-cilium-run\") pod \"cilium-z5jjd\" (UID: \"0c34d8aa-ba9a-45a7-a9d4-04dd86c6e3c0\") " pod="kube-system/cilium-z5jjd" Jul 10 00:30:27.514571 kubelet[2613]: I0710 00:30:27.514522 2613 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/0c34d8aa-ba9a-45a7-a9d4-04dd86c6e3c0-etc-cni-netd\") pod \"cilium-z5jjd\" (UID: \"0c34d8aa-ba9a-45a7-a9d4-04dd86c6e3c0\") " pod="kube-system/cilium-z5jjd" Jul 10 00:30:27.514738 kubelet[2613]: I0710 00:30:27.514537 
2613 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/0c34d8aa-ba9a-45a7-a9d4-04dd86c6e3c0-cni-path\") pod \"cilium-z5jjd\" (UID: \"0c34d8aa-ba9a-45a7-a9d4-04dd86c6e3c0\") " pod="kube-system/cilium-z5jjd"
Jul 10 00:30:27.545028 sshd[4453]: pam_unix(sshd:session): session closed for user core
Jul 10 00:30:27.552521 systemd[1]: Started sshd@25-10.0.0.64:22-10.0.0.1:54736.service - OpenSSH per-connection server daemon (10.0.0.1:54736).
Jul 10 00:30:27.552917 systemd[1]: sshd@24-10.0.0.64:22-10.0.0.1:54732.service: Deactivated successfully.
Jul 10 00:30:27.555915 systemd[1]: session-25.scope: Deactivated successfully.
Jul 10 00:30:27.556637 systemd-logind[1522]: Session 25 logged out. Waiting for processes to exit.
Jul 10 00:30:27.557875 systemd-logind[1522]: Removed session 25.
Jul 10 00:30:27.584123 sshd[4462]: Accepted publickey for core from 10.0.0.1 port 54736 ssh2: RSA SHA256:6ip12YJRxsd1pENU5FRdLlGNHGMqkKNn+D5B7RGN6xs
Jul 10 00:30:27.585817 sshd[4462]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 10 00:30:27.590292 systemd-logind[1522]: New session 26 of user core.
Jul 10 00:30:27.598518 systemd[1]: Started session-26.scope - Session 26 of User core.
Jul 10 00:30:27.744674 kubelet[2613]: E0710 00:30:27.744247 2613 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 10 00:30:27.744814 containerd[1542]: time="2025-07-10T00:30:27.744763268Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-z5jjd,Uid:0c34d8aa-ba9a-45a7-a9d4-04dd86c6e3c0,Namespace:kube-system,Attempt:0,}"
Jul 10 00:30:27.773434 containerd[1542]: time="2025-07-10T00:30:27.773351223Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 10 00:30:27.773543 containerd[1542]: time="2025-07-10T00:30:27.773427863Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 10 00:30:27.773836 containerd[1542]: time="2025-07-10T00:30:27.773804782Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 10 00:30:27.773927 containerd[1542]: time="2025-07-10T00:30:27.773901782Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 10 00:30:27.809471 containerd[1542]: time="2025-07-10T00:30:27.809432767Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-z5jjd,Uid:0c34d8aa-ba9a-45a7-a9d4-04dd86c6e3c0,Namespace:kube-system,Attempt:0,} returns sandbox id \"2f26731cf8c66da05332abe1042c5bb99282290715b7a2eeb59c2c6003054d33\""
Jul 10 00:30:27.810452 kubelet[2613]: E0710 00:30:27.810078 2613 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 10 00:30:27.813877 containerd[1542]: time="2025-07-10T00:30:27.813811360Z" level=info msg="CreateContainer within sandbox \"2f26731cf8c66da05332abe1042c5bb99282290715b7a2eeb59c2c6003054d33\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Jul 10 00:30:27.824266 containerd[1542]: time="2025-07-10T00:30:27.824192864Z" level=info msg="CreateContainer within sandbox \"2f26731cf8c66da05332abe1042c5bb99282290715b7a2eeb59c2c6003054d33\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"04d2f882fb092b41e6c868fb39a381d9f3a00a37caabd520b75e0ee3de8fb304\""
Jul 10 00:30:27.824865 containerd[1542]: time="2025-07-10T00:30:27.824764143Z" level=info msg="StartContainer for \"04d2f882fb092b41e6c868fb39a381d9f3a00a37caabd520b75e0ee3de8fb304\""
Jul 10 00:30:27.869456 containerd[1542]: time="2025-07-10T00:30:27.869411674Z" level=info msg="StartContainer for \"04d2f882fb092b41e6c868fb39a381d9f3a00a37caabd520b75e0ee3de8fb304\" returns successfully"
Jul 10 00:30:27.908818 kubelet[2613]: I0710 00:30:27.908232 2613 setters.go:600] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-07-10T00:30:27Z","lastTransitionTime":"2025-07-10T00:30:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Jul 10 00:30:27.941106 containerd[1542]: time="2025-07-10T00:30:27.940902922Z" level=info msg="shim disconnected" id=04d2f882fb092b41e6c868fb39a381d9f3a00a37caabd520b75e0ee3de8fb304 namespace=k8s.io
Jul 10 00:30:27.941106 containerd[1542]: time="2025-07-10T00:30:27.940953922Z" level=warning msg="cleaning up after shim disconnected" id=04d2f882fb092b41e6c868fb39a381d9f3a00a37caabd520b75e0ee3de8fb304 namespace=k8s.io
Jul 10 00:30:27.941106 containerd[1542]: time="2025-07-10T00:30:27.940963002Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 10 00:30:28.353419 kubelet[2613]: E0710 00:30:28.353375 2613 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 10 00:30:28.357069 containerd[1542]: time="2025-07-10T00:30:28.357017046Z" level=info msg="CreateContainer within sandbox \"2f26731cf8c66da05332abe1042c5bb99282290715b7a2eeb59c2c6003054d33\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Jul 10 00:30:28.369055 containerd[1542]: time="2025-07-10T00:30:28.368955948Z" level=info msg="CreateContainer within sandbox \"2f26731cf8c66da05332abe1042c5bb99282290715b7a2eeb59c2c6003054d33\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"532645738044e4fe57e7d273c3053709264409be4d91642d531359b7916248ce\""
Jul 10 00:30:28.370757 containerd[1542]: time="2025-07-10T00:30:28.369765147Z" level=info msg="StartContainer for \"532645738044e4fe57e7d273c3053709264409be4d91642d531359b7916248ce\""
Jul 10 00:30:28.422469 containerd[1542]: time="2025-07-10T00:30:28.422420067Z" level=info msg="StartContainer for \"532645738044e4fe57e7d273c3053709264409be4d91642d531359b7916248ce\" returns successfully"
Jul 10 00:30:28.449750 containerd[1542]: time="2025-07-10T00:30:28.449683465Z" level=info msg="shim disconnected" id=532645738044e4fe57e7d273c3053709264409be4d91642d531359b7916248ce namespace=k8s.io
Jul 10 00:30:28.449750 containerd[1542]: time="2025-07-10T00:30:28.449746105Z" level=warning msg="cleaning up after shim disconnected" id=532645738044e4fe57e7d273c3053709264409be4d91642d531359b7916248ce namespace=k8s.io
Jul 10 00:30:28.449750 containerd[1542]: time="2025-07-10T00:30:28.449758545Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 10 00:30:29.356282 kubelet[2613]: E0710 00:30:29.356187 2613 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 10 00:30:29.361187 containerd[1542]: time="2025-07-10T00:30:29.361141647Z" level=info msg="CreateContainer within sandbox \"2f26731cf8c66da05332abe1042c5bb99282290715b7a2eeb59c2c6003054d33\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Jul 10 00:30:29.389730 containerd[1542]: time="2025-07-10T00:30:29.389678925Z" level=info msg="CreateContainer within sandbox \"2f26731cf8c66da05332abe1042c5bb99282290715b7a2eeb59c2c6003054d33\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"b53e93ea1c9f508443dc56c972fabe6752dc78d39e81e5bb3e6c85dce8c23f4b\""
Jul 10 00:30:29.391497 containerd[1542]: time="2025-07-10T00:30:29.390473643Z" level=info msg="StartContainer for \"b53e93ea1c9f508443dc56c972fabe6752dc78d39e81e5bb3e6c85dce8c23f4b\""
Jul 10 00:30:29.443029 containerd[1542]: time="2025-07-10T00:30:29.442914525Z" level=info msg="StartContainer for \"b53e93ea1c9f508443dc56c972fabe6752dc78d39e81e5bb3e6c85dce8c23f4b\" returns successfully"
Jul 10 00:30:29.461831 containerd[1542]: time="2025-07-10T00:30:29.461775577Z" level=info msg="shim disconnected" id=b53e93ea1c9f508443dc56c972fabe6752dc78d39e81e5bb3e6c85dce8c23f4b namespace=k8s.io
Jul 10 00:30:29.461831 containerd[1542]: time="2025-07-10T00:30:29.461827657Z" level=warning msg="cleaning up after shim disconnected" id=b53e93ea1c9f508443dc56c972fabe6752dc78d39e81e5bb3e6c85dce8c23f4b namespace=k8s.io
Jul 10 00:30:29.461831 containerd[1542]: time="2025-07-10T00:30:29.461836337Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 10 00:30:29.621110 systemd[1]: run-containerd-runc-k8s.io-b53e93ea1c9f508443dc56c972fabe6752dc78d39e81e5bb3e6c85dce8c23f4b-runc.p4Nz4x.mount: Deactivated successfully.
Jul 10 00:30:29.621286 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b53e93ea1c9f508443dc56c972fabe6752dc78d39e81e5bb3e6c85dce8c23f4b-rootfs.mount: Deactivated successfully.
Jul 10 00:30:30.073544 kubelet[2613]: E0710 00:30:30.073447 2613 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 10 00:30:30.361890 kubelet[2613]: E0710 00:30:30.361778 2613 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 10 00:30:30.365634 containerd[1542]: time="2025-07-10T00:30:30.365594399Z" level=info msg="CreateContainer within sandbox \"2f26731cf8c66da05332abe1042c5bb99282290715b7a2eeb59c2c6003054d33\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Jul 10 00:30:30.382894 containerd[1542]: time="2025-07-10T00:30:30.382815654Z" level=info msg="CreateContainer within sandbox \"2f26731cf8c66da05332abe1042c5bb99282290715b7a2eeb59c2c6003054d33\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"dbd4df9e2211bf297e47d3a8671a0d3fe75038c7931233b1691ba687bb4aa3c3\""
Jul 10 00:30:30.384225 containerd[1542]: time="2025-07-10T00:30:30.384052252Z" level=info msg="StartContainer for \"dbd4df9e2211bf297e47d3a8671a0d3fe75038c7931233b1691ba687bb4aa3c3\""
Jul 10 00:30:30.425311 containerd[1542]: time="2025-07-10T00:30:30.425256752Z" level=info msg="StartContainer for \"dbd4df9e2211bf297e47d3a8671a0d3fe75038c7931233b1691ba687bb4aa3c3\" returns successfully"
Jul 10 00:30:30.446289 containerd[1542]: time="2025-07-10T00:30:30.446230441Z" level=info msg="shim disconnected" id=dbd4df9e2211bf297e47d3a8671a0d3fe75038c7931233b1691ba687bb4aa3c3 namespace=k8s.io
Jul 10 00:30:30.446289 containerd[1542]: time="2025-07-10T00:30:30.446286841Z" level=warning msg="cleaning up after shim disconnected" id=dbd4df9e2211bf297e47d3a8671a0d3fe75038c7931233b1691ba687bb4aa3c3 namespace=k8s.io
Jul 10 00:30:30.446289 containerd[1542]: time="2025-07-10T00:30:30.446296601Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 10 00:30:30.621154 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-dbd4df9e2211bf297e47d3a8671a0d3fe75038c7931233b1691ba687bb4aa3c3-rootfs.mount: Deactivated successfully.
Jul 10 00:30:31.145623 kubelet[2613]: E0710 00:30:31.145580 2613 kubelet.go:2902] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Jul 10 00:30:31.365445 kubelet[2613]: E0710 00:30:31.365415 2613 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 10 00:30:31.368543 containerd[1542]: time="2025-07-10T00:30:31.368505743Z" level=info msg="CreateContainer within sandbox \"2f26731cf8c66da05332abe1042c5bb99282290715b7a2eeb59c2c6003054d33\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Jul 10 00:30:31.428725 containerd[1542]: time="2025-07-10T00:30:31.428348977Z" level=info msg="CreateContainer within sandbox \"2f26731cf8c66da05332abe1042c5bb99282290715b7a2eeb59c2c6003054d33\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"c49cb86bf23dae4d1c64423a12a80726cd5b4776e228582166ecdf9168d1ad0e\""
Jul 10 00:30:31.428952 containerd[1542]: time="2025-07-10T00:30:31.428924656Z" level=info msg="StartContainer for \"c49cb86bf23dae4d1c64423a12a80726cd5b4776e228582166ecdf9168d1ad0e\""
Jul 10 00:30:31.485603 containerd[1542]: time="2025-07-10T00:30:31.485470255Z" level=info msg="StartContainer for \"c49cb86bf23dae4d1c64423a12a80726cd5b4776e228582166ecdf9168d1ad0e\" returns successfully"
Jul 10 00:30:31.754218 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce))
Jul 10 00:30:32.369486 kubelet[2613]: E0710 00:30:32.369436 2613 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 10 00:30:33.746243 kubelet[2613]: E0710 00:30:33.745150 2613 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 10 00:30:34.042383 systemd[1]: run-containerd-runc-k8s.io-c49cb86bf23dae4d1c64423a12a80726cd5b4776e228582166ecdf9168d1ad0e-runc.4QzjsJ.mount: Deactivated successfully.
Jul 10 00:30:34.705647 systemd-networkd[1226]: lxc_health: Link UP
Jul 10 00:30:34.716362 systemd-networkd[1226]: lxc_health: Gained carrier
Jul 10 00:30:35.746677 kubelet[2613]: E0710 00:30:35.745894 2613 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 10 00:30:35.771931 kubelet[2613]: I0710 00:30:35.771876 2613 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-z5jjd" podStartSLOduration=8.771858284 podStartE2EDuration="8.771858284s" podCreationTimestamp="2025-07-10 00:30:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-10 00:30:32.386623695 +0000 UTC m=+86.390892677" watchObservedRunningTime="2025-07-10 00:30:35.771858284 +0000 UTC m=+89.776127146"
Jul 10 00:30:36.381119 kubelet[2613]: E0710 00:30:36.380944 2613 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 10 00:30:36.426588 systemd-networkd[1226]: lxc_health: Gained IPv6LL
Jul 10 00:30:37.075230 kubelet[2613]: E0710 00:30:37.075143 2613 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 10 00:30:37.383321 kubelet[2613]: E0710 00:30:37.382973 2613 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 10 00:30:40.398571 systemd[1]: run-containerd-runc-k8s.io-c49cb86bf23dae4d1c64423a12a80726cd5b4776e228582166ecdf9168d1ad0e-runc.l4B4Uz.mount: Deactivated successfully.
Jul 10 00:30:40.446931 sshd[4462]: pam_unix(sshd:session): session closed for user core
Jul 10 00:30:40.449779 systemd[1]: sshd@25-10.0.0.64:22-10.0.0.1:54736.service: Deactivated successfully.
Jul 10 00:30:40.452470 systemd-logind[1522]: Session 26 logged out. Waiting for processes to exit.
Jul 10 00:30:40.453554 systemd[1]: session-26.scope: Deactivated successfully.
Jul 10 00:30:40.454976 systemd-logind[1522]: Removed session 26.