Jul 10 00:40:36.905673 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Jul 10 00:40:36.905694 kernel: Linux version 6.6.95-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Wed Jul 9 22:54:34 -00 2025
Jul 10 00:40:36.905704 kernel: KASLR enabled
Jul 10 00:40:36.905710 kernel: efi: EFI v2.7 by EDK II
Jul 10 00:40:36.905716 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdba86018 ACPI 2.0=0xd9710018 RNG=0xd971e498 MEMRESERVE=0xd9b43d18
Jul 10 00:40:36.905721 kernel: random: crng init done
Jul 10 00:40:36.905728 kernel: ACPI: Early table checksum verification disabled
Jul 10 00:40:36.905734 kernel: ACPI: RSDP 0x00000000D9710018 000024 (v02 BOCHS )
Jul 10 00:40:36.905740 kernel: ACPI: XSDT 0x00000000D971FE98 000064 (v01 BOCHS BXPC 00000001 01000013)
Jul 10 00:40:36.905748 kernel: ACPI: FACP 0x00000000D971FA98 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Jul 10 00:40:36.905754 kernel: ACPI: DSDT 0x00000000D9717518 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jul 10 00:40:36.905760 kernel: ACPI: APIC 0x00000000D971FC18 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Jul 10 00:40:36.905766 kernel: ACPI: PPTT 0x00000000D971D898 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jul 10 00:40:36.905772 kernel: ACPI: GTDT 0x00000000D971E818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jul 10 00:40:36.905779 kernel: ACPI: MCFG 0x00000000D971E918 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 10 00:40:36.905787 kernel: ACPI: SPCR 0x00000000D971FF98 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jul 10 00:40:36.905793 kernel: ACPI: DBG2 0x00000000D971E418 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Jul 10 00:40:36.905800 kernel: ACPI: IORT 0x00000000D971E718 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jul 10 00:40:36.905806 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
Jul 10 00:40:36.905812 kernel: NUMA: Failed to initialise from firmware
Jul 10 00:40:36.905819 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
Jul 10 00:40:36.905825 kernel: NUMA: NODE_DATA [mem 0xdc95a800-0xdc95ffff]
Jul 10 00:40:36.905832 kernel: Zone ranges:
Jul 10 00:40:36.905838 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff]
Jul 10 00:40:36.905844 kernel: DMA32 empty
Jul 10 00:40:36.905852 kernel: Normal empty
Jul 10 00:40:36.905858 kernel: Movable zone start for each node
Jul 10 00:40:36.905865 kernel: Early memory node ranges
Jul 10 00:40:36.905871 kernel: node 0: [mem 0x0000000040000000-0x00000000d976ffff]
Jul 10 00:40:36.905878 kernel: node 0: [mem 0x00000000d9770000-0x00000000d9b3ffff]
Jul 10 00:40:36.905884 kernel: node 0: [mem 0x00000000d9b40000-0x00000000dce1ffff]
Jul 10 00:40:36.905890 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff]
Jul 10 00:40:36.905897 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff]
Jul 10 00:40:36.905903 kernel: node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff]
Jul 10 00:40:36.905909 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff]
Jul 10 00:40:36.905915 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
Jul 10 00:40:36.905922 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
Jul 10 00:40:36.905929 kernel: psci: probing for conduit method from ACPI.
Jul 10 00:40:36.905936 kernel: psci: PSCIv1.1 detected in firmware.
Jul 10 00:40:36.905942 kernel: psci: Using standard PSCI v0.2 function IDs
Jul 10 00:40:36.905951 kernel: psci: Trusted OS migration not required
Jul 10 00:40:36.905958 kernel: psci: SMC Calling Convention v1.1
Jul 10 00:40:36.905965 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Jul 10 00:40:36.905973 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976
Jul 10 00:40:36.905980 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096
Jul 10 00:40:36.905987 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
Jul 10 00:40:36.905993 kernel: Detected PIPT I-cache on CPU0
Jul 10 00:40:36.906000 kernel: CPU features: detected: GIC system register CPU interface
Jul 10 00:40:36.906007 kernel: CPU features: detected: Hardware dirty bit management
Jul 10 00:40:36.906013 kernel: CPU features: detected: Spectre-v4
Jul 10 00:40:36.906020 kernel: CPU features: detected: Spectre-BHB
Jul 10 00:40:36.906026 kernel: CPU features: kernel page table isolation forced ON by KASLR
Jul 10 00:40:36.906033 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Jul 10 00:40:36.906041 kernel: CPU features: detected: ARM erratum 1418040
Jul 10 00:40:36.906048 kernel: CPU features: detected: SSBS not fully self-synchronizing
Jul 10 00:40:36.906055 kernel: alternatives: applying boot alternatives
Jul 10 00:40:36.906063 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=2fac453dfc912247542078b0007b053dde4e47cc6ef808508492c36b6016a78f
Jul 10 00:40:36.906070 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jul 10 00:40:36.906076 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jul 10 00:40:36.906083 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jul 10 00:40:36.906089 kernel: Fallback order for Node 0: 0
Jul 10 00:40:36.906096 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024
Jul 10 00:40:36.906103 kernel: Policy zone: DMA
Jul 10 00:40:36.906109 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jul 10 00:40:36.906132 kernel: software IO TLB: area num 4.
Jul 10 00:40:36.906139 kernel: software IO TLB: mapped [mem 0x00000000d2e00000-0x00000000d6e00000] (64MB)
Jul 10 00:40:36.906146 kernel: Memory: 2386412K/2572288K available (10304K kernel code, 2186K rwdata, 8108K rodata, 39424K init, 897K bss, 185876K reserved, 0K cma-reserved)
Jul 10 00:40:36.906153 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Jul 10 00:40:36.906159 kernel: rcu: Preemptible hierarchical RCU implementation.
Jul 10 00:40:36.906167 kernel: rcu: RCU event tracing is enabled.
Jul 10 00:40:36.906174 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Jul 10 00:40:36.906180 kernel: Trampoline variant of Tasks RCU enabled.
Jul 10 00:40:36.906187 kernel: Tracing variant of Tasks RCU enabled.
Jul 10 00:40:36.906194 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jul 10 00:40:36.906201 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Jul 10 00:40:36.906208 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Jul 10 00:40:36.906216 kernel: GICv3: 256 SPIs implemented
Jul 10 00:40:36.906222 kernel: GICv3: 0 Extended SPIs implemented
Jul 10 00:40:36.906229 kernel: Root IRQ handler: gic_handle_irq
Jul 10 00:40:36.906236 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Jul 10 00:40:36.906242 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Jul 10 00:40:36.906249 kernel: ITS [mem 0x08080000-0x0809ffff]
Jul 10 00:40:36.906256 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400c0000 (indirect, esz 8, psz 64K, shr 1)
Jul 10 00:40:36.906263 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400d0000 (flat, esz 8, psz 64K, shr 1)
Jul 10 00:40:36.906269 kernel: GICv3: using LPI property table @0x00000000400f0000
Jul 10 00:40:36.906276 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040100000
Jul 10 00:40:36.906291 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jul 10 00:40:36.906300 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 10 00:40:36.906307 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Jul 10 00:40:36.906314 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Jul 10 00:40:36.906321 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Jul 10 00:40:36.906327 kernel: arm-pv: using stolen time PV
Jul 10 00:40:36.906334 kernel: Console: colour dummy device 80x25
Jul 10 00:40:36.906341 kernel: ACPI: Core revision 20230628
Jul 10 00:40:36.906348 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Jul 10 00:40:36.906355 kernel: pid_max: default: 32768 minimum: 301
Jul 10 00:40:36.906362 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jul 10 00:40:36.906371 kernel: landlock: Up and running.
Jul 10 00:40:36.906378 kernel: SELinux: Initializing.
Jul 10 00:40:36.906385 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jul 10 00:40:36.906392 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jul 10 00:40:36.906399 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jul 10 00:40:36.906406 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jul 10 00:40:36.906412 kernel: rcu: Hierarchical SRCU implementation.
Jul 10 00:40:36.906419 kernel: rcu: Max phase no-delay instances is 400.
Jul 10 00:40:36.906426 kernel: Platform MSI: ITS@0x8080000 domain created
Jul 10 00:40:36.906435 kernel: PCI/MSI: ITS@0x8080000 domain created
Jul 10 00:40:36.906442 kernel: Remapping and enabling EFI services.
Jul 10 00:40:36.906448 kernel: smp: Bringing up secondary CPUs ...
Jul 10 00:40:36.906455 kernel: Detected PIPT I-cache on CPU1
Jul 10 00:40:36.906463 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Jul 10 00:40:36.906478 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040110000
Jul 10 00:40:36.906485 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 10 00:40:36.906492 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Jul 10 00:40:36.906499 kernel: Detected PIPT I-cache on CPU2
Jul 10 00:40:36.906506 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
Jul 10 00:40:36.906515 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040120000
Jul 10 00:40:36.906522 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 10 00:40:36.906534 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
Jul 10 00:40:36.906543 kernel: Detected PIPT I-cache on CPU3
Jul 10 00:40:36.906550 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
Jul 10 00:40:36.906557 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040130000
Jul 10 00:40:36.906564 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 10 00:40:36.906572 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
Jul 10 00:40:36.906579 kernel: smp: Brought up 1 node, 4 CPUs
Jul 10 00:40:36.906588 kernel: SMP: Total of 4 processors activated.
Jul 10 00:40:36.906595 kernel: CPU features: detected: 32-bit EL0 Support
Jul 10 00:40:36.906602 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Jul 10 00:40:36.906609 kernel: CPU features: detected: Common not Private translations
Jul 10 00:40:36.906617 kernel: CPU features: detected: CRC32 instructions
Jul 10 00:40:36.906624 kernel: CPU features: detected: Enhanced Virtualization Traps
Jul 10 00:40:36.906631 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Jul 10 00:40:36.906639 kernel: CPU features: detected: LSE atomic instructions
Jul 10 00:40:36.906647 kernel: CPU features: detected: Privileged Access Never
Jul 10 00:40:36.906655 kernel: CPU features: detected: RAS Extension Support
Jul 10 00:40:36.906662 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Jul 10 00:40:36.906669 kernel: CPU: All CPU(s) started at EL1
Jul 10 00:40:36.906677 kernel: alternatives: applying system-wide alternatives
Jul 10 00:40:36.906684 kernel: devtmpfs: initialized
Jul 10 00:40:36.906691 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jul 10 00:40:36.906699 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Jul 10 00:40:36.906706 kernel: pinctrl core: initialized pinctrl subsystem
Jul 10 00:40:36.906714 kernel: SMBIOS 3.0.0 present.
Jul 10 00:40:36.906722 kernel: DMI: QEMU KVM Virtual Machine, BIOS edk2-20230524-3.fc38 05/24/2023
Jul 10 00:40:36.906729 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jul 10 00:40:36.906736 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Jul 10 00:40:36.906744 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Jul 10 00:40:36.906751 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Jul 10 00:40:36.906758 kernel: audit: initializing netlink subsys (disabled)
Jul 10 00:40:36.906766 kernel: audit: type=2000 audit(0.024:1): state=initialized audit_enabled=0 res=1
Jul 10 00:40:36.906773 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jul 10 00:40:36.906782 kernel: cpuidle: using governor menu
Jul 10 00:40:36.906789 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Jul 10 00:40:36.906796 kernel: ASID allocator initialised with 32768 entries
Jul 10 00:40:36.906803 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jul 10 00:40:36.906811 kernel: Serial: AMBA PL011 UART driver
Jul 10 00:40:36.906818 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Jul 10 00:40:36.906825 kernel: Modules: 0 pages in range for non-PLT usage
Jul 10 00:40:36.906832 kernel: Modules: 509008 pages in range for PLT usage
Jul 10 00:40:36.906839 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jul 10 00:40:36.906848 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Jul 10 00:40:36.906856 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Jul 10 00:40:36.906863 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Jul 10 00:40:36.906871 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jul 10 00:40:36.906878 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Jul 10 00:40:36.906885 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Jul 10 00:40:36.906892 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Jul 10 00:40:36.906899 kernel: ACPI: Added _OSI(Module Device)
Jul 10 00:40:36.906907 kernel: ACPI: Added _OSI(Processor Device)
Jul 10 00:40:36.906915 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jul 10 00:40:36.906923 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jul 10 00:40:36.906930 kernel: ACPI: Interpreter enabled
Jul 10 00:40:36.906937 kernel: ACPI: Using GIC for interrupt routing
Jul 10 00:40:36.906944 kernel: ACPI: MCFG table detected, 1 entries
Jul 10 00:40:36.906952 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Jul 10 00:40:36.906959 kernel: printk: console [ttyAMA0] enabled
Jul 10 00:40:36.906966 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jul 10 00:40:36.907101 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jul 10 00:40:36.907175 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Jul 10 00:40:36.907238 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Jul 10 00:40:36.907309 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Jul 10 00:40:36.907372 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Jul 10 00:40:36.907381 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Jul 10 00:40:36.907389 kernel: PCI host bridge to bus 0000:00
Jul 10 00:40:36.907457 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Jul 10 00:40:36.907531 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Jul 10 00:40:36.907591 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Jul 10 00:40:36.907650 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jul 10 00:40:36.907730 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000
Jul 10 00:40:36.907807 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00
Jul 10 00:40:36.907875 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f]
Jul 10 00:40:36.907947 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff]
Jul 10 00:40:36.908015 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref]
Jul 10 00:40:36.908090 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref]
Jul 10 00:40:36.908157 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff]
Jul 10 00:40:36.908223 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f]
Jul 10 00:40:36.908289 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Jul 10 00:40:36.908350 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Jul 10 00:40:36.908412 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Jul 10 00:40:36.908422 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Jul 10 00:40:36.908430 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Jul 10 00:40:36.908437 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Jul 10 00:40:36.908445 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Jul 10 00:40:36.908452 kernel: iommu: Default domain type: Translated
Jul 10 00:40:36.908460 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Jul 10 00:40:36.908488 kernel: efivars: Registered efivars operations
Jul 10 00:40:36.908497 kernel: vgaarb: loaded
Jul 10 00:40:36.908507 kernel: clocksource: Switched to clocksource arch_sys_counter
Jul 10 00:40:36.908514 kernel: VFS: Disk quotas dquot_6.6.0
Jul 10 00:40:36.908522 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jul 10 00:40:36.908529 kernel: pnp: PnP ACPI init
Jul 10 00:40:36.908604 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Jul 10 00:40:36.908615 kernel: pnp: PnP ACPI: found 1 devices
Jul 10 00:40:36.908623 kernel: NET: Registered PF_INET protocol family
Jul 10 00:40:36.908630 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jul 10 00:40:36.908640 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jul 10 00:40:36.908648 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jul 10 00:40:36.908655 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jul 10 00:40:36.908663 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jul 10 00:40:36.908671 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jul 10 00:40:36.908678 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jul 10 00:40:36.908686 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jul 10 00:40:36.908693 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jul 10 00:40:36.908701 kernel: PCI: CLS 0 bytes, default 64
Jul 10 00:40:36.908709 kernel: kvm [1]: HYP mode not available
Jul 10 00:40:36.908717 kernel: Initialise system trusted keyrings
Jul 10 00:40:36.908724 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jul 10 00:40:36.908732 kernel: Key type asymmetric registered
Jul 10 00:40:36.908739 kernel: Asymmetric key parser 'x509' registered
Jul 10 00:40:36.908747 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Jul 10 00:40:36.908754 kernel: io scheduler mq-deadline registered
Jul 10 00:40:36.908762 kernel: io scheduler kyber registered
Jul 10 00:40:36.908769 kernel: io scheduler bfq registered
Jul 10 00:40:36.908778 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Jul 10 00:40:36.908786 kernel: ACPI: button: Power Button [PWRB]
Jul 10 00:40:36.908794 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Jul 10 00:40:36.908861 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
Jul 10 00:40:36.908871 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jul 10 00:40:36.908879 kernel: thunder_xcv, ver 1.0
Jul 10 00:40:36.908886 kernel: thunder_bgx, ver 1.0
Jul 10 00:40:36.908894 kernel: nicpf, ver 1.0
Jul 10 00:40:36.908901 kernel: nicvf, ver 1.0
Jul 10 00:40:36.908982 kernel: rtc-efi rtc-efi.0: registered as rtc0
Jul 10 00:40:36.909047 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-07-10T00:40:36 UTC (1752108036)
Jul 10 00:40:36.909060 kernel: hid: raw HID events driver (C) Jiri Kosina
Jul 10 00:40:36.909068 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available
Jul 10 00:40:36.909076 kernel: watchdog: Delayed init of the lockup detector failed: -19
Jul 10 00:40:36.909083 kernel: watchdog: Hard watchdog permanently disabled
Jul 10 00:40:36.909091 kernel: NET: Registered PF_INET6 protocol family
Jul 10 00:40:36.909098 kernel: Segment Routing with IPv6
Jul 10 00:40:36.909109 kernel: In-situ OAM (IOAM) with IPv6
Jul 10 00:40:36.909117 kernel: NET: Registered PF_PACKET protocol family
Jul 10 00:40:36.909124 kernel: Key type dns_resolver registered
Jul 10 00:40:36.909131 kernel: registered taskstats version 1
Jul 10 00:40:36.909139 kernel: Loading compiled-in X.509 certificates
Jul 10 00:40:36.909147 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.95-flatcar: 9cbc45ab00feb4acb0fa362a962909c99fb6ef52'
Jul 10 00:40:36.909154 kernel: Key type .fscrypt registered
Jul 10 00:40:36.909161 kernel: Key type fscrypt-provisioning registered
Jul 10 00:40:36.909174 kernel: ima: No TPM chip found, activating TPM-bypass!
Jul 10 00:40:36.909183 kernel: ima: Allocated hash algorithm: sha1
Jul 10 00:40:36.909190 kernel: ima: No architecture policies found
Jul 10 00:40:36.909198 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Jul 10 00:40:36.909205 kernel: clk: Disabling unused clocks
Jul 10 00:40:36.909215 kernel: Freeing unused kernel memory: 39424K
Jul 10 00:40:36.909224 kernel: Run /init as init process
Jul 10 00:40:36.909234 kernel: with arguments:
Jul 10 00:40:36.909241 kernel: /init
Jul 10 00:40:36.909248 kernel: with environment:
Jul 10 00:40:36.909257 kernel: HOME=/
Jul 10 00:40:36.909265 kernel: TERM=linux
Jul 10 00:40:36.909272 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jul 10 00:40:36.909286 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jul 10 00:40:36.909296 systemd[1]: Detected virtualization kvm.
Jul 10 00:40:36.909305 systemd[1]: Detected architecture arm64.
Jul 10 00:40:36.909312 systemd[1]: Running in initrd.
Jul 10 00:40:36.909320 systemd[1]: No hostname configured, using default hostname.
Jul 10 00:40:36.909330 systemd[1]: Hostname set to .
Jul 10 00:40:36.909340 systemd[1]: Initializing machine ID from VM UUID.
Jul 10 00:40:36.909348 systemd[1]: Queued start job for default target initrd.target.
Jul 10 00:40:36.909356 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 10 00:40:36.909364 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 10 00:40:36.909373 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jul 10 00:40:36.909381 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jul 10 00:40:36.909389 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jul 10 00:40:36.909399 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jul 10 00:40:36.909409 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jul 10 00:40:36.909418 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jul 10 00:40:36.909426 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 10 00:40:36.909434 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jul 10 00:40:36.909444 systemd[1]: Reached target paths.target - Path Units.
Jul 10 00:40:36.909454 systemd[1]: Reached target slices.target - Slice Units.
Jul 10 00:40:36.909462 systemd[1]: Reached target swap.target - Swaps.
Jul 10 00:40:36.909486 systemd[1]: Reached target timers.target - Timer Units.
Jul 10 00:40:36.909495 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jul 10 00:40:36.909503 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jul 10 00:40:36.909511 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jul 10 00:40:36.909519 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jul 10 00:40:36.909527 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jul 10 00:40:36.909535 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jul 10 00:40:36.909545 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 10 00:40:36.909553 systemd[1]: Reached target sockets.target - Socket Units.
Jul 10 00:40:36.909560 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jul 10 00:40:36.909568 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jul 10 00:40:36.909576 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jul 10 00:40:36.909584 systemd[1]: Starting systemd-fsck-usr.service...
Jul 10 00:40:36.909591 systemd[1]: Starting systemd-journald.service - Journal Service...
Jul 10 00:40:36.909599 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jul 10 00:40:36.909607 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 10 00:40:36.909616 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jul 10 00:40:36.909624 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 10 00:40:36.909632 systemd[1]: Finished systemd-fsck-usr.service.
Jul 10 00:40:36.909641 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jul 10 00:40:36.909650 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 10 00:40:36.909679 systemd-journald[239]: Collecting audit messages is disabled.
Jul 10 00:40:36.909698 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jul 10 00:40:36.909707 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jul 10 00:40:36.909717 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jul 10 00:40:36.909725 systemd-journald[239]: Journal started
Jul 10 00:40:36.909744 systemd-journald[239]: Runtime Journal (/run/log/journal/2c1da40fcc0a48bc91dd8219cab31643) is 5.9M, max 47.3M, 41.4M free.
Jul 10 00:40:36.892967 systemd-modules-load[240]: Inserted module 'overlay'
Jul 10 00:40:36.913256 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jul 10 00:40:36.913295 kernel: Bridge firewalling registered
Jul 10 00:40:36.914714 systemd-modules-load[240]: Inserted module 'br_netfilter'
Jul 10 00:40:36.917081 systemd[1]: Started systemd-journald.service - Journal Service.
Jul 10 00:40:36.920836 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jul 10 00:40:36.929643 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jul 10 00:40:36.931314 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jul 10 00:40:36.933551 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 10 00:40:36.934983 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 10 00:40:36.938803 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jul 10 00:40:36.940452 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jul 10 00:40:36.943481 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jul 10 00:40:36.947498 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jul 10 00:40:36.955533 dracut-cmdline[275]: dracut-dracut-053
Jul 10 00:40:36.958155 dracut-cmdline[275]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=2fac453dfc912247542078b0007b053dde4e47cc6ef808508492c36b6016a78f
Jul 10 00:40:36.980341 systemd-resolved[282]: Positive Trust Anchors:
Jul 10 00:40:36.980359 systemd-resolved[282]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jul 10 00:40:36.980390 systemd-resolved[282]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jul 10 00:40:36.985499 systemd-resolved[282]: Defaulting to hostname 'linux'.
Jul 10 00:40:36.986525 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jul 10 00:40:36.989806 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jul 10 00:40:37.028502 kernel: SCSI subsystem initialized
Jul 10 00:40:37.032490 kernel: Loading iSCSI transport class v2.0-870.
Jul 10 00:40:37.040494 kernel: iscsi: registered transport (tcp)
Jul 10 00:40:37.053859 kernel: iscsi: registered transport (qla4xxx)
Jul 10 00:40:37.053916 kernel: QLogic iSCSI HBA Driver
Jul 10 00:40:37.106513 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jul 10 00:40:37.114621 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jul 10 00:40:37.135329 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jul 10 00:40:37.135396 kernel: device-mapper: uevent: version 1.0.3
Jul 10 00:40:37.137010 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Jul 10 00:40:37.184506 kernel: raid6: neonx8 gen() 15780 MB/s
Jul 10 00:40:37.201491 kernel: raid6: neonx4 gen() 15618 MB/s
Jul 10 00:40:37.218490 kernel: raid6: neonx2 gen() 13250 MB/s
Jul 10 00:40:37.235487 kernel: raid6: neonx1 gen() 10492 MB/s
Jul 10 00:40:37.252488 kernel: raid6: int64x8 gen() 6956 MB/s
Jul 10 00:40:37.269489 kernel: raid6: int64x4 gen() 7344 MB/s
Jul 10 00:40:37.286488 kernel: raid6: int64x2 gen() 6130 MB/s
Jul 10 00:40:37.303567 kernel: raid6: int64x1 gen() 5058 MB/s
Jul 10 00:40:37.303605 kernel: raid6: using algorithm neonx8 gen() 15780 MB/s
Jul 10 00:40:37.321542 kernel: raid6: .... xor() 11926 MB/s, rmw enabled
Jul 10 00:40:37.321557 kernel: raid6: using neon recovery algorithm
Jul 10 00:40:37.326488 kernel: xor: measuring software checksum speed
Jul 10 00:40:37.327755 kernel: 8regs : 17552 MB/sec
Jul 10 00:40:37.327768 kernel: 32regs : 19650 MB/sec
Jul 10 00:40:37.329051 kernel: arm64_neon : 26839 MB/sec
Jul 10 00:40:37.329063 kernel: xor: using function: arm64_neon (26839 MB/sec)
Jul 10 00:40:37.380502 kernel: Btrfs loaded, zoned=no, fsverity=no
Jul 10 00:40:37.391805 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jul 10 00:40:37.407659 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 10 00:40:37.419645 systemd-udevd[462]: Using default interface naming scheme 'v255'.
Jul 10 00:40:37.423018 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 10 00:40:37.425903 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jul 10 00:40:37.445249 dracut-pre-trigger[469]: rd.md=0: removing MD RAID activation
Jul 10 00:40:37.475165 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jul 10 00:40:37.496672 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jul 10 00:40:37.536874 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 10 00:40:37.546880 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jul 10 00:40:37.558970 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jul 10 00:40:37.560682 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jul 10 00:40:37.563602 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 10 00:40:37.565616 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jul 10 00:40:37.575672 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jul 10 00:40:37.584877 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues
Jul 10 00:40:37.586864 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Jul 10 00:40:37.588479 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jul 10 00:40:37.601620 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jul 10 00:40:37.601644 kernel: GPT:9289727 != 19775487
Jul 10 00:40:37.601654 kernel: GPT:Alternate GPT header not at the end of the disk.
Jul 10 00:40:37.601663 kernel: GPT:9289727 != 19775487
Jul 10 00:40:37.601671 kernel: GPT: Use GNU Parted to correct GPT errors.
Jul 10 00:40:37.601683 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jul 10 00:40:37.614907 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Jul 10 00:40:37.619001 kernel: BTRFS: device fsid e18a5201-bc0c-484b-ba1b-be3c0a720c32 devid 1 transid 40 /dev/vda3 scanned by (udev-worker) (528)
Jul 10 00:40:37.619026 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by (udev-worker) (516)
Jul 10 00:40:37.626669 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Jul 10 00:40:37.630907 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Jul 10 00:40:37.632060 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Jul 10 00:40:37.638606 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jul 10 00:40:37.649618 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jul 10 00:40:37.650648 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jul 10 00:40:37.650714 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 10 00:40:37.653759 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jul 10 00:40:37.654761 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jul 10 00:40:37.654822 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jul 10 00:40:37.656774 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jul 10 00:40:37.659418 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 10 00:40:37.671943 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 10 00:40:37.689654 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jul 10 00:40:37.706490 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 10 00:40:37.772576 disk-uuid[551]: Primary Header is updated.
Jul 10 00:40:37.772576 disk-uuid[551]: Secondary Entries is updated.
Jul 10 00:40:37.772576 disk-uuid[551]: Secondary Header is updated.
Jul 10 00:40:37.776493 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jul 10 00:40:38.786503 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jul 10 00:40:38.787000 disk-uuid[565]: The operation has completed successfully.
Jul 10 00:40:38.809213 systemd[1]: disk-uuid.service: Deactivated successfully.
Jul 10 00:40:38.809351 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jul 10 00:40:38.832667 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jul 10 00:40:38.835555 sh[578]: Success
Jul 10 00:40:38.847496 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Jul 10 00:40:38.877865 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jul 10 00:40:38.887906 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jul 10 00:40:38.889849 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jul 10 00:40:38.900369 kernel: BTRFS info (device dm-0): first mount of filesystem e18a5201-bc0c-484b-ba1b-be3c0a720c32
Jul 10 00:40:38.900426 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Jul 10 00:40:38.900457 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Jul 10 00:40:38.901524 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jul 10 00:40:38.902237 kernel: BTRFS info (device dm-0): using free space tree
Jul 10 00:40:38.906217 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jul 10 00:40:38.907570 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jul 10 00:40:38.916668 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jul 10 00:40:38.918658 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jul 10 00:40:38.926058 kernel: BTRFS info (device vda6): first mount of filesystem 8ce7827a-be35-4e5a-9c5c-f9bfd6370ac0
Jul 10 00:40:38.926105 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Jul 10 00:40:38.926117 kernel: BTRFS info (device vda6): using free space tree
Jul 10 00:40:38.928644 kernel: BTRFS info (device vda6): auto enabling async discard
Jul 10 00:40:38.937854 systemd[1]: mnt-oem.mount: Deactivated successfully.
Jul 10 00:40:38.940495 kernel: BTRFS info (device vda6): last unmount of filesystem 8ce7827a-be35-4e5a-9c5c-f9bfd6370ac0
Jul 10 00:40:38.944987 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jul 10 00:40:38.950678 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jul 10 00:40:39.021522 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jul 10 00:40:39.039705 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jul 10 00:40:39.046325 ignition[672]: Ignition 2.19.0
Jul 10 00:40:39.046336 ignition[672]: Stage: fetch-offline
Jul 10 00:40:39.046373 ignition[672]: no configs at "/usr/lib/ignition/base.d"
Jul 10 00:40:39.046382 ignition[672]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 10 00:40:39.046545 ignition[672]: parsed url from cmdline: ""
Jul 10 00:40:39.046548 ignition[672]: no config URL provided
Jul 10 00:40:39.046553 ignition[672]: reading system config file "/usr/lib/ignition/user.ign"
Jul 10 00:40:39.046560 ignition[672]: no config at "/usr/lib/ignition/user.ign"
Jul 10 00:40:39.046583 ignition[672]: op(1): [started]  loading QEMU firmware config module
Jul 10 00:40:39.046590 ignition[672]: op(1): executing: "modprobe" "qemu_fw_cfg"
Jul 10 00:40:39.057410 ignition[672]: op(1): [finished] loading QEMU firmware config module
Jul 10 00:40:39.073085 systemd-networkd[769]: lo: Link UP
Jul 10 00:40:39.073098 systemd-networkd[769]: lo: Gained carrier
Jul 10 00:40:39.073830 systemd-networkd[769]: Enumeration completed
Jul 10 00:40:39.073929 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jul 10 00:40:39.074238 systemd-networkd[769]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 10 00:40:39.074241 systemd-networkd[769]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jul 10 00:40:39.075202 systemd-networkd[769]: eth0: Link UP
Jul 10 00:40:39.075205 systemd-networkd[769]: eth0: Gained carrier
Jul 10 00:40:39.075211 systemd-networkd[769]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 10 00:40:39.075509 systemd[1]: Reached target network.target - Network.
Jul 10 00:40:39.093525 systemd-networkd[769]: eth0: DHCPv4 address 10.0.0.128/16, gateway 10.0.0.1 acquired from 10.0.0.1
Jul 10 00:40:39.104914 ignition[672]: parsing config with SHA512: 278caa6a2c2e672e5ebc2debe6c106e1665ac84fac32b96d79dafd4f245ef8e446794e5af708f03d11ce1fc22843fd4bb8dfed087b8afb2b9c1ddef0e6a142c6
Jul 10 00:40:39.108967 unknown[672]: fetched base config from "system"
Jul 10 00:40:39.108976 unknown[672]: fetched user config from "qemu"
Jul 10 00:40:39.110976 ignition[672]: fetch-offline: fetch-offline passed
Jul 10 00:40:39.111077 ignition[672]: Ignition finished successfully
Jul 10 00:40:39.112584 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jul 10 00:40:39.115872 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Jul 10 00:40:39.124645 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jul 10 00:40:39.136422 ignition[775]: Ignition 2.19.0
Jul 10 00:40:39.136435 ignition[775]: Stage: kargs
Jul 10 00:40:39.136630 ignition[775]: no configs at "/usr/lib/ignition/base.d"
Jul 10 00:40:39.136646 ignition[775]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 10 00:40:39.140100 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jul 10 00:40:39.137571 ignition[775]: kargs: kargs passed
Jul 10 00:40:39.137617 ignition[775]: Ignition finished successfully
Jul 10 00:40:39.151628 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jul 10 00:40:39.161808 ignition[784]: Ignition 2.19.0
Jul 10 00:40:39.161818 ignition[784]: Stage: disks
Jul 10 00:40:39.162004 ignition[784]: no configs at "/usr/lib/ignition/base.d"
Jul 10 00:40:39.162014 ignition[784]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 10 00:40:39.162942 ignition[784]: disks: disks passed
Jul 10 00:40:39.164897 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jul 10 00:40:39.162994 ignition[784]: Ignition finished successfully
Jul 10 00:40:39.166290 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jul 10 00:40:39.167650 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jul 10 00:40:39.169592 systemd[1]: Reached target local-fs.target - Local File Systems.
Jul 10 00:40:39.171122 systemd[1]: Reached target sysinit.target - System Initialization.
Jul 10 00:40:39.172987 systemd[1]: Reached target basic.target - Basic System.
Jul 10 00:40:39.187696 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jul 10 00:40:39.198535 systemd-fsck[794]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Jul 10 00:40:39.202800 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jul 10 00:40:39.212037 systemd[1]: Mounting sysroot.mount - /sysroot...
Jul 10 00:40:39.256497 kernel: EXT4-fs (vda9): mounted filesystem c566fdd5-af6f-4008-858c-a2aed765f9b4 r/w with ordered data mode. Quota mode: none.
Jul 10 00:40:39.256656 systemd[1]: Mounted sysroot.mount - /sysroot.
Jul 10 00:40:39.257979 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jul 10 00:40:39.273590 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jul 10 00:40:39.275365 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jul 10 00:40:39.276731 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Jul 10 00:40:39.276774 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jul 10 00:40:39.283327 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by mount (802)
Jul 10 00:40:39.276797 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jul 10 00:40:39.284221 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jul 10 00:40:39.288969 kernel: BTRFS info (device vda6): first mount of filesystem 8ce7827a-be35-4e5a-9c5c-f9bfd6370ac0
Jul 10 00:40:39.289007 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Jul 10 00:40:39.289020 kernel: BTRFS info (device vda6): using free space tree
Jul 10 00:40:39.287695 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jul 10 00:40:39.292493 kernel: BTRFS info (device vda6): auto enabling async discard
Jul 10 00:40:39.293721 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jul 10 00:40:39.329380 initrd-setup-root[827]: cut: /sysroot/etc/passwd: No such file or directory
Jul 10 00:40:39.333277 initrd-setup-root[834]: cut: /sysroot/etc/group: No such file or directory
Jul 10 00:40:39.338147 initrd-setup-root[841]: cut: /sysroot/etc/shadow: No such file or directory
Jul 10 00:40:39.342837 initrd-setup-root[848]: cut: /sysroot/etc/gshadow: No such file or directory
Jul 10 00:40:39.420035 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jul 10 00:40:39.429573 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jul 10 00:40:39.432132 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jul 10 00:40:39.437482 kernel: BTRFS info (device vda6): last unmount of filesystem 8ce7827a-be35-4e5a-9c5c-f9bfd6370ac0
Jul 10 00:40:39.454024 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jul 10 00:40:39.456822 ignition[917]: INFO : Ignition 2.19.0
Jul 10 00:40:39.456822 ignition[917]: INFO : Stage: mount
Jul 10 00:40:39.458345 ignition[917]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 10 00:40:39.458345 ignition[917]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 10 00:40:39.458345 ignition[917]: INFO : mount: mount passed
Jul 10 00:40:39.458345 ignition[917]: INFO : Ignition finished successfully
Jul 10 00:40:39.460296 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jul 10 00:40:39.473593 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jul 10 00:40:39.899188 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jul 10 00:40:39.914710 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jul 10 00:40:39.921488 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by mount (930)
Jul 10 00:40:39.921535 kernel: BTRFS info (device vda6): first mount of filesystem 8ce7827a-be35-4e5a-9c5c-f9bfd6370ac0
Jul 10 00:40:39.921546 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Jul 10 00:40:39.922974 kernel: BTRFS info (device vda6): using free space tree
Jul 10 00:40:39.925480 kernel: BTRFS info (device vda6): auto enabling async discard
Jul 10 00:40:39.926457 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jul 10 00:40:39.946392 ignition[948]: INFO : Ignition 2.19.0
Jul 10 00:40:39.946392 ignition[948]: INFO : Stage: files
Jul 10 00:40:39.948068 ignition[948]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 10 00:40:39.948068 ignition[948]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 10 00:40:39.948068 ignition[948]: DEBUG : files: compiled without relabeling support, skipping
Jul 10 00:40:39.951387 ignition[948]: INFO : files: ensureUsers: op(1): [started]  creating or modifying user "core"
Jul 10 00:40:39.951387 ignition[948]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jul 10 00:40:39.954844 ignition[948]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jul 10 00:40:39.956159 ignition[948]: INFO : files: ensureUsers: op(2): [started]  adding ssh keys to user "core"
Jul 10 00:40:39.956159 ignition[948]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jul 10 00:40:39.955401 unknown[948]: wrote ssh authorized keys file for user: core
Jul 10 00:40:39.959780 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started]  writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz"
Jul 10 00:40:39.959780 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-arm64.tar.gz: attempt #1
Jul 10 00:40:39.994676 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Jul 10 00:40:40.258367 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz"
Jul 10 00:40:40.258367 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started]  writing file "/sysroot/opt/bin/cilium.tar.gz"
Jul 10 00:40:40.262052 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1
Jul 10 00:40:40.604253 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Jul 10 00:40:40.708417 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Jul 10 00:40:40.710351 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started]  writing file "/sysroot/home/core/install.sh"
Jul 10 00:40:40.710351 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Jul 10 00:40:40.710351 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started]  writing file "/sysroot/home/core/nginx.yaml"
Jul 10 00:40:40.710351 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jul 10 00:40:40.710351 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started]  writing file "/sysroot/home/core/nfs-pod.yaml"
Jul 10 00:40:40.710351 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jul 10 00:40:40.710351 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started]  writing file "/sysroot/home/core/nfs-pvc.yaml"
Jul 10 00:40:40.710351 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jul 10 00:40:40.710351 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started]  writing file "/sysroot/etc/flatcar/update.conf"
Jul 10 00:40:40.710351 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jul 10 00:40:40.710351 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started]  writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw"
Jul 10 00:40:40.710351 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw"
Jul 10 00:40:40.710351 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started]  writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw"
Jul 10 00:40:40.710351 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-arm64.raw: attempt #1
Jul 10 00:40:40.842703 systemd-networkd[769]: eth0: Gained IPv6LL
Jul 10 00:40:41.068907 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Jul 10 00:40:41.752233 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw"
Jul 10 00:40:41.752233 ignition[948]: INFO : files: op(c): [started]  processing unit "prepare-helm.service"
Jul 10 00:40:41.755951 ignition[948]: INFO : files: op(c): op(d): [started]  writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jul 10 00:40:41.755951 ignition[948]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jul 10 00:40:41.755951 ignition[948]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Jul 10 00:40:41.755951 ignition[948]: INFO : files: op(e): [started]  processing unit "coreos-metadata.service"
Jul 10 00:40:41.755951 ignition[948]: INFO : files: op(e): op(f): [started]  writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Jul 10 00:40:41.755951 ignition[948]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Jul 10 00:40:41.755951 ignition[948]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service"
Jul 10 00:40:41.755951 ignition[948]: INFO : files: op(10): [started]  setting preset to disabled for "coreos-metadata.service"
Jul 10 00:40:41.787054 ignition[948]: INFO : files: op(10): op(11): [started]  removing enablement symlink(s) for "coreos-metadata.service"
Jul 10 00:40:41.791371 ignition[948]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Jul 10 00:40:41.793602 ignition[948]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service"
Jul 10 00:40:41.793602 ignition[948]: INFO : files: op(12): [started]  setting preset to enabled for "prepare-helm.service"
Jul 10 00:40:41.793602 ignition[948]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service"
Jul 10 00:40:41.793602 ignition[948]: INFO : files: createResultFile: createFiles: op(13): [started]  writing file "/sysroot/etc/.ignition-result.json"
Jul 10 00:40:41.793602 ignition[948]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jul 10 00:40:41.793602 ignition[948]: INFO : files: files passed
Jul 10 00:40:41.793602 ignition[948]: INFO : Ignition finished successfully
Jul 10 00:40:41.794604 systemd[1]: Finished ignition-files.service - Ignition (files).
Jul 10 00:40:41.805961 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jul 10 00:40:41.808223 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jul 10 00:40:41.812153 systemd[1]: ignition-quench.service: Deactivated successfully.
Jul 10 00:40:41.813139 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jul 10 00:40:41.815592 initrd-setup-root-after-ignition[976]: grep: /sysroot/oem/oem-release: No such file or directory
Jul 10 00:40:41.817743 initrd-setup-root-after-ignition[978]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jul 10 00:40:41.817743 initrd-setup-root-after-ignition[978]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jul 10 00:40:41.821095 initrd-setup-root-after-ignition[982]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jul 10 00:40:41.823728 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jul 10 00:40:41.825353 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jul 10 00:40:41.838663 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jul 10 00:40:41.858074 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jul 10 00:40:41.858184 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jul 10 00:40:41.860462 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jul 10 00:40:41.862459 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jul 10 00:40:41.864347 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jul 10 00:40:41.865107 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jul 10 00:40:41.881329 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jul 10 00:40:41.891622 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jul 10 00:40:41.899228 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jul 10 00:40:41.900573 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 10 00:40:41.902709 systemd[1]: Stopped target timers.target - Timer Units.
Jul 10 00:40:41.904531 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jul 10 00:40:41.904648 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jul 10 00:40:41.907261 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jul 10 00:40:41.909192 systemd[1]: Stopped target basic.target - Basic System.
Jul 10 00:40:41.910724 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jul 10 00:40:41.912323 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jul 10 00:40:41.914177 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jul 10 00:40:41.916052 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jul 10 00:40:41.917745 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jul 10 00:40:41.919604 systemd[1]: Stopped target sysinit.target - System Initialization.
Jul 10 00:40:41.921494 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jul 10 00:40:41.923156 systemd[1]: Stopped target swap.target - Swaps.
Jul 10 00:40:41.924583 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jul 10 00:40:41.924694 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jul 10 00:40:41.926856 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jul 10 00:40:41.928668 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 10 00:40:41.930566 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jul 10 00:40:41.931572 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 10 00:40:41.932780 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jul 10 00:40:41.932894 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jul 10 00:40:41.935572 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jul 10 00:40:41.935681 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jul 10 00:40:41.937594 systemd[1]: Stopped target paths.target - Path Units.
Jul 10 00:40:41.939131 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jul 10 00:40:41.943526 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 10 00:40:41.944775 systemd[1]: Stopped target slices.target - Slice Units.
Jul 10 00:40:41.946828 systemd[1]: Stopped target sockets.target - Socket Units.
Jul 10 00:40:41.948338 systemd[1]: iscsid.socket: Deactivated successfully.
Jul 10 00:40:41.948430 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jul 10 00:40:41.949904 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jul 10 00:40:41.949983 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jul 10 00:40:41.951443 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jul 10 00:40:41.951575 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jul 10 00:40:41.953291 systemd[1]: ignition-files.service: Deactivated successfully.
Jul 10 00:40:41.953389 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jul 10 00:40:41.965623 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jul 10 00:40:41.967107 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jul 10 00:40:41.968022 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jul 10 00:40:41.968142 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 10 00:40:41.969970 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jul 10 00:40:41.970085 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jul 10 00:40:41.976935 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jul 10 00:40:41.977028 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jul 10 00:40:41.980888 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jul 10 00:40:41.982061 ignition[1003]: INFO : Ignition 2.19.0
Jul 10 00:40:41.982061 ignition[1003]: INFO : Stage: umount
Jul 10 00:40:41.982061 ignition[1003]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 10 00:40:41.982061 ignition[1003]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 10 00:40:41.982061 ignition[1003]: INFO : umount: umount passed
Jul 10 00:40:41.982061 ignition[1003]: INFO : Ignition finished successfully
Jul 10 00:40:41.981941 systemd[1]: ignition-mount.service: Deactivated successfully.
Jul 10 00:40:41.982052 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jul 10 00:40:41.983166 systemd[1]: Stopped target network.target - Network.
Jul 10 00:40:41.984399 systemd[1]: ignition-disks.service: Deactivated successfully.
Jul 10 00:40:41.984456 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jul 10 00:40:41.986001 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jul 10 00:40:41.986044 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jul 10 00:40:41.987820 systemd[1]: ignition-setup.service: Deactivated successfully.
Jul 10 00:40:41.987861 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Jul 10 00:40:41.989635 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Jul 10 00:40:41.989679 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Jul 10 00:40:41.991445 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Jul 10 00:40:41.993054 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Jul 10 00:40:42.001543 systemd-networkd[769]: eth0: DHCPv6 lease lost
Jul 10 00:40:42.003361 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jul 10 00:40:42.003502 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Jul 10 00:40:42.006862 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jul 10 00:40:42.006963 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Jul 10 00:40:42.008708 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jul 10 00:40:42.008761 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Jul 10 00:40:42.015576 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Jul 10 00:40:42.016906 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jul 10 00:40:42.016960 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jul 10 00:40:42.018810 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jul 10 00:40:42.018850 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jul 10 00:40:42.020550 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jul 10 00:40:42.020588 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Jul 10 00:40:42.022501 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jul 10 00:40:42.022541 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Jul 10 00:40:42.024370 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 10 00:40:42.034368 systemd[1]: network-cleanup.service: Deactivated successfully.
Jul 10 00:40:42.034483 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Jul 10 00:40:42.045109 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jul 10 00:40:42.045248 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 10 00:40:42.047508 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jul 10 00:40:42.047546 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jul 10 00:40:42.048663 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jul 10 00:40:42.048697 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jul 10 00:40:42.050824 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jul 10 00:40:42.050870 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jul 10 00:40:42.053590 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jul 10 00:40:42.053633 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jul 10 00:40:42.056399 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jul 10 00:40:42.056445 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jul 10 00:40:42.065596 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jul 10 00:40:42.066641 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jul 10 00:40:42.066695 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 10 00:40:42.068823 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Jul 10 00:40:42.068865 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jul 10 00:40:42.070859 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jul 10 00:40:42.070898 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jul 10 00:40:42.073087 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 10 00:40:42.073128 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jul 10 00:40:42.075423 systemd[1]: sysroot-boot.service: Deactivated successfully. Jul 10 00:40:42.075525 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. 
Jul 10 00:40:42.077489 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jul 10 00:40:42.077564 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jul 10 00:40:42.079873 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jul 10 00:40:42.081288 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jul 10 00:40:42.081345 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jul 10 00:40:42.089588 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jul 10 00:40:42.096781 systemd[1]: Switching root. Jul 10 00:40:42.116536 systemd-journald[239]: Journal stopped Jul 10 00:40:42.861974 systemd-journald[239]: Received SIGTERM from PID 1 (systemd). Jul 10 00:40:42.862027 kernel: SELinux: policy capability network_peer_controls=1 Jul 10 00:40:42.862039 kernel: SELinux: policy capability open_perms=1 Jul 10 00:40:42.862048 kernel: SELinux: policy capability extended_socket_class=1 Jul 10 00:40:42.862057 kernel: SELinux: policy capability always_check_network=0 Jul 10 00:40:42.862068 kernel: SELinux: policy capability cgroup_seclabel=1 Jul 10 00:40:42.862078 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jul 10 00:40:42.862088 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jul 10 00:40:42.862100 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jul 10 00:40:42.862110 kernel: audit: type=1403 audit(1752108042.290:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jul 10 00:40:42.862120 systemd[1]: Successfully loaded SELinux policy in 31.615ms. Jul 10 00:40:42.862141 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 9.299ms. 
Jul 10 00:40:42.862153 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jul 10 00:40:42.862167 systemd[1]: Detected virtualization kvm.
Jul 10 00:40:42.862181 systemd[1]: Detected architecture arm64.
Jul 10 00:40:42.862192 systemd[1]: Detected first boot.
Jul 10 00:40:42.862202 systemd[1]: Initializing machine ID from VM UUID.
Jul 10 00:40:42.862214 zram_generator::config[1047]: No configuration found.
Jul 10 00:40:42.862225 systemd[1]: Populated /etc with preset unit settings.
Jul 10 00:40:42.862236 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Jul 10 00:40:42.862254 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Jul 10 00:40:42.862266 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Jul 10 00:40:42.862277 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Jul 10 00:40:42.862288 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Jul 10 00:40:42.862298 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Jul 10 00:40:42.862311 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Jul 10 00:40:42.862321 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Jul 10 00:40:42.862332 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Jul 10 00:40:42.862343 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Jul 10 00:40:42.862353 systemd[1]: Created slice user.slice - User and Session Slice.
Jul 10 00:40:42.862363 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 10 00:40:42.862374 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 10 00:40:42.862384 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Jul 10 00:40:42.862395 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Jul 10 00:40:42.862408 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Jul 10 00:40:42.862419 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jul 10 00:40:42.862430 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0...
Jul 10 00:40:42.862440 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 10 00:40:42.862451 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Jul 10 00:40:42.862461 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Jul 10 00:40:42.862487 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Jul 10 00:40:42.862501 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Jul 10 00:40:42.862512 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 10 00:40:42.862522 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jul 10 00:40:42.862533 systemd[1]: Reached target slices.target - Slice Units.
Jul 10 00:40:42.862543 systemd[1]: Reached target swap.target - Swaps.
Jul 10 00:40:42.862554 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Jul 10 00:40:42.862565 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Jul 10 00:40:42.862575 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jul 10 00:40:42.862586 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jul 10 00:40:42.862596 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 10 00:40:42.862608 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Jul 10 00:40:42.862619 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Jul 10 00:40:42.862629 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Jul 10 00:40:42.862639 systemd[1]: Mounting media.mount - External Media Directory...
Jul 10 00:40:42.862651 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Jul 10 00:40:42.862662 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Jul 10 00:40:42.862672 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Jul 10 00:40:42.862683 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jul 10 00:40:42.862695 systemd[1]: Reached target machines.target - Containers.
Jul 10 00:40:42.862706 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Jul 10 00:40:42.862717 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 10 00:40:42.862727 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jul 10 00:40:42.862738 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Jul 10 00:40:42.862752 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jul 10 00:40:42.862762 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jul 10 00:40:42.862773 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jul 10 00:40:42.862783 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Jul 10 00:40:42.862795 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jul 10 00:40:42.862805 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Jul 10 00:40:42.862816 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Jul 10 00:40:42.862826 kernel: fuse: init (API version 7.39)
Jul 10 00:40:42.862836 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Jul 10 00:40:42.862846 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Jul 10 00:40:42.862856 systemd[1]: Stopped systemd-fsck-usr.service.
Jul 10 00:40:42.862866 kernel: ACPI: bus type drm_connector registered
Jul 10 00:40:42.862878 kernel: loop: module loaded
Jul 10 00:40:42.862888 systemd[1]: Starting systemd-journald.service - Journal Service...
Jul 10 00:40:42.862899 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jul 10 00:40:42.862910 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jul 10 00:40:42.862920 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Jul 10 00:40:42.862931 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jul 10 00:40:42.862956 systemd-journald[1118]: Collecting audit messages is disabled.
Jul 10 00:40:42.862976 systemd[1]: verity-setup.service: Deactivated successfully.
Jul 10 00:40:42.862988 systemd[1]: Stopped verity-setup.service.
Jul 10 00:40:42.862999 systemd-journald[1118]: Journal started
Jul 10 00:40:42.863020 systemd-journald[1118]: Runtime Journal (/run/log/journal/2c1da40fcc0a48bc91dd8219cab31643) is 5.9M, max 47.3M, 41.4M free.
Jul 10 00:40:42.662315 systemd[1]: Queued start job for default target multi-user.target.
Jul 10 00:40:42.674967 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Jul 10 00:40:42.675333 systemd[1]: systemd-journald.service: Deactivated successfully.
Jul 10 00:40:42.867501 systemd[1]: Started systemd-journald.service - Journal Service.
Jul 10 00:40:42.868106 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Jul 10 00:40:42.869243 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Jul 10 00:40:42.870464 systemd[1]: Mounted media.mount - External Media Directory.
Jul 10 00:40:42.871506 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Jul 10 00:40:42.872635 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Jul 10 00:40:42.873790 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Jul 10 00:40:42.874989 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Jul 10 00:40:42.876413 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 10 00:40:42.877877 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jul 10 00:40:42.878032 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Jul 10 00:40:42.879383 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 10 00:40:42.879562 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jul 10 00:40:42.880893 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jul 10 00:40:42.881024 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jul 10 00:40:42.882333 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 10 00:40:42.882487 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jul 10 00:40:42.883882 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jul 10 00:40:42.884033 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Jul 10 00:40:42.885430 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 10 00:40:42.885615 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jul 10 00:40:42.886899 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jul 10 00:40:42.888846 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jul 10 00:40:42.890293 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Jul 10 00:40:42.902055 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jul 10 00:40:42.909593 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Jul 10 00:40:42.911657 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Jul 10 00:40:42.912729 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Jul 10 00:40:42.912774 systemd[1]: Reached target local-fs.target - Local File Systems.
Jul 10 00:40:42.914813 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Jul 10 00:40:42.916962 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Jul 10 00:40:42.919027 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Jul 10 00:40:42.920152 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 10 00:40:42.921792 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Jul 10 00:40:42.923735 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Jul 10 00:40:42.924883 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jul 10 00:40:42.926624 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Jul 10 00:40:42.927791 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jul 10 00:40:42.929742 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jul 10 00:40:42.934663 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Jul 10 00:40:42.935716 systemd-journald[1118]: Time spent on flushing to /var/log/journal/2c1da40fcc0a48bc91dd8219cab31643 is 26.447ms for 858 entries.
Jul 10 00:40:42.935716 systemd-journald[1118]: System Journal (/var/log/journal/2c1da40fcc0a48bc91dd8219cab31643) is 8.0M, max 195.6M, 187.6M free.
Jul 10 00:40:42.982913 systemd-journald[1118]: Received client request to flush runtime journal.
Jul 10 00:40:42.982971 kernel: loop0: detected capacity change from 0 to 114328
Jul 10 00:40:42.982994 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Jul 10 00:40:42.938240 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jul 10 00:40:42.943195 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 10 00:40:42.944761 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Jul 10 00:40:42.946161 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Jul 10 00:40:42.947679 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Jul 10 00:40:42.952646 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Jul 10 00:40:42.955175 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Jul 10 00:40:42.957306 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Jul 10 00:40:42.960402 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Jul 10 00:40:42.971707 udevadm[1166]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Jul 10 00:40:42.986906 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Jul 10 00:40:42.989993 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jul 10 00:40:42.993408 systemd-tmpfiles[1160]: ACLs are not supported, ignoring.
Jul 10 00:40:42.993424 systemd-tmpfiles[1160]: ACLs are not supported, ignoring.
Jul 10 00:40:42.994146 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Jul 10 00:40:42.995986 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Jul 10 00:40:42.998164 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jul 10 00:40:42.998500 kernel: loop1: detected capacity change from 0 to 114432
Jul 10 00:40:43.005641 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Jul 10 00:40:43.024914 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Jul 10 00:40:43.034646 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jul 10 00:40:43.037692 kernel: loop2: detected capacity change from 0 to 211168
Jul 10 00:40:43.050081 systemd-tmpfiles[1183]: ACLs are not supported, ignoring.
Jul 10 00:40:43.050098 systemd-tmpfiles[1183]: ACLs are not supported, ignoring.
Jul 10 00:40:43.054051 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 10 00:40:43.069496 kernel: loop3: detected capacity change from 0 to 114328
Jul 10 00:40:43.074488 kernel: loop4: detected capacity change from 0 to 114432
Jul 10 00:40:43.078495 kernel: loop5: detected capacity change from 0 to 211168
Jul 10 00:40:43.083229 (sd-merge)[1187]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Jul 10 00:40:43.083647 (sd-merge)[1187]: Merged extensions into '/usr'.
Jul 10 00:40:43.087087 systemd[1]: Reloading requested from client PID 1159 ('systemd-sysext') (unit systemd-sysext.service)...
Jul 10 00:40:43.087103 systemd[1]: Reloading...
Jul 10 00:40:43.139517 zram_generator::config[1214]: No configuration found.
Jul 10 00:40:43.195071 ldconfig[1154]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Jul 10 00:40:43.257235 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 10 00:40:43.301687 systemd[1]: Reloading finished in 214 ms.
Jul 10 00:40:43.328090 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Jul 10 00:40:43.329596 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Jul 10 00:40:43.343646 systemd[1]: Starting ensure-sysext.service...
Jul 10 00:40:43.345694 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jul 10 00:40:43.367690 systemd-tmpfiles[1249]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Jul 10 00:40:43.367948 systemd-tmpfiles[1249]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Jul 10 00:40:43.368639 systemd-tmpfiles[1249]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Jul 10 00:40:43.368855 systemd-tmpfiles[1249]: ACLs are not supported, ignoring.
Jul 10 00:40:43.368908 systemd-tmpfiles[1249]: ACLs are not supported, ignoring.
Jul 10 00:40:43.369154 systemd[1]: Reloading requested from client PID 1248 ('systemctl') (unit ensure-sysext.service)...
Jul 10 00:40:43.369169 systemd[1]: Reloading...
Jul 10 00:40:43.371309 systemd-tmpfiles[1249]: Detected autofs mount point /boot during canonicalization of boot.
Jul 10 00:40:43.371320 systemd-tmpfiles[1249]: Skipping /boot
Jul 10 00:40:43.378673 systemd-tmpfiles[1249]: Detected autofs mount point /boot during canonicalization of boot.
Jul 10 00:40:43.378686 systemd-tmpfiles[1249]: Skipping /boot
Jul 10 00:40:43.414527 zram_generator::config[1276]: No configuration found.
Jul 10 00:40:43.506759 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 10 00:40:43.552307 systemd[1]: Reloading finished in 182 ms.
Jul 10 00:40:43.568869 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Jul 10 00:40:43.580876 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jul 10 00:40:43.588981 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Jul 10 00:40:43.591664 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Jul 10 00:40:43.593902 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Jul 10 00:40:43.596801 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jul 10 00:40:43.599746 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 10 00:40:43.604849 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Jul 10 00:40:43.623076 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Jul 10 00:40:43.627717 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 10 00:40:43.630529 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jul 10 00:40:43.635761 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jul 10 00:40:43.638781 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jul 10 00:40:43.639903 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 10 00:40:43.641184 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Jul 10 00:40:43.645544 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Jul 10 00:40:43.647884 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 10 00:40:43.648616 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jul 10 00:40:43.650136 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 10 00:40:43.650278 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jul 10 00:40:43.652887 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 10 00:40:43.653095 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jul 10 00:40:43.656978 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Jul 10 00:40:43.659570 systemd-udevd[1318]: Using default interface naming scheme 'v255'.
Jul 10 00:40:43.659890 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Jul 10 00:40:43.667194 augenrules[1339]: No rules
Jul 10 00:40:43.667403 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Jul 10 00:40:43.669360 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Jul 10 00:40:43.674375 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 10 00:40:43.683748 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jul 10 00:40:43.687964 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jul 10 00:40:43.692268 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jul 10 00:40:43.694668 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jul 10 00:40:43.696694 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 10 00:40:43.696844 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jul 10 00:40:43.697596 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 10 00:40:43.699126 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Jul 10 00:40:43.701207 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 10 00:40:43.702539 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jul 10 00:40:43.708499 systemd[1]: Finished ensure-sysext.service.
Jul 10 00:40:43.713057 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 10 00:40:43.721684 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jul 10 00:40:43.726183 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 10 00:40:43.726780 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jul 10 00:40:43.732658 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jul 10 00:40:43.732801 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jul 10 00:40:43.738499 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (1360)
Jul 10 00:40:43.741420 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped.
Jul 10 00:40:43.759795 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jul 10 00:40:43.761016 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jul 10 00:40:43.761101 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jul 10 00:40:43.767616 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Jul 10 00:40:43.771691 systemd-resolved[1316]: Positive Trust Anchors:
Jul 10 00:40:43.771708 systemd-resolved[1316]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jul 10 00:40:43.771741 systemd-resolved[1316]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jul 10 00:40:43.778100 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jul 10 00:40:43.781216 systemd-resolved[1316]: Defaulting to hostname 'linux'.
Jul 10 00:40:43.784877 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Jul 10 00:40:43.786222 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jul 10 00:40:43.787411 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jul 10 00:40:43.803294 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Jul 10 00:40:43.836582 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Jul 10 00:40:43.838077 systemd[1]: Reached target time-set.target - System Time Set.
Jul 10 00:40:43.840913 systemd-networkd[1385]: lo: Link UP
Jul 10 00:40:43.840920 systemd-networkd[1385]: lo: Gained carrier
Jul 10 00:40:43.842391 systemd-networkd[1385]: Enumeration completed
Jul 10 00:40:43.843322 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jul 10 00:40:43.846542 systemd[1]: Reached target network.target - Network.
Jul 10 00:40:43.850585 systemd-networkd[1385]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 10 00:40:43.850596 systemd-networkd[1385]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jul 10 00:40:43.855044 systemd-networkd[1385]: eth0: Link UP
Jul 10 00:40:43.855052 systemd-networkd[1385]: eth0: Gained carrier
Jul 10 00:40:43.855106 systemd-networkd[1385]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 10 00:40:43.859720 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Jul 10 00:40:43.862044 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 10 00:40:43.875876 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Jul 10 00:40:43.879551 systemd-networkd[1385]: eth0: DHCPv4 address 10.0.0.128/16, gateway 10.0.0.1 acquired from 10.0.0.1
Jul 10 00:40:43.880505 systemd-timesyncd[1387]: Network configuration changed, trying to establish connection.
Jul 10 00:40:43.881628 systemd-timesyncd[1387]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Jul 10 00:40:43.881673 systemd-timesyncd[1387]: Initial clock synchronization to Thu 2025-07-10 00:40:43.865796 UTC.
Jul 10 00:40:43.886653 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Jul 10 00:40:43.909499 lvm[1403]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jul 10 00:40:43.916619 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 10 00:40:43.944303 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Jul 10 00:40:43.946164 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jul 10 00:40:43.947280 systemd[1]: Reached target sysinit.target - System Initialization.
Jul 10 00:40:43.949098 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Jul 10 00:40:43.950647 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Jul 10 00:40:43.952460 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Jul 10 00:40:43.954035 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Jul 10 00:40:43.955305 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Jul 10 00:40:43.957479 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Jul 10 00:40:43.957519 systemd[1]: Reached target paths.target - Path Units.
Jul 10 00:40:43.958355 systemd[1]: Reached target timers.target - Timer Units.
Jul 10 00:40:43.960311 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Jul 10 00:40:43.963200 systemd[1]: Starting docker.socket - Docker Socket for the API...
Jul 10 00:40:43.977671 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Jul 10 00:40:43.980214 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Jul 10 00:40:43.982041 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Jul 10 00:40:43.983935 systemd[1]: Reached target sockets.target - Socket Units.
Jul 10 00:40:43.985180 systemd[1]: Reached target basic.target - Basic System.
Jul 10 00:40:43.986439 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Jul 10 00:40:43.986503 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Jul 10 00:40:43.986976 lvm[1411]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jul 10 00:40:43.988068 systemd[1]: Starting containerd.service - containerd container runtime...
Jul 10 00:40:43.992143 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Jul 10 00:40:43.995330 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Jul 10 00:40:43.997292 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Jul 10 00:40:43.998288 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Jul 10 00:40:44.001780 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Jul 10 00:40:44.007494 jq[1414]: false
Jul 10 00:40:44.008718 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Jul 10 00:40:44.012683 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Jul 10 00:40:44.015037 extend-filesystems[1415]: Found loop3
Jul 10 00:40:44.016632 extend-filesystems[1415]: Found loop4
Jul 10 00:40:44.016632 extend-filesystems[1415]: Found loop5
Jul 10 00:40:44.016632 extend-filesystems[1415]: Found vda
Jul 10 00:40:44.016632 extend-filesystems[1415]: Found vda1
Jul 10 00:40:44.016632 extend-filesystems[1415]: Found vda2
Jul 10 00:40:44.016632 extend-filesystems[1415]: Found vda3
Jul 10 00:40:44.016632 extend-filesystems[1415]: Found usr
Jul 10 00:40:44.016632 extend-filesystems[1415]: Found vda4
Jul 10 00:40:44.016632 extend-filesystems[1415]: Found vda6
Jul 10 00:40:44.016632 extend-filesystems[1415]: Found vda7
Jul 10 00:40:44.016632 extend-filesystems[1415]: Found vda9
Jul 10 00:40:44.016632 extend-filesystems[1415]: Checking size of /dev/vda9
Jul 10 00:40:44.030713 dbus-daemon[1413]: [system] SELinux support is enabled
Jul 10 00:40:44.017854 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Jul 10 00:40:44.027791 systemd[1]: Starting systemd-logind.service - User Login Management...
Jul 10 00:40:44.033669 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Jul 10 00:40:44.034306 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Jul 10 00:40:44.036102 systemd[1]: Starting update-engine.service - Update Engine...
Jul 10 00:40:44.040375 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Jul 10 00:40:44.041971 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Jul 10 00:40:44.046795 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Jul 10 00:40:44.054422 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (1369) Jul 10 00:40:44.054505 jq[1433]: true Jul 10 00:40:44.051845 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jul 10 00:40:44.052029 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jul 10 00:40:44.052324 systemd[1]: motdgen.service: Deactivated successfully. Jul 10 00:40:44.052499 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jul 10 00:40:44.060686 extend-filesystems[1415]: Resized partition /dev/vda9 Jul 10 00:40:44.055725 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jul 10 00:40:44.063803 extend-filesystems[1439]: resize2fs 1.47.1 (20-May-2024) Jul 10 00:40:44.072063 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Jul 10 00:40:44.057515 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jul 10 00:40:44.083953 (ntainerd)[1443]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jul 10 00:40:44.085452 jq[1440]: true Jul 10 00:40:44.093157 systemd-logind[1428]: Watching system buttons on /dev/input/event0 (Power Button) Jul 10 00:40:44.095303 systemd-logind[1428]: New seat seat0. Jul 10 00:40:44.099026 update_engine[1432]: I20250710 00:40:44.098057 1432 main.cc:92] Flatcar Update Engine starting Jul 10 00:40:44.100038 tar[1438]: linux-arm64/LICENSE Jul 10 00:40:44.100038 tar[1438]: linux-arm64/helm Jul 10 00:40:44.103624 update_engine[1432]: I20250710 00:40:44.102510 1432 update_check_scheduler.cc:74] Next update check in 2m45s Jul 10 00:40:44.107119 systemd[1]: Started systemd-logind.service - User Login Management. Jul 10 00:40:44.112491 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Jul 10 00:40:44.114319 systemd[1]: Started update-engine.service - Update Engine. 
Jul 10 00:40:44.116272 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jul 10 00:40:44.116434 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jul 10 00:40:44.118328 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jul 10 00:40:44.118551 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jul 10 00:40:44.121586 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jul 10 00:40:44.129533 extend-filesystems[1439]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jul 10 00:40:44.129533 extend-filesystems[1439]: old_desc_blocks = 1, new_desc_blocks = 1 Jul 10 00:40:44.129533 extend-filesystems[1439]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Jul 10 00:40:44.135649 extend-filesystems[1415]: Resized filesystem in /dev/vda9 Jul 10 00:40:44.130221 systemd[1]: extend-filesystems.service: Deactivated successfully. Jul 10 00:40:44.131819 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jul 10 00:40:44.168322 bash[1469]: Updated "/home/core/.ssh/authorized_keys" Jul 10 00:40:44.175523 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jul 10 00:40:44.177512 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. 
Jul 10 00:40:44.186568 locksmithd[1453]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Jul 10 00:40:44.280133 containerd[1443]: time="2025-07-10T00:40:44.279450546Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21
Jul 10 00:40:44.308134 containerd[1443]: time="2025-07-10T00:40:44.308068027Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Jul 10 00:40:44.310507 containerd[1443]: time="2025-07-10T00:40:44.309579803Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.95-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Jul 10 00:40:44.310507 containerd[1443]: time="2025-07-10T00:40:44.309619943Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Jul 10 00:40:44.310507 containerd[1443]: time="2025-07-10T00:40:44.309635136Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Jul 10 00:40:44.310507 containerd[1443]: time="2025-07-10T00:40:44.309783983Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Jul 10 00:40:44.310507 containerd[1443]: time="2025-07-10T00:40:44.309801294Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Jul 10 00:40:44.310507 containerd[1443]: time="2025-07-10T00:40:44.309850350Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Jul 10 00:40:44.310507 containerd[1443]: time="2025-07-10T00:40:44.309861864Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Jul 10 00:40:44.310507 containerd[1443]: time="2025-07-10T00:40:44.310018227Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jul 10 00:40:44.310507 containerd[1443]: time="2025-07-10T00:40:44.310033780Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Jul 10 00:40:44.310507 containerd[1443]: time="2025-07-10T00:40:44.310046373Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Jul 10 00:40:44.310507 containerd[1443]: time="2025-07-10T00:40:44.310056528Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Jul 10 00:40:44.310736 containerd[1443]: time="2025-07-10T00:40:44.310168953Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Jul 10 00:40:44.310736 containerd[1443]: time="2025-07-10T00:40:44.310353902Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Jul 10 00:40:44.310736 containerd[1443]: time="2025-07-10T00:40:44.310445217Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jul 10 00:40:44.310736 containerd[1443]: time="2025-07-10T00:40:44.310459210Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Jul 10 00:40:44.310904 containerd[1443]: time="2025-07-10T00:40:44.310884841Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Jul 10 00:40:44.311006 containerd[1443]: time="2025-07-10T00:40:44.310988870Z" level=info msg="metadata content store policy set" policy=shared
Jul 10 00:40:44.314079 containerd[1443]: time="2025-07-10T00:40:44.314054402Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Jul 10 00:40:44.314204 containerd[1443]: time="2025-07-10T00:40:44.314187376Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Jul 10 00:40:44.314288 containerd[1443]: time="2025-07-10T00:40:44.314273414Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Jul 10 00:40:44.314351 containerd[1443]: time="2025-07-10T00:40:44.314338982Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Jul 10 00:40:44.314403 containerd[1443]: time="2025-07-10T00:40:44.314391716Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Jul 10 00:40:44.314599 containerd[1443]: time="2025-07-10T00:40:44.314577384Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Jul 10 00:40:44.314896 containerd[1443]: time="2025-07-10T00:40:44.314878356Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Jul 10 00:40:44.315167 containerd[1443]: time="2025-07-10T00:40:44.315144665Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Jul 10 00:40:44.315247 containerd[1443]: time="2025-07-10T00:40:44.315232942Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Jul 10 00:40:44.315307 containerd[1443]: time="2025-07-10T00:40:44.315293872Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Jul 10 00:40:44.315360 containerd[1443]: time="2025-07-10T00:40:44.315348525Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Jul 10 00:40:44.315412 containerd[1443]: time="2025-07-10T00:40:44.315400139Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Jul 10 00:40:44.315466 containerd[1443]: time="2025-07-10T00:40:44.315454793Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Jul 10 00:40:44.315562 containerd[1443]: time="2025-07-10T00:40:44.315548027Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Jul 10 00:40:44.315626 containerd[1443]: time="2025-07-10T00:40:44.315613514Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Jul 10 00:40:44.315677 containerd[1443]: time="2025-07-10T00:40:44.315666329Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Jul 10 00:40:44.315730 containerd[1443]: time="2025-07-10T00:40:44.315719582Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Jul 10 00:40:44.315781 containerd[1443]: time="2025-07-10T00:40:44.315769878Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Jul 10 00:40:44.315853 containerd[1443]: time="2025-07-10T00:40:44.315839923Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Jul 10 00:40:44.315905 containerd[1443]: time="2025-07-10T00:40:44.315893537Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Jul 10 00:40:44.315966 containerd[1443]: time="2025-07-10T00:40:44.315953787Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Jul 10 00:40:44.316026 containerd[1443]: time="2025-07-10T00:40:44.316013078Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Jul 10 00:40:44.316099 containerd[1443]: time="2025-07-10T00:40:44.316072649Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Jul 10 00:40:44.316154 containerd[1443]: time="2025-07-10T00:40:44.316141935Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Jul 10 00:40:44.316203 containerd[1443]: time="2025-07-10T00:40:44.316192670Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Jul 10 00:40:44.316267 containerd[1443]: time="2025-07-10T00:40:44.316253759Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Jul 10 00:40:44.316320 containerd[1443]: time="2025-07-10T00:40:44.316308133Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Jul 10 00:40:44.316375 containerd[1443]: time="2025-07-10T00:40:44.316364105Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Jul 10 00:40:44.316426 containerd[1443]: time="2025-07-10T00:40:44.316414440Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Jul 10 00:40:44.316504 containerd[1443]: time="2025-07-10T00:40:44.316489603Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Jul 10 00:40:44.316564 containerd[1443]: time="2025-07-10T00:40:44.316550853Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Jul 10 00:40:44.316635 containerd[1443]: time="2025-07-10T00:40:44.316621419Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Jul 10 00:40:44.316696 containerd[1443]: time="2025-07-10T00:40:44.316685467Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Jul 10 00:40:44.316751 containerd[1443]: time="2025-07-10T00:40:44.316738361Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Jul 10 00:40:44.316801 containerd[1443]: time="2025-07-10T00:40:44.316789456Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Jul 10 00:40:44.316990 containerd[1443]: time="2025-07-10T00:40:44.316975484Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Jul 10 00:40:44.317198 containerd[1443]: time="2025-07-10T00:40:44.317179864Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Jul 10 00:40:44.317267 containerd[1443]: time="2025-07-10T00:40:44.317253867Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Jul 10 00:40:44.317319 containerd[1443]: time="2025-07-10T00:40:44.317306961Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Jul 10 00:40:44.317364 containerd[1443]: time="2025-07-10T00:40:44.317352499Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Jul 10 00:40:44.317417 containerd[1443]: time="2025-07-10T00:40:44.317405153Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Jul 10 00:40:44.317466 containerd[1443]: time="2025-07-10T00:40:44.317454689Z" level=info msg="NRI interface is disabled by configuration."
Jul 10 00:40:44.317546 containerd[1443]: time="2025-07-10T00:40:44.317532570Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Jul 10 00:40:44.317958 containerd[1443]: time="2025-07-10T00:40:44.317898870Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Jul 10 00:40:44.318504 containerd[1443]: time="2025-07-10T00:40:44.318126038Z" level=info msg="Connect containerd service"
Jul 10 00:40:44.318504 containerd[1443]: time="2025-07-10T00:40:44.318165099Z" level=info msg="using legacy CRI server"
Jul 10 00:40:44.318504 containerd[1443]: time="2025-07-10T00:40:44.318172615Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Jul 10 00:40:44.318504 containerd[1443]: time="2025-07-10T00:40:44.318258173Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Jul 10 00:40:44.319018 containerd[1443]: time="2025-07-10T00:40:44.318989933Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jul 10 00:40:44.319316 containerd[1443]: time="2025-07-10T00:40:44.319284708Z" level=info msg="Start subscribing containerd event"
Jul 10 00:40:44.319401 containerd[1443]: time="2025-07-10T00:40:44.319387178Z" level=info msg="Start recovering state"
Jul 10 00:40:44.319517 containerd[1443]: time="2025-07-10T00:40:44.319503521Z" level=info msg="Start event monitor"
Jul 10 00:40:44.320719 containerd[1443]: time="2025-07-10T00:40:44.320691256Z" level=info msg="Start snapshots syncer"
Jul 10 00:40:44.320803 containerd[1443]: time="2025-07-10T00:40:44.320789288Z" level=info msg="Start cni network conf syncer for default"
Jul 10 00:40:44.320855 containerd[1443]: time="2025-07-10T00:40:44.320837704Z" level=info msg="Start streaming server"
Jul 10 00:40:44.321069 containerd[1443]: time="2025-07-10T00:40:44.320056529Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Jul 10 00:40:44.321255 containerd[1443]: time="2025-07-10T00:40:44.321230991Z" level=info msg=serving... address=/run/containerd/containerd.sock
Jul 10 00:40:44.321377 containerd[1443]: time="2025-07-10T00:40:44.321354450Z" level=info msg="containerd successfully booted in 0.043415s"
Jul 10 00:40:44.321435 systemd[1]: Started containerd.service - containerd container runtime.
Jul 10 00:40:44.440697 sshd_keygen[1431]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Jul 10 00:40:44.460519 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Jul 10 00:40:44.468746 systemd[1]: Starting issuegen.service - Generate /run/issue...
Jul 10 00:40:44.475508 systemd[1]: issuegen.service: Deactivated successfully.
Jul 10 00:40:44.475718 systemd[1]: Finished issuegen.service - Generate /run/issue.
Jul 10 00:40:44.478439 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Jul 10 00:40:44.490846 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Jul 10 00:40:44.493876 tar[1438]: linux-arm64/README.md
Jul 10 00:40:44.493938 systemd[1]: Started getty@tty1.service - Getty on tty1.
Jul 10 00:40:44.496208 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0.
Jul 10 00:40:44.497718 systemd[1]: Reached target getty.target - Login Prompts.
Jul 10 00:40:44.506419 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Jul 10 00:40:45.514677 systemd-networkd[1385]: eth0: Gained IPv6LL
Jul 10 00:40:45.517072 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Jul 10 00:40:45.518935 systemd[1]: Reached target network-online.target - Network is Online.
Jul 10 00:40:45.533715 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent...
Jul 10 00:40:45.535887 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 10 00:40:45.537983 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Jul 10 00:40:45.553642 systemd[1]: coreos-metadata.service: Deactivated successfully.
Jul 10 00:40:45.554527 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent.
Jul 10 00:40:45.556653 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Jul 10 00:40:45.560817 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Jul 10 00:40:46.078738 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 10 00:40:46.080234 systemd[1]: Reached target multi-user.target - Multi-User System.
Jul 10 00:40:46.081407 systemd[1]: Startup finished in 596ms (kernel) + 5.586s (initrd) + 3.823s (userspace) = 10.005s.
Jul 10 00:40:46.082130 (kubelet)[1526]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jul 10 00:40:46.497090 kubelet[1526]: E0710 00:40:46.496967 1526 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jul 10 00:40:46.499366 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jul 10 00:40:46.499552 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jul 10 00:40:49.192517 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Jul 10 00:40:49.193674 systemd[1]: Started sshd@0-10.0.0.128:22-10.0.0.1:53624.service - OpenSSH per-connection server daemon (10.0.0.1:53624).
Jul 10 00:40:49.255920 sshd[1540]: Accepted publickey for core from 10.0.0.1 port 53624 ssh2: RSA SHA256:6ip12YJRxsd1pENU5FRdLlGNHGMqkKNn+D5B7RGN6xs
Jul 10 00:40:49.257755 sshd[1540]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 10 00:40:49.267261 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Jul 10 00:40:49.281857 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Jul 10 00:40:49.283547 systemd-logind[1428]: New session 1 of user core.
Jul 10 00:40:49.290870 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Jul 10 00:40:49.293190 systemd[1]: Starting user@500.service - User Manager for UID 500...
Jul 10 00:40:49.299034 (systemd)[1544]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Jul 10 00:40:49.369556 systemd[1544]: Queued start job for default target default.target.
Jul 10 00:40:49.386443 systemd[1544]: Created slice app.slice - User Application Slice.
Jul 10 00:40:49.386498 systemd[1544]: Reached target paths.target - Paths.
Jul 10 00:40:49.386511 systemd[1544]: Reached target timers.target - Timers.
Jul 10 00:40:49.387819 systemd[1544]: Starting dbus.socket - D-Bus User Message Bus Socket...
Jul 10 00:40:49.400431 systemd[1544]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Jul 10 00:40:49.400577 systemd[1544]: Reached target sockets.target - Sockets.
Jul 10 00:40:49.400592 systemd[1544]: Reached target basic.target - Basic System.
Jul 10 00:40:49.400643 systemd[1544]: Reached target default.target - Main User Target.
Jul 10 00:40:49.400672 systemd[1544]: Startup finished in 96ms.
Jul 10 00:40:49.400816 systemd[1]: Started user@500.service - User Manager for UID 500.
Jul 10 00:40:49.402142 systemd[1]: Started session-1.scope - Session 1 of User core.
Jul 10 00:40:49.461055 systemd[1]: Started sshd@1-10.0.0.128:22-10.0.0.1:53636.service - OpenSSH per-connection server daemon (10.0.0.1:53636).
Jul 10 00:40:49.499177 sshd[1555]: Accepted publickey for core from 10.0.0.1 port 53636 ssh2: RSA SHA256:6ip12YJRxsd1pENU5FRdLlGNHGMqkKNn+D5B7RGN6xs
Jul 10 00:40:49.500393 sshd[1555]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 10 00:40:49.504530 systemd-logind[1428]: New session 2 of user core.
Jul 10 00:40:49.514694 systemd[1]: Started session-2.scope - Session 2 of User core.
Jul 10 00:40:49.566082 sshd[1555]: pam_unix(sshd:session): session closed for user core
Jul 10 00:40:49.574899 systemd[1]: sshd@1-10.0.0.128:22-10.0.0.1:53636.service: Deactivated successfully.
Jul 10 00:40:49.576307 systemd[1]: session-2.scope: Deactivated successfully.
Jul 10 00:40:49.577621 systemd-logind[1428]: Session 2 logged out. Waiting for processes to exit.
Jul 10 00:40:49.578742 systemd[1]: Started sshd@2-10.0.0.128:22-10.0.0.1:53644.service - OpenSSH per-connection server daemon (10.0.0.1:53644).
Jul 10 00:40:49.579541 systemd-logind[1428]: Removed session 2.
Jul 10 00:40:49.613739 sshd[1562]: Accepted publickey for core from 10.0.0.1 port 53644 ssh2: RSA SHA256:6ip12YJRxsd1pENU5FRdLlGNHGMqkKNn+D5B7RGN6xs
Jul 10 00:40:49.614871 sshd[1562]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 10 00:40:49.618812 systemd-logind[1428]: New session 3 of user core.
Jul 10 00:40:49.635631 systemd[1]: Started session-3.scope - Session 3 of User core.
Jul 10 00:40:49.684530 sshd[1562]: pam_unix(sshd:session): session closed for user core
Jul 10 00:40:49.696876 systemd[1]: sshd@2-10.0.0.128:22-10.0.0.1:53644.service: Deactivated successfully.
Jul 10 00:40:49.698274 systemd[1]: session-3.scope: Deactivated successfully.
Jul 10 00:40:49.698873 systemd-logind[1428]: Session 3 logged out. Waiting for processes to exit.
Jul 10 00:40:49.700642 systemd[1]: Started sshd@3-10.0.0.128:22-10.0.0.1:53648.service - OpenSSH per-connection server daemon (10.0.0.1:53648).
Jul 10 00:40:49.701425 systemd-logind[1428]: Removed session 3.
Jul 10 00:40:49.736247 sshd[1569]: Accepted publickey for core from 10.0.0.1 port 53648 ssh2: RSA SHA256:6ip12YJRxsd1pENU5FRdLlGNHGMqkKNn+D5B7RGN6xs
Jul 10 00:40:49.737769 sshd[1569]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 10 00:40:49.742624 systemd-logind[1428]: New session 4 of user core.
Jul 10 00:40:49.753687 systemd[1]: Started session-4.scope - Session 4 of User core.
Jul 10 00:40:49.805911 sshd[1569]: pam_unix(sshd:session): session closed for user core
Jul 10 00:40:49.824965 systemd[1]: sshd@3-10.0.0.128:22-10.0.0.1:53648.service: Deactivated successfully.
Jul 10 00:40:49.826312 systemd[1]: session-4.scope: Deactivated successfully.
Jul 10 00:40:49.826964 systemd-logind[1428]: Session 4 logged out. Waiting for processes to exit.
Jul 10 00:40:49.828619 systemd[1]: Started sshd@4-10.0.0.128:22-10.0.0.1:53662.service - OpenSSH per-connection server daemon (10.0.0.1:53662).
Jul 10 00:40:49.831760 systemd-logind[1428]: Removed session 4.
Jul 10 00:40:49.864843 sshd[1576]: Accepted publickey for core from 10.0.0.1 port 53662 ssh2: RSA SHA256:6ip12YJRxsd1pENU5FRdLlGNHGMqkKNn+D5B7RGN6xs
Jul 10 00:40:49.866088 sshd[1576]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 10 00:40:49.870150 systemd-logind[1428]: New session 5 of user core.
Jul 10 00:40:49.880617 systemd[1]: Started session-5.scope - Session 5 of User core.
Jul 10 00:40:49.941724 sudo[1579]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Jul 10 00:40:49.942004 sudo[1579]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jul 10 00:40:49.958293 sudo[1579]: pam_unix(sudo:session): session closed for user root
Jul 10 00:40:49.959890 sshd[1576]: pam_unix(sshd:session): session closed for user core
Jul 10 00:40:49.971019 systemd[1]: sshd@4-10.0.0.128:22-10.0.0.1:53662.service: Deactivated successfully.
Jul 10 00:40:49.972534 systemd[1]: session-5.scope: Deactivated successfully.
Jul 10 00:40:49.975520 systemd-logind[1428]: Session 5 logged out. Waiting for processes to exit.
Jul 10 00:40:49.980875 systemd[1]: Started sshd@5-10.0.0.128:22-10.0.0.1:53670.service - OpenSSH per-connection server daemon (10.0.0.1:53670).
Jul 10 00:40:49.982338 systemd-logind[1428]: Removed session 5.
Jul 10 00:40:50.013984 sshd[1584]: Accepted publickey for core from 10.0.0.1 port 53670 ssh2: RSA SHA256:6ip12YJRxsd1pENU5FRdLlGNHGMqkKNn+D5B7RGN6xs
Jul 10 00:40:50.015156 sshd[1584]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 10 00:40:50.018950 systemd-logind[1428]: New session 6 of user core.
Jul 10 00:40:50.033684 systemd[1]: Started session-6.scope - Session 6 of User core.
Jul 10 00:40:50.084072 sudo[1588]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Jul 10 00:40:50.084353 sudo[1588]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jul 10 00:40:50.087326 sudo[1588]: pam_unix(sudo:session): session closed for user root
Jul 10 00:40:50.091626 sudo[1587]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules
Jul 10 00:40:50.091884 sudo[1587]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jul 10 00:40:50.113747 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules...
Jul 10 00:40:50.115150 auditctl[1591]: No rules
Jul 10 00:40:50.116074 systemd[1]: audit-rules.service: Deactivated successfully.
Jul 10 00:40:50.116325 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules.
Jul 10 00:40:50.120065 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Jul 10 00:40:50.143495 augenrules[1609]: No rules
Jul 10 00:40:50.145554 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Jul 10 00:40:50.147170 sudo[1587]: pam_unix(sudo:session): session closed for user root
Jul 10 00:40:50.148966 sshd[1584]: pam_unix(sshd:session): session closed for user core
Jul 10 00:40:50.163894 systemd[1]: sshd@5-10.0.0.128:22-10.0.0.1:53670.service: Deactivated successfully.
Jul 10 00:40:50.165538 systemd[1]: session-6.scope: Deactivated successfully.
Jul 10 00:40:50.166859 systemd-logind[1428]: Session 6 logged out. Waiting for processes to exit.
Jul 10 00:40:50.176753 systemd[1]: Started sshd@6-10.0.0.128:22-10.0.0.1:53678.service - OpenSSH per-connection server daemon (10.0.0.1:53678).
Jul 10 00:40:50.177532 systemd-logind[1428]: Removed session 6.
Jul 10 00:40:50.209677 sshd[1617]: Accepted publickey for core from 10.0.0.1 port 53678 ssh2: RSA SHA256:6ip12YJRxsd1pENU5FRdLlGNHGMqkKNn+D5B7RGN6xs Jul 10 00:40:50.210878 sshd[1617]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 00:40:50.214454 systemd-logind[1428]: New session 7 of user core. Jul 10 00:40:50.222622 systemd[1]: Started session-7.scope - Session 7 of User core. Jul 10 00:40:50.273916 sudo[1620]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jul 10 00:40:50.274200 sudo[1620]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 10 00:40:50.615704 systemd[1]: Starting docker.service - Docker Application Container Engine... Jul 10 00:40:50.615845 (dockerd)[1638]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jul 10 00:40:50.871035 dockerd[1638]: time="2025-07-10T00:40:50.870899743Z" level=info msg="Starting up" Jul 10 00:40:51.066419 dockerd[1638]: time="2025-07-10T00:40:51.066168369Z" level=info msg="Loading containers: start." Jul 10 00:40:51.153506 kernel: Initializing XFRM netlink socket Jul 10 00:40:51.216465 systemd-networkd[1385]: docker0: Link UP Jul 10 00:40:51.231778 dockerd[1638]: time="2025-07-10T00:40:51.231716558Z" level=info msg="Loading containers: done." Jul 10 00:40:51.242903 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3948172779-merged.mount: Deactivated successfully. 
Jul 10 00:40:51.245007 dockerd[1638]: time="2025-07-10T00:40:51.244943508Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jul 10 00:40:51.245086 dockerd[1638]: time="2025-07-10T00:40:51.245063301Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Jul 10 00:40:51.245201 dockerd[1638]: time="2025-07-10T00:40:51.245170259Z" level=info msg="Daemon has completed initialization" Jul 10 00:40:51.277643 dockerd[1638]: time="2025-07-10T00:40:51.277516437Z" level=info msg="API listen on /run/docker.sock" Jul 10 00:40:51.277741 systemd[1]: Started docker.service - Docker Application Container Engine. Jul 10 00:40:51.758478 containerd[1443]: time="2025-07-10T00:40:51.758438519Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.2\"" Jul 10 00:40:52.493726 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3912385473.mount: Deactivated successfully. 
Jul 10 00:40:53.469101 containerd[1443]: time="2025-07-10T00:40:53.469040742Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:40:53.469466 containerd[1443]: time="2025-07-10T00:40:53.469365142Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.2: active requests=0, bytes read=27351718" Jul 10 00:40:53.470399 containerd[1443]: time="2025-07-10T00:40:53.470355896Z" level=info msg="ImageCreate event name:\"sha256:04ac773cca35cc457f24a6501b6b308d63a2cddd1aec14fe95559bccca3010a4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:40:53.474210 containerd[1443]: time="2025-07-10T00:40:53.474146335Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:e8ae58675899e946fabe38425f2b3bfd33120b7930d05b5898de97c81a7f6137\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:40:53.475229 containerd[1443]: time="2025-07-10T00:40:53.475180272Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.2\" with image id \"sha256:04ac773cca35cc457f24a6501b6b308d63a2cddd1aec14fe95559bccca3010a4\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.2\", repo digest \"registry.k8s.io/kube-apiserver@sha256:e8ae58675899e946fabe38425f2b3bfd33120b7930d05b5898de97c81a7f6137\", size \"27348516\" in 1.716688295s" Jul 10 00:40:53.475229 containerd[1443]: time="2025-07-10T00:40:53.475218618Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.2\" returns image reference \"sha256:04ac773cca35cc457f24a6501b6b308d63a2cddd1aec14fe95559bccca3010a4\"" Jul 10 00:40:53.478332 containerd[1443]: time="2025-07-10T00:40:53.478296880Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.2\"" Jul 10 00:40:54.466505 containerd[1443]: time="2025-07-10T00:40:54.466443961Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.2\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:40:54.471315 containerd[1443]: time="2025-07-10T00:40:54.471275071Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.2: active requests=0, bytes read=23537625" Jul 10 00:40:54.472500 containerd[1443]: time="2025-07-10T00:40:54.472058430Z" level=info msg="ImageCreate event name:\"sha256:99a259072231375ad69a369cdf5620d60cdff72d450951c603fad8a94667af65\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:40:54.476118 containerd[1443]: time="2025-07-10T00:40:54.476070274Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:2236e72a4be5dcc9c04600353ff8849db1557f5364947c520ff05471ae719081\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:40:54.477421 containerd[1443]: time="2025-07-10T00:40:54.477384963Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.2\" with image id \"sha256:99a259072231375ad69a369cdf5620d60cdff72d450951c603fad8a94667af65\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.2\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:2236e72a4be5dcc9c04600353ff8849db1557f5364947c520ff05471ae719081\", size \"25092541\" in 999.050936ms" Jul 10 00:40:54.477461 containerd[1443]: time="2025-07-10T00:40:54.477421670Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.2\" returns image reference \"sha256:99a259072231375ad69a369cdf5620d60cdff72d450951c603fad8a94667af65\"" Jul 10 00:40:54.477901 containerd[1443]: time="2025-07-10T00:40:54.477869709Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.2\"" Jul 10 00:40:55.462511 containerd[1443]: time="2025-07-10T00:40:55.462429624Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:40:55.463219 containerd[1443]: time="2025-07-10T00:40:55.462960920Z" 
level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.2: active requests=0, bytes read=18293517" Jul 10 00:40:55.463941 containerd[1443]: time="2025-07-10T00:40:55.463908671Z" level=info msg="ImageCreate event name:\"sha256:bb3da57746ca4726b669d35145eb9b4085643c61bbc80b9df3bf1e6021ba9eaf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:40:55.466952 containerd[1443]: time="2025-07-10T00:40:55.466919746Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:304c28303133be7d927973bc9bd6c83945b3735c59d283c25b63d5b9ed53bca3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:40:55.468273 containerd[1443]: time="2025-07-10T00:40:55.468130007Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.2\" with image id \"sha256:bb3da57746ca4726b669d35145eb9b4085643c61bbc80b9df3bf1e6021ba9eaf\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.2\", repo digest \"registry.k8s.io/kube-scheduler@sha256:304c28303133be7d927973bc9bd6c83945b3735c59d283c25b63d5b9ed53bca3\", size \"19848451\" in 990.22579ms" Jul 10 00:40:55.468273 containerd[1443]: time="2025-07-10T00:40:55.468165914Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.2\" returns image reference \"sha256:bb3da57746ca4726b669d35145eb9b4085643c61bbc80b9df3bf1e6021ba9eaf\"" Jul 10 00:40:55.468894 containerd[1443]: time="2025-07-10T00:40:55.468606881Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.2\"" Jul 10 00:40:56.509806 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3764812709.mount: Deactivated successfully. Jul 10 00:40:56.510855 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jul 10 00:40:56.521734 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 10 00:40:56.627620 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jul 10 00:40:56.632448 (kubelet)[1865]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 10 00:40:56.687071 kubelet[1865]: E0710 00:40:56.686969 1865 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 10 00:40:56.690566 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 10 00:40:56.690715 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 10 00:40:57.006991 containerd[1443]: time="2025-07-10T00:40:57.006751426Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:40:57.008361 containerd[1443]: time="2025-07-10T00:40:57.008308919Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.2: active requests=0, bytes read=28199474" Jul 10 00:40:57.009035 containerd[1443]: time="2025-07-10T00:40:57.008963186Z" level=info msg="ImageCreate event name:\"sha256:c26522e54bad2e6bfbb1bf11500833c94433076a3fa38436a2ec496a422c5455\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:40:57.011546 containerd[1443]: time="2025-07-10T00:40:57.011503000Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:4796ef3e43efa5ed2a5b015c18f81d3c2fe3aea36f555ea643cc01827eb65e51\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:40:57.012047 containerd[1443]: time="2025-07-10T00:40:57.012015992Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.2\" with image id \"sha256:c26522e54bad2e6bfbb1bf11500833c94433076a3fa38436a2ec496a422c5455\", repo tag \"registry.k8s.io/kube-proxy:v1.33.2\", repo digest 
\"registry.k8s.io/kube-proxy@sha256:4796ef3e43efa5ed2a5b015c18f81d3c2fe3aea36f555ea643cc01827eb65e51\", size \"28198491\" in 1.543371165s" Jul 10 00:40:57.012083 containerd[1443]: time="2025-07-10T00:40:57.012049821Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.2\" returns image reference \"sha256:c26522e54bad2e6bfbb1bf11500833c94433076a3fa38436a2ec496a422c5455\"" Jul 10 00:40:57.012647 containerd[1443]: time="2025-07-10T00:40:57.012487439Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" Jul 10 00:40:57.622911 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3309650186.mount: Deactivated successfully. Jul 10 00:40:58.313002 containerd[1443]: time="2025-07-10T00:40:58.312956595Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:40:58.314226 containerd[1443]: time="2025-07-10T00:40:58.314192086Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=19152119" Jul 10 00:40:58.314813 containerd[1443]: time="2025-07-10T00:40:58.314762746Z" level=info msg="ImageCreate event name:\"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:40:58.318578 containerd[1443]: time="2025-07-10T00:40:58.318540434Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:40:58.319236 containerd[1443]: time="2025-07-10T00:40:58.319192868Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest 
\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"19148915\" in 1.306674559s" Jul 10 00:40:58.319275 containerd[1443]: time="2025-07-10T00:40:58.319233016Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\"" Jul 10 00:40:58.319849 containerd[1443]: time="2025-07-10T00:40:58.319732338Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jul 10 00:40:58.753739 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3427663132.mount: Deactivated successfully. Jul 10 00:40:58.758432 containerd[1443]: time="2025-07-10T00:40:58.758383434Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:40:58.759103 containerd[1443]: time="2025-07-10T00:40:58.759061420Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268705" Jul 10 00:40:58.759759 containerd[1443]: time="2025-07-10T00:40:58.759722892Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:40:58.762212 containerd[1443]: time="2025-07-10T00:40:58.762182756Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:40:58.762970 containerd[1443]: time="2025-07-10T00:40:58.762936838Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 443.175549ms" Jul 10 
00:40:58.763014 containerd[1443]: time="2025-07-10T00:40:58.762969908Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Jul 10 00:40:58.763419 containerd[1443]: time="2025-07-10T00:40:58.763391855Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\"" Jul 10 00:40:59.183616 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4211978746.mount: Deactivated successfully. Jul 10 00:41:00.912343 containerd[1443]: time="2025-07-10T00:41:00.912284052Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:41:00.912819 containerd[1443]: time="2025-07-10T00:41:00.912788942Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=69334601" Jul 10 00:41:00.913713 containerd[1443]: time="2025-07-10T00:41:00.913658725Z" level=info msg="ImageCreate event name:\"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:41:00.917006 containerd[1443]: time="2025-07-10T00:41:00.916950831Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:41:00.918407 containerd[1443]: time="2025-07-10T00:41:00.918344978Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"70026017\" in 2.154920294s" Jul 10 00:41:00.918407 containerd[1443]: time="2025-07-10T00:41:00.918384646Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image 
reference \"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\"" Jul 10 00:41:05.571069 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 10 00:41:05.582718 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 10 00:41:05.603864 systemd[1]: Reloading requested from client PID 2015 ('systemctl') (unit session-7.scope)... Jul 10 00:41:05.603886 systemd[1]: Reloading... Jul 10 00:41:05.676581 zram_generator::config[2054]: No configuration found. Jul 10 00:41:05.823138 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 10 00:41:05.888663 systemd[1]: Reloading finished in 284 ms. Jul 10 00:41:05.934272 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jul 10 00:41:05.938055 systemd[1]: kubelet.service: Deactivated successfully. Jul 10 00:41:05.938420 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 10 00:41:05.940297 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 10 00:41:06.057140 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 10 00:41:06.063298 (kubelet)[2101]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 10 00:41:06.102561 kubelet[2101]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 10 00:41:06.102561 kubelet[2101]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. 
Jul 10 00:41:06.102561 kubelet[2101]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 10 00:41:06.102561 kubelet[2101]: I0710 00:41:06.101329 2101 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 10 00:41:06.835713 kubelet[2101]: I0710 00:41:06.835663 2101 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Jul 10 00:41:06.835713 kubelet[2101]: I0710 00:41:06.835696 2101 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 10 00:41:06.835989 kubelet[2101]: I0710 00:41:06.835950 2101 server.go:956] "Client rotation is on, will bootstrap in background" Jul 10 00:41:06.890540 kubelet[2101]: E0710 00:41:06.890489 2101 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.128:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.128:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Jul 10 00:41:06.891655 kubelet[2101]: I0710 00:41:06.891515 2101 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 10 00:41:06.900297 kubelet[2101]: E0710 00:41:06.900251 2101 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jul 10 00:41:06.900297 kubelet[2101]: I0710 00:41:06.900295 2101 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." 
Jul 10 00:41:06.903841 kubelet[2101]: I0710 00:41:06.903814 2101 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jul 10 00:41:06.904835 kubelet[2101]: I0710 00:41:06.904790 2101 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 10 00:41:06.904991 kubelet[2101]: I0710 00:41:06.904833 2101 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jul 10 
00:41:06.905073 kubelet[2101]: I0710 00:41:06.905053 2101 topology_manager.go:138] "Creating topology manager with none policy" Jul 10 00:41:06.905073 kubelet[2101]: I0710 00:41:06.905061 2101 container_manager_linux.go:303] "Creating device plugin manager" Jul 10 00:41:06.905274 kubelet[2101]: I0710 00:41:06.905250 2101 state_mem.go:36] "Initialized new in-memory state store" Jul 10 00:41:06.907789 kubelet[2101]: I0710 00:41:06.907769 2101 kubelet.go:480] "Attempting to sync node with API server" Jul 10 00:41:06.907841 kubelet[2101]: I0710 00:41:06.907798 2101 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 10 00:41:06.907841 kubelet[2101]: I0710 00:41:06.907823 2101 kubelet.go:386] "Adding apiserver pod source" Jul 10 00:41:06.908920 kubelet[2101]: I0710 00:41:06.908833 2101 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 10 00:41:06.911746 kubelet[2101]: I0710 00:41:06.911690 2101 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jul 10 00:41:06.912530 kubelet[2101]: I0710 00:41:06.912435 2101 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Jul 10 00:41:06.912612 kubelet[2101]: W0710 00:41:06.912586 2101 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Jul 10 00:41:06.913848 kubelet[2101]: E0710 00:41:06.913768 2101 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.128:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.128:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jul 10 00:41:06.915111 kubelet[2101]: E0710 00:41:06.915061 2101 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.128:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.128:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jul 10 00:41:06.915236 kubelet[2101]: I0710 00:41:06.915216 2101 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jul 10 00:41:06.915267 kubelet[2101]: I0710 00:41:06.915257 2101 server.go:1289] "Started kubelet" Jul 10 00:41:06.915833 kubelet[2101]: I0710 00:41:06.915330 2101 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 10 00:41:06.915833 kubelet[2101]: I0710 00:41:06.915655 2101 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 10 00:41:06.915833 kubelet[2101]: I0710 00:41:06.915700 2101 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Jul 10 00:41:06.916681 kubelet[2101]: I0710 00:41:06.916378 2101 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 10 00:41:06.917436 kubelet[2101]: I0710 00:41:06.917269 2101 server.go:317] "Adding debug handlers to kubelet server" Jul 10 00:41:06.918622 kubelet[2101]: E0710 00:41:06.918300 2101 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 10 00:41:06.918622 kubelet[2101]: I0710 00:41:06.918503 2101 volume_manager.go:297] 
"Starting Kubelet Volume Manager" Jul 10 00:41:06.918707 kubelet[2101]: I0710 00:41:06.918698 2101 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jul 10 00:41:06.918771 kubelet[2101]: I0710 00:41:06.918747 2101 reconciler.go:26] "Reconciler: start to sync state" Jul 10 00:41:06.919099 kubelet[2101]: I0710 00:41:06.919074 2101 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 10 00:41:06.919641 kubelet[2101]: E0710 00:41:06.919614 2101 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.128:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.128:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jul 10 00:41:06.919844 kubelet[2101]: E0710 00:41:06.918190 2101 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.128:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.128:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1850bd00f894294f default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-07-10 00:41:06.915232079 +0000 UTC m=+0.848396900,LastTimestamp:2025-07-10 00:41:06.915232079 +0000 UTC m=+0.848396900,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jul 10 00:41:06.920175 kubelet[2101]: I0710 00:41:06.920145 2101 factory.go:223] Registration of the systemd container factory successfully Jul 10 00:41:06.920239 kubelet[2101]: I0710 00:41:06.920226 2101 
factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 10 00:41:06.920550 kubelet[2101]: E0710 00:41:06.920519 2101 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.128:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.128:6443: connect: connection refused" interval="200ms" Jul 10 00:41:06.921614 kubelet[2101]: E0710 00:41:06.921536 2101 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 10 00:41:06.921870 kubelet[2101]: I0710 00:41:06.921841 2101 factory.go:223] Registration of the containerd container factory successfully Jul 10 00:41:06.934243 kubelet[2101]: I0710 00:41:06.934010 2101 cpu_manager.go:221] "Starting CPU manager" policy="none" Jul 10 00:41:06.934243 kubelet[2101]: I0710 00:41:06.934027 2101 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jul 10 00:41:06.934243 kubelet[2101]: I0710 00:41:06.934045 2101 state_mem.go:36] "Initialized new in-memory state store" Jul 10 00:41:06.934632 kubelet[2101]: I0710 00:41:06.934608 2101 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Jul 10 00:41:06.935731 kubelet[2101]: I0710 00:41:06.935712 2101 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Jul 10 00:41:06.935813 kubelet[2101]: I0710 00:41:06.935805 2101 status_manager.go:230] "Starting to sync pod status with apiserver" Jul 10 00:41:06.936014 kubelet[2101]: I0710 00:41:06.935999 2101 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Jul 10 00:41:06.936084 kubelet[2101]: I0710 00:41:06.936074 2101 kubelet.go:2436] "Starting kubelet main sync loop" Jul 10 00:41:06.936176 kubelet[2101]: E0710 00:41:06.936158 2101 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 10 00:41:06.936763 kubelet[2101]: E0710 00:41:06.936739 2101 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.128:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.128:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jul 10 00:41:06.937424 kubelet[2101]: I0710 00:41:06.937401 2101 policy_none.go:49] "None policy: Start" Jul 10 00:41:06.937807 kubelet[2101]: I0710 00:41:06.937560 2101 memory_manager.go:186] "Starting memorymanager" policy="None" Jul 10 00:41:06.937807 kubelet[2101]: I0710 00:41:06.937594 2101 state_mem.go:35] "Initializing new in-memory state store" Jul 10 00:41:06.942788 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jul 10 00:41:06.955845 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jul 10 00:41:06.958928 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Jul 10 00:41:06.966170 kubelet[2101]: E0710 00:41:06.966142 2101 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jul 10 00:41:06.966374 kubelet[2101]: I0710 00:41:06.966339 2101 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 10 00:41:06.966374 kubelet[2101]: I0710 00:41:06.966358 2101 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 10 00:41:06.966842 kubelet[2101]: I0710 00:41:06.966576 2101 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 10 00:41:06.967448 kubelet[2101]: E0710 00:41:06.967421 2101 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jul 10 00:41:06.967528 kubelet[2101]: E0710 00:41:06.967494 2101 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jul 10 00:41:07.044583 systemd[1]: Created slice kubepods-burstable-pod834ee54f1daa06092e339273649eb5ea.slice - libcontainer container kubepods-burstable-pod834ee54f1daa06092e339273649eb5ea.slice. Jul 10 00:41:07.060195 kubelet[2101]: E0710 00:41:07.060154 2101 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 10 00:41:07.064114 systemd[1]: Created slice kubepods-burstable-pode61f47be2af0678118d1991e48f15f08.slice - libcontainer container kubepods-burstable-pode61f47be2af0678118d1991e48f15f08.slice. 
Jul 10 00:41:07.067930 kubelet[2101]: I0710 00:41:07.067901 2101 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jul 10 00:41:07.068356 kubelet[2101]: E0710 00:41:07.068330 2101 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.128:6443/api/v1/nodes\": dial tcp 10.0.0.128:6443: connect: connection refused" node="localhost" Jul 10 00:41:07.075009 kubelet[2101]: E0710 00:41:07.074988 2101 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 10 00:41:07.076331 systemd[1]: Created slice kubepods-burstable-pod84b858ec27c8b2738b1d9ff9927e0dcb.slice - libcontainer container kubepods-burstable-pod84b858ec27c8b2738b1d9ff9927e0dcb.slice. Jul 10 00:41:07.077824 kubelet[2101]: E0710 00:41:07.077783 2101 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 10 00:41:07.121082 kubelet[2101]: E0710 00:41:07.120978 2101 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.128:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.128:6443: connect: connection refused" interval="400ms" Jul 10 00:41:07.220357 kubelet[2101]: I0710 00:41:07.220303 2101 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/84b858ec27c8b2738b1d9ff9927e0dcb-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"84b858ec27c8b2738b1d9ff9927e0dcb\") " pod="kube-system/kube-controller-manager-localhost" Jul 10 00:41:07.220357 kubelet[2101]: I0710 00:41:07.220347 2101 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: 
\"kubernetes.io/host-path/84b858ec27c8b2738b1d9ff9927e0dcb-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"84b858ec27c8b2738b1d9ff9927e0dcb\") " pod="kube-system/kube-controller-manager-localhost" Jul 10 00:41:07.220539 kubelet[2101]: I0710 00:41:07.220374 2101 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/834ee54f1daa06092e339273649eb5ea-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"834ee54f1daa06092e339273649eb5ea\") " pod="kube-system/kube-scheduler-localhost" Jul 10 00:41:07.220539 kubelet[2101]: I0710 00:41:07.220391 2101 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e61f47be2af0678118d1991e48f15f08-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"e61f47be2af0678118d1991e48f15f08\") " pod="kube-system/kube-apiserver-localhost" Jul 10 00:41:07.220539 kubelet[2101]: I0710 00:41:07.220408 2101 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e61f47be2af0678118d1991e48f15f08-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"e61f47be2af0678118d1991e48f15f08\") " pod="kube-system/kube-apiserver-localhost" Jul 10 00:41:07.220539 kubelet[2101]: I0710 00:41:07.220451 2101 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/84b858ec27c8b2738b1d9ff9927e0dcb-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"84b858ec27c8b2738b1d9ff9927e0dcb\") " pod="kube-system/kube-controller-manager-localhost" Jul 10 00:41:07.220539 kubelet[2101]: I0710 00:41:07.220530 2101 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: 
\"kubernetes.io/host-path/84b858ec27c8b2738b1d9ff9927e0dcb-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"84b858ec27c8b2738b1d9ff9927e0dcb\") " pod="kube-system/kube-controller-manager-localhost" Jul 10 00:41:07.220689 kubelet[2101]: I0710 00:41:07.220559 2101 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e61f47be2af0678118d1991e48f15f08-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"e61f47be2af0678118d1991e48f15f08\") " pod="kube-system/kube-apiserver-localhost" Jul 10 00:41:07.220689 kubelet[2101]: I0710 00:41:07.220577 2101 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/84b858ec27c8b2738b1d9ff9927e0dcb-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"84b858ec27c8b2738b1d9ff9927e0dcb\") " pod="kube-system/kube-controller-manager-localhost" Jul 10 00:41:07.270400 kubelet[2101]: I0710 00:41:07.270369 2101 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jul 10 00:41:07.270769 kubelet[2101]: E0710 00:41:07.270733 2101 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.128:6443/api/v1/nodes\": dial tcp 10.0.0.128:6443: connect: connection refused" node="localhost" Jul 10 00:41:07.361342 kubelet[2101]: E0710 00:41:07.361255 2101 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:41:07.361911 containerd[1443]: time="2025-07-10T00:41:07.361842751Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:834ee54f1daa06092e339273649eb5ea,Namespace:kube-system,Attempt:0,}" Jul 10 00:41:07.376171 kubelet[2101]: E0710 00:41:07.376064 2101 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were 
exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:41:07.376813 containerd[1443]: time="2025-07-10T00:41:07.376449849Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:e61f47be2af0678118d1991e48f15f08,Namespace:kube-system,Attempt:0,}" Jul 10 00:41:07.378194 kubelet[2101]: E0710 00:41:07.378163 2101 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:41:07.378839 containerd[1443]: time="2025-07-10T00:41:07.378609577Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:84b858ec27c8b2738b1d9ff9927e0dcb,Namespace:kube-system,Attempt:0,}" Jul 10 00:41:07.522148 kubelet[2101]: E0710 00:41:07.522097 2101 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.128:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.128:6443: connect: connection refused" interval="800ms" Jul 10 00:41:07.672112 kubelet[2101]: I0710 00:41:07.671849 2101 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jul 10 00:41:07.672190 kubelet[2101]: E0710 00:41:07.672146 2101 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.128:6443/api/v1/nodes\": dial tcp 10.0.0.128:6443: connect: connection refused" node="localhost" Jul 10 00:41:07.859726 kubelet[2101]: E0710 00:41:07.859677 2101 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.128:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.128:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jul 10 00:41:07.922454 systemd[1]: 
var-lib-containerd-tmpmounts-containerd\x2dmount546023867.mount: Deactivated successfully. Jul 10 00:41:07.926711 containerd[1443]: time="2025-07-10T00:41:07.926664299Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 10 00:41:07.927745 containerd[1443]: time="2025-07-10T00:41:07.927698974Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269175" Jul 10 00:41:07.928316 containerd[1443]: time="2025-07-10T00:41:07.928279836Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 10 00:41:07.929309 containerd[1443]: time="2025-07-10T00:41:07.929272401Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 10 00:41:07.929946 containerd[1443]: time="2025-07-10T00:41:07.929914649Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jul 10 00:41:07.930631 containerd[1443]: time="2025-07-10T00:41:07.930600126Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jul 10 00:41:07.930777 containerd[1443]: time="2025-07-10T00:41:07.930741053Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 10 00:41:07.934266 containerd[1443]: time="2025-07-10T00:41:07.934221668Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" 
value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 10 00:41:07.935208 containerd[1443]: time="2025-07-10T00:41:07.935175122Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 573.219278ms" Jul 10 00:41:07.935916 containerd[1443]: time="2025-07-10T00:41:07.935876596Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 557.198915ms" Jul 10 00:41:07.937922 containerd[1443]: time="2025-07-10T00:41:07.937886440Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 561.344972ms" Jul 10 00:41:08.062447 kubelet[2101]: E0710 00:41:08.062400 2101 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.128:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.128:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jul 10 00:41:08.081235 containerd[1443]: time="2025-07-10T00:41:08.081042824Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 10 00:41:08.081696 containerd[1443]: time="2025-07-10T00:41:08.081637047Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 10 00:41:08.081696 containerd[1443]: time="2025-07-10T00:41:08.081687196Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 10 00:41:08.082286 containerd[1443]: time="2025-07-10T00:41:08.081858436Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 10 00:41:08.082286 containerd[1443]: time="2025-07-10T00:41:08.081938298Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 10 00:41:08.082286 containerd[1443]: time="2025-07-10T00:41:08.081983848Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 10 00:41:08.082577 containerd[1443]: time="2025-07-10T00:41:08.082436144Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 10 00:41:08.082977 containerd[1443]: time="2025-07-10T00:41:08.082782544Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 10 00:41:08.082977 containerd[1443]: time="2025-07-10T00:41:08.082808138Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 10 00:41:08.082977 containerd[1443]: time="2025-07-10T00:41:08.082901237Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 10 00:41:08.083117 containerd[1443]: time="2025-07-10T00:41:08.083047763Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 10 00:41:08.083213 containerd[1443]: time="2025-07-10T00:41:08.083178453Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 10 00:41:08.106653 systemd[1]: Started cri-containerd-336d8da558ddfe7eb82b8ffeb9e3b1411594f2034bae4eac672c13fbe09638a9.scope - libcontainer container 336d8da558ddfe7eb82b8ffeb9e3b1411594f2034bae4eac672c13fbe09638a9. Jul 10 00:41:08.107761 systemd[1]: Started cri-containerd-9a70cc33cc9af6304bea9f797e40c526ef2a7a71c9393b504b68edf9a3deecdd.scope - libcontainer container 9a70cc33cc9af6304bea9f797e40c526ef2a7a71c9393b504b68edf9a3deecdd. Jul 10 00:41:08.112632 systemd[1]: Started cri-containerd-456752fbb61f5a9849f889c17b7fbb21e77ca22d7dca4df052c5d1916d08dc8a.scope - libcontainer container 456752fbb61f5a9849f889c17b7fbb21e77ca22d7dca4df052c5d1916d08dc8a. 
Jul 10 00:41:08.139523 containerd[1443]: time="2025-07-10T00:41:08.139444456Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:84b858ec27c8b2738b1d9ff9927e0dcb,Namespace:kube-system,Attempt:0,} returns sandbox id \"336d8da558ddfe7eb82b8ffeb9e3b1411594f2034bae4eac672c13fbe09638a9\"" Jul 10 00:41:08.139985 containerd[1443]: time="2025-07-10T00:41:08.139751786Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:e61f47be2af0678118d1991e48f15f08,Namespace:kube-system,Attempt:0,} returns sandbox id \"9a70cc33cc9af6304bea9f797e40c526ef2a7a71c9393b504b68edf9a3deecdd\"" Jul 10 00:41:08.142311 kubelet[2101]: E0710 00:41:08.142148 2101 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:41:08.142311 kubelet[2101]: E0710 00:41:08.142197 2101 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:41:08.146112 containerd[1443]: time="2025-07-10T00:41:08.145985994Z" level=info msg="CreateContainer within sandbox \"336d8da558ddfe7eb82b8ffeb9e3b1411594f2034bae4eac672c13fbe09638a9\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jul 10 00:41:08.146803 containerd[1443]: time="2025-07-10T00:41:08.146748020Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:834ee54f1daa06092e339273649eb5ea,Namespace:kube-system,Attempt:0,} returns sandbox id \"456752fbb61f5a9849f889c17b7fbb21e77ca22d7dca4df052c5d1916d08dc8a\"" Jul 10 00:41:08.147657 kubelet[2101]: E0710 00:41:08.147632 2101 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:41:08.148180 containerd[1443]: 
time="2025-07-10T00:41:08.148153057Z" level=info msg="CreateContainer within sandbox \"9a70cc33cc9af6304bea9f797e40c526ef2a7a71c9393b504b68edf9a3deecdd\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jul 10 00:41:08.150631 containerd[1443]: time="2025-07-10T00:41:08.150593777Z" level=info msg="CreateContainer within sandbox \"456752fbb61f5a9849f889c17b7fbb21e77ca22d7dca4df052c5d1916d08dc8a\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jul 10 00:41:08.161168 containerd[1443]: time="2025-07-10T00:41:08.161126479Z" level=info msg="CreateContainer within sandbox \"336d8da558ddfe7eb82b8ffeb9e3b1411594f2034bae4eac672c13fbe09638a9\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"5bcc7fe989729f7134fc27e2a9170b8df2391b27d131f32a5f62a462f0b1bbe1\"" Jul 10 00:41:08.162158 containerd[1443]: time="2025-07-10T00:41:08.162129488Z" level=info msg="StartContainer for \"5bcc7fe989729f7134fc27e2a9170b8df2391b27d131f32a5f62a462f0b1bbe1\"" Jul 10 00:41:08.166818 containerd[1443]: time="2025-07-10T00:41:08.166761265Z" level=info msg="CreateContainer within sandbox \"456752fbb61f5a9849f889c17b7fbb21e77ca22d7dca4df052c5d1916d08dc8a\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"bafb61407bd6852c7af5f1edd80f85bf64e5fc9ae9b705f91672f998041c6193\"" Jul 10 00:41:08.167435 containerd[1443]: time="2025-07-10T00:41:08.167240595Z" level=info msg="StartContainer for \"bafb61407bd6852c7af5f1edd80f85bf64e5fc9ae9b705f91672f998041c6193\"" Jul 10 00:41:08.167435 containerd[1443]: time="2025-07-10T00:41:08.167338053Z" level=info msg="CreateContainer within sandbox \"9a70cc33cc9af6304bea9f797e40c526ef2a7a71c9393b504b68edf9a3deecdd\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"6cf71aa9a2c3d07cc83931b8bef58bb019237e5f75ef8dcae479bb13712b67a7\"" Jul 10 00:41:08.167838 containerd[1443]: time="2025-07-10T00:41:08.167810424Z" level=info msg="StartContainer for 
\"6cf71aa9a2c3d07cc83931b8bef58bb019237e5f75ef8dcae479bb13712b67a7\"" Jul 10 00:41:08.193628 systemd[1]: Started cri-containerd-5bcc7fe989729f7134fc27e2a9170b8df2391b27d131f32a5f62a462f0b1bbe1.scope - libcontainer container 5bcc7fe989729f7134fc27e2a9170b8df2391b27d131f32a5f62a462f0b1bbe1. Jul 10 00:41:08.198760 systemd[1]: Started cri-containerd-6cf71aa9a2c3d07cc83931b8bef58bb019237e5f75ef8dcae479bb13712b67a7.scope - libcontainer container 6cf71aa9a2c3d07cc83931b8bef58bb019237e5f75ef8dcae479bb13712b67a7. Jul 10 00:41:08.199775 systemd[1]: Started cri-containerd-bafb61407bd6852c7af5f1edd80f85bf64e5fc9ae9b705f91672f998041c6193.scope - libcontainer container bafb61407bd6852c7af5f1edd80f85bf64e5fc9ae9b705f91672f998041c6193. Jul 10 00:41:08.210764 kubelet[2101]: E0710 00:41:08.210700 2101 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.128:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.128:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jul 10 00:41:08.230637 containerd[1443]: time="2025-07-10T00:41:08.230249930Z" level=info msg="StartContainer for \"5bcc7fe989729f7134fc27e2a9170b8df2391b27d131f32a5f62a462f0b1bbe1\" returns successfully" Jul 10 00:41:08.243419 containerd[1443]: time="2025-07-10T00:41:08.243314970Z" level=info msg="StartContainer for \"6cf71aa9a2c3d07cc83931b8bef58bb019237e5f75ef8dcae479bb13712b67a7\" returns successfully" Jul 10 00:41:08.249864 containerd[1443]: time="2025-07-10T00:41:08.249005704Z" level=info msg="StartContainer for \"bafb61407bd6852c7af5f1edd80f85bf64e5fc9ae9b705f91672f998041c6193\" returns successfully" Jul 10 00:41:08.323034 kubelet[2101]: E0710 00:41:08.322943 2101 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.128:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 
10.0.0.128:6443: connect: connection refused" interval="1.6s" Jul 10 00:41:08.390114 kubelet[2101]: E0710 00:41:08.390027 2101 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.128:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.128:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jul 10 00:41:08.474187 kubelet[2101]: I0710 00:41:08.474084 2101 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jul 10 00:41:08.945200 kubelet[2101]: E0710 00:41:08.945003 2101 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 10 00:41:08.945672 kubelet[2101]: E0710 00:41:08.945634 2101 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:41:08.949512 kubelet[2101]: E0710 00:41:08.949075 2101 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 10 00:41:08.949512 kubelet[2101]: E0710 00:41:08.949175 2101 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:41:08.949512 kubelet[2101]: E0710 00:41:08.949268 2101 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 10 00:41:08.949512 kubelet[2101]: E0710 00:41:08.949393 2101 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:41:09.945677 kubelet[2101]: E0710 
00:41:09.945634 2101 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Jul 10 00:41:09.952868 kubelet[2101]: E0710 00:41:09.952340 2101 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 10 00:41:09.952868 kubelet[2101]: E0710 00:41:09.952461 2101 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:41:09.952868 kubelet[2101]: E0710 00:41:09.952726 2101 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 10 00:41:09.952868 kubelet[2101]: E0710 00:41:09.952805 2101 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:41:09.954226 kubelet[2101]: E0710 00:41:09.954202 2101 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 10 00:41:09.954523 kubelet[2101]: E0710 00:41:09.954421 2101 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:41:10.146555 kubelet[2101]: I0710 00:41:10.145981 2101 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Jul 10 00:41:10.146555 kubelet[2101]: E0710 00:41:10.146025 2101 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Jul 10 00:41:10.221320 kubelet[2101]: I0710 00:41:10.220626 2101 kubelet.go:3309] "Creating a mirror pod for static pod" 
pod="kube-system/kube-apiserver-localhost" Jul 10 00:41:10.226810 kubelet[2101]: E0710 00:41:10.226435 2101 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Jul 10 00:41:10.226810 kubelet[2101]: I0710 00:41:10.226484 2101 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Jul 10 00:41:10.228672 kubelet[2101]: E0710 00:41:10.228641 2101 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Jul 10 00:41:10.228672 kubelet[2101]: I0710 00:41:10.228672 2101 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jul 10 00:41:10.230149 kubelet[2101]: E0710 00:41:10.230115 2101 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Jul 10 00:41:10.913709 kubelet[2101]: I0710 00:41:10.913671 2101 apiserver.go:52] "Watching apiserver" Jul 10 00:41:10.919219 kubelet[2101]: I0710 00:41:10.919169 2101 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jul 10 00:41:12.042437 systemd[1]: Reloading requested from client PID 2390 ('systemctl') (unit session-7.scope)... Jul 10 00:41:12.042454 systemd[1]: Reloading... Jul 10 00:41:12.128507 zram_generator::config[2432]: No configuration found. Jul 10 00:41:12.217752 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
Jul 10 00:41:12.297724 systemd[1]: Reloading finished in 254 ms. Jul 10 00:41:12.331123 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jul 10 00:41:12.351788 systemd[1]: kubelet.service: Deactivated successfully. Jul 10 00:41:12.352055 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 10 00:41:12.352112 systemd[1]: kubelet.service: Consumed 1.263s CPU time, 129.7M memory peak, 0B memory swap peak. Jul 10 00:41:12.359842 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 10 00:41:12.458603 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 10 00:41:12.462903 (kubelet)[2471]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 10 00:41:12.499342 kubelet[2471]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 10 00:41:12.499342 kubelet[2471]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jul 10 00:41:12.499342 kubelet[2471]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jul 10 00:41:12.499342 kubelet[2471]: I0710 00:41:12.497670 2471 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 10 00:41:12.506086 kubelet[2471]: I0710 00:41:12.503094 2471 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Jul 10 00:41:12.506086 kubelet[2471]: I0710 00:41:12.503124 2471 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 10 00:41:12.506086 kubelet[2471]: I0710 00:41:12.503304 2471 server.go:956] "Client rotation is on, will bootstrap in background" Jul 10 00:41:12.506086 kubelet[2471]: I0710 00:41:12.504508 2471 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Jul 10 00:41:12.508631 kubelet[2471]: I0710 00:41:12.508078 2471 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 10 00:41:12.515128 kubelet[2471]: E0710 00:41:12.515089 2471 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jul 10 00:41:12.515237 kubelet[2471]: I0710 00:41:12.515223 2471 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jul 10 00:41:12.518678 kubelet[2471]: I0710 00:41:12.518645 2471 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jul 10 00:41:12.518920 kubelet[2471]: I0710 00:41:12.518892 2471 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 10 00:41:12.519166 kubelet[2471]: I0710 00:41:12.518920 2471 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jul 10 00:41:12.519276 kubelet[2471]: I0710 00:41:12.519186 2471 topology_manager.go:138] "Creating topology manager with none policy" Jul 10 00:41:12.519276 
kubelet[2471]: I0710 00:41:12.519198 2471 container_manager_linux.go:303] "Creating device plugin manager" Jul 10 00:41:12.519276 kubelet[2471]: I0710 00:41:12.519243 2471 state_mem.go:36] "Initialized new in-memory state store" Jul 10 00:41:12.519407 kubelet[2471]: I0710 00:41:12.519394 2471 kubelet.go:480] "Attempting to sync node with API server" Jul 10 00:41:12.519435 kubelet[2471]: I0710 00:41:12.519410 2471 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 10 00:41:12.519459 kubelet[2471]: I0710 00:41:12.519434 2471 kubelet.go:386] "Adding apiserver pod source" Jul 10 00:41:12.519459 kubelet[2471]: I0710 00:41:12.519448 2471 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 10 00:41:12.523501 kubelet[2471]: I0710 00:41:12.521102 2471 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jul 10 00:41:12.523501 kubelet[2471]: I0710 00:41:12.521721 2471 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Jul 10 00:41:12.528190 kubelet[2471]: I0710 00:41:12.528165 2471 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jul 10 00:41:12.528276 kubelet[2471]: I0710 00:41:12.528214 2471 server.go:1289] "Started kubelet" Jul 10 00:41:12.531491 kubelet[2471]: I0710 00:41:12.528434 2471 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Jul 10 00:41:12.531491 kubelet[2471]: I0710 00:41:12.528650 2471 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 10 00:41:12.531491 kubelet[2471]: I0710 00:41:12.528904 2471 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 10 00:41:12.531491 kubelet[2471]: I0710 00:41:12.529279 2471 server.go:317] "Adding debug handlers to kubelet server" Jul 10 00:41:12.531491 
kubelet[2471]: I0710 00:41:12.530799 2471 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 10 00:41:12.534370 kubelet[2471]: E0710 00:41:12.534345 2471 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 10 00:41:12.534370 kubelet[2471]: I0710 00:41:12.534375 2471 volume_manager.go:297] "Starting Kubelet Volume Manager" Jul 10 00:41:12.534578 kubelet[2471]: I0710 00:41:12.534558 2471 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jul 10 00:41:12.534694 kubelet[2471]: I0710 00:41:12.534672 2471 reconciler.go:26] "Reconciler: start to sync state" Jul 10 00:41:12.534694 kubelet[2471]: I0710 00:41:12.534670 2471 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 10 00:41:12.542803 kubelet[2471]: I0710 00:41:12.542767 2471 factory.go:223] Registration of the systemd container factory successfully Jul 10 00:41:12.543115 kubelet[2471]: I0710 00:41:12.543086 2471 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 10 00:41:12.547350 kubelet[2471]: I0710 00:41:12.547318 2471 factory.go:223] Registration of the containerd container factory successfully Jul 10 00:41:12.549118 kubelet[2471]: E0710 00:41:12.548066 2471 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 10 00:41:12.564378 kubelet[2471]: I0710 00:41:12.564332 2471 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Jul 10 00:41:12.566398 kubelet[2471]: I0710 00:41:12.566380 2471 kubelet_network_linux.go:49] "Initialized iptables rules." 
protocol="IPv6" Jul 10 00:41:12.566398 kubelet[2471]: I0710 00:41:12.566435 2471 status_manager.go:230] "Starting to sync pod status with apiserver" Jul 10 00:41:12.566398 kubelet[2471]: I0710 00:41:12.566453 2471 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jul 10 00:41:12.566398 kubelet[2471]: I0710 00:41:12.566461 2471 kubelet.go:2436] "Starting kubelet main sync loop" Jul 10 00:41:12.566398 kubelet[2471]: E0710 00:41:12.566511 2471 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 10 00:41:12.580549 kubelet[2471]: I0710 00:41:12.580523 2471 cpu_manager.go:221] "Starting CPU manager" policy="none" Jul 10 00:41:12.580700 kubelet[2471]: I0710 00:41:12.580672 2471 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jul 10 00:41:12.580759 kubelet[2471]: I0710 00:41:12.580751 2471 state_mem.go:36] "Initialized new in-memory state store" Jul 10 00:41:12.580947 kubelet[2471]: I0710 00:41:12.580932 2471 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jul 10 00:41:12.581033 kubelet[2471]: I0710 00:41:12.581010 2471 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jul 10 00:41:12.581084 kubelet[2471]: I0710 00:41:12.581077 2471 policy_none.go:49] "None policy: Start" Jul 10 00:41:12.581135 kubelet[2471]: I0710 00:41:12.581128 2471 memory_manager.go:186] "Starting memorymanager" policy="None" Jul 10 00:41:12.581188 kubelet[2471]: I0710 00:41:12.581181 2471 state_mem.go:35] "Initializing new in-memory state store" Jul 10 00:41:12.581353 kubelet[2471]: I0710 00:41:12.581339 2471 state_mem.go:75] "Updated machine memory state" Jul 10 00:41:12.585418 kubelet[2471]: E0710 00:41:12.585387 2471 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jul 10 00:41:12.585776 kubelet[2471]: I0710 
00:41:12.585595 2471 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 10 00:41:12.585776 kubelet[2471]: I0710 00:41:12.585615 2471 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 10 00:41:12.586272 kubelet[2471]: I0710 00:41:12.586230 2471 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 10 00:41:12.587823 kubelet[2471]: E0710 00:41:12.587763 2471 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jul 10 00:41:12.667711 kubelet[2471]: I0710 00:41:12.667600 2471 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jul 10 00:41:12.667711 kubelet[2471]: I0710 00:41:12.667646 2471 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Jul 10 00:41:12.667711 kubelet[2471]: I0710 00:41:12.667689 2471 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jul 10 00:41:12.690395 kubelet[2471]: I0710 00:41:12.690359 2471 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jul 10 00:41:12.696276 kubelet[2471]: I0710 00:41:12.696245 2471 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Jul 10 00:41:12.696397 kubelet[2471]: I0710 00:41:12.696328 2471 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Jul 10 00:41:12.735836 kubelet[2471]: I0710 00:41:12.735780 2471 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e61f47be2af0678118d1991e48f15f08-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"e61f47be2af0678118d1991e48f15f08\") " pod="kube-system/kube-apiserver-localhost" Jul 10 00:41:12.735836 kubelet[2471]: I0710 00:41:12.735827 
2471 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/84b858ec27c8b2738b1d9ff9927e0dcb-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"84b858ec27c8b2738b1d9ff9927e0dcb\") " pod="kube-system/kube-controller-manager-localhost" Jul 10 00:41:12.735996 kubelet[2471]: I0710 00:41:12.735851 2471 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e61f47be2af0678118d1991e48f15f08-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"e61f47be2af0678118d1991e48f15f08\") " pod="kube-system/kube-apiserver-localhost" Jul 10 00:41:12.735996 kubelet[2471]: I0710 00:41:12.735870 2471 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e61f47be2af0678118d1991e48f15f08-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"e61f47be2af0678118d1991e48f15f08\") " pod="kube-system/kube-apiserver-localhost" Jul 10 00:41:12.735996 kubelet[2471]: I0710 00:41:12.735931 2471 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/84b858ec27c8b2738b1d9ff9927e0dcb-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"84b858ec27c8b2738b1d9ff9927e0dcb\") " pod="kube-system/kube-controller-manager-localhost" Jul 10 00:41:12.735996 kubelet[2471]: I0710 00:41:12.735963 2471 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/84b858ec27c8b2738b1d9ff9927e0dcb-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"84b858ec27c8b2738b1d9ff9927e0dcb\") " pod="kube-system/kube-controller-manager-localhost" Jul 10 00:41:12.735996 kubelet[2471]: I0710 00:41:12.735983 
2471 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/84b858ec27c8b2738b1d9ff9927e0dcb-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"84b858ec27c8b2738b1d9ff9927e0dcb\") " pod="kube-system/kube-controller-manager-localhost" Jul 10 00:41:12.736106 kubelet[2471]: I0710 00:41:12.735999 2471 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/84b858ec27c8b2738b1d9ff9927e0dcb-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"84b858ec27c8b2738b1d9ff9927e0dcb\") " pod="kube-system/kube-controller-manager-localhost" Jul 10 00:41:12.736106 kubelet[2471]: I0710 00:41:12.736015 2471 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/834ee54f1daa06092e339273649eb5ea-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"834ee54f1daa06092e339273649eb5ea\") " pod="kube-system/kube-scheduler-localhost" Jul 10 00:41:12.973859 kubelet[2471]: E0710 00:41:12.973732 2471 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:41:12.973859 kubelet[2471]: E0710 00:41:12.973732 2471 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:41:12.973859 kubelet[2471]: E0710 00:41:12.973811 2471 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:41:13.045010 sudo[2512]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Jul 10 00:41:13.045287 sudo[2512]: 
pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Jul 10 00:41:13.478932 sudo[2512]: pam_unix(sudo:session): session closed for user root Jul 10 00:41:13.520657 kubelet[2471]: I0710 00:41:13.520594 2471 apiserver.go:52] "Watching apiserver" Jul 10 00:41:13.534992 kubelet[2471]: I0710 00:41:13.534920 2471 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jul 10 00:41:13.576001 kubelet[2471]: I0710 00:41:13.575670 2471 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jul 10 00:41:13.576302 kubelet[2471]: I0710 00:41:13.576280 2471 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jul 10 00:41:13.578614 kubelet[2471]: E0710 00:41:13.578592 2471 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:41:13.582222 kubelet[2471]: E0710 00:41:13.582143 2471 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Jul 10 00:41:13.582328 kubelet[2471]: E0710 00:41:13.582281 2471 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:41:13.582929 kubelet[2471]: E0710 00:41:13.582863 2471 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Jul 10 00:41:13.583005 kubelet[2471]: E0710 00:41:13.582977 2471 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:41:13.601080 kubelet[2471]: I0710 00:41:13.600951 2471 pod_startup_latency_tracker.go:104] "Observed 
pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.6009369310000001 podStartE2EDuration="1.600936931s" podCreationTimestamp="2025-07-10 00:41:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-10 00:41:13.593378692 +0000 UTC m=+1.127248431" watchObservedRunningTime="2025-07-10 00:41:13.600936931 +0000 UTC m=+1.134806670" Jul 10 00:41:13.609124 kubelet[2471]: I0710 00:41:13.608777 2471 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.608761839 podStartE2EDuration="1.608761839s" podCreationTimestamp="2025-07-10 00:41:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-10 00:41:13.600936172 +0000 UTC m=+1.134805911" watchObservedRunningTime="2025-07-10 00:41:13.608761839 +0000 UTC m=+1.142631538" Jul 10 00:41:13.617175 kubelet[2471]: I0710 00:41:13.617109 2471 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.617093207 podStartE2EDuration="1.617093207s" podCreationTimestamp="2025-07-10 00:41:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-10 00:41:13.609028587 +0000 UTC m=+1.142898326" watchObservedRunningTime="2025-07-10 00:41:13.617093207 +0000 UTC m=+1.150962946" Jul 10 00:41:14.577328 kubelet[2471]: E0710 00:41:14.577291 2471 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:41:14.577662 kubelet[2471]: E0710 00:41:14.577405 2471 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, 
the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:41:15.579059 kubelet[2471]: E0710 00:41:15.579025 2471 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:41:16.142652 sudo[1620]: pam_unix(sudo:session): session closed for user root Jul 10 00:41:16.145196 sshd[1617]: pam_unix(sshd:session): session closed for user core Jul 10 00:41:16.147696 systemd[1]: sshd@6-10.0.0.128:22-10.0.0.1:53678.service: Deactivated successfully. Jul 10 00:41:16.149546 systemd[1]: session-7.scope: Deactivated successfully. Jul 10 00:41:16.150263 systemd[1]: session-7.scope: Consumed 8.129s CPU time, 152.1M memory peak, 0B memory swap peak. Jul 10 00:41:16.151416 systemd-logind[1428]: Session 7 logged out. Waiting for processes to exit. Jul 10 00:41:16.152697 systemd-logind[1428]: Removed session 7. Jul 10 00:41:16.725528 kubelet[2471]: E0710 00:41:16.725385 2471 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:41:17.915914 kubelet[2471]: I0710 00:41:17.915885 2471 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jul 10 00:41:17.916586 containerd[1443]: time="2025-07-10T00:41:17.916550085Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jul 10 00:41:17.917434 kubelet[2471]: I0710 00:41:17.916719 2471 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jul 10 00:41:18.592931 systemd[1]: Created slice kubepods-besteffort-pod3cd2a963_0151_46ab_ac17_a49e9ad2740c.slice - libcontainer container kubepods-besteffort-pod3cd2a963_0151_46ab_ac17_a49e9ad2740c.slice. 
Jul 10 00:41:18.606629 systemd[1]: Created slice kubepods-burstable-podc1513086_0101_42a2_87e7_e5ce72618d17.slice - libcontainer container kubepods-burstable-podc1513086_0101_42a2_87e7_e5ce72618d17.slice. Jul 10 00:41:18.676637 kubelet[2471]: I0710 00:41:18.676588 2471 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/3cd2a963-0151-46ab-ac17-a49e9ad2740c-kube-proxy\") pod \"kube-proxy-7zrqg\" (UID: \"3cd2a963-0151-46ab-ac17-a49e9ad2740c\") " pod="kube-system/kube-proxy-7zrqg" Jul 10 00:41:18.676637 kubelet[2471]: I0710 00:41:18.676628 2471 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3cd2a963-0151-46ab-ac17-a49e9ad2740c-lib-modules\") pod \"kube-proxy-7zrqg\" (UID: \"3cd2a963-0151-46ab-ac17-a49e9ad2740c\") " pod="kube-system/kube-proxy-7zrqg" Jul 10 00:41:18.676637 kubelet[2471]: I0710 00:41:18.676654 2471 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/c1513086-0101-42a2-87e7-e5ce72618d17-cilium-cgroup\") pod \"cilium-4w6gv\" (UID: \"c1513086-0101-42a2-87e7-e5ce72618d17\") " pod="kube-system/cilium-4w6gv" Jul 10 00:41:18.676637 kubelet[2471]: I0710 00:41:18.676722 2471 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/c1513086-0101-42a2-87e7-e5ce72618d17-cilium-run\") pod \"cilium-4w6gv\" (UID: \"c1513086-0101-42a2-87e7-e5ce72618d17\") " pod="kube-system/cilium-4w6gv" Jul 10 00:41:18.676637 kubelet[2471]: I0710 00:41:18.676786 2471 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/c1513086-0101-42a2-87e7-e5ce72618d17-hostproc\") pod \"cilium-4w6gv\" (UID: 
\"c1513086-0101-42a2-87e7-e5ce72618d17\") " pod="kube-system/cilium-4w6gv" Jul 10 00:41:18.676637 kubelet[2471]: I0710 00:41:18.676832 2471 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c1513086-0101-42a2-87e7-e5ce72618d17-lib-modules\") pod \"cilium-4w6gv\" (UID: \"c1513086-0101-42a2-87e7-e5ce72618d17\") " pod="kube-system/cilium-4w6gv" Jul 10 00:41:18.677564 kubelet[2471]: I0710 00:41:18.676868 2471 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/c1513086-0101-42a2-87e7-e5ce72618d17-host-proc-sys-net\") pod \"cilium-4w6gv\" (UID: \"c1513086-0101-42a2-87e7-e5ce72618d17\") " pod="kube-system/cilium-4w6gv" Jul 10 00:41:18.677564 kubelet[2471]: I0710 00:41:18.676886 2471 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4hqhh\" (UniqueName: \"kubernetes.io/projected/c1513086-0101-42a2-87e7-e5ce72618d17-kube-api-access-4hqhh\") pod \"cilium-4w6gv\" (UID: \"c1513086-0101-42a2-87e7-e5ce72618d17\") " pod="kube-system/cilium-4w6gv" Jul 10 00:41:18.677564 kubelet[2471]: I0710 00:41:18.676932 2471 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/c1513086-0101-42a2-87e7-e5ce72618d17-bpf-maps\") pod \"cilium-4w6gv\" (UID: \"c1513086-0101-42a2-87e7-e5ce72618d17\") " pod="kube-system/cilium-4w6gv" Jul 10 00:41:18.677564 kubelet[2471]: I0710 00:41:18.676966 2471 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c1513086-0101-42a2-87e7-e5ce72618d17-etc-cni-netd\") pod \"cilium-4w6gv\" (UID: \"c1513086-0101-42a2-87e7-e5ce72618d17\") " pod="kube-system/cilium-4w6gv" Jul 10 00:41:18.677564 kubelet[2471]: I0710 
00:41:18.676982 2471 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c1513086-0101-42a2-87e7-e5ce72618d17-xtables-lock\") pod \"cilium-4w6gv\" (UID: \"c1513086-0101-42a2-87e7-e5ce72618d17\") " pod="kube-system/cilium-4w6gv" Jul 10 00:41:18.677564 kubelet[2471]: I0710 00:41:18.676997 2471 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/c1513086-0101-42a2-87e7-e5ce72618d17-clustermesh-secrets\") pod \"cilium-4w6gv\" (UID: \"c1513086-0101-42a2-87e7-e5ce72618d17\") " pod="kube-system/cilium-4w6gv" Jul 10 00:41:18.677684 kubelet[2471]: I0710 00:41:18.677012 2471 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c1513086-0101-42a2-87e7-e5ce72618d17-cilium-config-path\") pod \"cilium-4w6gv\" (UID: \"c1513086-0101-42a2-87e7-e5ce72618d17\") " pod="kube-system/cilium-4w6gv" Jul 10 00:41:18.677684 kubelet[2471]: I0710 00:41:18.677026 2471 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/c1513086-0101-42a2-87e7-e5ce72618d17-host-proc-sys-kernel\") pod \"cilium-4w6gv\" (UID: \"c1513086-0101-42a2-87e7-e5ce72618d17\") " pod="kube-system/cilium-4w6gv" Jul 10 00:41:18.677684 kubelet[2471]: I0710 00:41:18.677046 2471 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3cd2a963-0151-46ab-ac17-a49e9ad2740c-xtables-lock\") pod \"kube-proxy-7zrqg\" (UID: \"3cd2a963-0151-46ab-ac17-a49e9ad2740c\") " pod="kube-system/kube-proxy-7zrqg" Jul 10 00:41:18.677684 kubelet[2471]: I0710 00:41:18.677061 2471 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"kube-api-access-l6vf6\" (UniqueName: \"kubernetes.io/projected/3cd2a963-0151-46ab-ac17-a49e9ad2740c-kube-api-access-l6vf6\") pod \"kube-proxy-7zrqg\" (UID: \"3cd2a963-0151-46ab-ac17-a49e9ad2740c\") " pod="kube-system/kube-proxy-7zrqg" Jul 10 00:41:18.677684 kubelet[2471]: I0710 00:41:18.677076 2471 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/c1513086-0101-42a2-87e7-e5ce72618d17-cni-path\") pod \"cilium-4w6gv\" (UID: \"c1513086-0101-42a2-87e7-e5ce72618d17\") " pod="kube-system/cilium-4w6gv" Jul 10 00:41:18.677779 kubelet[2471]: I0710 00:41:18.677089 2471 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/c1513086-0101-42a2-87e7-e5ce72618d17-hubble-tls\") pod \"cilium-4w6gv\" (UID: \"c1513086-0101-42a2-87e7-e5ce72618d17\") " pod="kube-system/cilium-4w6gv" Jul 10 00:41:18.790330 kubelet[2471]: E0710 00:41:18.790290 2471 projected.go:289] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Jul 10 00:41:18.790425 kubelet[2471]: E0710 00:41:18.790342 2471 projected.go:194] Error preparing data for projected volume kube-api-access-l6vf6 for pod kube-system/kube-proxy-7zrqg: configmap "kube-root-ca.crt" not found Jul 10 00:41:18.790544 kubelet[2471]: E0710 00:41:18.790513 2471 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3cd2a963-0151-46ab-ac17-a49e9ad2740c-kube-api-access-l6vf6 podName:3cd2a963-0151-46ab-ac17-a49e9ad2740c nodeName:}" failed. No retries permitted until 2025-07-10 00:41:19.290388239 +0000 UTC m=+6.824257978 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-l6vf6" (UniqueName: "kubernetes.io/projected/3cd2a963-0151-46ab-ac17-a49e9ad2740c-kube-api-access-l6vf6") pod "kube-proxy-7zrqg" (UID: "3cd2a963-0151-46ab-ac17-a49e9ad2740c") : configmap "kube-root-ca.crt" not found Jul 10 00:41:18.796842 kubelet[2471]: E0710 00:41:18.796817 2471 projected.go:289] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Jul 10 00:41:18.796842 kubelet[2471]: E0710 00:41:18.796843 2471 projected.go:194] Error preparing data for projected volume kube-api-access-4hqhh for pod kube-system/cilium-4w6gv: configmap "kube-root-ca.crt" not found Jul 10 00:41:18.797020 kubelet[2471]: E0710 00:41:18.796896 2471 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c1513086-0101-42a2-87e7-e5ce72618d17-kube-api-access-4hqhh podName:c1513086-0101-42a2-87e7-e5ce72618d17 nodeName:}" failed. No retries permitted until 2025-07-10 00:41:19.296877754 +0000 UTC m=+6.830747493 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-4hqhh" (UniqueName: "kubernetes.io/projected/c1513086-0101-42a2-87e7-e5ce72618d17-kube-api-access-4hqhh") pod "cilium-4w6gv" (UID: "c1513086-0101-42a2-87e7-e5ce72618d17") : configmap "kube-root-ca.crt" not found Jul 10 00:41:19.103887 systemd[1]: Created slice kubepods-besteffort-pod84241c7a_8eed_4a8c_87b2_38dd8cf6e250.slice - libcontainer container kubepods-besteffort-pod84241c7a_8eed_4a8c_87b2_38dd8cf6e250.slice. 
Jul 10 00:41:19.180482 kubelet[2471]: I0710 00:41:19.180426 2471 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/84241c7a-8eed-4a8c-87b2-38dd8cf6e250-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-zncpg\" (UID: \"84241c7a-8eed-4a8c-87b2-38dd8cf6e250\") " pod="kube-system/cilium-operator-6c4d7847fc-zncpg"
Jul 10 00:41:19.180788 kubelet[2471]: I0710 00:41:19.180499 2471 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vd2fx\" (UniqueName: \"kubernetes.io/projected/84241c7a-8eed-4a8c-87b2-38dd8cf6e250-kube-api-access-vd2fx\") pod \"cilium-operator-6c4d7847fc-zncpg\" (UID: \"84241c7a-8eed-4a8c-87b2-38dd8cf6e250\") " pod="kube-system/cilium-operator-6c4d7847fc-zncpg"
Jul 10 00:41:19.408427 kubelet[2471]: E0710 00:41:19.407882 2471 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 10 00:41:19.408777 containerd[1443]: time="2025-07-10T00:41:19.408712917Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-zncpg,Uid:84241c7a-8eed-4a8c-87b2-38dd8cf6e250,Namespace:kube-system,Attempt:0,}"
Jul 10 00:41:19.434463 containerd[1443]: time="2025-07-10T00:41:19.434368084Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 10 00:41:19.434463 containerd[1443]: time="2025-07-10T00:41:19.434430114Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 10 00:41:19.434807 containerd[1443]: time="2025-07-10T00:41:19.434442472Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 10 00:41:19.434807 containerd[1443]: time="2025-07-10T00:41:19.434760300Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 10 00:41:19.456665 systemd[1]: Started cri-containerd-3efb64fe18a30298b7931e252bbdfdb0053d2aa379ea0470dbf11418457cf7b0.scope - libcontainer container 3efb64fe18a30298b7931e252bbdfdb0053d2aa379ea0470dbf11418457cf7b0.
Jul 10 00:41:19.483427 containerd[1443]: time="2025-07-10T00:41:19.483369471Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-zncpg,Uid:84241c7a-8eed-4a8c-87b2-38dd8cf6e250,Namespace:kube-system,Attempt:0,} returns sandbox id \"3efb64fe18a30298b7931e252bbdfdb0053d2aa379ea0470dbf11418457cf7b0\""
Jul 10 00:41:19.484692 kubelet[2471]: E0710 00:41:19.484190 2471 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 10 00:41:19.486398 containerd[1443]: time="2025-07-10T00:41:19.486357667Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\""
Jul 10 00:41:19.503980 kubelet[2471]: E0710 00:41:19.503692 2471 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 10 00:41:19.504192 containerd[1443]: time="2025-07-10T00:41:19.504143268Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-7zrqg,Uid:3cd2a963-0151-46ab-ac17-a49e9ad2740c,Namespace:kube-system,Attempt:0,}"
Jul 10 00:41:19.511380 kubelet[2471]: E0710 00:41:19.510968 2471 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 10 00:41:19.512408 containerd[1443]: time="2025-07-10T00:41:19.512365577Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-4w6gv,Uid:c1513086-0101-42a2-87e7-e5ce72618d17,Namespace:kube-system,Attempt:0,}"
Jul 10 00:41:19.523077 containerd[1443]: time="2025-07-10T00:41:19.522980698Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 10 00:41:19.523077 containerd[1443]: time="2025-07-10T00:41:19.523042168Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 10 00:41:19.523077 containerd[1443]: time="2025-07-10T00:41:19.523056726Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 10 00:41:19.523216 containerd[1443]: time="2025-07-10T00:41:19.523139912Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 10 00:41:19.533370 containerd[1443]: time="2025-07-10T00:41:19.531660573Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 10 00:41:19.533370 containerd[1443]: time="2025-07-10T00:41:19.532170410Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 10 00:41:19.533370 containerd[1443]: time="2025-07-10T00:41:19.532182968Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 10 00:41:19.533370 containerd[1443]: time="2025-07-10T00:41:19.532266475Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 10 00:41:19.540623 systemd[1]: Started cri-containerd-e808db2bb78aa4ed1b363829b79429db96ead1aefa8aa8efcbf647e8f678e914.scope - libcontainer container e808db2bb78aa4ed1b363829b79429db96ead1aefa8aa8efcbf647e8f678e914.
Jul 10 00:41:19.544743 systemd[1]: Started cri-containerd-ec2a2b9edf1062838bca482ee517fa136febb5b902765b6819ea530400f4fab8.scope - libcontainer container ec2a2b9edf1062838bca482ee517fa136febb5b902765b6819ea530400f4fab8.
Jul 10 00:41:19.568259 containerd[1443]: time="2025-07-10T00:41:19.568212256Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-7zrqg,Uid:3cd2a963-0151-46ab-ac17-a49e9ad2740c,Namespace:kube-system,Attempt:0,} returns sandbox id \"e808db2bb78aa4ed1b363829b79429db96ead1aefa8aa8efcbf647e8f678e914\""
Jul 10 00:41:19.569017 kubelet[2471]: E0710 00:41:19.568857 2471 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 10 00:41:19.569145 containerd[1443]: time="2025-07-10T00:41:19.569113070Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-4w6gv,Uid:c1513086-0101-42a2-87e7-e5ce72618d17,Namespace:kube-system,Attempt:0,} returns sandbox id \"ec2a2b9edf1062838bca482ee517fa136febb5b902765b6819ea530400f4fab8\""
Jul 10 00:41:19.569925 kubelet[2471]: E0710 00:41:19.569904 2471 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 10 00:41:19.573763 containerd[1443]: time="2025-07-10T00:41:19.573716285Z" level=info msg="CreateContainer within sandbox \"e808db2bb78aa4ed1b363829b79429db96ead1aefa8aa8efcbf647e8f678e914\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Jul 10 00:41:19.585884 containerd[1443]: time="2025-07-10T00:41:19.585732579Z" level=info msg="CreateContainer within sandbox \"e808db2bb78aa4ed1b363829b79429db96ead1aefa8aa8efcbf647e8f678e914\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"1f7264d87336c3d478194db7193363b852e292b2a3981764d19201ef2d82d519\""
Jul 10 00:41:19.586563 containerd[1443]: time="2025-07-10T00:41:19.586529970Z" level=info msg="StartContainer for \"1f7264d87336c3d478194db7193363b852e292b2a3981764d19201ef2d82d519\""
Jul 10 00:41:19.627645 systemd[1]: Started cri-containerd-1f7264d87336c3d478194db7193363b852e292b2a3981764d19201ef2d82d519.scope - libcontainer container 1f7264d87336c3d478194db7193363b852e292b2a3981764d19201ef2d82d519.
Jul 10 00:41:19.655540 containerd[1443]: time="2025-07-10T00:41:19.655465930Z" level=info msg="StartContainer for \"1f7264d87336c3d478194db7193363b852e292b2a3981764d19201ef2d82d519\" returns successfully"
Jul 10 00:41:20.591301 kubelet[2471]: E0710 00:41:20.591138 2471 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 10 00:41:20.604759 kubelet[2471]: I0710 00:41:20.604679 2471 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-7zrqg" podStartSLOduration=2.604651563 podStartE2EDuration="2.604651563s" podCreationTimestamp="2025-07-10 00:41:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-10 00:41:20.604466312 +0000 UTC m=+8.138336091" watchObservedRunningTime="2025-07-10 00:41:20.604651563 +0000 UTC m=+8.138521302"
Jul 10 00:41:20.916761 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4022986722.mount: Deactivated successfully.
Jul 10 00:41:21.593050 kubelet[2471]: E0710 00:41:21.593012 2471 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 10 00:41:21.806159 kubelet[2471]: E0710 00:41:21.806105 2471 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 10 00:41:22.594738 kubelet[2471]: E0710 00:41:22.594619 2471 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 10 00:41:23.328842 containerd[1443]: time="2025-07-10T00:41:23.328788474Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 10 00:41:23.330164 containerd[1443]: time="2025-07-10T00:41:23.330132522Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17135306"
Jul 10 00:41:23.333500 containerd[1443]: time="2025-07-10T00:41:23.330961404Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 10 00:41:23.333500 containerd[1443]: time="2025-07-10T00:41:23.332291814Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 3.845887874s"
Jul 10 00:41:23.333500 containerd[1443]: time="2025-07-10T00:41:23.332325449Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\""
Jul 10 00:41:23.337246 containerd[1443]: time="2025-07-10T00:41:23.337218992Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\""
Jul 10 00:41:23.340479 containerd[1443]: time="2025-07-10T00:41:23.340444692Z" level=info msg="CreateContainer within sandbox \"3efb64fe18a30298b7931e252bbdfdb0053d2aa379ea0470dbf11418457cf7b0\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
Jul 10 00:41:23.351190 containerd[1443]: time="2025-07-10T00:41:23.351134168Z" level=info msg="CreateContainer within sandbox \"3efb64fe18a30298b7931e252bbdfdb0053d2aa379ea0470dbf11418457cf7b0\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"f4667e955ace9287ebe17b231d3d12a5077909e3d4d4d4ac3f7c731442fee9b6\""
Jul 10 00:41:23.351667 containerd[1443]: time="2025-07-10T00:41:23.351644655Z" level=info msg="StartContainer for \"f4667e955ace9287ebe17b231d3d12a5077909e3d4d4d4ac3f7c731442fee9b6\""
Jul 10 00:41:23.387631 systemd[1]: Started cri-containerd-f4667e955ace9287ebe17b231d3d12a5077909e3d4d4d4ac3f7c731442fee9b6.scope - libcontainer container f4667e955ace9287ebe17b231d3d12a5077909e3d4d4d4ac3f7c731442fee9b6.
Jul 10 00:41:23.407446 containerd[1443]: time="2025-07-10T00:41:23.407399985Z" level=info msg="StartContainer for \"f4667e955ace9287ebe17b231d3d12a5077909e3d4d4d4ac3f7c731442fee9b6\" returns successfully"
Jul 10 00:41:23.602511 kubelet[2471]: E0710 00:41:23.602349 2471 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 10 00:41:23.618149 kubelet[2471]: I0710 00:41:23.618072 2471 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-zncpg" podStartSLOduration=0.770249839 podStartE2EDuration="4.61805699s" podCreationTimestamp="2025-07-10 00:41:19 +0000 UTC" firstStartedPulling="2025-07-10 00:41:19.48590766 +0000 UTC m=+7.019777399" lastFinishedPulling="2025-07-10 00:41:23.333714811 +0000 UTC m=+10.867584550" observedRunningTime="2025-07-10 00:41:23.617943966 +0000 UTC m=+11.151813705" watchObservedRunningTime="2025-07-10 00:41:23.61805699 +0000 UTC m=+11.151926729"
Jul 10 00:41:24.292617 kubelet[2471]: E0710 00:41:24.292262 2471 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 10 00:41:24.604093 kubelet[2471]: E0710 00:41:24.603635 2471 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 10 00:41:26.734929 kubelet[2471]: E0710 00:41:26.733687 2471 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 10 00:41:29.081920 update_engine[1432]: I20250710 00:41:29.081310 1432 update_attempter.cc:509] Updating boot flags...
Jul 10 00:41:29.114504 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (2914)
Jul 10 00:41:29.164495 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (2914)
Jul 10 00:41:35.828833 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1489271601.mount: Deactivated successfully.
Jul 10 00:41:37.147604 containerd[1443]: time="2025-07-10T00:41:37.147542490Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 10 00:41:37.148165 containerd[1443]: time="2025-07-10T00:41:37.148111838Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157646710"
Jul 10 00:41:37.148991 containerd[1443]: time="2025-07-10T00:41:37.148954881Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 10 00:41:37.150545 containerd[1443]: time="2025-07-10T00:41:37.150489741Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 13.813234874s"
Jul 10 00:41:37.150545 containerd[1443]: time="2025-07-10T00:41:37.150529817Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\""
Jul 10 00:41:37.156008 containerd[1443]: time="2025-07-10T00:41:37.155959761Z" level=info msg="CreateContainer within sandbox \"ec2a2b9edf1062838bca482ee517fa136febb5b902765b6819ea530400f4fab8\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Jul 10 00:41:37.191224 containerd[1443]: time="2025-07-10T00:41:37.191175621Z" level=info msg="CreateContainer within sandbox \"ec2a2b9edf1062838bca482ee517fa136febb5b902765b6819ea530400f4fab8\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"d23a808ae7dd1afd83da8fdc90ae9d5e86919e98bfa059ad5575fc585222f94f\""
Jul 10 00:41:37.191941 containerd[1443]: time="2025-07-10T00:41:37.191849640Z" level=info msg="StartContainer for \"d23a808ae7dd1afd83da8fdc90ae9d5e86919e98bfa059ad5575fc585222f94f\""
Jul 10 00:41:37.223660 systemd[1]: Started cri-containerd-d23a808ae7dd1afd83da8fdc90ae9d5e86919e98bfa059ad5575fc585222f94f.scope - libcontainer container d23a808ae7dd1afd83da8fdc90ae9d5e86919e98bfa059ad5575fc585222f94f.
Jul 10 00:41:37.246053 containerd[1443]: time="2025-07-10T00:41:37.246006409Z" level=info msg="StartContainer for \"d23a808ae7dd1afd83da8fdc90ae9d5e86919e98bfa059ad5575fc585222f94f\" returns successfully"
Jul 10 00:41:37.300065 systemd[1]: cri-containerd-d23a808ae7dd1afd83da8fdc90ae9d5e86919e98bfa059ad5575fc585222f94f.scope: Deactivated successfully.
Jul 10 00:41:37.376527 containerd[1443]: time="2025-07-10T00:41:37.376447365Z" level=info msg="shim disconnected" id=d23a808ae7dd1afd83da8fdc90ae9d5e86919e98bfa059ad5575fc585222f94f namespace=k8s.io
Jul 10 00:41:37.376527 containerd[1443]: time="2025-07-10T00:41:37.376521879Z" level=warning msg="cleaning up after shim disconnected" id=d23a808ae7dd1afd83da8fdc90ae9d5e86919e98bfa059ad5575fc585222f94f namespace=k8s.io
Jul 10 00:41:37.376527 containerd[1443]: time="2025-07-10T00:41:37.376531278Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 10 00:41:37.632181 kubelet[2471]: E0710 00:41:37.631124 2471 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 10 00:41:37.643494 containerd[1443]: time="2025-07-10T00:41:37.639007244Z" level=info msg="CreateContainer within sandbox \"ec2a2b9edf1062838bca482ee517fa136febb5b902765b6819ea530400f4fab8\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Jul 10 00:41:37.654713 containerd[1443]: time="2025-07-10T00:41:37.654440914Z" level=info msg="CreateContainer within sandbox \"ec2a2b9edf1062838bca482ee517fa136febb5b902765b6819ea530400f4fab8\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"e767e8ee8b9e8ebabdedcfc29c5c585867d55a4d1adac6e89678ffc9fa06ed04\""
Jul 10 00:41:37.656685 containerd[1443]: time="2025-07-10T00:41:37.656647712Z" level=info msg="StartContainer for \"e767e8ee8b9e8ebabdedcfc29c5c585867d55a4d1adac6e89678ffc9fa06ed04\""
Jul 10 00:41:37.689700 systemd[1]: Started cri-containerd-e767e8ee8b9e8ebabdedcfc29c5c585867d55a4d1adac6e89678ffc9fa06ed04.scope - libcontainer container e767e8ee8b9e8ebabdedcfc29c5c585867d55a4d1adac6e89678ffc9fa06ed04.
Jul 10 00:41:37.712701 containerd[1443]: time="2025-07-10T00:41:37.712656632Z" level=info msg="StartContainer for \"e767e8ee8b9e8ebabdedcfc29c5c585867d55a4d1adac6e89678ffc9fa06ed04\" returns successfully"
Jul 10 00:41:37.733055 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jul 10 00:41:37.733289 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jul 10 00:41:37.733361 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables...
Jul 10 00:41:37.740907 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jul 10 00:41:37.741169 systemd[1]: cri-containerd-e767e8ee8b9e8ebabdedcfc29c5c585867d55a4d1adac6e89678ffc9fa06ed04.scope: Deactivated successfully.
Jul 10 00:41:37.759964 containerd[1443]: time="2025-07-10T00:41:37.759840279Z" level=info msg="shim disconnected" id=e767e8ee8b9e8ebabdedcfc29c5c585867d55a4d1adac6e89678ffc9fa06ed04 namespace=k8s.io
Jul 10 00:41:37.759964 containerd[1443]: time="2025-07-10T00:41:37.759955028Z" level=warning msg="cleaning up after shim disconnected" id=e767e8ee8b9e8ebabdedcfc29c5c585867d55a4d1adac6e89678ffc9fa06ed04 namespace=k8s.io
Jul 10 00:41:37.759964 containerd[1443]: time="2025-07-10T00:41:37.759965307Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 10 00:41:37.767758 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jul 10 00:41:38.186272 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d23a808ae7dd1afd83da8fdc90ae9d5e86919e98bfa059ad5575fc585222f94f-rootfs.mount: Deactivated successfully.
Jul 10 00:41:38.634669 kubelet[2471]: E0710 00:41:38.633428 2471 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 10 00:41:38.642146 containerd[1443]: time="2025-07-10T00:41:38.641834687Z" level=info msg="CreateContainer within sandbox \"ec2a2b9edf1062838bca482ee517fa136febb5b902765b6819ea530400f4fab8\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Jul 10 00:41:38.681261 containerd[1443]: time="2025-07-10T00:41:38.681157404Z" level=info msg="CreateContainer within sandbox \"ec2a2b9edf1062838bca482ee517fa136febb5b902765b6819ea530400f4fab8\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"4c4b056b715ce1002ba2270ebcdcc78cc25113a49a5afd1fa4916bc83e452dbf\""
Jul 10 00:41:38.681819 containerd[1443]: time="2025-07-10T00:41:38.681767430Z" level=info msg="StartContainer for \"4c4b056b715ce1002ba2270ebcdcc78cc25113a49a5afd1fa4916bc83e452dbf\""
Jul 10 00:41:38.710649 systemd[1]: Started cri-containerd-4c4b056b715ce1002ba2270ebcdcc78cc25113a49a5afd1fa4916bc83e452dbf.scope - libcontainer container 4c4b056b715ce1002ba2270ebcdcc78cc25113a49a5afd1fa4916bc83e452dbf.
Jul 10 00:41:38.746049 containerd[1443]: time="2025-07-10T00:41:38.745989863Z" level=info msg="StartContainer for \"4c4b056b715ce1002ba2270ebcdcc78cc25113a49a5afd1fa4916bc83e452dbf\" returns successfully"
Jul 10 00:41:38.747460 systemd[1]: cri-containerd-4c4b056b715ce1002ba2270ebcdcc78cc25113a49a5afd1fa4916bc83e452dbf.scope: Deactivated successfully.
Jul 10 00:41:38.770328 containerd[1443]: time="2025-07-10T00:41:38.770271953Z" level=info msg="shim disconnected" id=4c4b056b715ce1002ba2270ebcdcc78cc25113a49a5afd1fa4916bc83e452dbf namespace=k8s.io
Jul 10 00:41:38.770328 containerd[1443]: time="2025-07-10T00:41:38.770322029Z" level=warning msg="cleaning up after shim disconnected" id=4c4b056b715ce1002ba2270ebcdcc78cc25113a49a5afd1fa4916bc83e452dbf namespace=k8s.io
Jul 10 00:41:38.770328 containerd[1443]: time="2025-07-10T00:41:38.770330468Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 10 00:41:39.185828 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4c4b056b715ce1002ba2270ebcdcc78cc25113a49a5afd1fa4916bc83e452dbf-rootfs.mount: Deactivated successfully.
Jul 10 00:41:39.641757 kubelet[2471]: E0710 00:41:39.641614 2471 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 10 00:41:39.648802 containerd[1443]: time="2025-07-10T00:41:39.648750553Z" level=info msg="CreateContainer within sandbox \"ec2a2b9edf1062838bca482ee517fa136febb5b902765b6819ea530400f4fab8\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Jul 10 00:41:39.664032 containerd[1443]: time="2025-07-10T00:41:39.663973527Z" level=info msg="CreateContainer within sandbox \"ec2a2b9edf1062838bca482ee517fa136febb5b902765b6819ea530400f4fab8\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"22a15ae55eeeb983f91ebe22bd5236d2efec813a0034dd55f87f820ef3089a2f\""
Jul 10 00:41:39.665777 containerd[1443]: time="2025-07-10T00:41:39.665746215Z" level=info msg="StartContainer for \"22a15ae55eeeb983f91ebe22bd5236d2efec813a0034dd55f87f820ef3089a2f\""
Jul 10 00:41:39.690668 systemd[1]: Started cri-containerd-22a15ae55eeeb983f91ebe22bd5236d2efec813a0034dd55f87f820ef3089a2f.scope - libcontainer container 22a15ae55eeeb983f91ebe22bd5236d2efec813a0034dd55f87f820ef3089a2f.
Jul 10 00:41:39.710069 systemd[1]: cri-containerd-22a15ae55eeeb983f91ebe22bd5236d2efec813a0034dd55f87f820ef3089a2f.scope: Deactivated successfully.
Jul 10 00:41:39.712245 containerd[1443]: time="2025-07-10T00:41:39.711712311Z" level=info msg="StartContainer for \"22a15ae55eeeb983f91ebe22bd5236d2efec813a0034dd55f87f820ef3089a2f\" returns successfully"
Jul 10 00:41:39.730937 containerd[1443]: time="2025-07-10T00:41:39.730882547Z" level=info msg="shim disconnected" id=22a15ae55eeeb983f91ebe22bd5236d2efec813a0034dd55f87f820ef3089a2f namespace=k8s.io
Jul 10 00:41:39.730937 containerd[1443]: time="2025-07-10T00:41:39.730934622Z" level=warning msg="cleaning up after shim disconnected" id=22a15ae55eeeb983f91ebe22bd5236d2efec813a0034dd55f87f820ef3089a2f namespace=k8s.io
Jul 10 00:41:39.730937 containerd[1443]: time="2025-07-10T00:41:39.730944062Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 10 00:41:40.185864 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-22a15ae55eeeb983f91ebe22bd5236d2efec813a0034dd55f87f820ef3089a2f-rootfs.mount: Deactivated successfully.
Jul 10 00:41:40.645873 kubelet[2471]: E0710 00:41:40.645664 2471 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 10 00:41:40.650181 containerd[1443]: time="2025-07-10T00:41:40.649850129Z" level=info msg="CreateContainer within sandbox \"ec2a2b9edf1062838bca482ee517fa136febb5b902765b6819ea530400f4fab8\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Jul 10 00:41:40.670665 containerd[1443]: time="2025-07-10T00:41:40.670612644Z" level=info msg="CreateContainer within sandbox \"ec2a2b9edf1062838bca482ee517fa136febb5b902765b6819ea530400f4fab8\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"e1a914d6f3cb7c9162b562d2db4a2b7f411c9947a4ee1be4e8c25dcfa1c3cb56\""
Jul 10 00:41:40.672283 containerd[1443]: time="2025-07-10T00:41:40.671363301Z" level=info msg="StartContainer for \"e1a914d6f3cb7c9162b562d2db4a2b7f411c9947a4ee1be4e8c25dcfa1c3cb56\""
Jul 10 00:41:40.704791 systemd[1]: Started cri-containerd-e1a914d6f3cb7c9162b562d2db4a2b7f411c9947a4ee1be4e8c25dcfa1c3cb56.scope - libcontainer container e1a914d6f3cb7c9162b562d2db4a2b7f411c9947a4ee1be4e8c25dcfa1c3cb56.
Jul 10 00:41:40.732309 containerd[1443]: time="2025-07-10T00:41:40.732257081Z" level=info msg="StartContainer for \"e1a914d6f3cb7c9162b562d2db4a2b7f411c9947a4ee1be4e8c25dcfa1c3cb56\" returns successfully"
Jul 10 00:41:40.882975 kubelet[2471]: I0710 00:41:40.882943 2471 kubelet_node_status.go:501] "Fast updating node status as it just became ready"
Jul 10 00:41:40.947429 systemd[1]: Created slice kubepods-burstable-pod53b6f9bc_8c3a_43f6_9928_4c20ec50069b.slice - libcontainer container kubepods-burstable-pod53b6f9bc_8c3a_43f6_9928_4c20ec50069b.slice.
Jul 10 00:41:40.953573 systemd[1]: Created slice kubepods-burstable-pod68ca08b6_846b_4e0c_9cf4_36b64a9f7446.slice - libcontainer container kubepods-burstable-pod68ca08b6_846b_4e0c_9cf4_36b64a9f7446.slice.
Jul 10 00:41:41.028094 kubelet[2471]: I0710 00:41:41.028036 2471 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6w656\" (UniqueName: \"kubernetes.io/projected/53b6f9bc-8c3a-43f6-9928-4c20ec50069b-kube-api-access-6w656\") pod \"coredns-674b8bbfcf-4kjjt\" (UID: \"53b6f9bc-8c3a-43f6-9928-4c20ec50069b\") " pod="kube-system/coredns-674b8bbfcf-4kjjt"
Jul 10 00:41:41.028094 kubelet[2471]: I0710 00:41:41.028098 2471 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5l966\" (UniqueName: \"kubernetes.io/projected/68ca08b6-846b-4e0c-9cf4-36b64a9f7446-kube-api-access-5l966\") pod \"coredns-674b8bbfcf-sl55c\" (UID: \"68ca08b6-846b-4e0c-9cf4-36b64a9f7446\") " pod="kube-system/coredns-674b8bbfcf-sl55c"
Jul 10 00:41:41.028275 kubelet[2471]: I0710 00:41:41.028120 2471 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/68ca08b6-846b-4e0c-9cf4-36b64a9f7446-config-volume\") pod \"coredns-674b8bbfcf-sl55c\" (UID: \"68ca08b6-846b-4e0c-9cf4-36b64a9f7446\") " pod="kube-system/coredns-674b8bbfcf-sl55c"
Jul 10 00:41:41.028275 kubelet[2471]: I0710 00:41:41.028136 2471 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/53b6f9bc-8c3a-43f6-9928-4c20ec50069b-config-volume\") pod \"coredns-674b8bbfcf-4kjjt\" (UID: \"53b6f9bc-8c3a-43f6-9928-4c20ec50069b\") " pod="kube-system/coredns-674b8bbfcf-4kjjt"
Jul 10 00:41:41.189835 systemd[1]: run-containerd-runc-k8s.io-e1a914d6f3cb7c9162b562d2db4a2b7f411c9947a4ee1be4e8c25dcfa1c3cb56-runc.tALVdF.mount: Deactivated successfully.
Jul 10 00:41:41.254626 kubelet[2471]: E0710 00:41:41.253646 2471 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 10 00:41:41.256080 containerd[1443]: time="2025-07-10T00:41:41.255177242Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-4kjjt,Uid:53b6f9bc-8c3a-43f6-9928-4c20ec50069b,Namespace:kube-system,Attempt:0,}"
Jul 10 00:41:41.256251 kubelet[2471]: E0710 00:41:41.256232 2471 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 10 00:41:41.256751 containerd[1443]: time="2025-07-10T00:41:41.256696279Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-sl55c,Uid:68ca08b6-846b-4e0c-9cf4-36b64a9f7446,Namespace:kube-system,Attempt:0,}"
Jul 10 00:41:41.622259 systemd[1]: Started sshd@7-10.0.0.128:22-10.0.0.1:51506.service - OpenSSH per-connection server daemon (10.0.0.1:51506).
Jul 10 00:41:41.653669 kubelet[2471]: E0710 00:41:41.652199 2471 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 10 00:41:41.660929 sshd[3343]: Accepted publickey for core from 10.0.0.1 port 51506 ssh2: RSA SHA256:6ip12YJRxsd1pENU5FRdLlGNHGMqkKNn+D5B7RGN6xs
Jul 10 00:41:41.662262 sshd[3343]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 10 00:41:41.666536 systemd-logind[1428]: New session 8 of user core.
Jul 10 00:41:41.675668 systemd[1]: Started session-8.scope - Session 8 of User core.
Jul 10 00:41:41.799128 sshd[3343]: pam_unix(sshd:session): session closed for user core
Jul 10 00:41:41.803107 systemd[1]: sshd@7-10.0.0.128:22-10.0.0.1:51506.service: Deactivated successfully.
Jul 10 00:41:41.806029 systemd[1]: session-8.scope: Deactivated successfully.
Jul 10 00:41:41.806631 systemd-logind[1428]: Session 8 logged out. Waiting for processes to exit.
Jul 10 00:41:41.807415 systemd-logind[1428]: Removed session 8.
Jul 10 00:41:42.655350 kubelet[2471]: E0710 00:41:42.655316 2471 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 10 00:41:42.979135 systemd-networkd[1385]: cilium_host: Link UP
Jul 10 00:41:42.979275 systemd-networkd[1385]: cilium_net: Link UP
Jul 10 00:41:42.979278 systemd-networkd[1385]: cilium_net: Gained carrier
Jul 10 00:41:42.979438 systemd-networkd[1385]: cilium_host: Gained carrier
Jul 10 00:41:42.981144 systemd-networkd[1385]: cilium_net: Gained IPv6LL
Jul 10 00:41:43.070878 systemd-networkd[1385]: cilium_vxlan: Link UP
Jul 10 00:41:43.070893 systemd-networkd[1385]: cilium_vxlan: Gained carrier
Jul 10 00:41:43.202597 systemd-networkd[1385]: cilium_host: Gained IPv6LL
Jul 10 00:41:43.391592 kernel: NET: Registered PF_ALG protocol family
Jul 10 00:41:43.657127 kubelet[2471]: E0710 00:41:43.657085 2471 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 10 00:41:43.968115 systemd-networkd[1385]: lxc_health: Link UP
Jul 10 00:41:43.974664 systemd-networkd[1385]: lxc_health: Gained carrier
Jul 10 00:41:44.370812 systemd-networkd[1385]: lxc7cb945ebf897: Link UP
Jul 10 00:41:44.384508 kernel: eth0: renamed from tmp0da0a
Jul 10 00:41:44.392692 systemd-networkd[1385]: lxc9b92771638fc: Link UP
Jul 10 00:41:44.401449 systemd-networkd[1385]: lxc7cb945ebf897: Gained carrier
Jul 10 00:41:44.405031 kernel: eth0: renamed from tmpbd86c
Jul 10 00:41:44.411161 systemd-networkd[1385]: lxc9b92771638fc: Gained carrier
Jul 10 00:41:44.586659 systemd-networkd[1385]: cilium_vxlan: Gained IPv6LL
Jul 10 00:41:45.098637 systemd-networkd[1385]: lxc_health: Gained IPv6LL
Jul 10 00:41:45.527833 kubelet[2471]: E0710 00:41:45.527654 2471 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 10 00:41:45.543107 kubelet[2471]: I0710 00:41:45.543016 2471 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-4w6gv" podStartSLOduration=9.962123005 podStartE2EDuration="27.542997898s" podCreationTimestamp="2025-07-10 00:41:18 +0000 UTC" firstStartedPulling="2025-07-10 00:41:19.571051276 +0000 UTC m=+7.104921015" lastFinishedPulling="2025-07-10 00:41:37.151926169 +0000 UTC m=+24.685795908" observedRunningTime="2025-07-10 00:41:41.6698155 +0000 UTC m=+29.203685279" watchObservedRunningTime="2025-07-10 00:41:45.542997898 +0000 UTC m=+33.076867637"
Jul 10 00:41:45.610623 systemd-networkd[1385]: lxc7cb945ebf897: Gained IPv6LL
Jul 10 00:41:45.661148 kubelet[2471]: E0710 00:41:45.659578 2471 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 10 00:41:45.674671 systemd-networkd[1385]: lxc9b92771638fc: Gained IPv6LL
Jul 10 00:41:46.661864 kubelet[2471]: E0710 00:41:46.661819 2471 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 10 00:41:46.812293 systemd[1]: Started sshd@8-10.0.0.128:22-10.0.0.1:47328.service - OpenSSH per-connection server daemon (10.0.0.1:47328).
Jul 10 00:41:46.851573 sshd[3738]: Accepted publickey for core from 10.0.0.1 port 47328 ssh2: RSA SHA256:6ip12YJRxsd1pENU5FRdLlGNHGMqkKNn+D5B7RGN6xs
Jul 10 00:41:46.852863 sshd[3738]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 10 00:41:46.856793 systemd-logind[1428]: New session 9 of user core.
Jul 10 00:41:46.867633 systemd[1]: Started session-9.scope - Session 9 of User core.
Jul 10 00:41:46.988416 sshd[3738]: pam_unix(sshd:session): session closed for user core
Jul 10 00:41:46.991932 systemd[1]: sshd@8-10.0.0.128:22-10.0.0.1:47328.service: Deactivated successfully.
Jul 10 00:41:46.994386 systemd[1]: session-9.scope: Deactivated successfully.
Jul 10 00:41:46.995562 systemd-logind[1428]: Session 9 logged out. Waiting for processes to exit.
Jul 10 00:41:46.996435 systemd-logind[1428]: Removed session 9.
Jul 10 00:41:48.138231 containerd[1443]: time="2025-07-10T00:41:48.137252767Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 10 00:41:48.138231 containerd[1443]: time="2025-07-10T00:41:48.137313084Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 10 00:41:48.138231 containerd[1443]: time="2025-07-10T00:41:48.137327123Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 10 00:41:48.138231 containerd[1443]: time="2025-07-10T00:41:48.137414037Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 10 00:41:48.138231 containerd[1443]: time="2025-07-10T00:41:48.137223369Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 10 00:41:48.138231 containerd[1443]: time="2025-07-10T00:41:48.137280526Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 10 00:41:48.138231 containerd[1443]: time="2025-07-10T00:41:48.137292605Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 10 00:41:48.138231 containerd[1443]: time="2025-07-10T00:41:48.137366120Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 10 00:41:48.165670 systemd[1]: Started cri-containerd-0da0a70097951d601b394a0278da65e0e5c0e4523e1cb684b3dfe15db0ecbfa9.scope - libcontainer container 0da0a70097951d601b394a0278da65e0e5c0e4523e1cb684b3dfe15db0ecbfa9.
Jul 10 00:41:48.166822 systemd[1]: Started cri-containerd-bd86ca4b69de5e14c23d8b6da658b62a206e9c64d18637e50bf2288c1cca8229.scope - libcontainer container bd86ca4b69de5e14c23d8b6da658b62a206e9c64d18637e50bf2288c1cca8229.
Jul 10 00:41:48.178183 systemd-resolved[1316]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 10 00:41:48.187153 systemd-resolved[1316]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 10 00:41:48.201881 containerd[1443]: time="2025-07-10T00:41:48.201754009Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-sl55c,Uid:68ca08b6-846b-4e0c-9cf4-36b64a9f7446,Namespace:kube-system,Attempt:0,} returns sandbox id \"0da0a70097951d601b394a0278da65e0e5c0e4523e1cb684b3dfe15db0ecbfa9\"" Jul 10 00:41:48.202882 kubelet[2471]: E0710 00:41:48.202851 2471 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:41:48.208783 containerd[1443]: time="2025-07-10T00:41:48.208501934Z" level=info msg="CreateContainer within sandbox \"0da0a70097951d601b394a0278da65e0e5c0e4523e1cb684b3dfe15db0ecbfa9\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 10 00:41:48.210871 containerd[1443]: time="2025-07-10T00:41:48.210820745Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-4kjjt,Uid:53b6f9bc-8c3a-43f6-9928-4c20ec50069b,Namespace:kube-system,Attempt:0,} returns sandbox id \"bd86ca4b69de5e14c23d8b6da658b62a206e9c64d18637e50bf2288c1cca8229\"" Jul 10 00:41:48.213050 kubelet[2471]: E0710 00:41:48.213019 2471 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:41:48.217913 containerd[1443]: time="2025-07-10T00:41:48.217664224Z" level=info msg="CreateContainer within sandbox \"bd86ca4b69de5e14c23d8b6da658b62a206e9c64d18637e50bf2288c1cca8229\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 10 00:41:48.233125 containerd[1443]: 
time="2025-07-10T00:41:48.233073270Z" level=info msg="CreateContainer within sandbox \"0da0a70097951d601b394a0278da65e0e5c0e4523e1cb684b3dfe15db0ecbfa9\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"5bb3b77d1104b6abf381c26e2edd73f1f84b4e47620555343837f251bf1a16ac\"" Jul 10 00:41:48.233846 containerd[1443]: time="2025-07-10T00:41:48.233811903Z" level=info msg="StartContainer for \"5bb3b77d1104b6abf381c26e2edd73f1f84b4e47620555343837f251bf1a16ac\"" Jul 10 00:41:48.235419 containerd[1443]: time="2025-07-10T00:41:48.235387361Z" level=info msg="CreateContainer within sandbox \"bd86ca4b69de5e14c23d8b6da658b62a206e9c64d18637e50bf2288c1cca8229\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"57e0f0a81829e0c29a3de1d0713b065aff57b186c835a31b7e8a6e4dfc880ff0\"" Jul 10 00:41:48.236033 containerd[1443]: time="2025-07-10T00:41:48.236010081Z" level=info msg="StartContainer for \"57e0f0a81829e0c29a3de1d0713b065aff57b186c835a31b7e8a6e4dfc880ff0\"" Jul 10 00:41:48.267667 systemd[1]: Started cri-containerd-5bb3b77d1104b6abf381c26e2edd73f1f84b4e47620555343837f251bf1a16ac.scope - libcontainer container 5bb3b77d1104b6abf381c26e2edd73f1f84b4e47620555343837f251bf1a16ac. Jul 10 00:41:48.270537 systemd[1]: Started cri-containerd-57e0f0a81829e0c29a3de1d0713b065aff57b186c835a31b7e8a6e4dfc880ff0.scope - libcontainer container 57e0f0a81829e0c29a3de1d0713b065aff57b186c835a31b7e8a6e4dfc880ff0. 
Jul 10 00:41:48.313418 containerd[1443]: time="2025-07-10T00:41:48.301787121Z" level=info msg="StartContainer for \"5bb3b77d1104b6abf381c26e2edd73f1f84b4e47620555343837f251bf1a16ac\" returns successfully" Jul 10 00:41:48.315413 containerd[1443]: time="2025-07-10T00:41:48.315355726Z" level=info msg="StartContainer for \"57e0f0a81829e0c29a3de1d0713b065aff57b186c835a31b7e8a6e4dfc880ff0\" returns successfully" Jul 10 00:41:48.666842 kubelet[2471]: E0710 00:41:48.666594 2471 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:41:48.669270 kubelet[2471]: E0710 00:41:48.669233 2471 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:41:48.680707 kubelet[2471]: I0710 00:41:48.680339 2471 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-4kjjt" podStartSLOduration=29.680324159 podStartE2EDuration="29.680324159s" podCreationTimestamp="2025-07-10 00:41:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-10 00:41:48.680063856 +0000 UTC m=+36.213933595" watchObservedRunningTime="2025-07-10 00:41:48.680324159 +0000 UTC m=+36.214193898" Jul 10 00:41:49.671413 kubelet[2471]: E0710 00:41:49.671382 2471 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:41:49.671834 kubelet[2471]: E0710 00:41:49.671438 2471 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:41:50.672768 kubelet[2471]: E0710 00:41:50.672740 2471 dns.go:153] 
"Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:41:50.673318 kubelet[2471]: E0710 00:41:50.672799 2471 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:41:52.001555 systemd[1]: Started sshd@9-10.0.0.128:22-10.0.0.1:47336.service - OpenSSH per-connection server daemon (10.0.0.1:47336). Jul 10 00:41:52.050313 sshd[3928]: Accepted publickey for core from 10.0.0.1 port 47336 ssh2: RSA SHA256:6ip12YJRxsd1pENU5FRdLlGNHGMqkKNn+D5B7RGN6xs Jul 10 00:41:52.052087 sshd[3928]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 00:41:52.055861 systemd-logind[1428]: New session 10 of user core. Jul 10 00:41:52.063629 systemd[1]: Started session-10.scope - Session 10 of User core. Jul 10 00:41:52.183522 sshd[3928]: pam_unix(sshd:session): session closed for user core Jul 10 00:41:52.187426 systemd[1]: sshd@9-10.0.0.128:22-10.0.0.1:47336.service: Deactivated successfully. Jul 10 00:41:52.189162 systemd[1]: session-10.scope: Deactivated successfully. Jul 10 00:41:52.190911 systemd-logind[1428]: Session 10 logged out. Waiting for processes to exit. Jul 10 00:41:52.191839 systemd-logind[1428]: Removed session 10. Jul 10 00:41:57.197039 systemd[1]: Started sshd@10-10.0.0.128:22-10.0.0.1:58108.service - OpenSSH per-connection server daemon (10.0.0.1:58108). Jul 10 00:41:57.236453 sshd[3945]: Accepted publickey for core from 10.0.0.1 port 58108 ssh2: RSA SHA256:6ip12YJRxsd1pENU5FRdLlGNHGMqkKNn+D5B7RGN6xs Jul 10 00:41:57.237444 sshd[3945]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 00:41:57.241458 systemd-logind[1428]: New session 11 of user core. Jul 10 00:41:57.247658 systemd[1]: Started session-11.scope - Session 11 of User core. 
Jul 10 00:41:57.374719 sshd[3945]: pam_unix(sshd:session): session closed for user core Jul 10 00:41:57.393045 systemd[1]: sshd@10-10.0.0.128:22-10.0.0.1:58108.service: Deactivated successfully. Jul 10 00:41:57.394817 systemd[1]: session-11.scope: Deactivated successfully. Jul 10 00:41:57.396619 systemd-logind[1428]: Session 11 logged out. Waiting for processes to exit. Jul 10 00:41:57.409858 systemd[1]: Started sshd@11-10.0.0.128:22-10.0.0.1:58124.service - OpenSSH per-connection server daemon (10.0.0.1:58124). Jul 10 00:41:57.410978 systemd-logind[1428]: Removed session 11. Jul 10 00:41:57.444307 sshd[3960]: Accepted publickey for core from 10.0.0.1 port 58124 ssh2: RSA SHA256:6ip12YJRxsd1pENU5FRdLlGNHGMqkKNn+D5B7RGN6xs Jul 10 00:41:57.446029 sshd[3960]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 00:41:57.451301 systemd-logind[1428]: New session 12 of user core. Jul 10 00:41:57.464734 systemd[1]: Started session-12.scope - Session 12 of User core. Jul 10 00:41:57.626232 sshd[3960]: pam_unix(sshd:session): session closed for user core Jul 10 00:41:57.636385 systemd[1]: sshd@11-10.0.0.128:22-10.0.0.1:58124.service: Deactivated successfully. Jul 10 00:41:57.640984 systemd[1]: session-12.scope: Deactivated successfully. Jul 10 00:41:57.645742 systemd-logind[1428]: Session 12 logged out. Waiting for processes to exit. Jul 10 00:41:57.655834 systemd[1]: Started sshd@12-10.0.0.128:22-10.0.0.1:58136.service - OpenSSH per-connection server daemon (10.0.0.1:58136). Jul 10 00:41:57.658098 systemd-logind[1428]: Removed session 12. Jul 10 00:41:57.700521 sshd[3973]: Accepted publickey for core from 10.0.0.1 port 58136 ssh2: RSA SHA256:6ip12YJRxsd1pENU5FRdLlGNHGMqkKNn+D5B7RGN6xs Jul 10 00:41:57.701514 sshd[3973]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 00:41:57.706044 systemd-logind[1428]: New session 13 of user core. 
Jul 10 00:41:57.717657 systemd[1]: Started session-13.scope - Session 13 of User core. Jul 10 00:41:57.829938 sshd[3973]: pam_unix(sshd:session): session closed for user core Jul 10 00:41:57.833261 systemd[1]: sshd@12-10.0.0.128:22-10.0.0.1:58136.service: Deactivated successfully. Jul 10 00:41:57.835055 systemd[1]: session-13.scope: Deactivated successfully. Jul 10 00:41:57.835803 systemd-logind[1428]: Session 13 logged out. Waiting for processes to exit. Jul 10 00:41:57.836825 systemd-logind[1428]: Removed session 13. Jul 10 00:42:02.840552 systemd[1]: Started sshd@13-10.0.0.128:22-10.0.0.1:44182.service - OpenSSH per-connection server daemon (10.0.0.1:44182). Jul 10 00:42:02.882524 sshd[3987]: Accepted publickey for core from 10.0.0.1 port 44182 ssh2: RSA SHA256:6ip12YJRxsd1pENU5FRdLlGNHGMqkKNn+D5B7RGN6xs Jul 10 00:42:02.883900 sshd[3987]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 00:42:02.887675 systemd-logind[1428]: New session 14 of user core. Jul 10 00:42:02.894679 systemd[1]: Started session-14.scope - Session 14 of User core. Jul 10 00:42:03.001953 sshd[3987]: pam_unix(sshd:session): session closed for user core Jul 10 00:42:03.005912 systemd[1]: sshd@13-10.0.0.128:22-10.0.0.1:44182.service: Deactivated successfully. Jul 10 00:42:03.008700 systemd[1]: session-14.scope: Deactivated successfully. Jul 10 00:42:03.009852 systemd-logind[1428]: Session 14 logged out. Waiting for processes to exit. Jul 10 00:42:03.010721 systemd-logind[1428]: Removed session 14. Jul 10 00:42:08.012283 systemd[1]: Started sshd@14-10.0.0.128:22-10.0.0.1:44196.service - OpenSSH per-connection server daemon (10.0.0.1:44196). 
Jul 10 00:42:08.048626 sshd[4001]: Accepted publickey for core from 10.0.0.1 port 44196 ssh2: RSA SHA256:6ip12YJRxsd1pENU5FRdLlGNHGMqkKNn+D5B7RGN6xs Jul 10 00:42:08.049937 sshd[4001]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 00:42:08.053519 systemd-logind[1428]: New session 15 of user core. Jul 10 00:42:08.061710 systemd[1]: Started session-15.scope - Session 15 of User core. Jul 10 00:42:08.169939 sshd[4001]: pam_unix(sshd:session): session closed for user core Jul 10 00:42:08.181577 systemd[1]: sshd@14-10.0.0.128:22-10.0.0.1:44196.service: Deactivated successfully. Jul 10 00:42:08.183506 systemd[1]: session-15.scope: Deactivated successfully. Jul 10 00:42:08.185235 systemd-logind[1428]: Session 15 logged out. Waiting for processes to exit. Jul 10 00:42:08.194931 systemd[1]: Started sshd@15-10.0.0.128:22-10.0.0.1:44210.service - OpenSSH per-connection server daemon (10.0.0.1:44210). Jul 10 00:42:08.195711 systemd-logind[1428]: Removed session 15. Jul 10 00:42:08.228699 sshd[4016]: Accepted publickey for core from 10.0.0.1 port 44210 ssh2: RSA SHA256:6ip12YJRxsd1pENU5FRdLlGNHGMqkKNn+D5B7RGN6xs Jul 10 00:42:08.229984 sshd[4016]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 00:42:08.234134 systemd-logind[1428]: New session 16 of user core. Jul 10 00:42:08.242616 systemd[1]: Started session-16.scope - Session 16 of User core. Jul 10 00:42:08.452731 sshd[4016]: pam_unix(sshd:session): session closed for user core Jul 10 00:42:08.471194 systemd[1]: sshd@15-10.0.0.128:22-10.0.0.1:44210.service: Deactivated successfully. Jul 10 00:42:08.472836 systemd[1]: session-16.scope: Deactivated successfully. Jul 10 00:42:08.475658 systemd-logind[1428]: Session 16 logged out. Waiting for processes to exit. Jul 10 00:42:08.487764 systemd[1]: Started sshd@16-10.0.0.128:22-10.0.0.1:44214.service - OpenSSH per-connection server daemon (10.0.0.1:44214). 
Jul 10 00:42:08.488743 systemd-logind[1428]: Removed session 16. Jul 10 00:42:08.526192 sshd[4028]: Accepted publickey for core from 10.0.0.1 port 44214 ssh2: RSA SHA256:6ip12YJRxsd1pENU5FRdLlGNHGMqkKNn+D5B7RGN6xs Jul 10 00:42:08.527605 sshd[4028]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 00:42:08.532055 systemd-logind[1428]: New session 17 of user core. Jul 10 00:42:08.541640 systemd[1]: Started session-17.scope - Session 17 of User core. Jul 10 00:42:09.339336 sshd[4028]: pam_unix(sshd:session): session closed for user core Jul 10 00:42:09.351645 systemd[1]: sshd@16-10.0.0.128:22-10.0.0.1:44214.service: Deactivated successfully. Jul 10 00:42:09.354136 systemd[1]: session-17.scope: Deactivated successfully. Jul 10 00:42:09.363822 systemd-logind[1428]: Session 17 logged out. Waiting for processes to exit. Jul 10 00:42:09.374822 systemd[1]: Started sshd@17-10.0.0.128:22-10.0.0.1:44216.service - OpenSSH per-connection server daemon (10.0.0.1:44216). Jul 10 00:42:09.375953 systemd-logind[1428]: Removed session 17. Jul 10 00:42:09.408543 sshd[4048]: Accepted publickey for core from 10.0.0.1 port 44216 ssh2: RSA SHA256:6ip12YJRxsd1pENU5FRdLlGNHGMqkKNn+D5B7RGN6xs Jul 10 00:42:09.409884 sshd[4048]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 00:42:09.414671 systemd-logind[1428]: New session 18 of user core. Jul 10 00:42:09.424649 systemd[1]: Started session-18.scope - Session 18 of User core. Jul 10 00:42:09.648701 sshd[4048]: pam_unix(sshd:session): session closed for user core Jul 10 00:42:09.657517 systemd[1]: sshd@17-10.0.0.128:22-10.0.0.1:44216.service: Deactivated successfully. Jul 10 00:42:09.661131 systemd[1]: session-18.scope: Deactivated successfully. Jul 10 00:42:09.662890 systemd-logind[1428]: Session 18 logged out. Waiting for processes to exit. 
Jul 10 00:42:09.678115 systemd[1]: Started sshd@18-10.0.0.128:22-10.0.0.1:44220.service - OpenSSH per-connection server daemon (10.0.0.1:44220). Jul 10 00:42:09.679168 systemd-logind[1428]: Removed session 18. Jul 10 00:42:09.711102 sshd[4060]: Accepted publickey for core from 10.0.0.1 port 44220 ssh2: RSA SHA256:6ip12YJRxsd1pENU5FRdLlGNHGMqkKNn+D5B7RGN6xs Jul 10 00:42:09.712652 sshd[4060]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 00:42:09.718920 systemd-logind[1428]: New session 19 of user core. Jul 10 00:42:09.730667 systemd[1]: Started session-19.scope - Session 19 of User core. Jul 10 00:42:09.840709 sshd[4060]: pam_unix(sshd:session): session closed for user core Jul 10 00:42:09.844574 systemd[1]: sshd@18-10.0.0.128:22-10.0.0.1:44220.service: Deactivated successfully. Jul 10 00:42:09.846445 systemd[1]: session-19.scope: Deactivated successfully. Jul 10 00:42:09.847141 systemd-logind[1428]: Session 19 logged out. Waiting for processes to exit. Jul 10 00:42:09.847906 systemd-logind[1428]: Removed session 19. Jul 10 00:42:14.857905 systemd[1]: Started sshd@19-10.0.0.128:22-10.0.0.1:42836.service - OpenSSH per-connection server daemon (10.0.0.1:42836). Jul 10 00:42:14.892904 sshd[4078]: Accepted publickey for core from 10.0.0.1 port 42836 ssh2: RSA SHA256:6ip12YJRxsd1pENU5FRdLlGNHGMqkKNn+D5B7RGN6xs Jul 10 00:42:14.894326 sshd[4078]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 00:42:14.899752 systemd-logind[1428]: New session 20 of user core. Jul 10 00:42:14.909668 systemd[1]: Started session-20.scope - Session 20 of User core. Jul 10 00:42:15.029644 sshd[4078]: pam_unix(sshd:session): session closed for user core Jul 10 00:42:15.033442 systemd[1]: sshd@19-10.0.0.128:22-10.0.0.1:42836.service: Deactivated successfully. Jul 10 00:42:15.035346 systemd[1]: session-20.scope: Deactivated successfully. Jul 10 00:42:15.039362 systemd-logind[1428]: Session 20 logged out. 
Waiting for processes to exit. Jul 10 00:42:15.040510 systemd-logind[1428]: Removed session 20. Jul 10 00:42:20.040217 systemd[1]: Started sshd@20-10.0.0.128:22-10.0.0.1:42844.service - OpenSSH per-connection server daemon (10.0.0.1:42844). Jul 10 00:42:20.078027 sshd[4095]: Accepted publickey for core from 10.0.0.1 port 42844 ssh2: RSA SHA256:6ip12YJRxsd1pENU5FRdLlGNHGMqkKNn+D5B7RGN6xs Jul 10 00:42:20.079425 sshd[4095]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 00:42:20.083341 systemd-logind[1428]: New session 21 of user core. Jul 10 00:42:20.094668 systemd[1]: Started session-21.scope - Session 21 of User core. Jul 10 00:42:20.202993 sshd[4095]: pam_unix(sshd:session): session closed for user core Jul 10 00:42:20.206771 systemd[1]: sshd@20-10.0.0.128:22-10.0.0.1:42844.service: Deactivated successfully. Jul 10 00:42:20.208441 systemd[1]: session-21.scope: Deactivated successfully. Jul 10 00:42:20.210372 systemd-logind[1428]: Session 21 logged out. Waiting for processes to exit. Jul 10 00:42:20.211738 systemd-logind[1428]: Removed session 21. Jul 10 00:42:25.214703 systemd[1]: Started sshd@21-10.0.0.128:22-10.0.0.1:51340.service - OpenSSH per-connection server daemon (10.0.0.1:51340). Jul 10 00:42:25.258107 sshd[4109]: Accepted publickey for core from 10.0.0.1 port 51340 ssh2: RSA SHA256:6ip12YJRxsd1pENU5FRdLlGNHGMqkKNn+D5B7RGN6xs Jul 10 00:42:25.258623 sshd[4109]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 00:42:25.263018 systemd-logind[1428]: New session 22 of user core. Jul 10 00:42:25.269652 systemd[1]: Started session-22.scope - Session 22 of User core. Jul 10 00:42:25.393823 sshd[4109]: pam_unix(sshd:session): session closed for user core Jul 10 00:42:25.401040 systemd[1]: sshd@21-10.0.0.128:22-10.0.0.1:51340.service: Deactivated successfully. Jul 10 00:42:25.404689 systemd[1]: session-22.scope: Deactivated successfully. 
Jul 10 00:42:25.407203 systemd-logind[1428]: Session 22 logged out. Waiting for processes to exit. Jul 10 00:42:25.408598 systemd[1]: Started sshd@22-10.0.0.128:22-10.0.0.1:51352.service - OpenSSH per-connection server daemon (10.0.0.1:51352). Jul 10 00:42:25.409938 systemd-logind[1428]: Removed session 22. Jul 10 00:42:25.443596 sshd[4124]: Accepted publickey for core from 10.0.0.1 port 51352 ssh2: RSA SHA256:6ip12YJRxsd1pENU5FRdLlGNHGMqkKNn+D5B7RGN6xs Jul 10 00:42:25.444843 sshd[4124]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 00:42:25.450928 systemd-logind[1428]: New session 23 of user core. Jul 10 00:42:25.457627 systemd[1]: Started session-23.scope - Session 23 of User core. Jul 10 00:42:27.216012 kubelet[2471]: I0710 00:42:27.215936 2471 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-sl55c" podStartSLOduration=68.215920267 podStartE2EDuration="1m8.215920267s" podCreationTimestamp="2025-07-10 00:41:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-10 00:41:48.712610757 +0000 UTC m=+36.246480496" watchObservedRunningTime="2025-07-10 00:42:27.215920267 +0000 UTC m=+74.749790006" Jul 10 00:42:27.233581 containerd[1443]: time="2025-07-10T00:42:27.233533858Z" level=info msg="StopContainer for \"f4667e955ace9287ebe17b231d3d12a5077909e3d4d4d4ac3f7c731442fee9b6\" with timeout 30 (s)" Jul 10 00:42:27.234275 containerd[1443]: time="2025-07-10T00:42:27.233990889Z" level=info msg="Stop container \"f4667e955ace9287ebe17b231d3d12a5077909e3d4d4d4ac3f7c731442fee9b6\" with signal terminated" Jul 10 00:42:27.249872 systemd[1]: cri-containerd-f4667e955ace9287ebe17b231d3d12a5077909e3d4d4d4ac3f7c731442fee9b6.scope: Deactivated successfully. 
Jul 10 00:42:27.273678 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f4667e955ace9287ebe17b231d3d12a5077909e3d4d4d4ac3f7c731442fee9b6-rootfs.mount: Deactivated successfully. Jul 10 00:42:27.276356 containerd[1443]: time="2025-07-10T00:42:27.276319738Z" level=info msg="StopContainer for \"e1a914d6f3cb7c9162b562d2db4a2b7f411c9947a4ee1be4e8c25dcfa1c3cb56\" with timeout 2 (s)" Jul 10 00:42:27.276669 containerd[1443]: time="2025-07-10T00:42:27.276628652Z" level=info msg="shim disconnected" id=f4667e955ace9287ebe17b231d3d12a5077909e3d4d4d4ac3f7c731442fee9b6 namespace=k8s.io Jul 10 00:42:27.276711 containerd[1443]: time="2025-07-10T00:42:27.276671372Z" level=warning msg="cleaning up after shim disconnected" id=f4667e955ace9287ebe17b231d3d12a5077909e3d4d4d4ac3f7c731442fee9b6 namespace=k8s.io Jul 10 00:42:27.276711 containerd[1443]: time="2025-07-10T00:42:27.276679891Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 10 00:42:27.277028 containerd[1443]: time="2025-07-10T00:42:27.277007805Z" level=info msg="Stop container \"e1a914d6f3cb7c9162b562d2db4a2b7f411c9947a4ee1be4e8c25dcfa1c3cb56\" with signal terminated" Jul 10 00:42:27.277762 containerd[1443]: time="2025-07-10T00:42:27.277712272Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 10 00:42:27.283886 systemd-networkd[1385]: lxc_health: Link DOWN Jul 10 00:42:27.283893 systemd-networkd[1385]: lxc_health: Lost carrier Jul 10 00:42:27.306658 systemd[1]: cri-containerd-e1a914d6f3cb7c9162b562d2db4a2b7f411c9947a4ee1be4e8c25dcfa1c3cb56.scope: Deactivated successfully. Jul 10 00:42:27.306936 systemd[1]: cri-containerd-e1a914d6f3cb7c9162b562d2db4a2b7f411c9947a4ee1be4e8c25dcfa1c3cb56.scope: Consumed 6.683s CPU time. 
Jul 10 00:42:27.323106 containerd[1443]: time="2025-07-10T00:42:27.323062705Z" level=info msg="StopContainer for \"f4667e955ace9287ebe17b231d3d12a5077909e3d4d4d4ac3f7c731442fee9b6\" returns successfully" Jul 10 00:42:27.323804 containerd[1443]: time="2025-07-10T00:42:27.323771451Z" level=info msg="StopPodSandbox for \"3efb64fe18a30298b7931e252bbdfdb0053d2aa379ea0470dbf11418457cf7b0\"" Jul 10 00:42:27.323888 containerd[1443]: time="2025-07-10T00:42:27.323811371Z" level=info msg="Container to stop \"f4667e955ace9287ebe17b231d3d12a5077909e3d4d4d4ac3f7c731442fee9b6\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 10 00:42:27.325936 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-3efb64fe18a30298b7931e252bbdfdb0053d2aa379ea0470dbf11418457cf7b0-shm.mount: Deactivated successfully. Jul 10 00:42:27.328689 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e1a914d6f3cb7c9162b562d2db4a2b7f411c9947a4ee1be4e8c25dcfa1c3cb56-rootfs.mount: Deactivated successfully. Jul 10 00:42:27.335639 containerd[1443]: time="2025-07-10T00:42:27.335575071Z" level=info msg="shim disconnected" id=e1a914d6f3cb7c9162b562d2db4a2b7f411c9947a4ee1be4e8c25dcfa1c3cb56 namespace=k8s.io Jul 10 00:42:27.335639 containerd[1443]: time="2025-07-10T00:42:27.335632310Z" level=warning msg="cleaning up after shim disconnected" id=e1a914d6f3cb7c9162b562d2db4a2b7f411c9947a4ee1be4e8c25dcfa1c3cb56 namespace=k8s.io Jul 10 00:42:27.335639 containerd[1443]: time="2025-07-10T00:42:27.335642110Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 10 00:42:27.335897 systemd[1]: cri-containerd-3efb64fe18a30298b7931e252bbdfdb0053d2aa379ea0470dbf11418457cf7b0.scope: Deactivated successfully. 
Jul 10 00:42:27.351442 containerd[1443]: time="2025-07-10T00:42:27.351398055Z" level=info msg="StopContainer for \"e1a914d6f3cb7c9162b562d2db4a2b7f411c9947a4ee1be4e8c25dcfa1c3cb56\" returns successfully" Jul 10 00:42:27.352464 containerd[1443]: time="2025-07-10T00:42:27.352244559Z" level=info msg="StopPodSandbox for \"ec2a2b9edf1062838bca482ee517fa136febb5b902765b6819ea530400f4fab8\"" Jul 10 00:42:27.352464 containerd[1443]: time="2025-07-10T00:42:27.352313598Z" level=info msg="Container to stop \"e1a914d6f3cb7c9162b562d2db4a2b7f411c9947a4ee1be4e8c25dcfa1c3cb56\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 10 00:42:27.352464 containerd[1443]: time="2025-07-10T00:42:27.352326398Z" level=info msg="Container to stop \"d23a808ae7dd1afd83da8fdc90ae9d5e86919e98bfa059ad5575fc585222f94f\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 10 00:42:27.352464 containerd[1443]: time="2025-07-10T00:42:27.352344797Z" level=info msg="Container to stop \"e767e8ee8b9e8ebabdedcfc29c5c585867d55a4d1adac6e89678ffc9fa06ed04\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 10 00:42:27.352464 containerd[1443]: time="2025-07-10T00:42:27.352355357Z" level=info msg="Container to stop \"22a15ae55eeeb983f91ebe22bd5236d2efec813a0034dd55f87f820ef3089a2f\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 10 00:42:27.352464 containerd[1443]: time="2025-07-10T00:42:27.352367957Z" level=info msg="Container to stop \"4c4b056b715ce1002ba2270ebcdcc78cc25113a49a5afd1fa4916bc83e452dbf\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 10 00:42:27.355989 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-ec2a2b9edf1062838bca482ee517fa136febb5b902765b6819ea530400f4fab8-shm.mount: Deactivated successfully. Jul 10 00:42:27.362883 systemd[1]: cri-containerd-ec2a2b9edf1062838bca482ee517fa136febb5b902765b6819ea530400f4fab8.scope: Deactivated successfully. 
Jul 10 00:42:27.378267 containerd[1443]: time="2025-07-10T00:42:27.378088676Z" level=info msg="shim disconnected" id=3efb64fe18a30298b7931e252bbdfdb0053d2aa379ea0470dbf11418457cf7b0 namespace=k8s.io Jul 10 00:42:27.378439 containerd[1443]: time="2025-07-10T00:42:27.378274673Z" level=warning msg="cleaning up after shim disconnected" id=3efb64fe18a30298b7931e252bbdfdb0053d2aa379ea0470dbf11418457cf7b0 namespace=k8s.io Jul 10 00:42:27.378439 containerd[1443]: time="2025-07-10T00:42:27.378287153Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 10 00:42:27.387803 containerd[1443]: time="2025-07-10T00:42:27.386161925Z" level=info msg="shim disconnected" id=ec2a2b9edf1062838bca482ee517fa136febb5b902765b6819ea530400f4fab8 namespace=k8s.io Jul 10 00:42:27.387803 containerd[1443]: time="2025-07-10T00:42:27.387531420Z" level=warning msg="cleaning up after shim disconnected" id=ec2a2b9edf1062838bca482ee517fa136febb5b902765b6819ea530400f4fab8 namespace=k8s.io Jul 10 00:42:27.387803 containerd[1443]: time="2025-07-10T00:42:27.387544460Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 10 00:42:27.394576 containerd[1443]: time="2025-07-10T00:42:27.394522289Z" level=info msg="TearDown network for sandbox \"3efb64fe18a30298b7931e252bbdfdb0053d2aa379ea0470dbf11418457cf7b0\" successfully" Jul 10 00:42:27.394576 containerd[1443]: time="2025-07-10T00:42:27.394573968Z" level=info msg="StopPodSandbox for \"3efb64fe18a30298b7931e252bbdfdb0053d2aa379ea0470dbf11418457cf7b0\" returns successfully" Jul 10 00:42:27.408957 containerd[1443]: time="2025-07-10T00:42:27.408915900Z" level=info msg="TearDown network for sandbox \"ec2a2b9edf1062838bca482ee517fa136febb5b902765b6819ea530400f4fab8\" successfully" Jul 10 00:42:27.408957 containerd[1443]: time="2025-07-10T00:42:27.408952220Z" level=info msg="StopPodSandbox for \"ec2a2b9edf1062838bca482ee517fa136febb5b902765b6819ea530400f4fab8\" returns successfully" Jul 10 00:42:27.414333 kubelet[2471]: I0710 00:42:27.414304 2471 
reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/84241c7a-8eed-4a8c-87b2-38dd8cf6e250-cilium-config-path\") pod \"84241c7a-8eed-4a8c-87b2-38dd8cf6e250\" (UID: \"84241c7a-8eed-4a8c-87b2-38dd8cf6e250\") " Jul 10 00:42:27.414648 kubelet[2471]: I0710 00:42:27.414347 2471 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vd2fx\" (UniqueName: \"kubernetes.io/projected/84241c7a-8eed-4a8c-87b2-38dd8cf6e250-kube-api-access-vd2fx\") pod \"84241c7a-8eed-4a8c-87b2-38dd8cf6e250\" (UID: \"84241c7a-8eed-4a8c-87b2-38dd8cf6e250\") " Jul 10 00:42:27.429150 kubelet[2471]: I0710 00:42:27.429100 2471 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/84241c7a-8eed-4a8c-87b2-38dd8cf6e250-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "84241c7a-8eed-4a8c-87b2-38dd8cf6e250" (UID: "84241c7a-8eed-4a8c-87b2-38dd8cf6e250"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jul 10 00:42:27.432544 kubelet[2471]: I0710 00:42:27.432453 2471 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/84241c7a-8eed-4a8c-87b2-38dd8cf6e250-kube-api-access-vd2fx" (OuterVolumeSpecName: "kube-api-access-vd2fx") pod "84241c7a-8eed-4a8c-87b2-38dd8cf6e250" (UID: "84241c7a-8eed-4a8c-87b2-38dd8cf6e250"). InnerVolumeSpecName "kube-api-access-vd2fx". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jul 10 00:42:27.514644 kubelet[2471]: I0710 00:42:27.514524 2471 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/c1513086-0101-42a2-87e7-e5ce72618d17-cilium-run\") pod \"c1513086-0101-42a2-87e7-e5ce72618d17\" (UID: \"c1513086-0101-42a2-87e7-e5ce72618d17\") " Jul 10 00:42:27.514644 kubelet[2471]: I0710 00:42:27.514563 2471 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c1513086-0101-42a2-87e7-e5ce72618d17-lib-modules\") pod \"c1513086-0101-42a2-87e7-e5ce72618d17\" (UID: \"c1513086-0101-42a2-87e7-e5ce72618d17\") " Jul 10 00:42:27.514644 kubelet[2471]: I0710 00:42:27.514580 2471 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/c1513086-0101-42a2-87e7-e5ce72618d17-bpf-maps\") pod \"c1513086-0101-42a2-87e7-e5ce72618d17\" (UID: \"c1513086-0101-42a2-87e7-e5ce72618d17\") " Jul 10 00:42:27.514644 kubelet[2471]: I0710 00:42:27.514603 2471 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/c1513086-0101-42a2-87e7-e5ce72618d17-clustermesh-secrets\") pod \"c1513086-0101-42a2-87e7-e5ce72618d17\" (UID: \"c1513086-0101-42a2-87e7-e5ce72618d17\") " Jul 10 00:42:27.514644 kubelet[2471]: I0710 00:42:27.514620 2471 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c1513086-0101-42a2-87e7-e5ce72618d17-cilium-config-path\") pod \"c1513086-0101-42a2-87e7-e5ce72618d17\" (UID: \"c1513086-0101-42a2-87e7-e5ce72618d17\") " Jul 10 00:42:27.514858 kubelet[2471]: I0710 00:42:27.514651 2471 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/host-path/c1513086-0101-42a2-87e7-e5ce72618d17-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "c1513086-0101-42a2-87e7-e5ce72618d17" (UID: "c1513086-0101-42a2-87e7-e5ce72618d17"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 10 00:42:27.514858 kubelet[2471]: I0710 00:42:27.514676 2471 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c1513086-0101-42a2-87e7-e5ce72618d17-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "c1513086-0101-42a2-87e7-e5ce72618d17" (UID: "c1513086-0101-42a2-87e7-e5ce72618d17"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 10 00:42:27.514858 kubelet[2471]: I0710 00:42:27.514651 2471 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c1513086-0101-42a2-87e7-e5ce72618d17-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "c1513086-0101-42a2-87e7-e5ce72618d17" (UID: "c1513086-0101-42a2-87e7-e5ce72618d17"). InnerVolumeSpecName "cilium-run". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 10 00:42:27.515806 kubelet[2471]: I0710 00:42:27.515777 2471 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/c1513086-0101-42a2-87e7-e5ce72618d17-hostproc\") pod \"c1513086-0101-42a2-87e7-e5ce72618d17\" (UID: \"c1513086-0101-42a2-87e7-e5ce72618d17\") " Jul 10 00:42:27.515806 kubelet[2471]: I0710 00:42:27.515808 2471 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c1513086-0101-42a2-87e7-e5ce72618d17-etc-cni-netd\") pod \"c1513086-0101-42a2-87e7-e5ce72618d17\" (UID: \"c1513086-0101-42a2-87e7-e5ce72618d17\") " Jul 10 00:42:27.515909 kubelet[2471]: I0710 00:42:27.515825 2471 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c1513086-0101-42a2-87e7-e5ce72618d17-xtables-lock\") pod \"c1513086-0101-42a2-87e7-e5ce72618d17\" (UID: \"c1513086-0101-42a2-87e7-e5ce72618d17\") " Jul 10 00:42:27.515909 kubelet[2471]: I0710 00:42:27.515839 2471 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/c1513086-0101-42a2-87e7-e5ce72618d17-cilium-cgroup\") pod \"c1513086-0101-42a2-87e7-e5ce72618d17\" (UID: \"c1513086-0101-42a2-87e7-e5ce72618d17\") " Jul 10 00:42:27.515909 kubelet[2471]: I0710 00:42:27.515859 2471 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4hqhh\" (UniqueName: \"kubernetes.io/projected/c1513086-0101-42a2-87e7-e5ce72618d17-kube-api-access-4hqhh\") pod \"c1513086-0101-42a2-87e7-e5ce72618d17\" (UID: \"c1513086-0101-42a2-87e7-e5ce72618d17\") " Jul 10 00:42:27.515909 kubelet[2471]: I0710 00:42:27.515876 2471 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: 
\"kubernetes.io/projected/c1513086-0101-42a2-87e7-e5ce72618d17-hubble-tls\") pod \"c1513086-0101-42a2-87e7-e5ce72618d17\" (UID: \"c1513086-0101-42a2-87e7-e5ce72618d17\") " Jul 10 00:42:27.515909 kubelet[2471]: I0710 00:42:27.515894 2471 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/c1513086-0101-42a2-87e7-e5ce72618d17-cni-path\") pod \"c1513086-0101-42a2-87e7-e5ce72618d17\" (UID: \"c1513086-0101-42a2-87e7-e5ce72618d17\") " Jul 10 00:42:27.515909 kubelet[2471]: I0710 00:42:27.515911 2471 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/c1513086-0101-42a2-87e7-e5ce72618d17-host-proc-sys-net\") pod \"c1513086-0101-42a2-87e7-e5ce72618d17\" (UID: \"c1513086-0101-42a2-87e7-e5ce72618d17\") " Jul 10 00:42:27.516079 kubelet[2471]: I0710 00:42:27.515924 2471 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/c1513086-0101-42a2-87e7-e5ce72618d17-host-proc-sys-kernel\") pod \"c1513086-0101-42a2-87e7-e5ce72618d17\" (UID: \"c1513086-0101-42a2-87e7-e5ce72618d17\") " Jul 10 00:42:27.516079 kubelet[2471]: I0710 00:42:27.515962 2471 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/84241c7a-8eed-4a8c-87b2-38dd8cf6e250-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Jul 10 00:42:27.516079 kubelet[2471]: I0710 00:42:27.515971 2471 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/c1513086-0101-42a2-87e7-e5ce72618d17-cilium-run\") on node \"localhost\" DevicePath \"\"" Jul 10 00:42:27.516079 kubelet[2471]: I0710 00:42:27.515982 2471 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c1513086-0101-42a2-87e7-e5ce72618d17-lib-modules\") 
on node \"localhost\" DevicePath \"\"" Jul 10 00:42:27.516079 kubelet[2471]: I0710 00:42:27.515990 2471 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/c1513086-0101-42a2-87e7-e5ce72618d17-bpf-maps\") on node \"localhost\" DevicePath \"\"" Jul 10 00:42:27.516079 kubelet[2471]: I0710 00:42:27.515999 2471 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-vd2fx\" (UniqueName: \"kubernetes.io/projected/84241c7a-8eed-4a8c-87b2-38dd8cf6e250-kube-api-access-vd2fx\") on node \"localhost\" DevicePath \"\"" Jul 10 00:42:27.516079 kubelet[2471]: I0710 00:42:27.516028 2471 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c1513086-0101-42a2-87e7-e5ce72618d17-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "c1513086-0101-42a2-87e7-e5ce72618d17" (UID: "c1513086-0101-42a2-87e7-e5ce72618d17"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 10 00:42:27.516285 kubelet[2471]: I0710 00:42:27.516047 2471 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c1513086-0101-42a2-87e7-e5ce72618d17-hostproc" (OuterVolumeSpecName: "hostproc") pod "c1513086-0101-42a2-87e7-e5ce72618d17" (UID: "c1513086-0101-42a2-87e7-e5ce72618d17"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 10 00:42:27.516285 kubelet[2471]: I0710 00:42:27.516062 2471 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c1513086-0101-42a2-87e7-e5ce72618d17-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "c1513086-0101-42a2-87e7-e5ce72618d17" (UID: "c1513086-0101-42a2-87e7-e5ce72618d17"). InnerVolumeSpecName "etc-cni-netd". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 10 00:42:27.516285 kubelet[2471]: I0710 00:42:27.516073 2471 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c1513086-0101-42a2-87e7-e5ce72618d17-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "c1513086-0101-42a2-87e7-e5ce72618d17" (UID: "c1513086-0101-42a2-87e7-e5ce72618d17"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 10 00:42:27.516285 kubelet[2471]: I0710 00:42:27.516101 2471 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c1513086-0101-42a2-87e7-e5ce72618d17-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "c1513086-0101-42a2-87e7-e5ce72618d17" (UID: "c1513086-0101-42a2-87e7-e5ce72618d17"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 10 00:42:27.516694 kubelet[2471]: I0710 00:42:27.516672 2471 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c1513086-0101-42a2-87e7-e5ce72618d17-cni-path" (OuterVolumeSpecName: "cni-path") pod "c1513086-0101-42a2-87e7-e5ce72618d17" (UID: "c1513086-0101-42a2-87e7-e5ce72618d17"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 10 00:42:27.517187 kubelet[2471]: I0710 00:42:27.517154 2471 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c1513086-0101-42a2-87e7-e5ce72618d17-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "c1513086-0101-42a2-87e7-e5ce72618d17" (UID: "c1513086-0101-42a2-87e7-e5ce72618d17"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jul 10 00:42:27.517286 kubelet[2471]: I0710 00:42:27.517268 2471 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c1513086-0101-42a2-87e7-e5ce72618d17-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "c1513086-0101-42a2-87e7-e5ce72618d17" (UID: "c1513086-0101-42a2-87e7-e5ce72618d17"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jul 10 00:42:27.517327 kubelet[2471]: I0710 00:42:27.517306 2471 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c1513086-0101-42a2-87e7-e5ce72618d17-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "c1513086-0101-42a2-87e7-e5ce72618d17" (UID: "c1513086-0101-42a2-87e7-e5ce72618d17"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 10 00:42:27.517975 kubelet[2471]: I0710 00:42:27.517952 2471 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c1513086-0101-42a2-87e7-e5ce72618d17-kube-api-access-4hqhh" (OuterVolumeSpecName: "kube-api-access-4hqhh") pod "c1513086-0101-42a2-87e7-e5ce72618d17" (UID: "c1513086-0101-42a2-87e7-e5ce72618d17"). InnerVolumeSpecName "kube-api-access-4hqhh". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jul 10 00:42:27.518764 kubelet[2471]: I0710 00:42:27.518725 2471 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c1513086-0101-42a2-87e7-e5ce72618d17-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "c1513086-0101-42a2-87e7-e5ce72618d17" (UID: "c1513086-0101-42a2-87e7-e5ce72618d17"). InnerVolumeSpecName "hubble-tls". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jul 10 00:42:27.567873 kubelet[2471]: E0710 00:42:27.567755 2471 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:42:27.605243 kubelet[2471]: E0710 00:42:27.605202 2471 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jul 10 00:42:27.616424 kubelet[2471]: I0710 00:42:27.616302 2471 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/c1513086-0101-42a2-87e7-e5ce72618d17-cni-path\") on node \"localhost\" DevicePath \"\"" Jul 10 00:42:27.616424 kubelet[2471]: I0710 00:42:27.616329 2471 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/c1513086-0101-42a2-87e7-e5ce72618d17-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Jul 10 00:42:27.616424 kubelet[2471]: I0710 00:42:27.616339 2471 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/c1513086-0101-42a2-87e7-e5ce72618d17-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Jul 10 00:42:27.616424 kubelet[2471]: I0710 00:42:27.616348 2471 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/c1513086-0101-42a2-87e7-e5ce72618d17-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Jul 10 00:42:27.616424 kubelet[2471]: I0710 00:42:27.616357 2471 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c1513086-0101-42a2-87e7-e5ce72618d17-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Jul 10 00:42:27.616424 kubelet[2471]: I0710 00:42:27.616367 2471 
reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/c1513086-0101-42a2-87e7-e5ce72618d17-hostproc\") on node \"localhost\" DevicePath \"\"" Jul 10 00:42:27.616424 kubelet[2471]: I0710 00:42:27.616375 2471 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c1513086-0101-42a2-87e7-e5ce72618d17-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Jul 10 00:42:27.616424 kubelet[2471]: I0710 00:42:27.616383 2471 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c1513086-0101-42a2-87e7-e5ce72618d17-xtables-lock\") on node \"localhost\" DevicePath \"\"" Jul 10 00:42:27.616687 kubelet[2471]: I0710 00:42:27.616390 2471 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/c1513086-0101-42a2-87e7-e5ce72618d17-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Jul 10 00:42:27.616687 kubelet[2471]: I0710 00:42:27.616398 2471 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-4hqhh\" (UniqueName: \"kubernetes.io/projected/c1513086-0101-42a2-87e7-e5ce72618d17-kube-api-access-4hqhh\") on node \"localhost\" DevicePath \"\"" Jul 10 00:42:27.616687 kubelet[2471]: I0710 00:42:27.616406 2471 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/c1513086-0101-42a2-87e7-e5ce72618d17-hubble-tls\") on node \"localhost\" DevicePath \"\"" Jul 10 00:42:27.754633 kubelet[2471]: I0710 00:42:27.754581 2471 scope.go:117] "RemoveContainer" containerID="e1a914d6f3cb7c9162b562d2db4a2b7f411c9947a4ee1be4e8c25dcfa1c3cb56" Jul 10 00:42:27.757784 systemd[1]: Removed slice kubepods-burstable-podc1513086_0101_42a2_87e7_e5ce72618d17.slice - libcontainer container kubepods-burstable-podc1513086_0101_42a2_87e7_e5ce72618d17.slice. 
Jul 10 00:42:27.757883 systemd[1]: kubepods-burstable-podc1513086_0101_42a2_87e7_e5ce72618d17.slice: Consumed 6.824s CPU time. Jul 10 00:42:27.760655 containerd[1443]: time="2025-07-10T00:42:27.759206114Z" level=info msg="RemoveContainer for \"e1a914d6f3cb7c9162b562d2db4a2b7f411c9947a4ee1be4e8c25dcfa1c3cb56\"" Jul 10 00:42:27.760096 systemd[1]: Removed slice kubepods-besteffort-pod84241c7a_8eed_4a8c_87b2_38dd8cf6e250.slice - libcontainer container kubepods-besteffort-pod84241c7a_8eed_4a8c_87b2_38dd8cf6e250.slice. Jul 10 00:42:27.765355 containerd[1443]: time="2025-07-10T00:42:27.765255961Z" level=info msg="RemoveContainer for \"e1a914d6f3cb7c9162b562d2db4a2b7f411c9947a4ee1be4e8c25dcfa1c3cb56\" returns successfully" Jul 10 00:42:27.765547 kubelet[2471]: I0710 00:42:27.765521 2471 scope.go:117] "RemoveContainer" containerID="22a15ae55eeeb983f91ebe22bd5236d2efec813a0034dd55f87f820ef3089a2f" Jul 10 00:42:27.767873 containerd[1443]: time="2025-07-10T00:42:27.767826433Z" level=info msg="RemoveContainer for \"22a15ae55eeeb983f91ebe22bd5236d2efec813a0034dd55f87f820ef3089a2f\"" Jul 10 00:42:27.771427 containerd[1443]: time="2025-07-10T00:42:27.771395406Z" level=info msg="RemoveContainer for \"22a15ae55eeeb983f91ebe22bd5236d2efec813a0034dd55f87f820ef3089a2f\" returns successfully" Jul 10 00:42:27.772502 kubelet[2471]: I0710 00:42:27.771968 2471 scope.go:117] "RemoveContainer" containerID="4c4b056b715ce1002ba2270ebcdcc78cc25113a49a5afd1fa4916bc83e452dbf" Jul 10 00:42:27.773189 containerd[1443]: time="2025-07-10T00:42:27.773166333Z" level=info msg="RemoveContainer for \"4c4b056b715ce1002ba2270ebcdcc78cc25113a49a5afd1fa4916bc83e452dbf\"" Jul 10 00:42:27.776304 containerd[1443]: time="2025-07-10T00:42:27.776227996Z" level=info msg="RemoveContainer for \"4c4b056b715ce1002ba2270ebcdcc78cc25113a49a5afd1fa4916bc83e452dbf\" returns successfully" Jul 10 00:42:27.776535 kubelet[2471]: I0710 00:42:27.776509 2471 scope.go:117] "RemoveContainer" 
containerID="e767e8ee8b9e8ebabdedcfc29c5c585867d55a4d1adac6e89678ffc9fa06ed04" Jul 10 00:42:27.777761 containerd[1443]: time="2025-07-10T00:42:27.777697569Z" level=info msg="RemoveContainer for \"e767e8ee8b9e8ebabdedcfc29c5c585867d55a4d1adac6e89678ffc9fa06ed04\"" Jul 10 00:42:27.780994 containerd[1443]: time="2025-07-10T00:42:27.780799791Z" level=info msg="RemoveContainer for \"e767e8ee8b9e8ebabdedcfc29c5c585867d55a4d1adac6e89678ffc9fa06ed04\" returns successfully" Jul 10 00:42:27.781181 kubelet[2471]: I0710 00:42:27.781160 2471 scope.go:117] "RemoveContainer" containerID="d23a808ae7dd1afd83da8fdc90ae9d5e86919e98bfa059ad5575fc585222f94f" Jul 10 00:42:27.783188 containerd[1443]: time="2025-07-10T00:42:27.783158827Z" level=info msg="RemoveContainer for \"d23a808ae7dd1afd83da8fdc90ae9d5e86919e98bfa059ad5575fc585222f94f\"" Jul 10 00:42:27.785527 containerd[1443]: time="2025-07-10T00:42:27.785462903Z" level=info msg="RemoveContainer for \"d23a808ae7dd1afd83da8fdc90ae9d5e86919e98bfa059ad5575fc585222f94f\" returns successfully" Jul 10 00:42:27.785729 kubelet[2471]: I0710 00:42:27.785669 2471 scope.go:117] "RemoveContainer" containerID="e1a914d6f3cb7c9162b562d2db4a2b7f411c9947a4ee1be4e8c25dcfa1c3cb56" Jul 10 00:42:27.786414 containerd[1443]: time="2025-07-10T00:42:27.785894695Z" level=error msg="ContainerStatus for \"e1a914d6f3cb7c9162b562d2db4a2b7f411c9947a4ee1be4e8c25dcfa1c3cb56\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"e1a914d6f3cb7c9162b562d2db4a2b7f411c9947a4ee1be4e8c25dcfa1c3cb56\": not found" Jul 10 00:42:27.792544 kubelet[2471]: E0710 00:42:27.792503 2471 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"e1a914d6f3cb7c9162b562d2db4a2b7f411c9947a4ee1be4e8c25dcfa1c3cb56\": not found" containerID="e1a914d6f3cb7c9162b562d2db4a2b7f411c9947a4ee1be4e8c25dcfa1c3cb56" Jul 10 00:42:27.792630 kubelet[2471]: I0710 00:42:27.792542 
2471 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"e1a914d6f3cb7c9162b562d2db4a2b7f411c9947a4ee1be4e8c25dcfa1c3cb56"} err="failed to get container status \"e1a914d6f3cb7c9162b562d2db4a2b7f411c9947a4ee1be4e8c25dcfa1c3cb56\": rpc error: code = NotFound desc = an error occurred when try to find container \"e1a914d6f3cb7c9162b562d2db4a2b7f411c9947a4ee1be4e8c25dcfa1c3cb56\": not found" Jul 10 00:42:27.792630 kubelet[2471]: I0710 00:42:27.792582 2471 scope.go:117] "RemoveContainer" containerID="22a15ae55eeeb983f91ebe22bd5236d2efec813a0034dd55f87f820ef3089a2f" Jul 10 00:42:27.792851 containerd[1443]: time="2025-07-10T00:42:27.792793486Z" level=error msg="ContainerStatus for \"22a15ae55eeeb983f91ebe22bd5236d2efec813a0034dd55f87f820ef3089a2f\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"22a15ae55eeeb983f91ebe22bd5236d2efec813a0034dd55f87f820ef3089a2f\": not found" Jul 10 00:42:27.792934 kubelet[2471]: E0710 00:42:27.792907 2471 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"22a15ae55eeeb983f91ebe22bd5236d2efec813a0034dd55f87f820ef3089a2f\": not found" containerID="22a15ae55eeeb983f91ebe22bd5236d2efec813a0034dd55f87f820ef3089a2f" Jul 10 00:42:27.792966 kubelet[2471]: I0710 00:42:27.792932 2471 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"22a15ae55eeeb983f91ebe22bd5236d2efec813a0034dd55f87f820ef3089a2f"} err="failed to get container status \"22a15ae55eeeb983f91ebe22bd5236d2efec813a0034dd55f87f820ef3089a2f\": rpc error: code = NotFound desc = an error occurred when try to find container \"22a15ae55eeeb983f91ebe22bd5236d2efec813a0034dd55f87f820ef3089a2f\": not found" Jul 10 00:42:27.792966 kubelet[2471]: I0710 00:42:27.792945 2471 scope.go:117] "RemoveContainer" 
containerID="4c4b056b715ce1002ba2270ebcdcc78cc25113a49a5afd1fa4916bc83e452dbf" Jul 10 00:42:27.793155 containerd[1443]: time="2025-07-10T00:42:27.793121160Z" level=error msg="ContainerStatus for \"4c4b056b715ce1002ba2270ebcdcc78cc25113a49a5afd1fa4916bc83e452dbf\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"4c4b056b715ce1002ba2270ebcdcc78cc25113a49a5afd1fa4916bc83e452dbf\": not found" Jul 10 00:42:27.793416 kubelet[2471]: E0710 00:42:27.793277 2471 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"4c4b056b715ce1002ba2270ebcdcc78cc25113a49a5afd1fa4916bc83e452dbf\": not found" containerID="4c4b056b715ce1002ba2270ebcdcc78cc25113a49a5afd1fa4916bc83e452dbf" Jul 10 00:42:27.793416 kubelet[2471]: I0710 00:42:27.793306 2471 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"4c4b056b715ce1002ba2270ebcdcc78cc25113a49a5afd1fa4916bc83e452dbf"} err="failed to get container status \"4c4b056b715ce1002ba2270ebcdcc78cc25113a49a5afd1fa4916bc83e452dbf\": rpc error: code = NotFound desc = an error occurred when try to find container \"4c4b056b715ce1002ba2270ebcdcc78cc25113a49a5afd1fa4916bc83e452dbf\": not found" Jul 10 00:42:27.793416 kubelet[2471]: I0710 00:42:27.793321 2471 scope.go:117] "RemoveContainer" containerID="e767e8ee8b9e8ebabdedcfc29c5c585867d55a4d1adac6e89678ffc9fa06ed04" Jul 10 00:42:27.793578 containerd[1443]: time="2025-07-10T00:42:27.793541592Z" level=error msg="ContainerStatus for \"e767e8ee8b9e8ebabdedcfc29c5c585867d55a4d1adac6e89678ffc9fa06ed04\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"e767e8ee8b9e8ebabdedcfc29c5c585867d55a4d1adac6e89678ffc9fa06ed04\": not found" Jul 10 00:42:27.793698 kubelet[2471]: E0710 00:42:27.793661 2471 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc 
= an error occurred when try to find container \"e767e8ee8b9e8ebabdedcfc29c5c585867d55a4d1adac6e89678ffc9fa06ed04\": not found" containerID="e767e8ee8b9e8ebabdedcfc29c5c585867d55a4d1adac6e89678ffc9fa06ed04" Jul 10 00:42:27.793736 kubelet[2471]: I0710 00:42:27.793690 2471 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"e767e8ee8b9e8ebabdedcfc29c5c585867d55a4d1adac6e89678ffc9fa06ed04"} err="failed to get container status \"e767e8ee8b9e8ebabdedcfc29c5c585867d55a4d1adac6e89678ffc9fa06ed04\": rpc error: code = NotFound desc = an error occurred when try to find container \"e767e8ee8b9e8ebabdedcfc29c5c585867d55a4d1adac6e89678ffc9fa06ed04\": not found" Jul 10 00:42:27.793736 kubelet[2471]: I0710 00:42:27.793709 2471 scope.go:117] "RemoveContainer" containerID="d23a808ae7dd1afd83da8fdc90ae9d5e86919e98bfa059ad5575fc585222f94f" Jul 10 00:42:27.793917 containerd[1443]: time="2025-07-10T00:42:27.793883426Z" level=error msg="ContainerStatus for \"d23a808ae7dd1afd83da8fdc90ae9d5e86919e98bfa059ad5575fc585222f94f\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"d23a808ae7dd1afd83da8fdc90ae9d5e86919e98bfa059ad5575fc585222f94f\": not found" Jul 10 00:42:27.794129 kubelet[2471]: E0710 00:42:27.794015 2471 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"d23a808ae7dd1afd83da8fdc90ae9d5e86919e98bfa059ad5575fc585222f94f\": not found" containerID="d23a808ae7dd1afd83da8fdc90ae9d5e86919e98bfa059ad5575fc585222f94f" Jul 10 00:42:27.794129 kubelet[2471]: I0710 00:42:27.794042 2471 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"d23a808ae7dd1afd83da8fdc90ae9d5e86919e98bfa059ad5575fc585222f94f"} err="failed to get container status \"d23a808ae7dd1afd83da8fdc90ae9d5e86919e98bfa059ad5575fc585222f94f\": rpc error: code = NotFound desc = an error occurred 
when try to find container \"d23a808ae7dd1afd83da8fdc90ae9d5e86919e98bfa059ad5575fc585222f94f\": not found" Jul 10 00:42:27.794129 kubelet[2471]: I0710 00:42:27.794057 2471 scope.go:117] "RemoveContainer" containerID="f4667e955ace9287ebe17b231d3d12a5077909e3d4d4d4ac3f7c731442fee9b6" Jul 10 00:42:27.794941 containerd[1443]: time="2025-07-10T00:42:27.794917927Z" level=info msg="RemoveContainer for \"f4667e955ace9287ebe17b231d3d12a5077909e3d4d4d4ac3f7c731442fee9b6\"" Jul 10 00:42:27.797097 containerd[1443]: time="2025-07-10T00:42:27.797058727Z" level=info msg="RemoveContainer for \"f4667e955ace9287ebe17b231d3d12a5077909e3d4d4d4ac3f7c731442fee9b6\" returns successfully" Jul 10 00:42:27.797310 kubelet[2471]: I0710 00:42:27.797229 2471 scope.go:117] "RemoveContainer" containerID="f4667e955ace9287ebe17b231d3d12a5077909e3d4d4d4ac3f7c731442fee9b6" Jul 10 00:42:27.797489 containerd[1443]: time="2025-07-10T00:42:27.797418600Z" level=error msg="ContainerStatus for \"f4667e955ace9287ebe17b231d3d12a5077909e3d4d4d4ac3f7c731442fee9b6\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"f4667e955ace9287ebe17b231d3d12a5077909e3d4d4d4ac3f7c731442fee9b6\": not found" Jul 10 00:42:27.797605 kubelet[2471]: E0710 00:42:27.797574 2471 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"f4667e955ace9287ebe17b231d3d12a5077909e3d4d4d4ac3f7c731442fee9b6\": not found" containerID="f4667e955ace9287ebe17b231d3d12a5077909e3d4d4d4ac3f7c731442fee9b6" Jul 10 00:42:27.797655 kubelet[2471]: I0710 00:42:27.797603 2471 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"f4667e955ace9287ebe17b231d3d12a5077909e3d4d4d4ac3f7c731442fee9b6"} err="failed to get container status \"f4667e955ace9287ebe17b231d3d12a5077909e3d4d4d4ac3f7c731442fee9b6\": rpc error: code = NotFound desc = an error occurred when try to find container 
\"f4667e955ace9287ebe17b231d3d12a5077909e3d4d4d4ac3f7c731442fee9b6\": not found" Jul 10 00:42:28.246946 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ec2a2b9edf1062838bca482ee517fa136febb5b902765b6819ea530400f4fab8-rootfs.mount: Deactivated successfully. Jul 10 00:42:28.247039 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3efb64fe18a30298b7931e252bbdfdb0053d2aa379ea0470dbf11418457cf7b0-rootfs.mount: Deactivated successfully. Jul 10 00:42:28.247096 systemd[1]: var-lib-kubelet-pods-c1513086\x2d0101\x2d42a2\x2d87e7\x2de5ce72618d17-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d4hqhh.mount: Deactivated successfully. Jul 10 00:42:28.247152 systemd[1]: var-lib-kubelet-pods-84241c7a\x2d8eed\x2d4a8c\x2d87b2\x2d38dd8cf6e250-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dvd2fx.mount: Deactivated successfully. Jul 10 00:42:28.247215 systemd[1]: var-lib-kubelet-pods-c1513086\x2d0101\x2d42a2\x2d87e7\x2de5ce72618d17-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Jul 10 00:42:28.247270 systemd[1]: var-lib-kubelet-pods-c1513086\x2d0101\x2d42a2\x2d87e7\x2de5ce72618d17-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Jul 10 00:42:28.572631 kubelet[2471]: I0710 00:42:28.572523 2471 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="84241c7a-8eed-4a8c-87b2-38dd8cf6e250" path="/var/lib/kubelet/pods/84241c7a-8eed-4a8c-87b2-38dd8cf6e250/volumes" Jul 10 00:42:28.572997 kubelet[2471]: I0710 00:42:28.572975 2471 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c1513086-0101-42a2-87e7-e5ce72618d17" path="/var/lib/kubelet/pods/c1513086-0101-42a2-87e7-e5ce72618d17/volumes" Jul 10 00:42:29.187145 sshd[4124]: pam_unix(sshd:session): session closed for user core Jul 10 00:42:29.199289 systemd[1]: sshd@22-10.0.0.128:22-10.0.0.1:51352.service: Deactivated successfully. 
Jul 10 00:42:29.201300 systemd[1]: session-23.scope: Deactivated successfully. Jul 10 00:42:29.201540 systemd[1]: session-23.scope: Consumed 1.084s CPU time. Jul 10 00:42:29.202899 systemd-logind[1428]: Session 23 logged out. Waiting for processes to exit. Jul 10 00:42:29.212727 systemd[1]: Started sshd@23-10.0.0.128:22-10.0.0.1:51358.service - OpenSSH per-connection server daemon (10.0.0.1:51358). Jul 10 00:42:29.214354 systemd-logind[1428]: Removed session 23. Jul 10 00:42:29.250623 sshd[4285]: Accepted publickey for core from 10.0.0.1 port 51358 ssh2: RSA SHA256:6ip12YJRxsd1pENU5FRdLlGNHGMqkKNn+D5B7RGN6xs Jul 10 00:42:29.252006 sshd[4285]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 00:42:29.256280 systemd-logind[1428]: New session 24 of user core. Jul 10 00:42:29.265607 systemd[1]: Started session-24.scope - Session 24 of User core. Jul 10 00:42:30.221590 sshd[4285]: pam_unix(sshd:session): session closed for user core Jul 10 00:42:30.233137 systemd[1]: sshd@23-10.0.0.128:22-10.0.0.1:51358.service: Deactivated successfully. Jul 10 00:42:30.238085 systemd[1]: session-24.scope: Deactivated successfully. Jul 10 00:42:30.242556 systemd-logind[1428]: Session 24 logged out. Waiting for processes to exit. Jul 10 00:42:30.250827 systemd[1]: Started sshd@24-10.0.0.128:22-10.0.0.1:51368.service - OpenSSH per-connection server daemon (10.0.0.1:51368). Jul 10 00:42:30.255024 systemd-logind[1428]: Removed session 24. Jul 10 00:42:30.268612 systemd[1]: Created slice kubepods-burstable-pod92e7a2c5_f109_4818_b2e2_3351c844ea5a.slice - libcontainer container kubepods-burstable-pod92e7a2c5_f109_4818_b2e2_3351c844ea5a.slice. 
Jul 10 00:42:30.296101 sshd[4299]: Accepted publickey for core from 10.0.0.1 port 51368 ssh2: RSA SHA256:6ip12YJRxsd1pENU5FRdLlGNHGMqkKNn+D5B7RGN6xs Jul 10 00:42:30.297757 sshd[4299]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 00:42:30.302079 systemd-logind[1428]: New session 25 of user core. Jul 10 00:42:30.316659 systemd[1]: Started session-25.scope - Session 25 of User core. Jul 10 00:42:30.333721 kubelet[2471]: I0710 00:42:30.333683 2471 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/92e7a2c5-f109-4818-b2e2-3351c844ea5a-hostproc\") pod \"cilium-g6kzt\" (UID: \"92e7a2c5-f109-4818-b2e2-3351c844ea5a\") " pod="kube-system/cilium-g6kzt" Jul 10 00:42:30.333721 kubelet[2471]: I0710 00:42:30.333723 2471 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/92e7a2c5-f109-4818-b2e2-3351c844ea5a-cilium-cgroup\") pod \"cilium-g6kzt\" (UID: \"92e7a2c5-f109-4818-b2e2-3351c844ea5a\") " pod="kube-system/cilium-g6kzt" Jul 10 00:42:30.334045 kubelet[2471]: I0710 00:42:30.333739 2471 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/92e7a2c5-f109-4818-b2e2-3351c844ea5a-host-proc-sys-net\") pod \"cilium-g6kzt\" (UID: \"92e7a2c5-f109-4818-b2e2-3351c844ea5a\") " pod="kube-system/cilium-g6kzt" Jul 10 00:42:30.334045 kubelet[2471]: I0710 00:42:30.333753 2471 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/92e7a2c5-f109-4818-b2e2-3351c844ea5a-hubble-tls\") pod \"cilium-g6kzt\" (UID: \"92e7a2c5-f109-4818-b2e2-3351c844ea5a\") " pod="kube-system/cilium-g6kzt" Jul 10 00:42:30.334045 kubelet[2471]: I0710 00:42:30.333769 2471 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/92e7a2c5-f109-4818-b2e2-3351c844ea5a-cni-path\") pod \"cilium-g6kzt\" (UID: \"92e7a2c5-f109-4818-b2e2-3351c844ea5a\") " pod="kube-system/cilium-g6kzt" Jul 10 00:42:30.334045 kubelet[2471]: I0710 00:42:30.333784 2471 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/92e7a2c5-f109-4818-b2e2-3351c844ea5a-etc-cni-netd\") pod \"cilium-g6kzt\" (UID: \"92e7a2c5-f109-4818-b2e2-3351c844ea5a\") " pod="kube-system/cilium-g6kzt" Jul 10 00:42:30.334045 kubelet[2471]: I0710 00:42:30.333798 2471 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/92e7a2c5-f109-4818-b2e2-3351c844ea5a-cilium-config-path\") pod \"cilium-g6kzt\" (UID: \"92e7a2c5-f109-4818-b2e2-3351c844ea5a\") " pod="kube-system/cilium-g6kzt" Jul 10 00:42:30.334045 kubelet[2471]: I0710 00:42:30.333817 2471 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/92e7a2c5-f109-4818-b2e2-3351c844ea5a-xtables-lock\") pod \"cilium-g6kzt\" (UID: \"92e7a2c5-f109-4818-b2e2-3351c844ea5a\") " pod="kube-system/cilium-g6kzt" Jul 10 00:42:30.334169 kubelet[2471]: I0710 00:42:30.333832 2471 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/92e7a2c5-f109-4818-b2e2-3351c844ea5a-host-proc-sys-kernel\") pod \"cilium-g6kzt\" (UID: \"92e7a2c5-f109-4818-b2e2-3351c844ea5a\") " pod="kube-system/cilium-g6kzt" Jul 10 00:42:30.334169 kubelet[2471]: I0710 00:42:30.333848 2471 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: 
\"kubernetes.io/host-path/92e7a2c5-f109-4818-b2e2-3351c844ea5a-cilium-run\") pod \"cilium-g6kzt\" (UID: \"92e7a2c5-f109-4818-b2e2-3351c844ea5a\") " pod="kube-system/cilium-g6kzt" Jul 10 00:42:30.334169 kubelet[2471]: I0710 00:42:30.333863 2471 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/92e7a2c5-f109-4818-b2e2-3351c844ea5a-cilium-ipsec-secrets\") pod \"cilium-g6kzt\" (UID: \"92e7a2c5-f109-4818-b2e2-3351c844ea5a\") " pod="kube-system/cilium-g6kzt" Jul 10 00:42:30.334169 kubelet[2471]: I0710 00:42:30.333879 2471 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k6stn\" (UniqueName: \"kubernetes.io/projected/92e7a2c5-f109-4818-b2e2-3351c844ea5a-kube-api-access-k6stn\") pod \"cilium-g6kzt\" (UID: \"92e7a2c5-f109-4818-b2e2-3351c844ea5a\") " pod="kube-system/cilium-g6kzt" Jul 10 00:42:30.334169 kubelet[2471]: I0710 00:42:30.333894 2471 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/92e7a2c5-f109-4818-b2e2-3351c844ea5a-bpf-maps\") pod \"cilium-g6kzt\" (UID: \"92e7a2c5-f109-4818-b2e2-3351c844ea5a\") " pod="kube-system/cilium-g6kzt" Jul 10 00:42:30.334268 kubelet[2471]: I0710 00:42:30.333908 2471 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/92e7a2c5-f109-4818-b2e2-3351c844ea5a-clustermesh-secrets\") pod \"cilium-g6kzt\" (UID: \"92e7a2c5-f109-4818-b2e2-3351c844ea5a\") " pod="kube-system/cilium-g6kzt" Jul 10 00:42:30.334268 kubelet[2471]: I0710 00:42:30.333922 2471 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/92e7a2c5-f109-4818-b2e2-3351c844ea5a-lib-modules\") pod \"cilium-g6kzt\" (UID: 
\"92e7a2c5-f109-4818-b2e2-3351c844ea5a\") " pod="kube-system/cilium-g6kzt" Jul 10 00:42:30.372500 sshd[4299]: pam_unix(sshd:session): session closed for user core Jul 10 00:42:30.386401 systemd[1]: sshd@24-10.0.0.128:22-10.0.0.1:51368.service: Deactivated successfully. Jul 10 00:42:30.388348 systemd[1]: session-25.scope: Deactivated successfully. Jul 10 00:42:30.389948 systemd-logind[1428]: Session 25 logged out. Waiting for processes to exit. Jul 10 00:42:30.402765 systemd[1]: Started sshd@25-10.0.0.128:22-10.0.0.1:51380.service - OpenSSH per-connection server daemon (10.0.0.1:51380). Jul 10 00:42:30.403674 systemd-logind[1428]: Removed session 25. Jul 10 00:42:30.436814 sshd[4307]: Accepted publickey for core from 10.0.0.1 port 51380 ssh2: RSA SHA256:6ip12YJRxsd1pENU5FRdLlGNHGMqkKNn+D5B7RGN6xs Jul 10 00:42:30.443108 sshd[4307]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 00:42:30.454037 systemd-logind[1428]: New session 26 of user core. Jul 10 00:42:30.463638 systemd[1]: Started session-26.scope - Session 26 of User core. Jul 10 00:42:30.574851 kubelet[2471]: E0710 00:42:30.574246 2471 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:42:30.574943 containerd[1443]: time="2025-07-10T00:42:30.574660327Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-g6kzt,Uid:92e7a2c5-f109-4818-b2e2-3351c844ea5a,Namespace:kube-system,Attempt:0,}" Jul 10 00:42:30.595662 containerd[1443]: time="2025-07-10T00:42:30.595329896Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 10 00:42:30.595662 containerd[1443]: time="2025-07-10T00:42:30.595398375Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 10 00:42:30.595662 containerd[1443]: time="2025-07-10T00:42:30.595417095Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 10 00:42:30.595662 containerd[1443]: time="2025-07-10T00:42:30.595517333Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 10 00:42:30.615876 systemd[1]: Started cri-containerd-7eaa095b89d79fba5b5dbfe6273b8d335557a8d3ec3f3c62a786e549adcb0ce6.scope - libcontainer container 7eaa095b89d79fba5b5dbfe6273b8d335557a8d3ec3f3c62a786e549adcb0ce6. Jul 10 00:42:30.643175 containerd[1443]: time="2025-07-10T00:42:30.643137404Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-g6kzt,Uid:92e7a2c5-f109-4818-b2e2-3351c844ea5a,Namespace:kube-system,Attempt:0,} returns sandbox id \"7eaa095b89d79fba5b5dbfe6273b8d335557a8d3ec3f3c62a786e549adcb0ce6\"" Jul 10 00:42:30.648038 kubelet[2471]: E0710 00:42:30.647652 2471 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:42:30.653862 containerd[1443]: time="2025-07-10T00:42:30.653811943Z" level=info msg="CreateContainer within sandbox \"7eaa095b89d79fba5b5dbfe6273b8d335557a8d3ec3f3c62a786e549adcb0ce6\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jul 10 00:42:30.664376 containerd[1443]: time="2025-07-10T00:42:30.664322204Z" level=info msg="CreateContainer within sandbox \"7eaa095b89d79fba5b5dbfe6273b8d335557a8d3ec3f3c62a786e549adcb0ce6\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"0f896c7079208e10ba68085e67dacf3eea13b9aabccacbb0b43ed99014501732\"" Jul 10 00:42:30.664941 containerd[1443]: time="2025-07-10T00:42:30.664909314Z" level=info msg="StartContainer for 
\"0f896c7079208e10ba68085e67dacf3eea13b9aabccacbb0b43ed99014501732\"" Jul 10 00:42:30.710659 systemd[1]: Started cri-containerd-0f896c7079208e10ba68085e67dacf3eea13b9aabccacbb0b43ed99014501732.scope - libcontainer container 0f896c7079208e10ba68085e67dacf3eea13b9aabccacbb0b43ed99014501732. Jul 10 00:42:30.733998 containerd[1443]: time="2025-07-10T00:42:30.733959541Z" level=info msg="StartContainer for \"0f896c7079208e10ba68085e67dacf3eea13b9aabccacbb0b43ed99014501732\" returns successfully" Jul 10 00:42:30.749840 systemd[1]: cri-containerd-0f896c7079208e10ba68085e67dacf3eea13b9aabccacbb0b43ed99014501732.scope: Deactivated successfully. Jul 10 00:42:30.763537 kubelet[2471]: E0710 00:42:30.762869 2471 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:42:30.787964 containerd[1443]: time="2025-07-10T00:42:30.787752147Z" level=info msg="shim disconnected" id=0f896c7079208e10ba68085e67dacf3eea13b9aabccacbb0b43ed99014501732 namespace=k8s.io Jul 10 00:42:30.787964 containerd[1443]: time="2025-07-10T00:42:30.787805906Z" level=warning msg="cleaning up after shim disconnected" id=0f896c7079208e10ba68085e67dacf3eea13b9aabccacbb0b43ed99014501732 namespace=k8s.io Jul 10 00:42:30.787964 containerd[1443]: time="2025-07-10T00:42:30.787814066Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 10 00:42:31.765731 kubelet[2471]: E0710 00:42:31.765584 2471 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:42:31.773022 containerd[1443]: time="2025-07-10T00:42:31.772985618Z" level=info msg="CreateContainer within sandbox \"7eaa095b89d79fba5b5dbfe6273b8d335557a8d3ec3f3c62a786e549adcb0ce6\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jul 10 00:42:31.797168 containerd[1443]: 
time="2025-07-10T00:42:31.797045542Z" level=info msg="CreateContainer within sandbox \"7eaa095b89d79fba5b5dbfe6273b8d335557a8d3ec3f3c62a786e549adcb0ce6\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"d389739f11322728cb1a82ffb7241aac39840359a094d1cba26b113546db6c06\"" Jul 10 00:42:31.797917 containerd[1443]: time="2025-07-10T00:42:31.797633892Z" level=info msg="StartContainer for \"d389739f11322728cb1a82ffb7241aac39840359a094d1cba26b113546db6c06\"" Jul 10 00:42:31.798045 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2537280860.mount: Deactivated successfully. Jul 10 00:42:31.826663 systemd[1]: Started cri-containerd-d389739f11322728cb1a82ffb7241aac39840359a094d1cba26b113546db6c06.scope - libcontainer container d389739f11322728cb1a82ffb7241aac39840359a094d1cba26b113546db6c06. Jul 10 00:42:31.851684 systemd[1]: cri-containerd-d389739f11322728cb1a82ffb7241aac39840359a094d1cba26b113546db6c06.scope: Deactivated successfully. Jul 10 00:42:31.873122 containerd[1443]: time="2025-07-10T00:42:31.873081050Z" level=info msg="StartContainer for \"d389739f11322728cb1a82ffb7241aac39840359a094d1cba26b113546db6c06\" returns successfully" Jul 10 00:42:31.876033 containerd[1443]: time="2025-07-10T00:42:31.875844725Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod92e7a2c5_f109_4818_b2e2_3351c844ea5a.slice/cri-containerd-d389739f11322728cb1a82ffb7241aac39840359a094d1cba26b113546db6c06.scope/memory.events\": no such file or directory" Jul 10 00:42:31.899092 containerd[1443]: time="2025-07-10T00:42:31.899026543Z" level=info msg="shim disconnected" id=d389739f11322728cb1a82ffb7241aac39840359a094d1cba26b113546db6c06 namespace=k8s.io Jul 10 00:42:31.899092 containerd[1443]: time="2025-07-10T00:42:31.899089302Z" level=warning msg="cleaning up after shim disconnected" 
id=d389739f11322728cb1a82ffb7241aac39840359a094d1cba26b113546db6c06 namespace=k8s.io Jul 10 00:42:31.899092 containerd[1443]: time="2025-07-10T00:42:31.899099302Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 10 00:42:32.438599 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d389739f11322728cb1a82ffb7241aac39840359a094d1cba26b113546db6c06-rootfs.mount: Deactivated successfully. Jul 10 00:42:32.606812 kubelet[2471]: E0710 00:42:32.606772 2471 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jul 10 00:42:32.768295 kubelet[2471]: E0710 00:42:32.768166 2471 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:42:32.772157 containerd[1443]: time="2025-07-10T00:42:32.772063850Z" level=info msg="CreateContainer within sandbox \"7eaa095b89d79fba5b5dbfe6273b8d335557a8d3ec3f3c62a786e549adcb0ce6\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jul 10 00:42:32.784757 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3522596885.mount: Deactivated successfully. 
Jul 10 00:42:32.788693 containerd[1443]: time="2025-07-10T00:42:32.788576347Z" level=info msg="CreateContainer within sandbox \"7eaa095b89d79fba5b5dbfe6273b8d335557a8d3ec3f3c62a786e549adcb0ce6\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"493b29cff58a04d15243ba7dbabc8ef1ea3041214ccd3b7aa3846cb2aabf9ede\"" Jul 10 00:42:32.789042 containerd[1443]: time="2025-07-10T00:42:32.789006940Z" level=info msg="StartContainer for \"493b29cff58a04d15243ba7dbabc8ef1ea3041214ccd3b7aa3846cb2aabf9ede\"" Jul 10 00:42:32.815642 systemd[1]: Started cri-containerd-493b29cff58a04d15243ba7dbabc8ef1ea3041214ccd3b7aa3846cb2aabf9ede.scope - libcontainer container 493b29cff58a04d15243ba7dbabc8ef1ea3041214ccd3b7aa3846cb2aabf9ede. Jul 10 00:42:32.838849 systemd[1]: cri-containerd-493b29cff58a04d15243ba7dbabc8ef1ea3041214ccd3b7aa3846cb2aabf9ede.scope: Deactivated successfully. Jul 10 00:42:32.839855 containerd[1443]: time="2025-07-10T00:42:32.839776090Z" level=info msg="StartContainer for \"493b29cff58a04d15243ba7dbabc8ef1ea3041214ccd3b7aa3846cb2aabf9ede\" returns successfully" Jul 10 00:42:32.871124 containerd[1443]: time="2025-07-10T00:42:32.871069671Z" level=info msg="shim disconnected" id=493b29cff58a04d15243ba7dbabc8ef1ea3041214ccd3b7aa3846cb2aabf9ede namespace=k8s.io Jul 10 00:42:32.871124 containerd[1443]: time="2025-07-10T00:42:32.871121271Z" level=warning msg="cleaning up after shim disconnected" id=493b29cff58a04d15243ba7dbabc8ef1ea3041214ccd3b7aa3846cb2aabf9ede namespace=k8s.io Jul 10 00:42:32.871124 containerd[1443]: time="2025-07-10T00:42:32.871129831Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 10 00:42:33.438662 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-493b29cff58a04d15243ba7dbabc8ef1ea3041214ccd3b7aa3846cb2aabf9ede-rootfs.mount: Deactivated successfully. 
Jul 10 00:42:33.771699 kubelet[2471]: E0710 00:42:33.771594 2471 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:42:33.776503 containerd[1443]: time="2025-07-10T00:42:33.776326464Z" level=info msg="CreateContainer within sandbox \"7eaa095b89d79fba5b5dbfe6273b8d335557a8d3ec3f3c62a786e549adcb0ce6\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jul 10 00:42:33.795082 containerd[1443]: time="2025-07-10T00:42:33.795039895Z" level=info msg="CreateContainer within sandbox \"7eaa095b89d79fba5b5dbfe6273b8d335557a8d3ec3f3c62a786e549adcb0ce6\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"9201dee71eee7bd14915d7fa82b473ce0a8ef1d2fc01eb7bb1091a4851fdeef4\"" Jul 10 00:42:33.796651 containerd[1443]: time="2025-07-10T00:42:33.795817803Z" level=info msg="StartContainer for \"9201dee71eee7bd14915d7fa82b473ce0a8ef1d2fc01eb7bb1091a4851fdeef4\"" Jul 10 00:42:33.825664 systemd[1]: Started cri-containerd-9201dee71eee7bd14915d7fa82b473ce0a8ef1d2fc01eb7bb1091a4851fdeef4.scope - libcontainer container 9201dee71eee7bd14915d7fa82b473ce0a8ef1d2fc01eb7bb1091a4851fdeef4. Jul 10 00:42:33.845461 systemd[1]: cri-containerd-9201dee71eee7bd14915d7fa82b473ce0a8ef1d2fc01eb7bb1091a4851fdeef4.scope: Deactivated successfully. 
Jul 10 00:42:33.847416 containerd[1443]: time="2025-07-10T00:42:33.847182369Z" level=info msg="StartContainer for \"9201dee71eee7bd14915d7fa82b473ce0a8ef1d2fc01eb7bb1091a4851fdeef4\" returns successfully" Jul 10 00:42:33.867172 containerd[1443]: time="2025-07-10T00:42:33.867093702Z" level=info msg="shim disconnected" id=9201dee71eee7bd14915d7fa82b473ce0a8ef1d2fc01eb7bb1091a4851fdeef4 namespace=k8s.io Jul 10 00:42:33.867172 containerd[1443]: time="2025-07-10T00:42:33.867164381Z" level=warning msg="cleaning up after shim disconnected" id=9201dee71eee7bd14915d7fa82b473ce0a8ef1d2fc01eb7bb1091a4851fdeef4 namespace=k8s.io Jul 10 00:42:33.867172 containerd[1443]: time="2025-07-10T00:42:33.867174021Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 10 00:42:34.438730 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9201dee71eee7bd14915d7fa82b473ce0a8ef1d2fc01eb7bb1091a4851fdeef4-rootfs.mount: Deactivated successfully. Jul 10 00:42:34.775807 kubelet[2471]: E0710 00:42:34.775695 2471 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:42:34.781032 containerd[1443]: time="2025-07-10T00:42:34.779918098Z" level=info msg="CreateContainer within sandbox \"7eaa095b89d79fba5b5dbfe6273b8d335557a8d3ec3f3c62a786e549adcb0ce6\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jul 10 00:42:34.801556 containerd[1443]: time="2025-07-10T00:42:34.801505575Z" level=info msg="CreateContainer within sandbox \"7eaa095b89d79fba5b5dbfe6273b8d335557a8d3ec3f3c62a786e549adcb0ce6\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"93cefb6833fa3cd46ca203172561153ec729dfe9bf6b48aa8de24a78b6cba071\"" Jul 10 00:42:34.803188 containerd[1443]: time="2025-07-10T00:42:34.802179645Z" level=info msg="StartContainer for \"93cefb6833fa3cd46ca203172561153ec729dfe9bf6b48aa8de24a78b6cba071\"" Jul 10 00:42:34.828665 
systemd[1]: Started cri-containerd-93cefb6833fa3cd46ca203172561153ec729dfe9bf6b48aa8de24a78b6cba071.scope - libcontainer container 93cefb6833fa3cd46ca203172561153ec729dfe9bf6b48aa8de24a78b6cba071. Jul 10 00:42:34.853664 containerd[1443]: time="2025-07-10T00:42:34.853040444Z" level=info msg="StartContainer for \"93cefb6833fa3cd46ca203172561153ec729dfe9bf6b48aa8de24a78b6cba071\" returns successfully" Jul 10 00:42:34.999000 kubelet[2471]: I0710 00:42:34.998952 2471 setters.go:618] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-07-10T00:42:34Z","lastTransitionTime":"2025-07-10T00:42:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Jul 10 00:42:35.125502 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce)) Jul 10 00:42:35.779831 kubelet[2471]: E0710 00:42:35.779790 2471 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:42:35.795035 kubelet[2471]: I0710 00:42:35.794962 2471 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-g6kzt" podStartSLOduration=5.7949478800000005 podStartE2EDuration="5.79494788s" podCreationTimestamp="2025-07-10 00:42:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-10 00:42:35.79358314 +0000 UTC m=+83.327452879" watchObservedRunningTime="2025-07-10 00:42:35.79494788 +0000 UTC m=+83.328817619" Jul 10 00:42:36.783293 kubelet[2471]: E0710 00:42:36.783229 2471 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:42:37.984673 
systemd-networkd[1385]: lxc_health: Link UP Jul 10 00:42:37.995506 systemd-networkd[1385]: lxc_health: Gained carrier Jul 10 00:42:38.576691 kubelet[2471]: E0710 00:42:38.576336 2471 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:42:38.786894 kubelet[2471]: E0710 00:42:38.785665 2471 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:42:39.436627 systemd-networkd[1385]: lxc_health: Gained IPv6LL Jul 10 00:42:39.788300 kubelet[2471]: E0710 00:42:39.787814 2471 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:42:43.314218 sshd[4307]: pam_unix(sshd:session): session closed for user core Jul 10 00:42:43.318342 systemd[1]: sshd@25-10.0.0.128:22-10.0.0.1:51380.service: Deactivated successfully. Jul 10 00:42:43.320265 systemd[1]: session-26.scope: Deactivated successfully. Jul 10 00:42:43.321347 systemd-logind[1428]: Session 26 logged out. Waiting for processes to exit. Jul 10 00:42:43.322754 systemd-logind[1428]: Removed session 26.