Jul 11 00:06:28.922019 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Jul 11 00:06:28.922053 kernel: Linux version 6.6.96-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Thu Jul 10 22:41:52 -00 2025
Jul 11 00:06:28.922065 kernel: KASLR enabled
Jul 11 00:06:28.922071 kernel: efi: EFI v2.7 by EDK II
Jul 11 00:06:28.922077 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdba86018 ACPI 2.0=0xd9710018 RNG=0xd971e498 MEMRESERVE=0xd9b43d18
Jul 11 00:06:28.922082 kernel: random: crng init done
Jul 11 00:06:28.922090 kernel: ACPI: Early table checksum verification disabled
Jul 11 00:06:28.922095 kernel: ACPI: RSDP 0x00000000D9710018 000024 (v02 BOCHS )
Jul 11 00:06:28.922125 kernel: ACPI: XSDT 0x00000000D971FE98 000064 (v01 BOCHS BXPC 00000001 01000013)
Jul 11 00:06:28.922140 kernel: ACPI: FACP 0x00000000D971FA98 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Jul 11 00:06:28.922147 kernel: ACPI: DSDT 0x00000000D9717518 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jul 11 00:06:28.922153 kernel: ACPI: APIC 0x00000000D971FC18 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Jul 11 00:06:28.922159 kernel: ACPI: PPTT 0x00000000D971D898 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jul 11 00:06:28.922166 kernel: ACPI: GTDT 0x00000000D971E818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jul 11 00:06:28.922173 kernel: ACPI: MCFG 0x00000000D971E918 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 11 00:06:28.922181 kernel: ACPI: SPCR 0x00000000D971FF98 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jul 11 00:06:28.922188 kernel: ACPI: DBG2 0x00000000D971E418 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Jul 11 00:06:28.922194 kernel: ACPI: IORT 0x00000000D971E718 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jul 11 00:06:28.922201 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
Jul 11 00:06:28.922207 kernel: NUMA: Failed to initialise from firmware
Jul 11 00:06:28.922213 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
Jul 11 00:06:28.922220 kernel: NUMA: NODE_DATA [mem 0xdc958800-0xdc95dfff]
Jul 11 00:06:28.922226 kernel: Zone ranges:
Jul 11 00:06:28.922232 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff]
Jul 11 00:06:28.922239 kernel: DMA32 empty
Jul 11 00:06:28.922246 kernel: Normal empty
Jul 11 00:06:28.922253 kernel: Movable zone start for each node
Jul 11 00:06:28.922260 kernel: Early memory node ranges
Jul 11 00:06:28.922266 kernel: node 0: [mem 0x0000000040000000-0x00000000d976ffff]
Jul 11 00:06:28.922272 kernel: node 0: [mem 0x00000000d9770000-0x00000000d9b3ffff]
Jul 11 00:06:28.922279 kernel: node 0: [mem 0x00000000d9b40000-0x00000000dce1ffff]
Jul 11 00:06:28.922285 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff]
Jul 11 00:06:28.922291 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff]
Jul 11 00:06:28.922297 kernel: node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff]
Jul 11 00:06:28.922304 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff]
Jul 11 00:06:28.922310 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
Jul 11 00:06:28.922316 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
Jul 11 00:06:28.922324 kernel: psci: probing for conduit method from ACPI.
Jul 11 00:06:28.922331 kernel: psci: PSCIv1.1 detected in firmware.
Jul 11 00:06:28.922337 kernel: psci: Using standard PSCI v0.2 function IDs
Jul 11 00:06:28.922346 kernel: psci: Trusted OS migration not required
Jul 11 00:06:28.922353 kernel: psci: SMC Calling Convention v1.1
Jul 11 00:06:28.922360 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Jul 11 00:06:28.922368 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976
Jul 11 00:06:28.922375 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096
Jul 11 00:06:28.922382 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
Jul 11 00:06:28.922388 kernel: Detected PIPT I-cache on CPU0
Jul 11 00:06:28.922395 kernel: CPU features: detected: GIC system register CPU interface
Jul 11 00:06:28.922402 kernel: CPU features: detected: Hardware dirty bit management
Jul 11 00:06:28.922408 kernel: CPU features: detected: Spectre-v4
Jul 11 00:06:28.922415 kernel: CPU features: detected: Spectre-BHB
Jul 11 00:06:28.922422 kernel: CPU features: kernel page table isolation forced ON by KASLR
Jul 11 00:06:28.922428 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Jul 11 00:06:28.922436 kernel: CPU features: detected: ARM erratum 1418040
Jul 11 00:06:28.922443 kernel: CPU features: detected: SSBS not fully self-synchronizing
Jul 11 00:06:28.922450 kernel: alternatives: applying boot alternatives
Jul 11 00:06:28.922458 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=1479f76954ab5eb3c0ce800eb2a80ad04b273ff773a5af5c1fe82fb8feef2990
Jul 11 00:06:28.922465 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jul 11 00:06:28.922472 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jul 11 00:06:28.922479 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jul 11 00:06:28.922485 kernel: Fallback order for Node 0: 0
Jul 11 00:06:28.922492 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024
Jul 11 00:06:28.922498 kernel: Policy zone: DMA
Jul 11 00:06:28.922505 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jul 11 00:06:28.922513 kernel: software IO TLB: area num 4.
Jul 11 00:06:28.922520 kernel: software IO TLB: mapped [mem 0x00000000d2e00000-0x00000000d6e00000] (64MB)
Jul 11 00:06:28.922527 kernel: Memory: 2386404K/2572288K available (10304K kernel code, 2186K rwdata, 8108K rodata, 39424K init, 897K bss, 185884K reserved, 0K cma-reserved)
Jul 11 00:06:28.922533 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Jul 11 00:06:28.922540 kernel: rcu: Preemptible hierarchical RCU implementation.
Jul 11 00:06:28.922547 kernel: rcu: RCU event tracing is enabled.
Jul 11 00:06:28.922554 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Jul 11 00:06:28.922561 kernel: Trampoline variant of Tasks RCU enabled.
Jul 11 00:06:28.922568 kernel: Tracing variant of Tasks RCU enabled.
Jul 11 00:06:28.922575 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jul 11 00:06:28.922581 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Jul 11 00:06:28.922588 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Jul 11 00:06:28.922596 kernel: GICv3: 256 SPIs implemented
Jul 11 00:06:28.922602 kernel: GICv3: 0 Extended SPIs implemented
Jul 11 00:06:28.922609 kernel: Root IRQ handler: gic_handle_irq
Jul 11 00:06:28.922616 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Jul 11 00:06:28.922626 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Jul 11 00:06:28.922632 kernel: ITS [mem 0x08080000-0x0809ffff]
Jul 11 00:06:28.922639 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400c0000 (indirect, esz 8, psz 64K, shr 1)
Jul 11 00:06:28.922646 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400d0000 (flat, esz 8, psz 64K, shr 1)
Jul 11 00:06:28.922653 kernel: GICv3: using LPI property table @0x00000000400f0000
Jul 11 00:06:28.922660 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040100000
Jul 11 00:06:28.922667 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jul 11 00:06:28.922675 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 11 00:06:28.922682 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Jul 11 00:06:28.922689 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Jul 11 00:06:28.922696 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Jul 11 00:06:28.922703 kernel: arm-pv: using stolen time PV
Jul 11 00:06:28.922710 kernel: Console: colour dummy device 80x25
Jul 11 00:06:28.922716 kernel: ACPI: Core revision 20230628
Jul 11 00:06:28.922723 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Jul 11 00:06:28.922730 kernel: pid_max: default: 32768 minimum: 301
Jul 11 00:06:28.922737 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jul 11 00:06:28.922746 kernel: landlock: Up and running.
Jul 11 00:06:28.922752 kernel: SELinux: Initializing.
Jul 11 00:06:28.922759 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jul 11 00:06:28.922766 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jul 11 00:06:28.922773 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jul 11 00:06:28.922780 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jul 11 00:06:28.922787 kernel: rcu: Hierarchical SRCU implementation.
Jul 11 00:06:28.922794 kernel: rcu: Max phase no-delay instances is 400.
Jul 11 00:06:28.922801 kernel: Platform MSI: ITS@0x8080000 domain created
Jul 11 00:06:28.922809 kernel: PCI/MSI: ITS@0x8080000 domain created
Jul 11 00:06:28.922815 kernel: Remapping and enabling EFI services.
Jul 11 00:06:28.922822 kernel: smp: Bringing up secondary CPUs ...
Jul 11 00:06:28.922829 kernel: Detected PIPT I-cache on CPU1
Jul 11 00:06:28.922836 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Jul 11 00:06:28.922843 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040110000
Jul 11 00:06:28.922850 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 11 00:06:28.922857 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Jul 11 00:06:28.922864 kernel: Detected PIPT I-cache on CPU2
Jul 11 00:06:28.922873 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
Jul 11 00:06:28.922882 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040120000
Jul 11 00:06:28.922889 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 11 00:06:28.922904 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
Jul 11 00:06:28.922915 kernel: Detected PIPT I-cache on CPU3
Jul 11 00:06:28.922924 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
Jul 11 00:06:28.922932 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040130000
Jul 11 00:06:28.922963 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 11 00:06:28.922971 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
Jul 11 00:06:28.922978 kernel: smp: Brought up 1 node, 4 CPUs
Jul 11 00:06:28.922987 kernel: SMP: Total of 4 processors activated.
Jul 11 00:06:28.922995 kernel: CPU features: detected: 32-bit EL0 Support
Jul 11 00:06:28.923002 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Jul 11 00:06:28.923009 kernel: CPU features: detected: Common not Private translations
Jul 11 00:06:28.923017 kernel: CPU features: detected: CRC32 instructions
Jul 11 00:06:28.923024 kernel: CPU features: detected: Enhanced Virtualization Traps
Jul 11 00:06:28.923031 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Jul 11 00:06:28.923042 kernel: CPU features: detected: LSE atomic instructions
Jul 11 00:06:28.923052 kernel: CPU features: detected: Privileged Access Never
Jul 11 00:06:28.923059 kernel: CPU features: detected: RAS Extension Support
Jul 11 00:06:28.923067 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Jul 11 00:06:28.923074 kernel: CPU: All CPU(s) started at EL1
Jul 11 00:06:28.923084 kernel: alternatives: applying system-wide alternatives
Jul 11 00:06:28.923091 kernel: devtmpfs: initialized
Jul 11 00:06:28.923099 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jul 11 00:06:28.923106 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Jul 11 00:06:28.923124 kernel: pinctrl core: initialized pinctrl subsystem
Jul 11 00:06:28.923134 kernel: SMBIOS 3.0.0 present.
Jul 11 00:06:28.923142 kernel: DMI: QEMU KVM Virtual Machine, BIOS edk2-20230524-3.fc38 05/24/2023
Jul 11 00:06:28.923149 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jul 11 00:06:28.923157 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Jul 11 00:06:28.923164 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Jul 11 00:06:28.923172 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Jul 11 00:06:28.923179 kernel: audit: initializing netlink subsys (disabled)
Jul 11 00:06:28.923187 kernel: audit: type=2000 audit(0.023:1): state=initialized audit_enabled=0 res=1
Jul 11 00:06:28.923194 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jul 11 00:06:28.923203 kernel: cpuidle: using governor menu
Jul 11 00:06:28.923211 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Jul 11 00:06:28.923218 kernel: ASID allocator initialised with 32768 entries
Jul 11 00:06:28.923226 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jul 11 00:06:28.923233 kernel: Serial: AMBA PL011 UART driver
Jul 11 00:06:28.923241 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Jul 11 00:06:28.923248 kernel: Modules: 0 pages in range for non-PLT usage
Jul 11 00:06:28.923256 kernel: Modules: 509008 pages in range for PLT usage
Jul 11 00:06:28.923263 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jul 11 00:06:28.923272 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Jul 11 00:06:28.923280 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Jul 11 00:06:28.923287 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Jul 11 00:06:28.923294 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jul 11 00:06:28.923302 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Jul 11 00:06:28.923309 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Jul 11 00:06:28.923317 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Jul 11 00:06:28.923324 kernel: ACPI: Added _OSI(Module Device)
Jul 11 00:06:28.923332 kernel: ACPI: Added _OSI(Processor Device)
Jul 11 00:06:28.923341 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jul 11 00:06:28.923349 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jul 11 00:06:28.923356 kernel: ACPI: Interpreter enabled
Jul 11 00:06:28.923363 kernel: ACPI: Using GIC for interrupt routing
Jul 11 00:06:28.923370 kernel: ACPI: MCFG table detected, 1 entries
Jul 11 00:06:28.923378 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Jul 11 00:06:28.923386 kernel: printk: console [ttyAMA0] enabled
Jul 11 00:06:28.923393 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jul 11 00:06:28.923543 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jul 11 00:06:28.923623 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Jul 11 00:06:28.923691 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Jul 11 00:06:28.923756 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Jul 11 00:06:28.923822 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Jul 11 00:06:28.923833 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Jul 11 00:06:28.923840 kernel: PCI host bridge to bus 0000:00
Jul 11 00:06:28.923913 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Jul 11 00:06:28.923979 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Jul 11 00:06:28.924049 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Jul 11 00:06:28.924123 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jul 11 00:06:28.924226 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000
Jul 11 00:06:28.924314 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00
Jul 11 00:06:28.924385 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f]
Jul 11 00:06:28.924458 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff]
Jul 11 00:06:28.924526 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref]
Jul 11 00:06:28.924593 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref]
Jul 11 00:06:28.924660 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff]
Jul 11 00:06:28.924727 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f]
Jul 11 00:06:28.924794 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Jul 11 00:06:28.924853 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Jul 11 00:06:28.924914 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Jul 11 00:06:28.924924 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Jul 11 00:06:28.924931 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Jul 11 00:06:28.924939 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Jul 11 00:06:28.924946 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Jul 11 00:06:28.924954 kernel: iommu: Default domain type: Translated
Jul 11 00:06:28.924961 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Jul 11 00:06:28.924969 kernel: efivars: Registered efivars operations
Jul 11 00:06:28.924976 kernel: vgaarb: loaded
Jul 11 00:06:28.924985 kernel: clocksource: Switched to clocksource arch_sys_counter
Jul 11 00:06:28.924992 kernel: VFS: Disk quotas dquot_6.6.0
Jul 11 00:06:28.925000 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jul 11 00:06:28.925007 kernel: pnp: PnP ACPI init
Jul 11 00:06:28.925090 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Jul 11 00:06:28.925102 kernel: pnp: PnP ACPI: found 1 devices
Jul 11 00:06:28.925109 kernel: NET: Registered PF_INET protocol family
Jul 11 00:06:28.925128 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jul 11 00:06:28.925139 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jul 11 00:06:28.925146 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jul 11 00:06:28.925154 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jul 11 00:06:28.925161 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jul 11 00:06:28.925168 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jul 11 00:06:28.925176 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jul 11 00:06:28.925183 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jul 11 00:06:28.925191 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jul 11 00:06:28.925198 kernel: PCI: CLS 0 bytes, default 64
Jul 11 00:06:28.925207 kernel: kvm [1]: HYP mode not available
Jul 11 00:06:28.925215 kernel: Initialise system trusted keyrings
Jul 11 00:06:28.925222 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jul 11 00:06:28.925229 kernel: Key type asymmetric registered
Jul 11 00:06:28.925237 kernel: Asymmetric key parser 'x509' registered
Jul 11 00:06:28.925244 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Jul 11 00:06:28.925252 kernel: io scheduler mq-deadline registered
Jul 11 00:06:28.925259 kernel: io scheduler kyber registered
Jul 11 00:06:28.925266 kernel: io scheduler bfq registered
Jul 11 00:06:28.925275 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Jul 11 00:06:28.925282 kernel: ACPI: button: Power Button [PWRB]
Jul 11 00:06:28.925290 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Jul 11 00:06:28.925366 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
Jul 11 00:06:28.925377 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jul 11 00:06:28.925384 kernel: thunder_xcv, ver 1.0
Jul 11 00:06:28.925392 kernel: thunder_bgx, ver 1.0
Jul 11 00:06:28.925399 kernel: nicpf, ver 1.0
Jul 11 00:06:28.925406 kernel: nicvf, ver 1.0
Jul 11 00:06:28.925506 kernel: rtc-efi rtc-efi.0: registered as rtc0
Jul 11 00:06:28.925571 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-07-11T00:06:28 UTC (1752192388)
Jul 11 00:06:28.925581 kernel: hid: raw HID events driver (C) Jiri Kosina
Jul 11 00:06:28.925589 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available
Jul 11 00:06:28.925597 kernel: watchdog: Delayed init of the lockup detector failed: -19
Jul 11 00:06:28.925604 kernel: watchdog: Hard watchdog permanently disabled
Jul 11 00:06:28.925612 kernel: NET: Registered PF_INET6 protocol family
Jul 11 00:06:28.925619 kernel: Segment Routing with IPv6
Jul 11 00:06:28.925629 kernel: In-situ OAM (IOAM) with IPv6
Jul 11 00:06:28.925636 kernel: NET: Registered PF_PACKET protocol family
Jul 11 00:06:28.925643 kernel: Key type dns_resolver registered
Jul 11 00:06:28.925651 kernel: registered taskstats version 1
Jul 11 00:06:28.925658 kernel: Loading compiled-in X.509 certificates
Jul 11 00:06:28.925665 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.96-flatcar: 9d58afa0c1753353480d5539f26f662c9ce000cb'
Jul 11 00:06:28.925672 kernel: Key type .fscrypt registered
Jul 11 00:06:28.925679 kernel: Key type fscrypt-provisioning registered
Jul 11 00:06:28.925687 kernel: ima: No TPM chip found, activating TPM-bypass!
Jul 11 00:06:28.925695 kernel: ima: Allocated hash algorithm: sha1
Jul 11 00:06:28.925703 kernel: ima: No architecture policies found
Jul 11 00:06:28.925710 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Jul 11 00:06:28.925717 kernel: clk: Disabling unused clocks
Jul 11 00:06:28.925724 kernel: Freeing unused kernel memory: 39424K
Jul 11 00:06:28.925731 kernel: Run /init as init process
Jul 11 00:06:28.925739 kernel: with arguments:
Jul 11 00:06:28.925746 kernel: /init
Jul 11 00:06:28.925753 kernel: with environment:
Jul 11 00:06:28.925761 kernel: HOME=/
Jul 11 00:06:28.925769 kernel: TERM=linux
Jul 11 00:06:28.925776 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jul 11 00:06:28.925785 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jul 11 00:06:28.925794 systemd[1]: Detected virtualization kvm.
Jul 11 00:06:28.925803 systemd[1]: Detected architecture arm64.
Jul 11 00:06:28.925810 systemd[1]: Running in initrd.
Jul 11 00:06:28.925819 systemd[1]: No hostname configured, using default hostname.
Jul 11 00:06:28.925827 systemd[1]: Hostname set to .
Jul 11 00:06:28.925835 systemd[1]: Initializing machine ID from VM UUID.
Jul 11 00:06:28.925843 systemd[1]: Queued start job for default target initrd.target.
Jul 11 00:06:28.925851 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 11 00:06:28.925859 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 11 00:06:28.925867 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jul 11 00:06:28.925875 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jul 11 00:06:28.925885 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jul 11 00:06:28.925893 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jul 11 00:06:28.925902 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jul 11 00:06:28.925911 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jul 11 00:06:28.925919 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 11 00:06:28.925927 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jul 11 00:06:28.925935 systemd[1]: Reached target paths.target - Path Units.
Jul 11 00:06:28.925944 systemd[1]: Reached target slices.target - Slice Units.
Jul 11 00:06:28.925952 systemd[1]: Reached target swap.target - Swaps.
Jul 11 00:06:28.925960 systemd[1]: Reached target timers.target - Timer Units.
Jul 11 00:06:28.925968 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jul 11 00:06:28.925976 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jul 11 00:06:28.925984 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jul 11 00:06:28.925993 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jul 11 00:06:28.926006 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jul 11 00:06:28.926014 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jul 11 00:06:28.926024 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 11 00:06:28.926032 systemd[1]: Reached target sockets.target - Socket Units.
Jul 11 00:06:28.926045 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jul 11 00:06:28.926055 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jul 11 00:06:28.926063 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jul 11 00:06:28.926071 systemd[1]: Starting systemd-fsck-usr.service...
Jul 11 00:06:28.926079 systemd[1]: Starting systemd-journald.service - Journal Service...
Jul 11 00:06:28.926087 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jul 11 00:06:28.926097 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 11 00:06:28.926134 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jul 11 00:06:28.926147 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 11 00:06:28.926155 systemd[1]: Finished systemd-fsck-usr.service.
Jul 11 00:06:28.926164 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jul 11 00:06:28.926197 systemd-journald[237]: Collecting audit messages is disabled.
Jul 11 00:06:28.926217 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jul 11 00:06:28.926225 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 11 00:06:28.926234 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jul 11 00:06:28.926243 kernel: Bridge firewalling registered
Jul 11 00:06:28.926253 systemd-journald[237]: Journal started
Jul 11 00:06:28.926271 systemd-journald[237]: Runtime Journal (/run/log/journal/65a8ea70d1e5460dbd1e161d92ec69db) is 5.9M, max 47.3M, 41.4M free.
Jul 11 00:06:28.907020 systemd-modules-load[238]: Inserted module 'overlay'
Jul 11 00:06:28.928338 systemd[1]: Started systemd-journald.service - Journal Service.
Jul 11 00:06:28.925951 systemd-modules-load[238]: Inserted module 'br_netfilter'
Jul 11 00:06:28.929288 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jul 11 00:06:28.930605 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jul 11 00:06:28.935310 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jul 11 00:06:28.936738 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jul 11 00:06:28.940156 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jul 11 00:06:28.945243 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 11 00:06:28.949196 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jul 11 00:06:28.950207 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 11 00:06:28.953475 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jul 11 00:06:28.954934 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jul 11 00:06:28.958397 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jul 11 00:06:28.967252 dracut-cmdline[270]: dracut-dracut-053
Jul 11 00:06:28.969919 dracut-cmdline[270]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=1479f76954ab5eb3c0ce800eb2a80ad04b273ff773a5af5c1fe82fb8feef2990
Jul 11 00:06:28.987620 systemd-resolved[277]: Positive Trust Anchors:
Jul 11 00:06:28.987638 systemd-resolved[277]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jul 11 00:06:28.987669 systemd-resolved[277]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jul 11 00:06:28.992596 systemd-resolved[277]: Defaulting to hostname 'linux'.
Jul 11 00:06:28.995765 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jul 11 00:06:28.997380 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jul 11 00:06:29.044138 kernel: SCSI subsystem initialized
Jul 11 00:06:29.049131 kernel: Loading iSCSI transport class v2.0-870.
Jul 11 00:06:29.056142 kernel: iscsi: registered transport (tcp)
Jul 11 00:06:29.070163 kernel: iscsi: registered transport (qla4xxx)
Jul 11 00:06:29.070217 kernel: QLogic iSCSI HBA Driver
Jul 11 00:06:29.113730 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jul 11 00:06:29.122296 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jul 11 00:06:29.139224 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jul 11 00:06:29.139285 kernel: device-mapper: uevent: version 1.0.3
Jul 11 00:06:29.140856 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Jul 11 00:06:29.189152 kernel: raid6: neonx8 gen() 15786 MB/s
Jul 11 00:06:29.206141 kernel: raid6: neonx4 gen() 15657 MB/s
Jul 11 00:06:29.223143 kernel: raid6: neonx2 gen() 13268 MB/s
Jul 11 00:06:29.240136 kernel: raid6: neonx1 gen() 10479 MB/s
Jul 11 00:06:29.257156 kernel: raid6: int64x8 gen() 6960 MB/s
Jul 11 00:06:29.274138 kernel: raid6: int64x4 gen() 7337 MB/s
Jul 11 00:06:29.291138 kernel: raid6: int64x2 gen() 6128 MB/s
Jul 11 00:06:29.308198 kernel: raid6: int64x1 gen() 5053 MB/s
Jul 11 00:06:29.308220 kernel: raid6: using algorithm neonx8 gen() 15786 MB/s
Jul 11 00:06:29.326216 kernel: raid6: .... xor() 11939 MB/s, rmw enabled
Jul 11 00:06:29.326249 kernel: raid6: using neon recovery algorithm
Jul 11 00:06:29.331150 kernel: xor: measuring software checksum speed
Jul 11 00:06:29.332434 kernel: 8regs : 17451 MB/sec
Jul 11 00:06:29.332451 kernel: 32regs : 19585 MB/sec
Jul 11 00:06:29.333631 kernel: arm64_neon : 26901 MB/sec
Jul 11 00:06:29.333646 kernel: xor: using function: arm64_neon (26901 MB/sec)
Jul 11 00:06:29.384143 kernel: Btrfs loaded, zoned=no, fsverity=no
Jul 11 00:06:29.394555 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jul 11 00:06:29.403310 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 11 00:06:29.414954 systemd-udevd[461]: Using default interface naming scheme 'v255'.
Jul 11 00:06:29.418110 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 11 00:06:29.429291 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jul 11 00:06:29.440934 dracut-pre-trigger[469]: rd.md=0: removing MD RAID activation
Jul 11 00:06:29.470538 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jul 11 00:06:29.480282 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jul 11 00:06:29.522176 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 11 00:06:29.529328 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jul 11 00:06:29.545180 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jul 11 00:06:29.546205 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jul 11 00:06:29.549214 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 11 00:06:29.549996 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jul 11 00:06:29.560362 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jul 11 00:06:29.567702 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues
Jul 11 00:06:29.567962 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Jul 11 00:06:29.571779 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jul 11 00:06:29.571849 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 11 00:06:29.577550 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jul 11 00:06:29.577570 kernel: GPT:9289727 != 19775487
Jul 11 00:06:29.577580 kernel: GPT:Alternate GPT header not at the end of the disk.
Jul 11 00:06:29.575325 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jul 11 00:06:29.583225 kernel: GPT:9289727 != 19775487
Jul 11 00:06:29.583246 kernel: GPT: Use GNU Parted to correct GPT errors.
Jul 11 00:06:29.583256 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jul 11 00:06:29.582242 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jul 11 00:06:29.582334 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jul 11 00:06:29.584154 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jul 11 00:06:29.593638 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 11 00:06:29.598386 kernel: BTRFS: device fsid f5d5cad7-cb7a-4b07-bec7-847b84711ad7 devid 1 transid 39 /dev/vda3 scanned by (udev-worker) (515)
Jul 11 00:06:29.598414 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by (udev-worker) (507)
Jul 11 00:06:29.597600 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jul 11 00:06:29.609617 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Jul 11 00:06:29.610888 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 11 00:06:29.621771 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Jul 11 00:06:29.625549 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Jul 11 00:06:29.626546 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Jul 11 00:06:29.632100 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jul 11 00:06:29.652313 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jul 11 00:06:29.653914 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jul 11 00:06:29.658117 disk-uuid[551]: Primary Header is updated.
Jul 11 00:06:29.658117 disk-uuid[551]: Secondary Entries is updated.
Jul 11 00:06:29.658117 disk-uuid[551]: Secondary Header is updated.
Jul 11 00:06:29.660611 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jul 11 00:06:29.680720 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 11 00:06:30.673146 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jul 11 00:06:30.674124 disk-uuid[552]: The operation has completed successfully.
Jul 11 00:06:30.694649 systemd[1]: disk-uuid.service: Deactivated successfully.
Jul 11 00:06:30.695709 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jul 11 00:06:30.720352 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jul 11 00:06:30.723312 sh[574]: Success
Jul 11 00:06:30.737154 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Jul 11 00:06:30.766470 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jul 11 00:06:30.774546 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jul 11 00:06:30.776155 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jul 11 00:06:30.787148 kernel: BTRFS info (device dm-0): first mount of filesystem f5d5cad7-cb7a-4b07-bec7-847b84711ad7
Jul 11 00:06:30.787216 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Jul 11 00:06:30.787227 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Jul 11 00:06:30.788739 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jul 11 00:06:30.788758 kernel: BTRFS info (device dm-0): using free space tree
Jul 11 00:06:30.792698 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jul 11 00:06:30.793939 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jul 11 00:06:30.803347 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jul 11 00:06:30.804811 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jul 11 00:06:30.812807 kernel: BTRFS info (device vda6): first mount of filesystem 183e1727-cabf-4be9-ba6e-b2af88e10184
Jul 11 00:06:30.812865 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Jul 11 00:06:30.812876 kernel: BTRFS info (device vda6): using free space tree
Jul 11 00:06:30.816137 kernel: BTRFS info (device vda6): auto enabling async discard
Jul 11 00:06:30.823543 systemd[1]: mnt-oem.mount: Deactivated successfully.
Jul 11 00:06:30.826223 kernel: BTRFS info (device vda6): last unmount of filesystem 183e1727-cabf-4be9-ba6e-b2af88e10184
Jul 11 00:06:30.832010 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jul 11 00:06:30.838450 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jul 11 00:06:30.907588 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jul 11 00:06:30.924345 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jul 11 00:06:30.933785 ignition[663]: Ignition 2.19.0
Jul 11 00:06:30.933796 ignition[663]: Stage: fetch-offline
Jul 11 00:06:30.933835 ignition[663]: no configs at "/usr/lib/ignition/base.d"
Jul 11 00:06:30.933843 ignition[663]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 11 00:06:30.933998 ignition[663]: parsed url from cmdline: ""
Jul 11 00:06:30.934001 ignition[663]: no config URL provided
Jul 11 00:06:30.934006 ignition[663]: reading system config file "/usr/lib/ignition/user.ign"
Jul 11 00:06:30.934013 ignition[663]: no config at "/usr/lib/ignition/user.ign"
Jul 11 00:06:30.934046 ignition[663]: op(1): [started] loading QEMU firmware config module
Jul 11 00:06:30.934051 ignition[663]: op(1): executing: "modprobe" "qemu_fw_cfg"
Jul 11 00:06:30.941896 ignition[663]: op(1): [finished] loading QEMU firmware config module
Jul 11 00:06:30.950430 systemd-networkd[767]: lo: Link UP
Jul 11 00:06:30.950445 systemd-networkd[767]: lo: Gained carrier
Jul 11 00:06:30.951135 systemd-networkd[767]: Enumeration completed
Jul 11 00:06:30.951277 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jul 11 00:06:30.951567 systemd-networkd[767]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 11 00:06:30.951570 systemd-networkd[767]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jul 11 00:06:30.953163 systemd-networkd[767]: eth0: Link UP
Jul 11 00:06:30.953167 systemd[1]: Reached target network.target - Network.
Jul 11 00:06:30.953168 systemd-networkd[767]: eth0: Gained carrier
Jul 11 00:06:30.953177 systemd-networkd[767]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 11 00:06:30.970169 systemd-networkd[767]: eth0: DHCPv4 address 10.0.0.37/16, gateway 10.0.0.1 acquired from 10.0.0.1
Jul 11 00:06:30.986675 ignition[663]: parsing config with SHA512: eec40249b68c86878dac156c579db9cb4e976ad90f5e94b13769dca2eafa6d67161acc043190489ade0da0d519f64113165ba2e4efe93e680a19941775667e6e
Jul 11 00:06:30.993081 unknown[663]: fetched base config from "system"
Jul 11 00:06:30.993093 unknown[663]: fetched user config from "qemu"
Jul 11 00:06:30.993604 ignition[663]: fetch-offline: fetch-offline passed
Jul 11 00:06:30.993674 ignition[663]: Ignition finished successfully
Jul 11 00:06:30.994631 systemd-resolved[277]: Detected conflict on linux IN A 10.0.0.37
Jul 11 00:06:30.994640 systemd-resolved[277]: Hostname conflict, changing published hostname from 'linux' to 'linux3'.
Jul 11 00:06:30.995194 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jul 11 00:06:30.996576 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Jul 11 00:06:31.004321 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jul 11 00:06:31.015679 ignition[773]: Ignition 2.19.0
Jul 11 00:06:31.015690 ignition[773]: Stage: kargs
Jul 11 00:06:31.015870 ignition[773]: no configs at "/usr/lib/ignition/base.d"
Jul 11 00:06:31.015880 ignition[773]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 11 00:06:31.016813 ignition[773]: kargs: kargs passed
Jul 11 00:06:31.016864 ignition[773]: Ignition finished successfully
Jul 11 00:06:31.018977 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jul 11 00:06:31.021082 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jul 11 00:06:31.035839 ignition[781]: Ignition 2.19.0
Jul 11 00:06:31.035851 ignition[781]: Stage: disks
Jul 11 00:06:31.036025 ignition[781]: no configs at "/usr/lib/ignition/base.d"
Jul 11 00:06:31.036046 ignition[781]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 11 00:06:31.037070 ignition[781]: disks: disks passed
Jul 11 00:06:31.037143 ignition[781]: Ignition finished successfully
Jul 11 00:06:31.039390 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jul 11 00:06:31.041049 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jul 11 00:06:31.047929 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jul 11 00:06:31.049526 systemd[1]: Reached target local-fs.target - Local File Systems.
Jul 11 00:06:31.050978 systemd[1]: Reached target sysinit.target - System Initialization.
Jul 11 00:06:31.051783 systemd[1]: Reached target basic.target - Basic System.
Jul 11 00:06:31.059289 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jul 11 00:06:31.070516 systemd-fsck[790]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Jul 11 00:06:31.074197 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jul 11 00:06:31.084328 systemd[1]: Mounting sysroot.mount - /sysroot...
Jul 11 00:06:31.127165 kernel: EXT4-fs (vda9): mounted filesystem a2a437d1-0a8e-46b9-88bf-4a47ff29fe90 r/w with ordered data mode. Quota mode: none.
Jul 11 00:06:31.127904 systemd[1]: Mounted sysroot.mount - /sysroot.
Jul 11 00:06:31.129056 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jul 11 00:06:31.143231 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jul 11 00:06:31.144863 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jul 11 00:06:31.146003 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Jul 11 00:06:31.146053 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jul 11 00:06:31.151733 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by mount (798)
Jul 11 00:06:31.151758 kernel: BTRFS info (device vda6): first mount of filesystem 183e1727-cabf-4be9-ba6e-b2af88e10184
Jul 11 00:06:31.146078 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jul 11 00:06:31.155078 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Jul 11 00:06:31.155099 kernel: BTRFS info (device vda6): using free space tree
Jul 11 00:06:31.152081 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jul 11 00:06:31.157183 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jul 11 00:06:31.159530 kernel: BTRFS info (device vda6): auto enabling async discard
Jul 11 00:06:31.160865 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jul 11 00:06:31.200733 initrd-setup-root[822]: cut: /sysroot/etc/passwd: No such file or directory
Jul 11 00:06:31.205137 initrd-setup-root[829]: cut: /sysroot/etc/group: No such file or directory
Jul 11 00:06:31.208555 initrd-setup-root[836]: cut: /sysroot/etc/shadow: No such file or directory
Jul 11 00:06:31.211679 initrd-setup-root[843]: cut: /sysroot/etc/gshadow: No such file or directory
Jul 11 00:06:31.290070 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jul 11 00:06:31.302249 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jul 11 00:06:31.303735 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jul 11 00:06:31.319169 kernel: BTRFS info (device vda6): last unmount of filesystem 183e1727-cabf-4be9-ba6e-b2af88e10184
Jul 11 00:06:31.334609 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jul 11 00:06:31.350753 ignition[912]: INFO : Ignition 2.19.0
Jul 11 00:06:31.351685 ignition[912]: INFO : Stage: mount
Jul 11 00:06:31.353465 ignition[912]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 11 00:06:31.353465 ignition[912]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 11 00:06:31.353465 ignition[912]: INFO : mount: mount passed
Jul 11 00:06:31.353465 ignition[912]: INFO : Ignition finished successfully
Jul 11 00:06:31.355812 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jul 11 00:06:31.367289 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jul 11 00:06:31.785551 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jul 11 00:06:31.794289 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jul 11 00:06:31.800956 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by mount (925)
Jul 11 00:06:31.800993 kernel: BTRFS info (device vda6): first mount of filesystem 183e1727-cabf-4be9-ba6e-b2af88e10184
Jul 11 00:06:31.801005 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Jul 11 00:06:31.802676 kernel: BTRFS info (device vda6): using free space tree
Jul 11 00:06:31.805148 kernel: BTRFS info (device vda6): auto enabling async discard
Jul 11 00:06:31.805982 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jul 11 00:06:31.821932 ignition[942]: INFO : Ignition 2.19.0
Jul 11 00:06:31.821932 ignition[942]: INFO : Stage: files
Jul 11 00:06:31.823333 ignition[942]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 11 00:06:31.823333 ignition[942]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 11 00:06:31.823333 ignition[942]: DEBUG : files: compiled without relabeling support, skipping
Jul 11 00:06:31.826427 ignition[942]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jul 11 00:06:31.826427 ignition[942]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jul 11 00:06:31.826427 ignition[942]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jul 11 00:06:31.829859 ignition[942]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jul 11 00:06:31.829859 ignition[942]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jul 11 00:06:31.829859 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz"
Jul 11 00:06:31.829859 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-arm64.tar.gz: attempt #1
Jul 11 00:06:31.826831 unknown[942]: wrote ssh authorized keys file for user: core
Jul 11 00:06:31.934374 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Jul 11 00:06:32.344213 systemd-networkd[767]: eth0: Gained IPv6LL
Jul 11 00:06:32.719813 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz"
Jul 11 00:06:32.719813 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Jul 11 00:06:32.722674 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1
Jul 11 00:06:33.058291 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Jul 11 00:06:33.183352 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Jul 11 00:06:33.183352 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Jul 11 00:06:33.186041 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Jul 11 00:06:33.186041 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Jul 11 00:06:33.186041 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jul 11 00:06:33.186041 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jul 11 00:06:33.186041 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jul 11 00:06:33.186041 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jul 11 00:06:33.186041 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jul 11 00:06:33.186041 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jul 11 00:06:33.186041 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jul 11 00:06:33.186041 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw"
Jul 11 00:06:33.186041 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw"
Jul 11 00:06:33.186041 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw"
Jul 11 00:06:33.186041 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-arm64.raw: attempt #1
Jul 11 00:06:33.603379 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Jul 11 00:06:34.083148 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw"
Jul 11 00:06:34.083148 ignition[942]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Jul 11 00:06:34.086228 ignition[942]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jul 11 00:06:34.086228 ignition[942]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jul 11 00:06:34.086228 ignition[942]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Jul 11 00:06:34.086228 ignition[942]: INFO : files: op(e): [started] processing unit "coreos-metadata.service"
Jul 11 00:06:34.086228 ignition[942]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Jul 11 00:06:34.086228 ignition[942]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Jul 11 00:06:34.086228 ignition[942]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service"
Jul 11 00:06:34.086228 ignition[942]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service"
Jul 11 00:06:34.105987 ignition[942]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service"
Jul 11 00:06:34.109427 ignition[942]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Jul 11 00:06:34.111639 ignition[942]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service"
Jul 11 00:06:34.111639 ignition[942]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service"
Jul 11 00:06:34.111639 ignition[942]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service"
Jul 11 00:06:34.111639 ignition[942]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json"
Jul 11 00:06:34.111639 ignition[942]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jul 11 00:06:34.111639 ignition[942]: INFO : files: files passed
Jul 11 00:06:34.111639 ignition[942]: INFO : Ignition finished successfully
Jul 11 00:06:34.112405 systemd[1]: Finished ignition-files.service - Ignition (files).
Jul 11 00:06:34.123291 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jul 11 00:06:34.125496 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jul 11 00:06:34.127444 systemd[1]: ignition-quench.service: Deactivated successfully.
Jul 11 00:06:34.127529 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jul 11 00:06:34.133084 initrd-setup-root-after-ignition[970]: grep: /sysroot/oem/oem-release: No such file or directory
Jul 11 00:06:34.136266 initrd-setup-root-after-ignition[972]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jul 11 00:06:34.136266 initrd-setup-root-after-ignition[972]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jul 11 00:06:34.138496 initrd-setup-root-after-ignition[976]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jul 11 00:06:34.142298 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jul 11 00:06:34.143400 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jul 11 00:06:34.153287 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jul 11 00:06:34.175177 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jul 11 00:06:34.175304 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jul 11 00:06:34.176957 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jul 11 00:06:34.178259 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jul 11 00:06:34.179581 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jul 11 00:06:34.180283 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jul 11 00:06:34.195170 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jul 11 00:06:34.200247 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jul 11 00:06:34.208105 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jul 11 00:06:34.209750 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 11 00:06:34.210665 systemd[1]: Stopped target timers.target - Timer Units.
Jul 11 00:06:34.211917 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jul 11 00:06:34.212036 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jul 11 00:06:34.213807 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jul 11 00:06:34.215375 systemd[1]: Stopped target basic.target - Basic System.
Jul 11 00:06:34.216622 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jul 11 00:06:34.217866 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jul 11 00:06:34.219243 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jul 11 00:06:34.220637 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jul 11 00:06:34.221912 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jul 11 00:06:34.223286 systemd[1]: Stopped target sysinit.target - System Initialization.
Jul 11 00:06:34.224665 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jul 11 00:06:34.225977 systemd[1]: Stopped target swap.target - Swaps.
Jul 11 00:06:34.227137 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jul 11 00:06:34.227268 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jul 11 00:06:34.229099 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jul 11 00:06:34.230556 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 11 00:06:34.231940 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jul 11 00:06:34.235173 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 11 00:06:34.236072 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jul 11 00:06:34.236214 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jul 11 00:06:34.238405 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jul 11 00:06:34.238520 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jul 11 00:06:34.240005 systemd[1]: Stopped target paths.target - Path Units.
Jul 11 00:06:34.241236 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jul 11 00:06:34.245178 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 11 00:06:34.247251 systemd[1]: Stopped target slices.target - Slice Units.
Jul 11 00:06:34.247999 systemd[1]: Stopped target sockets.target - Socket Units.
Jul 11 00:06:34.249424 systemd[1]: iscsid.socket: Deactivated successfully.
Jul 11 00:06:34.249563 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jul 11 00:06:34.250671 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jul 11 00:06:34.250809 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jul 11 00:06:34.251899 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jul 11 00:06:34.252072 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jul 11 00:06:34.253361 systemd[1]: ignition-files.service: Deactivated successfully.
Jul 11 00:06:34.253511 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jul 11 00:06:34.261400 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jul 11 00:06:34.262161 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jul 11 00:06:34.262350 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 11 00:06:34.265457 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jul 11 00:06:34.266810 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jul 11 00:06:34.267719 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 11 00:06:34.268750 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jul 11 00:06:34.268853 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jul 11 00:06:34.273798 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jul 11 00:06:34.275017 ignition[997]: INFO : Ignition 2.19.0
Jul 11 00:06:34.275017 ignition[997]: INFO : Stage: umount
Jul 11 00:06:34.275017 ignition[997]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 11 00:06:34.275017 ignition[997]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 11 00:06:34.274588 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jul 11 00:06:34.281081 ignition[997]: INFO : umount: umount passed
Jul 11 00:06:34.281081 ignition[997]: INFO : Ignition finished successfully
Jul 11 00:06:34.280376 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jul 11 00:06:34.280993 systemd[1]: ignition-mount.service: Deactivated successfully.
Jul 11 00:06:34.281146 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jul 11 00:06:34.283485 systemd[1]: Stopped target network.target - Network.
Jul 11 00:06:34.284499 systemd[1]: ignition-disks.service: Deactivated successfully.
Jul 11 00:06:34.284555 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jul 11 00:06:34.285818 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jul 11 00:06:34.285857 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jul 11 00:06:34.287196 systemd[1]: ignition-setup.service: Deactivated successfully.
Jul 11 00:06:34.287241 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Jul 11 00:06:34.288610 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Jul 11 00:06:34.288651 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Jul 11 00:06:34.290009 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Jul 11 00:06:34.291556 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Jul 11 00:06:34.293042 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jul 11 00:06:34.293185 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Jul 11 00:06:34.294550 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jul 11 00:06:34.294646 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Jul 11 00:06:34.298823 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jul 11 00:06:34.298962 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Jul 11 00:06:34.299436 systemd-networkd[767]: eth0: DHCPv6 lease lost
Jul 11 00:06:34.301922 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jul 11 00:06:34.302064 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Jul 11 00:06:34.304077 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jul 11 00:06:34.304152 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Jul 11 00:06:34.311270 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Jul 11 00:06:34.311931 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jul 11 00:06:34.311997 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jul 11 00:06:34.313506 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jul 11 00:06:34.313549 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jul 11 00:06:34.314844 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jul 11 00:06:34.314885 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Jul 11 00:06:34.316412 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jul 11 00:06:34.316453 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Jul 11 00:06:34.317916 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 11 00:06:34.326984 systemd[1]: network-cleanup.service: Deactivated successfully.
Jul 11 00:06:34.327138 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Jul 11 00:06:34.332836 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jul 11 00:06:34.332982 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 11 00:06:34.334998 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jul 11 00:06:34.335055 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Jul 11 00:06:34.336456 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jul 11 00:06:34.336496 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 11 00:06:34.337933 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jul 11 00:06:34.337981 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Jul 11 00:06:34.340215 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jul 11 00:06:34.340262 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Jul 11 00:06:34.342419 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jul 11 00:06:34.342465 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 11 00:06:34.357327 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Jul 11 00:06:34.358162 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jul 11 00:06:34.358221 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 11 00:06:34.359870 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jul 11 00:06:34.359915 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jul 11 00:06:34.362064 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jul 11 00:06:34.362163 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Jul 11 00:06:34.364172 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Jul 11 00:06:34.365905 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Jul 11 00:06:34.375425 systemd[1]: Switching root.
Jul 11 00:06:34.410371 systemd-journald[237]: Journal stopped
Jul 11 00:06:35.142190 systemd-journald[237]: Received SIGTERM from PID 1 (systemd).
Jul 11 00:06:35.142276 kernel: SELinux: policy capability network_peer_controls=1
Jul 11 00:06:35.142297 kernel: SELinux: policy capability open_perms=1
Jul 11 00:06:35.142307 kernel: SELinux: policy capability extended_socket_class=1
Jul 11 00:06:35.142321 kernel: SELinux: policy capability always_check_network=0
Jul 11 00:06:35.142333 kernel: SELinux: policy capability cgroup_seclabel=1
Jul 11 00:06:35.142345 kernel: SELinux: policy capability nnp_nosuid_transition=1
Jul 11 00:06:35.142374 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Jul 11 00:06:35.142388 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Jul 11 00:06:35.142398 kernel: audit: type=1403 audit(1752192394.583:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Jul 11 00:06:35.142409 systemd[1]: Successfully loaded SELinux policy in 32.431ms.
Jul 11 00:06:35.142443 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 9.591ms.
Jul 11 00:06:35.142479 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jul 11 00:06:35.142492 systemd[1]: Detected virtualization kvm.
Jul 11 00:06:35.142502 systemd[1]: Detected architecture arm64.
Jul 11 00:06:35.142513 systemd[1]: Detected first boot.
Jul 11 00:06:35.142524 systemd[1]: Initializing machine ID from VM UUID.
Jul 11 00:06:35.142534 zram_generator::config[1042]: No configuration found.
Jul 11 00:06:35.142546 systemd[1]: Populated /etc with preset unit settings.
Jul 11 00:06:35.142573 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Jul 11 00:06:35.142589 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Jul 11 00:06:35.142600 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Jul 11 00:06:35.142612 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Jul 11 00:06:35.142623 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Jul 11 00:06:35.142634 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Jul 11 00:06:35.142644 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Jul 11 00:06:35.142655 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Jul 11 00:06:35.142666 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Jul 11 00:06:35.142679 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Jul 11 00:06:35.142690 systemd[1]: Created slice user.slice - User and Session Slice.
Jul 11 00:06:35.142700 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 11 00:06:35.142711 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 11 00:06:35.142722 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Jul 11 00:06:35.142734 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Jul 11 00:06:35.142745 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Jul 11 00:06:35.142756 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jul 11 00:06:35.142766 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0...
Jul 11 00:06:35.142782 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 11 00:06:35.142793 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Jul 11 00:06:35.142803 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Jul 11 00:06:35.142814 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Jul 11 00:06:35.142824 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Jul 11 00:06:35.142835 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 11 00:06:35.142845 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jul 11 00:06:35.142857 systemd[1]: Reached target slices.target - Slice Units.
Jul 11 00:06:35.142868 systemd[1]: Reached target swap.target - Swaps.
Jul 11 00:06:35.142879 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Jul 11 00:06:35.142889 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Jul 11 00:06:35.142900 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jul 11 00:06:35.142911 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jul 11 00:06:35.142922 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 11 00:06:35.142933 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Jul 11 00:06:35.142943 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Jul 11 00:06:35.142954 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Jul 11 00:06:35.142967 systemd[1]: Mounting media.mount - External Media Directory...
Jul 11 00:06:35.142978 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Jul 11 00:06:35.142988 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Jul 11 00:06:35.143003 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Jul 11 00:06:35.143014 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jul 11 00:06:35.143031 systemd[1]: Reached target machines.target - Containers.
Jul 11 00:06:35.143043 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Jul 11 00:06:35.143054 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 11 00:06:35.143068 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jul 11 00:06:35.143079 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Jul 11 00:06:35.143089 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jul 11 00:06:35.143100 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jul 11 00:06:35.143111 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jul 11 00:06:35.143233 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Jul 11 00:06:35.143244 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jul 11 00:06:35.143256 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Jul 11 00:06:35.143269 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Jul 11 00:06:35.143280 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Jul 11 00:06:35.143291 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Jul 11 00:06:35.143301 systemd[1]: Stopped systemd-fsck-usr.service.
Jul 11 00:06:35.143312 kernel: fuse: init (API version 7.39)
Jul 11 00:06:35.143322 systemd[1]: Starting systemd-journald.service - Journal Service...
Jul 11 00:06:35.143332 kernel: loop: module loaded
Jul 11 00:06:35.143342 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jul 11 00:06:35.143353 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jul 11 00:06:35.143365 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Jul 11 00:06:35.143376 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jul 11 00:06:35.143387 systemd[1]: verity-setup.service: Deactivated successfully.
Jul 11 00:06:35.143397 systemd[1]: Stopped verity-setup.service.
Jul 11 00:06:35.143430 systemd-journald[1109]: Collecting audit messages is disabled.
Jul 11 00:06:35.143454 kernel: ACPI: bus type drm_connector registered
Jul 11 00:06:35.143466 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Jul 11 00:06:35.143478 systemd-journald[1109]: Journal started
Jul 11 00:06:35.143501 systemd-journald[1109]: Runtime Journal (/run/log/journal/65a8ea70d1e5460dbd1e161d92ec69db) is 5.9M, max 47.3M, 41.4M free.
Jul 11 00:06:34.957401 systemd[1]: Queued start job for default target multi-user.target.
Jul 11 00:06:34.982144 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Jul 11 00:06:34.982493 systemd[1]: systemd-journald.service: Deactivated successfully.
Jul 11 00:06:35.147185 systemd[1]: Started systemd-journald.service - Journal Service.
Jul 11 00:06:35.147562 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Jul 11 00:06:35.148579 systemd[1]: Mounted media.mount - External Media Directory.
Jul 11 00:06:35.149464 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Jul 11 00:06:35.150482 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Jul 11 00:06:35.151503 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Jul 11 00:06:35.152600 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 11 00:06:35.154047 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jul 11 00:06:35.155213 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Jul 11 00:06:35.156381 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 11 00:06:35.156592 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jul 11 00:06:35.157734 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jul 11 00:06:35.157954 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jul 11 00:06:35.160625 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Jul 11 00:06:35.161685 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 11 00:06:35.161830 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jul 11 00:06:35.162951 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jul 11 00:06:35.163098 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Jul 11 00:06:35.164098 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 11 00:06:35.164277 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jul 11 00:06:35.165555 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jul 11 00:06:35.166630 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jul 11 00:06:35.167783 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Jul 11 00:06:35.179057 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jul 11 00:06:35.193213 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Jul 11 00:06:35.195033 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Jul 11 00:06:35.195888 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Jul 11 00:06:35.195924 systemd[1]: Reached target local-fs.target - Local File Systems.
Jul 11 00:06:35.197627 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Jul 11 00:06:35.199558 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Jul 11 00:06:35.201408 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Jul 11 00:06:35.202265 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 11 00:06:35.203725 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Jul 11 00:06:35.205792 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Jul 11 00:06:35.206647 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jul 11 00:06:35.211672 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Jul 11 00:06:35.212613 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jul 11 00:06:35.215724 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jul 11 00:06:35.218208 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Jul 11 00:06:35.222189 systemd-journald[1109]: Time spent on flushing to /var/log/journal/65a8ea70d1e5460dbd1e161d92ec69db is 23.204ms for 858 entries.
Jul 11 00:06:35.222189 systemd-journald[1109]: System Journal (/var/log/journal/65a8ea70d1e5460dbd1e161d92ec69db) is 8.0M, max 195.6M, 187.6M free.
Jul 11 00:06:35.249839 systemd-journald[1109]: Received client request to flush runtime journal.
Jul 11 00:06:35.249995 kernel: loop0: detected capacity change from 0 to 211168
Jul 11 00:06:35.250038 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Jul 11 00:06:35.223331 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Jul 11 00:06:35.225744 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 11 00:06:35.226923 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Jul 11 00:06:35.228634 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Jul 11 00:06:35.230106 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Jul 11 00:06:35.231482 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Jul 11 00:06:35.235623 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Jul 11 00:06:35.249857 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Jul 11 00:06:35.254699 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Jul 11 00:06:35.259159 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Jul 11 00:06:35.264135 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jul 11 00:06:35.267978 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Jul 11 00:06:35.268617 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Jul 11 00:06:35.269779 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Jul 11 00:06:35.272976 udevadm[1167]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Jul 11 00:06:35.277392 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jul 11 00:06:35.280219 kernel: loop1: detected capacity change from 0 to 114432
Jul 11 00:06:35.298032 systemd-tmpfiles[1173]: ACLs are not supported, ignoring.
Jul 11 00:06:35.298468 systemd-tmpfiles[1173]: ACLs are not supported, ignoring.
Jul 11 00:06:35.303892 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 11 00:06:35.328322 kernel: loop2: detected capacity change from 0 to 114328
Jul 11 00:06:35.369145 kernel: loop3: detected capacity change from 0 to 211168
Jul 11 00:06:35.375135 kernel: loop4: detected capacity change from 0 to 114432
Jul 11 00:06:35.379141 kernel: loop5: detected capacity change from 0 to 114328
Jul 11 00:06:35.382096 (sd-merge)[1178]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Jul 11 00:06:35.382511 (sd-merge)[1178]: Merged extensions into '/usr'.
Jul 11 00:06:35.386882 systemd[1]: Reloading requested from client PID 1153 ('systemd-sysext') (unit systemd-sysext.service)...
Jul 11 00:06:35.386897 systemd[1]: Reloading...
Jul 11 00:06:35.454267 zram_generator::config[1204]: No configuration found.
Jul 11 00:06:35.482031 ldconfig[1148]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Jul 11 00:06:35.553985 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 11 00:06:35.594612 systemd[1]: Reloading finished in 207 ms.
Jul 11 00:06:35.625040 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Jul 11 00:06:35.628337 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Jul 11 00:06:35.640296 systemd[1]: Starting ensure-sysext.service...
Jul 11 00:06:35.642682 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jul 11 00:06:35.657372 systemd[1]: Reloading requested from client PID 1238 ('systemctl') (unit ensure-sysext.service)...
Jul 11 00:06:35.657388 systemd[1]: Reloading...
Jul 11 00:06:35.664492 systemd-tmpfiles[1239]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Jul 11 00:06:35.664757 systemd-tmpfiles[1239]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Jul 11 00:06:35.665437 systemd-tmpfiles[1239]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Jul 11 00:06:35.665654 systemd-tmpfiles[1239]: ACLs are not supported, ignoring.
Jul 11 00:06:35.665705 systemd-tmpfiles[1239]: ACLs are not supported, ignoring.
Jul 11 00:06:35.668052 systemd-tmpfiles[1239]: Detected autofs mount point /boot during canonicalization of boot.
Jul 11 00:06:35.668063 systemd-tmpfiles[1239]: Skipping /boot
Jul 11 00:06:35.675417 systemd-tmpfiles[1239]: Detected autofs mount point /boot during canonicalization of boot.
Jul 11 00:06:35.675430 systemd-tmpfiles[1239]: Skipping /boot
Jul 11 00:06:35.706138 zram_generator::config[1269]: No configuration found.
Jul 11 00:06:35.794280 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 11 00:06:35.835220 systemd[1]: Reloading finished in 177 ms.
Jul 11 00:06:35.851678 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Jul 11 00:06:35.864833 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jul 11 00:06:35.874039 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Jul 11 00:06:35.876665 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Jul 11 00:06:35.878897 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Jul 11 00:06:35.884432 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jul 11 00:06:35.887473 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 11 00:06:35.890595 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Jul 11 00:06:35.896373 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 11 00:06:35.897719 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jul 11 00:06:35.900367 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jul 11 00:06:35.907508 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jul 11 00:06:35.909399 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 11 00:06:35.910349 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Jul 11 00:06:35.911985 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 11 00:06:35.912160 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jul 11 00:06:35.918435 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 11 00:06:35.918588 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jul 11 00:06:35.922893 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 11 00:06:35.929050 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jul 11 00:06:35.931000 systemd-udevd[1308]: Using default interface naming scheme 'v255'.
Jul 11 00:06:35.935417 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jul 11 00:06:35.936528 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 11 00:06:35.938611 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Jul 11 00:06:35.943445 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Jul 11 00:06:35.946906 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Jul 11 00:06:35.948549 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 11 00:06:35.948779 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jul 11 00:06:35.950421 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 11 00:06:35.950559 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jul 11 00:06:35.952163 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 11 00:06:35.952303 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jul 11 00:06:35.954527 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 11 00:06:35.957959 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Jul 11 00:06:35.962297 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Jul 11 00:06:35.970789 systemd[1]: Finished ensure-sysext.service.
Jul 11 00:06:35.971028 augenrules[1345]: No rules
Jul 11 00:06:35.971896 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Jul 11 00:06:35.978696 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 11 00:06:35.989346 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jul 11 00:06:35.993037 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jul 11 00:06:35.996236 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jul 11 00:06:35.999422 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jul 11 00:06:36.001553 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 11 00:06:36.003351 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jul 11 00:06:36.006248 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Jul 11 00:06:36.007067 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jul 11 00:06:36.008385 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Jul 11 00:06:36.009813 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 11 00:06:36.009985 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jul 11 00:06:36.011633 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jul 11 00:06:36.011779 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jul 11 00:06:36.013634 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 11 00:06:36.013777 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jul 11 00:06:36.016143 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (1350)
Jul 11 00:06:36.019738 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped.
Jul 11 00:06:36.022150 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jul 11 00:06:36.029562 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 11 00:06:36.029729 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jul 11 00:06:36.031978 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jul 11 00:06:36.086926 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jul 11 00:06:36.103311 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Jul 11 00:06:36.110766 systemd-networkd[1374]: lo: Link UP
Jul 11 00:06:36.110775 systemd-networkd[1374]: lo: Gained carrier
Jul 11 00:06:36.111600 systemd-networkd[1374]: Enumeration completed
Jul 11 00:06:36.112037 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jul 11 00:06:36.114923 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Jul 11 00:06:36.116162 systemd-networkd[1374]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 11 00:06:36.116171 systemd-networkd[1374]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jul 11 00:06:36.117275 systemd-networkd[1374]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 11 00:06:36.117310 systemd-networkd[1374]: eth0: Link UP
Jul 11 00:06:36.117313 systemd-networkd[1374]: eth0: Gained carrier
Jul 11 00:06:36.117321 systemd-networkd[1374]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 11 00:06:36.128433 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Jul 11 00:06:36.129578 systemd[1]: Reached target time-set.target - System Time Set.
Jul 11 00:06:36.131935 systemd-resolved[1307]: Positive Trust Anchors:
Jul 11 00:06:36.132270 systemd-resolved[1307]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jul 11 00:06:36.132371 systemd-resolved[1307]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jul 11 00:06:36.137636 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Jul 11 00:06:36.139086 systemd-networkd[1374]: eth0: DHCPv4 address 10.0.0.37/16, gateway 10.0.0.1 acquired from 10.0.0.1
Jul 11 00:06:36.142218 systemd-timesyncd[1375]: Network configuration changed, trying to establish connection.
Jul 11 00:06:36.146272 systemd-timesyncd[1375]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Jul 11 00:06:36.146342 systemd-timesyncd[1375]: Initial clock synchronization to Fri 2025-07-11 00:06:36.203660 UTC.
Jul 11 00:06:36.149833 systemd-resolved[1307]: Defaulting to hostname 'linux'.
Jul 11 00:06:36.159984 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 11 00:06:36.161230 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jul 11 00:06:36.166543 systemd[1]: Reached target network.target - Network.
Jul 11 00:06:36.167345 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jul 11 00:06:36.169758 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Jul 11 00:06:36.179378 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Jul 11 00:06:36.202141 lvm[1397]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jul 11 00:06:36.207175 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 11 00:06:36.230690 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Jul 11 00:06:36.231875 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jul 11 00:06:36.234200 systemd[1]: Reached target sysinit.target - System Initialization.
Jul 11 00:06:36.235080 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Jul 11 00:06:36.236056 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Jul 11 00:06:36.237237 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Jul 11 00:06:36.238103 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Jul 11 00:06:36.239199 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Jul 11 00:06:36.240074 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jul 11 00:06:36.240109 systemd[1]: Reached target paths.target - Path Units. Jul 11 00:06:36.240731 systemd[1]: Reached target timers.target - Timer Units. Jul 11 00:06:36.242491 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jul 11 00:06:36.244733 systemd[1]: Starting docker.socket - Docker Socket for the API... Jul 11 00:06:36.254263 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jul 11 00:06:36.256412 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jul 11 00:06:36.257797 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jul 11 00:06:36.258775 systemd[1]: Reached target sockets.target - Socket Units. Jul 11 00:06:36.259519 systemd[1]: Reached target basic.target - Basic System. Jul 11 00:06:36.260276 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jul 11 00:06:36.260306 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jul 11 00:06:36.262270 systemd[1]: Starting containerd.service - containerd container runtime... Jul 11 00:06:36.264188 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jul 11 00:06:36.265268 lvm[1405]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jul 11 00:06:36.267137 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jul 11 00:06:36.270507 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jul 11 00:06:36.271623 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). 
Jul 11 00:06:36.275381 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jul 11 00:06:36.278351 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jul 11 00:06:36.283456 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jul 11 00:06:36.289385 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jul 11 00:06:36.291668 jq[1408]: false Jul 11 00:06:36.295637 systemd[1]: Starting systemd-logind.service - User Login Management... Jul 11 00:06:36.302852 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jul 11 00:06:36.303388 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jul 11 00:06:36.305794 extend-filesystems[1409]: Found loop3 Jul 11 00:06:36.306637 extend-filesystems[1409]: Found loop4 Jul 11 00:06:36.306637 extend-filesystems[1409]: Found loop5 Jul 11 00:06:36.306637 extend-filesystems[1409]: Found vda Jul 11 00:06:36.306637 extend-filesystems[1409]: Found vda1 Jul 11 00:06:36.306637 extend-filesystems[1409]: Found vda2 Jul 11 00:06:36.306637 extend-filesystems[1409]: Found vda3 Jul 11 00:06:36.306637 extend-filesystems[1409]: Found usr Jul 11 00:06:36.306637 extend-filesystems[1409]: Found vda4 Jul 11 00:06:36.306637 extend-filesystems[1409]: Found vda6 Jul 11 00:06:36.306637 extend-filesystems[1409]: Found vda7 Jul 11 00:06:36.306637 extend-filesystems[1409]: Found vda9 Jul 11 00:06:36.306637 extend-filesystems[1409]: Checking size of /dev/vda9 Jul 11 00:06:36.306334 systemd[1]: Starting update-engine.service - Update Engine... Jul 11 00:06:36.311739 dbus-daemon[1407]: [system] SELinux support is enabled Jul 11 00:06:36.310389 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... 
Jul 11 00:06:36.328606 extend-filesystems[1409]: Resized partition /dev/vda9 Jul 11 00:06:36.312275 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jul 11 00:06:36.332252 jq[1424]: true Jul 11 00:06:36.315301 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jul 11 00:06:36.328610 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jul 11 00:06:36.328780 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jul 11 00:06:36.329101 systemd[1]: motdgen.service: Deactivated successfully. Jul 11 00:06:36.329296 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jul 11 00:06:36.336135 extend-filesystems[1431]: resize2fs 1.47.1 (20-May-2024) Jul 11 00:06:36.338681 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jul 11 00:06:36.338848 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jul 11 00:06:36.352950 (ntainerd)[1434]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jul 11 00:06:36.356169 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (1346) Jul 11 00:06:36.363787 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jul 11 00:06:36.363829 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jul 11 00:06:36.366063 systemd-logind[1420]: Watching system buttons on /dev/input/event0 (Power Button) Jul 11 00:06:36.366272 systemd-logind[1420]: New seat seat0. 
Jul 11 00:06:36.367096 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jul 11 00:06:36.370395 jq[1433]: true Jul 11 00:06:36.367143 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jul 11 00:06:36.368265 systemd[1]: Started systemd-logind.service - User Login Management. Jul 11 00:06:36.372147 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Jul 11 00:06:36.392215 update_engine[1421]: I20250711 00:06:36.391731 1421 main.cc:92] Flatcar Update Engine starting Jul 11 00:06:36.396722 update_engine[1421]: I20250711 00:06:36.396641 1421 update_check_scheduler.cc:74] Next update check in 10m57s Jul 11 00:06:36.400348 systemd[1]: Started update-engine.service - Update Engine. Jul 11 00:06:36.402129 tar[1432]: linux-arm64/LICENSE Jul 11 00:06:36.402406 tar[1432]: linux-arm64/helm Jul 11 00:06:36.403561 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jul 11 00:06:36.404138 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Jul 11 00:06:36.415474 extend-filesystems[1431]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jul 11 00:06:36.415474 extend-filesystems[1431]: old_desc_blocks = 1, new_desc_blocks = 1 Jul 11 00:06:36.415474 extend-filesystems[1431]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Jul 11 00:06:36.419010 extend-filesystems[1409]: Resized filesystem in /dev/vda9 Jul 11 00:06:36.428316 systemd[1]: extend-filesystems.service: Deactivated successfully. Jul 11 00:06:36.428525 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jul 11 00:06:36.446661 bash[1462]: Updated "/home/core/.ssh/authorized_keys" Jul 11 00:06:36.448674 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. 
Jul 11 00:06:36.452492 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Jul 11 00:06:36.482967 locksmithd[1448]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jul 11 00:06:36.596177 containerd[1434]: time="2025-07-11T00:06:36.595990920Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Jul 11 00:06:36.611814 sshd_keygen[1428]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jul 11 00:06:36.628108 containerd[1434]: time="2025-07-11T00:06:36.628026840Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jul 11 00:06:36.630768 containerd[1434]: time="2025-07-11T00:06:36.630711600Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.96-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jul 11 00:06:36.630768 containerd[1434]: time="2025-07-11T00:06:36.630762600Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jul 11 00:06:36.630852 containerd[1434]: time="2025-07-11T00:06:36.630781600Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jul 11 00:06:36.631686 containerd[1434]: time="2025-07-11T00:06:36.630999760Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jul 11 00:06:36.631686 containerd[1434]: time="2025-07-11T00:06:36.631037640Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jul 11 00:06:36.631686 containerd[1434]: time="2025-07-11T00:06:36.631104120Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." 
error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jul 11 00:06:36.631686 containerd[1434]: time="2025-07-11T00:06:36.631132120Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jul 11 00:06:36.631686 containerd[1434]: time="2025-07-11T00:06:36.631368360Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jul 11 00:06:36.631686 containerd[1434]: time="2025-07-11T00:06:36.631387880Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jul 11 00:06:36.631686 containerd[1434]: time="2025-07-11T00:06:36.631417960Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jul 11 00:06:36.631686 containerd[1434]: time="2025-07-11T00:06:36.631428680Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jul 11 00:06:36.631686 containerd[1434]: time="2025-07-11T00:06:36.631518240Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jul 11 00:06:36.632766 containerd[1434]: time="2025-07-11T00:06:36.632514840Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jul 11 00:06:36.632766 containerd[1434]: time="2025-07-11T00:06:36.632681280Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jul 11 00:06:36.632766 containerd[1434]: time="2025-07-11T00:06:36.632698680Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jul 11 00:06:36.632905 containerd[1434]: time="2025-07-11T00:06:36.632789880Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jul 11 00:06:36.632905 containerd[1434]: time="2025-07-11T00:06:36.632832240Z" level=info msg="metadata content store policy set" policy=shared Jul 11 00:06:36.632856 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jul 11 00:06:36.641208 containerd[1434]: time="2025-07-11T00:06:36.639449680Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jul 11 00:06:36.641208 containerd[1434]: time="2025-07-11T00:06:36.639526600Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jul 11 00:06:36.641208 containerd[1434]: time="2025-07-11T00:06:36.639544640Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jul 11 00:06:36.641208 containerd[1434]: time="2025-07-11T00:06:36.639559520Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jul 11 00:06:36.641208 containerd[1434]: time="2025-07-11T00:06:36.639583240Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jul 11 00:06:36.641208 containerd[1434]: time="2025-07-11T00:06:36.639764640Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jul 11 00:06:36.641208 containerd[1434]: time="2025-07-11T00:06:36.640005600Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." 
type=io.containerd.runtime.v2 Jul 11 00:06:36.641208 containerd[1434]: time="2025-07-11T00:06:36.640154600Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jul 11 00:06:36.641208 containerd[1434]: time="2025-07-11T00:06:36.640174880Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jul 11 00:06:36.641208 containerd[1434]: time="2025-07-11T00:06:36.640189600Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jul 11 00:06:36.641208 containerd[1434]: time="2025-07-11T00:06:36.640205000Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jul 11 00:06:36.641208 containerd[1434]: time="2025-07-11T00:06:36.640220640Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jul 11 00:06:36.641208 containerd[1434]: time="2025-07-11T00:06:36.640235880Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jul 11 00:06:36.641208 containerd[1434]: time="2025-07-11T00:06:36.640251320Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jul 11 00:06:36.641555 containerd[1434]: time="2025-07-11T00:06:36.640266240Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jul 11 00:06:36.641555 containerd[1434]: time="2025-07-11T00:06:36.640280120Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jul 11 00:06:36.641555 containerd[1434]: time="2025-07-11T00:06:36.640292480Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." 
type=io.containerd.service.v1 Jul 11 00:06:36.641555 containerd[1434]: time="2025-07-11T00:06:36.640305840Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jul 11 00:06:36.641555 containerd[1434]: time="2025-07-11T00:06:36.640326280Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jul 11 00:06:36.641555 containerd[1434]: time="2025-07-11T00:06:36.640340680Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jul 11 00:06:36.641555 containerd[1434]: time="2025-07-11T00:06:36.640355000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jul 11 00:06:36.641555 containerd[1434]: time="2025-07-11T00:06:36.640367480Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jul 11 00:06:36.641555 containerd[1434]: time="2025-07-11T00:06:36.640380280Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jul 11 00:06:36.641555 containerd[1434]: time="2025-07-11T00:06:36.640397000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jul 11 00:06:36.641555 containerd[1434]: time="2025-07-11T00:06:36.640409800Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jul 11 00:06:36.641555 containerd[1434]: time="2025-07-11T00:06:36.640423320Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jul 11 00:06:36.641555 containerd[1434]: time="2025-07-11T00:06:36.640441240Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jul 11 00:06:36.641555 containerd[1434]: time="2025-07-11T00:06:36.640456240Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." 
type=io.containerd.grpc.v1 Jul 11 00:06:36.641794 containerd[1434]: time="2025-07-11T00:06:36.640468280Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jul 11 00:06:36.641794 containerd[1434]: time="2025-07-11T00:06:36.640480520Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jul 11 00:06:36.641794 containerd[1434]: time="2025-07-11T00:06:36.640494680Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jul 11 00:06:36.641794 containerd[1434]: time="2025-07-11T00:06:36.640512920Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jul 11 00:06:36.641794 containerd[1434]: time="2025-07-11T00:06:36.640535320Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jul 11 00:06:36.641794 containerd[1434]: time="2025-07-11T00:06:36.640547840Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jul 11 00:06:36.641794 containerd[1434]: time="2025-07-11T00:06:36.640559240Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jul 11 00:06:36.641794 containerd[1434]: time="2025-07-11T00:06:36.640691120Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jul 11 00:06:36.641794 containerd[1434]: time="2025-07-11T00:06:36.640710480Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jul 11 00:06:36.641794 containerd[1434]: time="2025-07-11T00:06:36.640722200Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." 
type=io.containerd.internal.v1 Jul 11 00:06:36.641794 containerd[1434]: time="2025-07-11T00:06:36.640735600Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jul 11 00:06:36.641794 containerd[1434]: time="2025-07-11T00:06:36.640745600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jul 11 00:06:36.641794 containerd[1434]: time="2025-07-11T00:06:36.640758400Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jul 11 00:06:36.641794 containerd[1434]: time="2025-07-11T00:06:36.640777400Z" level=info msg="NRI interface is disabled by configuration." Jul 11 00:06:36.642037 containerd[1434]: time="2025-07-11T00:06:36.640789600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Jul 11 00:06:36.642060 containerd[1434]: time="2025-07-11T00:06:36.641209760Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 
Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jul 11 00:06:36.642060 containerd[1434]: time="2025-07-11T00:06:36.641273880Z" level=info msg="Connect containerd service" Jul 11 00:06:36.642060 containerd[1434]: time="2025-07-11T00:06:36.641302680Z" level=info msg="using legacy CRI server" Jul 11 00:06:36.642060 containerd[1434]: time="2025-07-11T00:06:36.641309320Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jul 11 00:06:36.642060 containerd[1434]: 
time="2025-07-11T00:06:36.641397720Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jul 11 00:06:36.642267 containerd[1434]: time="2025-07-11T00:06:36.642070400Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 11 00:06:36.642331 containerd[1434]: time="2025-07-11T00:06:36.642274760Z" level=info msg="Start subscribing containerd event" Jul 11 00:06:36.642359 containerd[1434]: time="2025-07-11T00:06:36.642351240Z" level=info msg="Start recovering state" Jul 11 00:06:36.642626 containerd[1434]: time="2025-07-11T00:06:36.642599360Z" level=info msg="Start event monitor" Jul 11 00:06:36.642664 containerd[1434]: time="2025-07-11T00:06:36.642626600Z" level=info msg="Start snapshots syncer" Jul 11 00:06:36.642664 containerd[1434]: time="2025-07-11T00:06:36.642638880Z" level=info msg="Start cni network conf syncer for default" Jul 11 00:06:36.642664 containerd[1434]: time="2025-07-11T00:06:36.642647120Z" level=info msg="Start streaming server" Jul 11 00:06:36.644204 containerd[1434]: time="2025-07-11T00:06:36.644174560Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jul 11 00:06:36.644254 containerd[1434]: time="2025-07-11T00:06:36.644231080Z" level=info msg=serving... address=/run/containerd/containerd.sock Jul 11 00:06:36.644298 containerd[1434]: time="2025-07-11T00:06:36.644282840Z" level=info msg="containerd successfully booted in 0.049661s" Jul 11 00:06:36.646506 systemd[1]: Starting issuegen.service - Generate /run/issue... Jul 11 00:06:36.647396 systemd[1]: Started containerd.service - containerd container runtime. Jul 11 00:06:36.654464 systemd[1]: issuegen.service: Deactivated successfully. Jul 11 00:06:36.654674 systemd[1]: Finished issuegen.service - Generate /run/issue. 
Jul 11 00:06:36.659836 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jul 11 00:06:36.675506 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jul 11 00:06:36.683490 systemd[1]: Started getty@tty1.service - Getty on tty1. Jul 11 00:06:36.685666 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Jul 11 00:06:36.686785 systemd[1]: Reached target getty.target - Login Prompts. Jul 11 00:06:36.802248 tar[1432]: linux-arm64/README.md Jul 11 00:06:36.815814 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jul 11 00:06:37.975314 systemd-networkd[1374]: eth0: Gained IPv6LL Jul 11 00:06:37.977853 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jul 11 00:06:37.979420 systemd[1]: Reached target network-online.target - Network is Online. Jul 11 00:06:37.994373 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Jul 11 00:06:37.996540 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 11 00:06:37.998452 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jul 11 00:06:38.013975 systemd[1]: coreos-metadata.service: Deactivated successfully. Jul 11 00:06:38.014801 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Jul 11 00:06:38.016750 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jul 11 00:06:38.019731 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jul 11 00:06:38.537629 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 11 00:06:38.538963 systemd[1]: Reached target multi-user.target - Multi-User System. Jul 11 00:06:38.539978 systemd[1]: Startup finished in 587ms (kernel) + 5.865s (initrd) + 4.000s (userspace) = 10.452s. 
Jul 11 00:06:38.541532 (kubelet)[1520]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 11 00:06:38.966649 kubelet[1520]: E0711 00:06:38.966534 1520 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 11 00:06:38.969364 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 11 00:06:38.969516 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 11 00:06:41.786783 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jul 11 00:06:41.787882 systemd[1]: Started sshd@0-10.0.0.37:22-10.0.0.1:56040.service - OpenSSH per-connection server daemon (10.0.0.1:56040). Jul 11 00:06:41.835893 sshd[1533]: Accepted publickey for core from 10.0.0.1 port 56040 ssh2: RSA SHA256:GK2LEBRiSxxQSb7NJczWsRz9vp5Z0addujXbSKx/c/M Jul 11 00:06:41.837897 sshd[1533]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 00:06:41.849771 systemd-logind[1420]: New session 1 of user core. Jul 11 00:06:41.850814 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jul 11 00:06:41.858332 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jul 11 00:06:41.867399 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jul 11 00:06:41.869557 systemd[1]: Starting user@500.service - User Manager for UID 500... Jul 11 00:06:41.876089 (systemd)[1537]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jul 11 00:06:41.950999 systemd[1537]: Queued start job for default target default.target. 
Jul 11 00:06:41.962059 systemd[1537]: Created slice app.slice - User Application Slice. Jul 11 00:06:41.962091 systemd[1537]: Reached target paths.target - Paths. Jul 11 00:06:41.962105 systemd[1537]: Reached target timers.target - Timers. Jul 11 00:06:41.963376 systemd[1537]: Starting dbus.socket - D-Bus User Message Bus Socket... Jul 11 00:06:41.973643 systemd[1537]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jul 11 00:06:41.973709 systemd[1537]: Reached target sockets.target - Sockets. Jul 11 00:06:41.973721 systemd[1537]: Reached target basic.target - Basic System. Jul 11 00:06:41.973757 systemd[1537]: Reached target default.target - Main User Target. Jul 11 00:06:41.973784 systemd[1537]: Startup finished in 92ms. Jul 11 00:06:41.974051 systemd[1]: Started user@500.service - User Manager for UID 500. Jul 11 00:06:41.975460 systemd[1]: Started session-1.scope - Session 1 of User core. Jul 11 00:06:42.055610 systemd[1]: Started sshd@1-10.0.0.37:22-10.0.0.1:56050.service - OpenSSH per-connection server daemon (10.0.0.1:56050). Jul 11 00:06:42.085584 sshd[1548]: Accepted publickey for core from 10.0.0.1 port 56050 ssh2: RSA SHA256:GK2LEBRiSxxQSb7NJczWsRz9vp5Z0addujXbSKx/c/M Jul 11 00:06:42.086904 sshd[1548]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 00:06:42.091183 systemd-logind[1420]: New session 2 of user core. Jul 11 00:06:42.102324 systemd[1]: Started session-2.scope - Session 2 of User core. Jul 11 00:06:42.155876 sshd[1548]: pam_unix(sshd:session): session closed for user core Jul 11 00:06:42.167330 systemd[1]: sshd@1-10.0.0.37:22-10.0.0.1:56050.service: Deactivated successfully. Jul 11 00:06:42.169372 systemd[1]: session-2.scope: Deactivated successfully. Jul 11 00:06:42.171276 systemd-logind[1420]: Session 2 logged out. Waiting for processes to exit. Jul 11 00:06:42.171634 systemd[1]: Started sshd@2-10.0.0.37:22-10.0.0.1:56066.service - OpenSSH per-connection server daemon (10.0.0.1:56066). 
Jul 11 00:06:42.172747 systemd-logind[1420]: Removed session 2. Jul 11 00:06:42.217799 sshd[1555]: Accepted publickey for core from 10.0.0.1 port 56066 ssh2: RSA SHA256:GK2LEBRiSxxQSb7NJczWsRz9vp5Z0addujXbSKx/c/M Jul 11 00:06:42.218853 sshd[1555]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 00:06:42.222503 systemd-logind[1420]: New session 3 of user core. Jul 11 00:06:42.232261 systemd[1]: Started session-3.scope - Session 3 of User core. Jul 11 00:06:42.279283 sshd[1555]: pam_unix(sshd:session): session closed for user core Jul 11 00:06:42.290593 systemd[1]: sshd@2-10.0.0.37:22-10.0.0.1:56066.service: Deactivated successfully. Jul 11 00:06:42.294164 systemd[1]: session-3.scope: Deactivated successfully. Jul 11 00:06:42.295463 systemd-logind[1420]: Session 3 logged out. Waiting for processes to exit. Jul 11 00:06:42.306452 systemd[1]: Started sshd@3-10.0.0.37:22-10.0.0.1:56080.service - OpenSSH per-connection server daemon (10.0.0.1:56080). Jul 11 00:06:42.307626 systemd-logind[1420]: Removed session 3. Jul 11 00:06:42.336019 sshd[1562]: Accepted publickey for core from 10.0.0.1 port 56080 ssh2: RSA SHA256:GK2LEBRiSxxQSb7NJczWsRz9vp5Z0addujXbSKx/c/M Jul 11 00:06:42.337369 sshd[1562]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 00:06:42.341017 systemd-logind[1420]: New session 4 of user core. Jul 11 00:06:42.353344 systemd[1]: Started session-4.scope - Session 4 of User core. Jul 11 00:06:42.404468 sshd[1562]: pam_unix(sshd:session): session closed for user core Jul 11 00:06:42.417547 systemd[1]: sshd@3-10.0.0.37:22-10.0.0.1:56080.service: Deactivated successfully. Jul 11 00:06:42.418960 systemd[1]: session-4.scope: Deactivated successfully. Jul 11 00:06:42.420145 systemd-logind[1420]: Session 4 logged out. Waiting for processes to exit. Jul 11 00:06:42.421294 systemd[1]: Started sshd@4-10.0.0.37:22-10.0.0.1:56086.service - OpenSSH per-connection server daemon (10.0.0.1:56086). 
Jul 11 00:06:42.422084 systemd-logind[1420]: Removed session 4. Jul 11 00:06:42.454545 sshd[1569]: Accepted publickey for core from 10.0.0.1 port 56086 ssh2: RSA SHA256:GK2LEBRiSxxQSb7NJczWsRz9vp5Z0addujXbSKx/c/M Jul 11 00:06:42.455899 sshd[1569]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 00:06:42.459919 systemd-logind[1420]: New session 5 of user core. Jul 11 00:06:42.468255 systemd[1]: Started session-5.scope - Session 5 of User core. Jul 11 00:06:42.527096 sudo[1572]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jul 11 00:06:42.527462 sudo[1572]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 11 00:06:42.541946 sudo[1572]: pam_unix(sudo:session): session closed for user root Jul 11 00:06:42.544300 sshd[1569]: pam_unix(sshd:session): session closed for user core Jul 11 00:06:42.552745 systemd[1]: sshd@4-10.0.0.37:22-10.0.0.1:56086.service: Deactivated successfully. Jul 11 00:06:42.554104 systemd[1]: session-5.scope: Deactivated successfully. Jul 11 00:06:42.556041 systemd-logind[1420]: Session 5 logged out. Waiting for processes to exit. Jul 11 00:06:42.557448 systemd[1]: Started sshd@5-10.0.0.37:22-10.0.0.1:56380.service - OpenSSH per-connection server daemon (10.0.0.1:56380). Jul 11 00:06:42.558691 systemd-logind[1420]: Removed session 5. Jul 11 00:06:42.591767 sshd[1577]: Accepted publickey for core from 10.0.0.1 port 56380 ssh2: RSA SHA256:GK2LEBRiSxxQSb7NJczWsRz9vp5Z0addujXbSKx/c/M Jul 11 00:06:42.593163 sshd[1577]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 00:06:42.596878 systemd-logind[1420]: New session 6 of user core. Jul 11 00:06:42.608277 systemd[1]: Started session-6.scope - Session 6 of User core. 
Jul 11 00:06:42.659885 sudo[1581]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jul 11 00:06:42.660176 sudo[1581]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 11 00:06:42.663304 sudo[1581]: pam_unix(sudo:session): session closed for user root Jul 11 00:06:42.667728 sudo[1580]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Jul 11 00:06:42.668276 sudo[1580]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 11 00:06:42.688363 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Jul 11 00:06:42.689540 auditctl[1584]: No rules Jul 11 00:06:42.690373 systemd[1]: audit-rules.service: Deactivated successfully. Jul 11 00:06:42.690588 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Jul 11 00:06:42.695431 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jul 11 00:06:42.715254 augenrules[1602]: No rules Jul 11 00:06:42.717221 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jul 11 00:06:42.718439 sudo[1580]: pam_unix(sudo:session): session closed for user root Jul 11 00:06:42.719823 sshd[1577]: pam_unix(sshd:session): session closed for user core Jul 11 00:06:42.726540 systemd[1]: sshd@5-10.0.0.37:22-10.0.0.1:56380.service: Deactivated successfully. Jul 11 00:06:42.727913 systemd[1]: session-6.scope: Deactivated successfully. Jul 11 00:06:42.730135 systemd-logind[1420]: Session 6 logged out. Waiting for processes to exit. Jul 11 00:06:42.740479 systemd[1]: Started sshd@6-10.0.0.37:22-10.0.0.1:56382.service - OpenSSH per-connection server daemon (10.0.0.1:56382). Jul 11 00:06:42.741693 systemd-logind[1420]: Removed session 6. 
Jul 11 00:06:42.770058 sshd[1610]: Accepted publickey for core from 10.0.0.1 port 56382 ssh2: RSA SHA256:GK2LEBRiSxxQSb7NJczWsRz9vp5Z0addujXbSKx/c/M Jul 11 00:06:42.771518 sshd[1610]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 00:06:42.775745 systemd-logind[1420]: New session 7 of user core. Jul 11 00:06:42.787269 systemd[1]: Started session-7.scope - Session 7 of User core. Jul 11 00:06:42.837985 sudo[1613]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jul 11 00:06:42.838684 sudo[1613]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 11 00:06:43.147340 systemd[1]: Starting docker.service - Docker Application Container Engine... Jul 11 00:06:43.147510 (dockerd)[1633]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jul 11 00:06:43.403864 dockerd[1633]: time="2025-07-11T00:06:43.403750590Z" level=info msg="Starting up" Jul 11 00:06:43.581446 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport207523496-merged.mount: Deactivated successfully. Jul 11 00:06:43.599436 dockerd[1633]: time="2025-07-11T00:06:43.599385895Z" level=info msg="Loading containers: start." Jul 11 00:06:43.679146 kernel: Initializing XFRM netlink socket Jul 11 00:06:43.746940 systemd-networkd[1374]: docker0: Link UP Jul 11 00:06:43.768561 dockerd[1633]: time="2025-07-11T00:06:43.768499914Z" level=info msg="Loading containers: done." 
Jul 11 00:06:43.782025 dockerd[1633]: time="2025-07-11T00:06:43.781970693Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jul 11 00:06:43.782179 dockerd[1633]: time="2025-07-11T00:06:43.782083375Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Jul 11 00:06:43.782248 dockerd[1633]: time="2025-07-11T00:06:43.782217928Z" level=info msg="Daemon has completed initialization" Jul 11 00:06:43.809387 dockerd[1633]: time="2025-07-11T00:06:43.809259045Z" level=info msg="API listen on /run/docker.sock" Jul 11 00:06:43.809674 systemd[1]: Started docker.service - Docker Application Container Engine. Jul 11 00:06:44.248711 containerd[1434]: time="2025-07-11T00:06:44.248662992Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.2\"" Jul 11 00:06:44.916856 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2499186905.mount: Deactivated successfully. 
Jul 11 00:06:45.827296 containerd[1434]: time="2025-07-11T00:06:45.827245256Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:06:45.828270 containerd[1434]: time="2025-07-11T00:06:45.828025057Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.2: active requests=0, bytes read=27351718" Jul 11 00:06:45.829082 containerd[1434]: time="2025-07-11T00:06:45.829035104Z" level=info msg="ImageCreate event name:\"sha256:04ac773cca35cc457f24a6501b6b308d63a2cddd1aec14fe95559bccca3010a4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:06:45.832030 containerd[1434]: time="2025-07-11T00:06:45.831981761Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:e8ae58675899e946fabe38425f2b3bfd33120b7930d05b5898de97c81a7f6137\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:06:45.833281 containerd[1434]: time="2025-07-11T00:06:45.833249924Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.2\" with image id \"sha256:04ac773cca35cc457f24a6501b6b308d63a2cddd1aec14fe95559bccca3010a4\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.2\", repo digest \"registry.k8s.io/kube-apiserver@sha256:e8ae58675899e946fabe38425f2b3bfd33120b7930d05b5898de97c81a7f6137\", size \"27348516\" in 1.584540324s" Jul 11 00:06:45.833328 containerd[1434]: time="2025-07-11T00:06:45.833290624Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.2\" returns image reference \"sha256:04ac773cca35cc457f24a6501b6b308d63a2cddd1aec14fe95559bccca3010a4\"" Jul 11 00:06:45.836375 containerd[1434]: time="2025-07-11T00:06:45.836340375Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.2\"" Jul 11 00:06:47.026228 containerd[1434]: time="2025-07-11T00:06:47.026169955Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.2\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:06:47.026880 containerd[1434]: time="2025-07-11T00:06:47.026851681Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.2: active requests=0, bytes read=23537625" Jul 11 00:06:47.027355 containerd[1434]: time="2025-07-11T00:06:47.027321407Z" level=info msg="ImageCreate event name:\"sha256:99a259072231375ad69a369cdf5620d60cdff72d450951c603fad8a94667af65\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:06:47.031069 containerd[1434]: time="2025-07-11T00:06:47.031018222Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:2236e72a4be5dcc9c04600353ff8849db1557f5364947c520ff05471ae719081\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:06:47.032289 containerd[1434]: time="2025-07-11T00:06:47.032151961Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.2\" with image id \"sha256:99a259072231375ad69a369cdf5620d60cdff72d450951c603fad8a94667af65\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.2\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:2236e72a4be5dcc9c04600353ff8849db1557f5364947c520ff05471ae719081\", size \"25092541\" in 1.195774781s" Jul 11 00:06:47.032289 containerd[1434]: time="2025-07-11T00:06:47.032189472Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.2\" returns image reference \"sha256:99a259072231375ad69a369cdf5620d60cdff72d450951c603fad8a94667af65\"" Jul 11 00:06:47.032824 containerd[1434]: time="2025-07-11T00:06:47.032704764Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.2\"" Jul 11 00:06:48.120157 containerd[1434]: time="2025-07-11T00:06:48.120018118Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:06:48.121545 containerd[1434]: time="2025-07-11T00:06:48.121520639Z" 
level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.2: active requests=0, bytes read=18293517" Jul 11 00:06:48.122445 containerd[1434]: time="2025-07-11T00:06:48.122415877Z" level=info msg="ImageCreate event name:\"sha256:bb3da57746ca4726b669d35145eb9b4085643c61bbc80b9df3bf1e6021ba9eaf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:06:48.125185 containerd[1434]: time="2025-07-11T00:06:48.125152115Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:304c28303133be7d927973bc9bd6c83945b3735c59d283c25b63d5b9ed53bca3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:06:48.126355 containerd[1434]: time="2025-07-11T00:06:48.126317039Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.2\" with image id \"sha256:bb3da57746ca4726b669d35145eb9b4085643c61bbc80b9df3bf1e6021ba9eaf\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.2\", repo digest \"registry.k8s.io/kube-scheduler@sha256:304c28303133be7d927973bc9bd6c83945b3735c59d283c25b63d5b9ed53bca3\", size \"19848451\" in 1.093575888s" Jul 11 00:06:48.126355 containerd[1434]: time="2025-07-11T00:06:48.126351215Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.2\" returns image reference \"sha256:bb3da57746ca4726b669d35145eb9b4085643c61bbc80b9df3bf1e6021ba9eaf\"" Jul 11 00:06:48.126879 containerd[1434]: time="2025-07-11T00:06:48.126842747Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.2\"" Jul 11 00:06:49.074338 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount334742992.mount: Deactivated successfully. Jul 11 00:06:49.075254 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jul 11 00:06:49.084606 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 11 00:06:49.187458 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jul 11 00:06:49.191545 (kubelet)[1854]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 11 00:06:49.225491 kubelet[1854]: E0711 00:06:49.225421 1854 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 11 00:06:49.228363 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 11 00:06:49.228542 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 11 00:06:49.589289 containerd[1434]: time="2025-07-11T00:06:49.589194039Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:06:49.590270 containerd[1434]: time="2025-07-11T00:06:49.590043266Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.2: active requests=0, bytes read=28199474" Jul 11 00:06:49.590861 containerd[1434]: time="2025-07-11T00:06:49.590829442Z" level=info msg="ImageCreate event name:\"sha256:c26522e54bad2e6bfbb1bf11500833c94433076a3fa38436a2ec496a422c5455\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:06:49.593009 containerd[1434]: time="2025-07-11T00:06:49.592971898Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:4796ef3e43efa5ed2a5b015c18f81d3c2fe3aea36f555ea643cc01827eb65e51\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:06:49.594048 containerd[1434]: time="2025-07-11T00:06:49.593967176Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.2\" with image id \"sha256:c26522e54bad2e6bfbb1bf11500833c94433076a3fa38436a2ec496a422c5455\", repo tag \"registry.k8s.io/kube-proxy:v1.33.2\", repo digest 
\"registry.k8s.io/kube-proxy@sha256:4796ef3e43efa5ed2a5b015c18f81d3c2fe3aea36f555ea643cc01827eb65e51\", size \"28198491\" in 1.467090815s" Jul 11 00:06:49.594048 containerd[1434]: time="2025-07-11T00:06:49.594008797Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.2\" returns image reference \"sha256:c26522e54bad2e6bfbb1bf11500833c94433076a3fa38436a2ec496a422c5455\"" Jul 11 00:06:49.594525 containerd[1434]: time="2025-07-11T00:06:49.594485245Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" Jul 11 00:06:50.364540 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2555301463.mount: Deactivated successfully. Jul 11 00:06:51.070600 containerd[1434]: time="2025-07-11T00:06:51.070551229Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:06:51.071495 containerd[1434]: time="2025-07-11T00:06:51.071348271Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=19152119" Jul 11 00:06:51.072097 containerd[1434]: time="2025-07-11T00:06:51.072068228Z" level=info msg="ImageCreate event name:\"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:06:51.077144 containerd[1434]: time="2025-07-11T00:06:51.075055254Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:06:51.077255 containerd[1434]: time="2025-07-11T00:06:51.077108286Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest 
\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"19148915\" in 1.482586992s" Jul 11 00:06:51.077320 containerd[1434]: time="2025-07-11T00:06:51.077304263Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\"" Jul 11 00:06:51.077866 containerd[1434]: time="2025-07-11T00:06:51.077847864Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jul 11 00:06:51.553034 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1387073466.mount: Deactivated successfully. Jul 11 00:06:51.557931 containerd[1434]: time="2025-07-11T00:06:51.557890010Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:06:51.558722 containerd[1434]: time="2025-07-11T00:06:51.558693099Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268705" Jul 11 00:06:51.559454 containerd[1434]: time="2025-07-11T00:06:51.559393874Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:06:51.561841 containerd[1434]: time="2025-07-11T00:06:51.561768542Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:06:51.562539 containerd[1434]: time="2025-07-11T00:06:51.562449056Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 484.483621ms" Jul 11 
00:06:51.562539 containerd[1434]: time="2025-07-11T00:06:51.562479930Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Jul 11 00:06:51.563522 containerd[1434]: time="2025-07-11T00:06:51.562959781Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\"" Jul 11 00:06:52.054498 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3605056012.mount: Deactivated successfully. Jul 11 00:06:54.085824 containerd[1434]: time="2025-07-11T00:06:54.085512732Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:06:54.086776 containerd[1434]: time="2025-07-11T00:06:54.086510552Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=69334601" Jul 11 00:06:54.087654 containerd[1434]: time="2025-07-11T00:06:54.087618373Z" level=info msg="ImageCreate event name:\"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:06:54.090866 containerd[1434]: time="2025-07-11T00:06:54.090825792Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:06:54.092287 containerd[1434]: time="2025-07-11T00:06:54.092248127Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"70026017\" in 2.529260519s" Jul 11 00:06:54.092287 containerd[1434]: time="2025-07-11T00:06:54.092284314Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image 
reference \"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\"" Jul 11 00:06:59.380362 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jul 11 00:06:59.394319 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 11 00:06:59.504586 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 11 00:06:59.508736 (kubelet)[2009]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 11 00:06:59.540047 kubelet[2009]: E0711 00:06:59.539982 2009 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 11 00:06:59.542650 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 11 00:06:59.542799 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 11 00:06:59.873643 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 11 00:06:59.885388 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 11 00:06:59.909096 systemd[1]: Reloading requested from client PID 2025 ('systemctl') (unit session-7.scope)... Jul 11 00:06:59.909135 systemd[1]: Reloading... Jul 11 00:06:59.977196 zram_generator::config[2064]: No configuration found. Jul 11 00:07:00.231153 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 11 00:07:00.290926 systemd[1]: Reloading finished in 381 ms. 
Jul 11 00:07:00.337251 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jul 11 00:07:00.337321 systemd[1]: kubelet.service: Failed with result 'signal'. Jul 11 00:07:00.337560 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 11 00:07:00.339849 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 11 00:07:00.447867 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 11 00:07:00.452303 (kubelet)[2110]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 11 00:07:00.488561 kubelet[2110]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 11 00:07:00.488561 kubelet[2110]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jul 11 00:07:00.488561 kubelet[2110]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jul 11 00:07:00.488561 kubelet[2110]: I0711 00:07:00.487069 2110 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 11 00:07:01.697921 kubelet[2110]: I0711 00:07:01.697864 2110 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Jul 11 00:07:01.697921 kubelet[2110]: I0711 00:07:01.697899 2110 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 11 00:07:01.698285 kubelet[2110]: I0711 00:07:01.698143 2110 server.go:956] "Client rotation is on, will bootstrap in background" Jul 11 00:07:01.736847 kubelet[2110]: E0711 00:07:01.736790 2110 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.37:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.37:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Jul 11 00:07:01.737297 kubelet[2110]: I0711 00:07:01.737273 2110 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 11 00:07:01.743603 kubelet[2110]: E0711 00:07:01.743561 2110 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jul 11 00:07:01.743603 kubelet[2110]: I0711 00:07:01.743605 2110 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jul 11 00:07:01.746130 kubelet[2110]: I0711 00:07:01.746097 2110 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jul 11 00:07:01.747230 kubelet[2110]: I0711 00:07:01.747167 2110 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 11 00:07:01.747382 kubelet[2110]: I0711 00:07:01.747211 2110 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jul 11 00:07:01.747523 kubelet[2110]: I0711 00:07:01.747434 2110 topology_manager.go:138] "Creating topology manager with none policy" Jul 11 00:07:01.747523 
kubelet[2110]: I0711 00:07:01.747443 2110 container_manager_linux.go:303] "Creating device plugin manager" Jul 11 00:07:01.747668 kubelet[2110]: I0711 00:07:01.747644 2110 state_mem.go:36] "Initialized new in-memory state store" Jul 11 00:07:01.750292 kubelet[2110]: I0711 00:07:01.750264 2110 kubelet.go:480] "Attempting to sync node with API server" Jul 11 00:07:01.750317 kubelet[2110]: I0711 00:07:01.750295 2110 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 11 00:07:01.750345 kubelet[2110]: I0711 00:07:01.750319 2110 kubelet.go:386] "Adding apiserver pod source" Jul 11 00:07:01.750345 kubelet[2110]: I0711 00:07:01.750333 2110 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 11 00:07:01.751944 kubelet[2110]: I0711 00:07:01.751669 2110 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jul 11 00:07:01.753208 kubelet[2110]: E0711 00:07:01.753158 2110 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.37:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.37:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jul 11 00:07:01.754327 kubelet[2110]: E0711 00:07:01.753107 2110 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.37:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.37:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jul 11 00:07:01.754327 kubelet[2110]: I0711 00:07:01.752500 2110 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Jul 11 00:07:01.754327 kubelet[2110]: W0711 
00:07:01.753461 2110 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Jul 11 00:07:01.757699 kubelet[2110]: I0711 00:07:01.757663 2110 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Jul 11 00:07:01.757791 kubelet[2110]: I0711 00:07:01.757710 2110 server.go:1289] "Started kubelet"
Jul 11 00:07:01.757935 kubelet[2110]: I0711 00:07:01.757894 2110 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Jul 11 00:07:01.758830 kubelet[2110]: I0711 00:07:01.758727 2110 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jul 11 00:07:01.759157 kubelet[2110]: I0711 00:07:01.759136 2110 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jul 11 00:07:01.759569 kubelet[2110]: I0711 00:07:01.759541 2110 server.go:317] "Adding debug handlers to kubelet server"
Jul 11 00:07:01.760526 kubelet[2110]: I0711 00:07:01.760499 2110 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jul 11 00:07:01.762279 kubelet[2110]: I0711 00:07:01.762250 2110 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Jul 11 00:07:01.763541 kubelet[2110]: E0711 00:07:01.763508 2110 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Jul 11 00:07:01.763541 kubelet[2110]: I0711 00:07:01.763548 2110 volume_manager.go:297] "Starting Kubelet Volume Manager"
Jul 11 00:07:01.763775 kubelet[2110]: I0711 00:07:01.763748 2110 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Jul 11 00:07:01.763839 kubelet[2110]: I0711 00:07:01.763823 2110 reconciler.go:26] "Reconciler: start to sync state"
Jul 11 00:07:01.764371 kubelet[2110]: E0711 00:07:01.764296 2110 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.37:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.37:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Jul 11 00:07:01.764898 kubelet[2110]: E0711 00:07:01.764782 2110 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.37:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.37:6443: connect: connection refused" interval="200ms"
Jul 11 00:07:01.765331 kubelet[2110]: I0711 00:07:01.765305 2110 factory.go:223] Registration of the systemd container factory successfully
Jul 11 00:07:01.765444 kubelet[2110]: I0711 00:07:01.765422 2110 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Jul 11 00:07:01.766242 kubelet[2110]: E0711 00:07:01.766163 2110 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Jul 11 00:07:01.766313 kubelet[2110]: E0711 00:07:01.763011 2110 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.37:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.37:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.185109b95cff8b14 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-07-11 00:07:01.7576845 +0000 UTC m=+1.301890114,LastTimestamp:2025-07-11 00:07:01.7576845 +0000 UTC m=+1.301890114,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Jul 11 00:07:01.766760 kubelet[2110]: I0711 00:07:01.766728 2110 factory.go:223] Registration of the containerd container factory successfully
Jul 11 00:07:01.779297 kubelet[2110]: I0711 00:07:01.779274 2110 cpu_manager.go:221] "Starting CPU manager" policy="none"
Jul 11 00:07:01.779297 kubelet[2110]: I0711 00:07:01.779292 2110 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Jul 11 00:07:01.779420 kubelet[2110]: I0711 00:07:01.779311 2110 state_mem.go:36] "Initialized new in-memory state store"
Jul 11 00:07:01.781637 kubelet[2110]: I0711 00:07:01.781499 2110 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4"
Jul 11 00:07:01.782836 kubelet[2110]: I0711 00:07:01.782522 2110 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6"
Jul 11 00:07:01.782836 kubelet[2110]: I0711 00:07:01.782543 2110 status_manager.go:230] "Starting to sync pod status with apiserver"
Jul 11 00:07:01.782836 kubelet[2110]: I0711 00:07:01.782562 2110 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Jul 11 00:07:01.782836 kubelet[2110]: I0711 00:07:01.782571 2110 kubelet.go:2436] "Starting kubelet main sync loop"
Jul 11 00:07:01.782836 kubelet[2110]: E0711 00:07:01.782619 2110 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Jul 11 00:07:01.788145 kubelet[2110]: E0711 00:07:01.788100 2110 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.37:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.37:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Jul 11 00:07:01.850899 kubelet[2110]: I0711 00:07:01.850856 2110 policy_none.go:49] "None policy: Start"
Jul 11 00:07:01.850899 kubelet[2110]: I0711 00:07:01.850894 2110 memory_manager.go:186] "Starting memorymanager" policy="None"
Jul 11 00:07:01.851014 kubelet[2110]: I0711 00:07:01.850909 2110 state_mem.go:35] "Initializing new in-memory state store"
Jul 11 00:07:01.856209 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Jul 11 00:07:01.863635 kubelet[2110]: E0711 00:07:01.863596 2110 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Jul 11 00:07:01.867763 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Jul 11 00:07:01.871578 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Jul 11 00:07:01.881202 kubelet[2110]: E0711 00:07:01.881137 2110 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Jul 11 00:07:01.881566 kubelet[2110]: I0711 00:07:01.881351 2110 eviction_manager.go:189] "Eviction manager: starting control loop"
Jul 11 00:07:01.881566 kubelet[2110]: I0711 00:07:01.881363 2110 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Jul 11 00:07:01.881622 kubelet[2110]: I0711 00:07:01.881608 2110 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Jul 11 00:07:01.882539 kubelet[2110]: E0711 00:07:01.882460 2110 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Jul 11 00:07:01.882539 kubelet[2110]: E0711 00:07:01.882517 2110 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Jul 11 00:07:01.892099 systemd[1]: Created slice kubepods-burstable-podb356079f0a6166284dec23cf97c880c2.slice - libcontainer container kubepods-burstable-podb356079f0a6166284dec23cf97c880c2.slice.
Jul 11 00:07:01.907848 kubelet[2110]: E0711 00:07:01.907554 2110 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Jul 11 00:07:01.908961 systemd[1]: Created slice kubepods-burstable-pod84b858ec27c8b2738b1d9ff9927e0dcb.slice - libcontainer container kubepods-burstable-pod84b858ec27c8b2738b1d9ff9927e0dcb.slice.
Jul 11 00:07:01.910631 kubelet[2110]: E0711 00:07:01.910608 2110 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Jul 11 00:07:01.920603 systemd[1]: Created slice kubepods-burstable-pod834ee54f1daa06092e339273649eb5ea.slice - libcontainer container kubepods-burstable-pod834ee54f1daa06092e339273649eb5ea.slice.
Jul 11 00:07:01.922048 kubelet[2110]: E0711 00:07:01.922018 2110 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Jul 11 00:07:01.965880 kubelet[2110]: E0711 00:07:01.965764 2110 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.37:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.37:6443: connect: connection refused" interval="400ms"
Jul 11 00:07:01.983019 kubelet[2110]: I0711 00:07:01.982987 2110 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Jul 11 00:07:01.983600 kubelet[2110]: E0711 00:07:01.983575 2110 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.37:6443/api/v1/nodes\": dial tcp 10.0.0.37:6443: connect: connection refused" node="localhost"
Jul 11 00:07:02.065284 kubelet[2110]: I0711 00:07:02.065205 2110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/84b858ec27c8b2738b1d9ff9927e0dcb-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"84b858ec27c8b2738b1d9ff9927e0dcb\") " pod="kube-system/kube-controller-manager-localhost"
Jul 11 00:07:02.065284 kubelet[2110]: I0711 00:07:02.065256 2110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/834ee54f1daa06092e339273649eb5ea-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"834ee54f1daa06092e339273649eb5ea\") " pod="kube-system/kube-scheduler-localhost"
Jul 11 00:07:02.065698 kubelet[2110]: I0711 00:07:02.065486 2110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b356079f0a6166284dec23cf97c880c2-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"b356079f0a6166284dec23cf97c880c2\") " pod="kube-system/kube-apiserver-localhost"
Jul 11 00:07:02.065698 kubelet[2110]: I0711 00:07:02.065513 2110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b356079f0a6166284dec23cf97c880c2-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"b356079f0a6166284dec23cf97c880c2\") " pod="kube-system/kube-apiserver-localhost"
Jul 11 00:07:02.065698 kubelet[2110]: I0711 00:07:02.065545 2110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b356079f0a6166284dec23cf97c880c2-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"b356079f0a6166284dec23cf97c880c2\") " pod="kube-system/kube-apiserver-localhost"
Jul 11 00:07:02.065698 kubelet[2110]: I0711 00:07:02.065564 2110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/84b858ec27c8b2738b1d9ff9927e0dcb-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"84b858ec27c8b2738b1d9ff9927e0dcb\") " pod="kube-system/kube-controller-manager-localhost"
Jul 11 00:07:02.065698 kubelet[2110]: I0711 00:07:02.065603 2110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/84b858ec27c8b2738b1d9ff9927e0dcb-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"84b858ec27c8b2738b1d9ff9927e0dcb\") " pod="kube-system/kube-controller-manager-localhost"
Jul 11 00:07:02.065835 kubelet[2110]: I0711 00:07:02.065634 2110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/84b858ec27c8b2738b1d9ff9927e0dcb-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"84b858ec27c8b2738b1d9ff9927e0dcb\") " pod="kube-system/kube-controller-manager-localhost"
Jul 11 00:07:02.065835 kubelet[2110]: I0711 00:07:02.065652 2110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/84b858ec27c8b2738b1d9ff9927e0dcb-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"84b858ec27c8b2738b1d9ff9927e0dcb\") " pod="kube-system/kube-controller-manager-localhost"
Jul 11 00:07:02.187023 kubelet[2110]: I0711 00:07:02.186980 2110 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Jul 11 00:07:02.187344 kubelet[2110]: E0711 00:07:02.187313 2110 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.37:6443/api/v1/nodes\": dial tcp 10.0.0.37:6443: connect: connection refused" node="localhost"
Jul 11 00:07:02.208782 kubelet[2110]: E0711 00:07:02.208688 2110 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 11 00:07:02.209337 containerd[1434]: time="2025-07-11T00:07:02.209300162Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:b356079f0a6166284dec23cf97c880c2,Namespace:kube-system,Attempt:0,}"
Jul 11 00:07:02.211547 kubelet[2110]: E0711 00:07:02.211524 2110 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 11 00:07:02.211969 containerd[1434]: time="2025-07-11T00:07:02.211926072Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:84b858ec27c8b2738b1d9ff9927e0dcb,Namespace:kube-system,Attempt:0,}"
Jul 11 00:07:02.223801 kubelet[2110]: E0711 00:07:02.223327 2110 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 11 00:07:02.223906 containerd[1434]: time="2025-07-11T00:07:02.223808262Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:834ee54f1daa06092e339273649eb5ea,Namespace:kube-system,Attempt:0,}"
Jul 11 00:07:02.366624 kubelet[2110]: E0711 00:07:02.366573 2110 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.37:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.37:6443: connect: connection refused" interval="800ms"
Jul 11 00:07:02.588932 kubelet[2110]: I0711 00:07:02.588829 2110 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Jul 11 00:07:02.589621 kubelet[2110]: E0711 00:07:02.589571 2110 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.37:6443/api/v1/nodes\": dial tcp 10.0.0.37:6443: connect: connection refused" node="localhost"
Jul 11 00:07:02.708583 kubelet[2110]: E0711 00:07:02.708524 2110 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.37:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.37:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Jul 11 00:07:02.771475 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount299284192.mount: Deactivated successfully.
Jul 11 00:07:02.776711 containerd[1434]: time="2025-07-11T00:07:02.776640469Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jul 11 00:07:02.778144 containerd[1434]: time="2025-07-11T00:07:02.778089438Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Jul 11 00:07:02.778968 containerd[1434]: time="2025-07-11T00:07:02.778929692Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jul 11 00:07:02.779927 containerd[1434]: time="2025-07-11T00:07:02.779881895Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jul 11 00:07:02.780856 containerd[1434]: time="2025-07-11T00:07:02.780782205Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jul 11 00:07:02.780856 containerd[1434]: time="2025-07-11T00:07:02.780790887Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Jul 11 00:07:02.781511 containerd[1434]: time="2025-07-11T00:07:02.781364393Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269175"
Jul 11 00:07:02.782075 kubelet[2110]: E0711 00:07:02.782021 2110 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.37:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.37:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Jul 11 00:07:02.783323 containerd[1434]: time="2025-07-11T00:07:02.783289404Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jul 11 00:07:02.785659 containerd[1434]: time="2025-07-11T00:07:02.785599593Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 561.711271ms"
Jul 11 00:07:02.788697 containerd[1434]: time="2025-07-11T00:07:02.788593436Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 576.591505ms"
Jul 11 00:07:02.789390 containerd[1434]: time="2025-07-11T00:07:02.789359952Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 579.97921ms"
Jul 11 00:07:02.810600 kubelet[2110]: E0711 00:07:02.810544 2110 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.37:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.37:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Jul 11 00:07:02.920705 containerd[1434]: time="2025-07-11T00:07:02.920594775Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 11 00:07:02.920705 containerd[1434]: time="2025-07-11T00:07:02.920671315Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 11 00:07:02.920705 containerd[1434]: time="2025-07-11T00:07:02.920688319Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 11 00:07:02.921570 containerd[1434]: time="2025-07-11T00:07:02.921484922Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 11 00:07:02.921570 containerd[1434]: time="2025-07-11T00:07:02.921550019Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 11 00:07:02.921797 containerd[1434]: time="2025-07-11T00:07:02.921565823Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 11 00:07:02.921797 containerd[1434]: time="2025-07-11T00:07:02.921560062Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 11 00:07:02.921797 containerd[1434]: time="2025-07-11T00:07:02.921653245Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 11 00:07:02.923642 containerd[1434]: time="2025-07-11T00:07:02.923554730Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 11 00:07:02.923642 containerd[1434]: time="2025-07-11T00:07:02.923601622Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 11 00:07:02.923642 containerd[1434]: time="2025-07-11T00:07:02.923611745Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 11 00:07:02.924336 containerd[1434]: time="2025-07-11T00:07:02.923689685Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 11 00:07:02.947556 kubelet[2110]: E0711 00:07:02.947510 2110 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.37:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.37:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Jul 11 00:07:02.948353 systemd[1]: Started cri-containerd-10f0b11a545fd38376538cc26b53d0254f0db95ef4a2c2523d25adb7803ba41b.scope - libcontainer container 10f0b11a545fd38376538cc26b53d0254f0db95ef4a2c2523d25adb7803ba41b.
Jul 11 00:07:02.949698 systemd[1]: Started cri-containerd-5cd1d1406ed96b32a48183ca8f3f584095977b7d2ce0cff70f26e08fb63fc581.scope - libcontainer container 5cd1d1406ed96b32a48183ca8f3f584095977b7d2ce0cff70f26e08fb63fc581.
Jul 11 00:07:02.951094 systemd[1]: Started cri-containerd-69f50599811bcba4f3983be3f3b5c01dd1fa52f4f5db427e0027badf9f6fa805.scope - libcontainer container 69f50599811bcba4f3983be3f3b5c01dd1fa52f4f5db427e0027badf9f6fa805.
Jul 11 00:07:02.979504 containerd[1434]: time="2025-07-11T00:07:02.979463546Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:84b858ec27c8b2738b1d9ff9927e0dcb,Namespace:kube-system,Attempt:0,} returns sandbox id \"5cd1d1406ed96b32a48183ca8f3f584095977b7d2ce0cff70f26e08fb63fc581\""
Jul 11 00:07:02.980767 kubelet[2110]: E0711 00:07:02.980719 2110 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 11 00:07:02.984831 containerd[1434]: time="2025-07-11T00:07:02.984718006Z" level=info msg="CreateContainer within sandbox \"5cd1d1406ed96b32a48183ca8f3f584095977b7d2ce0cff70f26e08fb63fc581\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Jul 11 00:07:02.987758 containerd[1434]: time="2025-07-11T00:07:02.987711610Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:834ee54f1daa06092e339273649eb5ea,Namespace:kube-system,Attempt:0,} returns sandbox id \"69f50599811bcba4f3983be3f3b5c01dd1fa52f4f5db427e0027badf9f6fa805\""
Jul 11 00:07:02.988856 kubelet[2110]: E0711 00:07:02.988828 2110 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 11 00:07:02.992575 containerd[1434]: time="2025-07-11T00:07:02.992456259Z" level=info msg="CreateContainer within sandbox \"69f50599811bcba4f3983be3f3b5c01dd1fa52f4f5db427e0027badf9f6fa805\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Jul 11 00:07:02.993007 containerd[1434]: time="2025-07-11T00:07:02.992853281Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:b356079f0a6166284dec23cf97c880c2,Namespace:kube-system,Attempt:0,} returns sandbox id \"10f0b11a545fd38376538cc26b53d0254f0db95ef4a2c2523d25adb7803ba41b\""
Jul 11 00:07:02.993684 kubelet[2110]: E0711 00:07:02.993627 2110 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 11 00:07:02.996813 containerd[1434]: time="2025-07-11T00:07:02.996763198Z" level=info msg="CreateContainer within sandbox \"10f0b11a545fd38376538cc26b53d0254f0db95ef4a2c2523d25adb7803ba41b\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Jul 11 00:07:02.999425 containerd[1434]: time="2025-07-11T00:07:02.999363821Z" level=info msg="CreateContainer within sandbox \"5cd1d1406ed96b32a48183ca8f3f584095977b7d2ce0cff70f26e08fb63fc581\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"703b9e3e8bd0c3d36681738fd7e3b46d27e09681bcd37763efaf34226753c526\""
Jul 11 00:07:03.000046 containerd[1434]: time="2025-07-11T00:07:03.000019388Z" level=info msg="StartContainer for \"703b9e3e8bd0c3d36681738fd7e3b46d27e09681bcd37763efaf34226753c526\""
Jul 11 00:07:03.011986 containerd[1434]: time="2025-07-11T00:07:03.011899834Z" level=info msg="CreateContainer within sandbox \"10f0b11a545fd38376538cc26b53d0254f0db95ef4a2c2523d25adb7803ba41b\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"f7baa284a8123b787c0b1d04870d37f91ba0ff6c8a01f0b861e7baec1728e7ec\""
Jul 11 00:07:03.012816 containerd[1434]: time="2025-07-11T00:07:03.012532215Z" level=info msg="StartContainer for \"f7baa284a8123b787c0b1d04870d37f91ba0ff6c8a01f0b861e7baec1728e7ec\""
Jul 11 00:07:03.012816 containerd[1434]: time="2025-07-11T00:07:03.012595349Z" level=info msg="CreateContainer within sandbox \"69f50599811bcba4f3983be3f3b5c01dd1fa52f4f5db427e0027badf9f6fa805\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"f76eca9eeaaa094640aab561714e6883b4abe5d52b783d20d9b57d66dedab06e\""
Jul 11 00:07:03.013187 containerd[1434]: time="2025-07-11T00:07:03.013164076Z" level=info msg="StartContainer for \"f76eca9eeaaa094640aab561714e6883b4abe5d52b783d20d9b57d66dedab06e\""
Jul 11 00:07:03.027429 systemd[1]: Started cri-containerd-703b9e3e8bd0c3d36681738fd7e3b46d27e09681bcd37763efaf34226753c526.scope - libcontainer container 703b9e3e8bd0c3d36681738fd7e3b46d27e09681bcd37763efaf34226753c526.
Jul 11 00:07:03.042320 systemd[1]: Started cri-containerd-f7baa284a8123b787c0b1d04870d37f91ba0ff6c8a01f0b861e7baec1728e7ec.scope - libcontainer container f7baa284a8123b787c0b1d04870d37f91ba0ff6c8a01f0b861e7baec1728e7ec.
Jul 11 00:07:03.045096 systemd[1]: Started cri-containerd-f76eca9eeaaa094640aab561714e6883b4abe5d52b783d20d9b57d66dedab06e.scope - libcontainer container f76eca9eeaaa094640aab561714e6883b4abe5d52b783d20d9b57d66dedab06e.
Jul 11 00:07:03.083992 containerd[1434]: time="2025-07-11T00:07:03.081645236Z" level=info msg="StartContainer for \"703b9e3e8bd0c3d36681738fd7e3b46d27e09681bcd37763efaf34226753c526\" returns successfully"
Jul 11 00:07:03.083992 containerd[1434]: time="2025-07-11T00:07:03.081758781Z" level=info msg="StartContainer for \"f7baa284a8123b787c0b1d04870d37f91ba0ff6c8a01f0b861e7baec1728e7ec\" returns successfully"
Jul 11 00:07:03.117495 containerd[1434]: time="2025-07-11T00:07:03.113104255Z" level=info msg="StartContainer for \"f76eca9eeaaa094640aab561714e6883b4abe5d52b783d20d9b57d66dedab06e\" returns successfully"
Jul 11 00:07:03.167276 kubelet[2110]: E0711 00:07:03.167206 2110 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.37:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.37:6443: connect: connection refused" interval="1.6s"
Jul 11 00:07:03.391237 kubelet[2110]: I0711 00:07:03.390909 2110 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Jul 11 00:07:03.793382 kubelet[2110]: E0711 00:07:03.793293 2110 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Jul 11 00:07:03.793651 kubelet[2110]: E0711 00:07:03.793428 2110 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 11 00:07:03.795001 kubelet[2110]: E0711 00:07:03.794764 2110 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Jul 11 00:07:03.795001 kubelet[2110]: E0711 00:07:03.794865 2110 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 11 00:07:03.796201 kubelet[2110]: E0711 00:07:03.796181 2110 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Jul 11 00:07:03.796289 kubelet[2110]: E0711 00:07:03.796273 2110 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 11 00:07:04.751965 kubelet[2110]: I0711 00:07:04.751905 2110 apiserver.go:52] "Watching apiserver"
Jul 11 00:07:04.764450 kubelet[2110]: I0711 00:07:04.764404 2110 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
Jul 11 00:07:04.783445 kubelet[2110]: E0711 00:07:04.783408 2110 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost"
Jul 11 00:07:04.799219 kubelet[2110]: E0711 00:07:04.799188 2110 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Jul 11 00:07:04.799591 kubelet[2110]: E0711 00:07:04.799247 2110 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Jul 11 00:07:04.799591 kubelet[2110]: E0711 00:07:04.799316 2110 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 11 00:07:04.799591 kubelet[2110]: E0711 00:07:04.799355 2110 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 11 00:07:04.801941 kubelet[2110]: I0711 00:07:04.801911 2110 kubelet_node_status.go:78] "Successfully registered node" node="localhost"
Jul 11 00:07:04.842442 kubelet[2110]: E0711 00:07:04.842205 2110 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{localhost.185109b95cff8b14 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-07-11 00:07:01.7576845 +0000 UTC m=+1.301890114,LastTimestamp:2025-07-11 00:07:01.7576845 +0000 UTC m=+1.301890114,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Jul 11 00:07:04.865088 kubelet[2110]: I0711 00:07:04.864619 2110 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost"
Jul 11 00:07:04.919737 kubelet[2110]: E0711 00:07:04.919704 2110 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost"
Jul 11 00:07:04.920740 kubelet[2110]: I0711 00:07:04.920721 2110 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
Jul 11 00:07:04.922872 kubelet[2110]: E0711 00:07:04.922760 2110 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost"
Jul 11 00:07:04.922872 kubelet[2110]: I0711 00:07:04.922796 2110 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Jul 11 00:07:04.925287 kubelet[2110]: E0711 00:07:04.925260 2110 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost"
Jul 11 00:07:05.800763 kubelet[2110]: I0711 00:07:05.800717 2110 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
Jul 11 00:07:05.804959 kubelet[2110]: E0711 00:07:05.804928 2110 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 11 00:07:06.802081 kubelet[2110]: E0711 00:07:06.801967 2110 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 11 00:07:06.925318 systemd[1]: Reloading requested from client PID 2401 ('systemctl') (unit session-7.scope)...
Jul 11 00:07:06.925334 systemd[1]: Reloading...
Jul 11 00:07:06.992218 zram_generator::config[2443]: No configuration found.
Jul 11 00:07:07.075438 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 11 00:07:07.147811 systemd[1]: Reloading finished in 222 ms.
Jul 11 00:07:07.176998 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 11 00:07:07.184076 systemd[1]: kubelet.service: Deactivated successfully.
Jul 11 00:07:07.184325 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 11 00:07:07.184374 systemd[1]: kubelet.service: Consumed 1.715s CPU time, 132.0M memory peak, 0B memory swap peak.
Jul 11 00:07:07.197338 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 11 00:07:07.292580 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 11 00:07:07.297162 (kubelet)[2482]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Jul 11 00:07:07.337106 kubelet[2482]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jul 11 00:07:07.337106 kubelet[2482]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Jul 11 00:07:07.337106 kubelet[2482]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jul 11 00:07:07.337106 kubelet[2482]: I0711 00:07:07.337082 2482 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jul 11 00:07:07.345156 kubelet[2482]: I0711 00:07:07.344455 2482 server.go:530] "Kubelet version" kubeletVersion="v1.33.0"
Jul 11 00:07:07.345156 kubelet[2482]: I0711 00:07:07.344485 2482 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jul 11 00:07:07.345156 kubelet[2482]: I0711 00:07:07.344694 2482 server.go:956] "Client rotation is on, will bootstrap in background"
Jul 11 00:07:07.345955 kubelet[2482]: I0711 00:07:07.345923 2482 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem"
Jul 11 00:07:07.348104 kubelet[2482]: I0711 00:07:07.348075 2482 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jul 11 00:07:07.351619 kubelet[2482]: E0711 00:07:07.351585 2482 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Jul 11 00:07:07.351619 kubelet[2482]: I0711 00:07:07.351616 2482 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Jul 11 00:07:07.354010 kubelet[2482]: I0711 00:07:07.353977 2482 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified.
defaulting to /" Jul 11 00:07:07.354902 kubelet[2482]: I0711 00:07:07.354230 2482 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 11 00:07:07.354902 kubelet[2482]: I0711 00:07:07.354265 2482 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jul 11 00:07:07.354902 kubelet[2482]: I0711 00:07:07.354506 2482 topology_manager.go:138] "Creating topology manager with none policy" Jul 11 00:07:07.354902 
kubelet[2482]: I0711 00:07:07.354515 2482 container_manager_linux.go:303] "Creating device plugin manager" Jul 11 00:07:07.354902 kubelet[2482]: I0711 00:07:07.354558 2482 state_mem.go:36] "Initialized new in-memory state store" Jul 11 00:07:07.355107 kubelet[2482]: I0711 00:07:07.354710 2482 kubelet.go:480] "Attempting to sync node with API server" Jul 11 00:07:07.355107 kubelet[2482]: I0711 00:07:07.354726 2482 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 11 00:07:07.355107 kubelet[2482]: I0711 00:07:07.354746 2482 kubelet.go:386] "Adding apiserver pod source" Jul 11 00:07:07.355107 kubelet[2482]: I0711 00:07:07.354759 2482 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 11 00:07:07.356092 kubelet[2482]: I0711 00:07:07.356067 2482 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jul 11 00:07:07.356992 kubelet[2482]: I0711 00:07:07.356970 2482 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Jul 11 00:07:07.360466 kubelet[2482]: I0711 00:07:07.360391 2482 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jul 11 00:07:07.360466 kubelet[2482]: I0711 00:07:07.360428 2482 server.go:1289] "Started kubelet" Jul 11 00:07:07.361741 kubelet[2482]: I0711 00:07:07.361715 2482 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 11 00:07:07.364965 kubelet[2482]: I0711 00:07:07.362062 2482 volume_manager.go:297] "Starting Kubelet Volume Manager" Jul 11 00:07:07.364965 kubelet[2482]: I0711 00:07:07.362170 2482 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jul 11 00:07:07.364965 kubelet[2482]: I0711 00:07:07.362277 2482 reconciler.go:26] "Reconciler: start to sync state" Jul 11 00:07:07.364965 kubelet[2482]: E0711 00:07:07.362470 2482 kubelet_node_status.go:466] "Error getting the current 
node from lister" err="node \"localhost\" not found" Jul 11 00:07:07.364965 kubelet[2482]: I0711 00:07:07.362770 2482 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Jul 11 00:07:07.364965 kubelet[2482]: I0711 00:07:07.362953 2482 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 11 00:07:07.364965 kubelet[2482]: I0711 00:07:07.363344 2482 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 11 00:07:07.364965 kubelet[2482]: I0711 00:07:07.363536 2482 server.go:317] "Adding debug handlers to kubelet server" Jul 11 00:07:07.364965 kubelet[2482]: I0711 00:07:07.363552 2482 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 11 00:07:07.370129 kubelet[2482]: I0711 00:07:07.369414 2482 factory.go:223] Registration of the systemd container factory successfully Jul 11 00:07:07.370316 kubelet[2482]: I0711 00:07:07.370295 2482 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 11 00:07:07.379380 kubelet[2482]: E0711 00:07:07.379351 2482 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 11 00:07:07.382991 kubelet[2482]: I0711 00:07:07.382962 2482 factory.go:223] Registration of the containerd container factory successfully Jul 11 00:07:07.387565 kubelet[2482]: I0711 00:07:07.387444 2482 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Jul 11 00:07:07.388480 kubelet[2482]: I0711 00:07:07.388457 2482 kubelet_network_linux.go:49] "Initialized iptables rules." 
protocol="IPv6" Jul 11 00:07:07.388480 kubelet[2482]: I0711 00:07:07.388479 2482 status_manager.go:230] "Starting to sync pod status with apiserver" Jul 11 00:07:07.388565 kubelet[2482]: I0711 00:07:07.388495 2482 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jul 11 00:07:07.388565 kubelet[2482]: I0711 00:07:07.388503 2482 kubelet.go:2436] "Starting kubelet main sync loop" Jul 11 00:07:07.388565 kubelet[2482]: E0711 00:07:07.388543 2482 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 11 00:07:07.414185 kubelet[2482]: I0711 00:07:07.414158 2482 cpu_manager.go:221] "Starting CPU manager" policy="none" Jul 11 00:07:07.414872 kubelet[2482]: I0711 00:07:07.414337 2482 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jul 11 00:07:07.414872 kubelet[2482]: I0711 00:07:07.414362 2482 state_mem.go:36] "Initialized new in-memory state store" Jul 11 00:07:07.414872 kubelet[2482]: I0711 00:07:07.414481 2482 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jul 11 00:07:07.414872 kubelet[2482]: I0711 00:07:07.414490 2482 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jul 11 00:07:07.414872 kubelet[2482]: I0711 00:07:07.414505 2482 policy_none.go:49] "None policy: Start" Jul 11 00:07:07.414872 kubelet[2482]: I0711 00:07:07.414514 2482 memory_manager.go:186] "Starting memorymanager" policy="None" Jul 11 00:07:07.414872 kubelet[2482]: I0711 00:07:07.414522 2482 state_mem.go:35] "Initializing new in-memory state store" Jul 11 00:07:07.414872 kubelet[2482]: I0711 00:07:07.414600 2482 state_mem.go:75] "Updated machine memory state" Jul 11 00:07:07.417790 kubelet[2482]: E0711 00:07:07.417768 2482 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jul 11 00:07:07.418501 kubelet[2482]: I0711 
00:07:07.418209 2482 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 11 00:07:07.418501 kubelet[2482]: I0711 00:07:07.418224 2482 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 11 00:07:07.418501 kubelet[2482]: I0711 00:07:07.418449 2482 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 11 00:07:07.420734 kubelet[2482]: E0711 00:07:07.420281 2482 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jul 11 00:07:07.489698 kubelet[2482]: I0711 00:07:07.489658 2482 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jul 11 00:07:07.490291 kubelet[2482]: I0711 00:07:07.490078 2482 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Jul 11 00:07:07.490291 kubelet[2482]: I0711 00:07:07.489986 2482 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jul 11 00:07:07.495808 kubelet[2482]: E0711 00:07:07.495771 2482 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Jul 11 00:07:07.522259 kubelet[2482]: I0711 00:07:07.522227 2482 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jul 11 00:07:07.528139 kubelet[2482]: I0711 00:07:07.528107 2482 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Jul 11 00:07:07.528220 kubelet[2482]: I0711 00:07:07.528202 2482 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Jul 11 00:07:07.563833 kubelet[2482]: I0711 00:07:07.563784 2482 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/84b858ec27c8b2738b1d9ff9927e0dcb-kubeconfig\") 
pod \"kube-controller-manager-localhost\" (UID: \"84b858ec27c8b2738b1d9ff9927e0dcb\") " pod="kube-system/kube-controller-manager-localhost" Jul 11 00:07:07.563833 kubelet[2482]: I0711 00:07:07.563833 2482 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/84b858ec27c8b2738b1d9ff9927e0dcb-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"84b858ec27c8b2738b1d9ff9927e0dcb\") " pod="kube-system/kube-controller-manager-localhost" Jul 11 00:07:07.563988 kubelet[2482]: I0711 00:07:07.563855 2482 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/834ee54f1daa06092e339273649eb5ea-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"834ee54f1daa06092e339273649eb5ea\") " pod="kube-system/kube-scheduler-localhost" Jul 11 00:07:07.563988 kubelet[2482]: I0711 00:07:07.563873 2482 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b356079f0a6166284dec23cf97c880c2-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"b356079f0a6166284dec23cf97c880c2\") " pod="kube-system/kube-apiserver-localhost" Jul 11 00:07:07.563988 kubelet[2482]: I0711 00:07:07.563887 2482 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/84b858ec27c8b2738b1d9ff9927e0dcb-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"84b858ec27c8b2738b1d9ff9927e0dcb\") " pod="kube-system/kube-controller-manager-localhost" Jul 11 00:07:07.563988 kubelet[2482]: I0711 00:07:07.563901 2482 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/84b858ec27c8b2738b1d9ff9927e0dcb-k8s-certs\") pod 
\"kube-controller-manager-localhost\" (UID: \"84b858ec27c8b2738b1d9ff9927e0dcb\") " pod="kube-system/kube-controller-manager-localhost" Jul 11 00:07:07.563988 kubelet[2482]: I0711 00:07:07.563915 2482 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b356079f0a6166284dec23cf97c880c2-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"b356079f0a6166284dec23cf97c880c2\") " pod="kube-system/kube-apiserver-localhost" Jul 11 00:07:07.564096 kubelet[2482]: I0711 00:07:07.563930 2482 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b356079f0a6166284dec23cf97c880c2-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"b356079f0a6166284dec23cf97c880c2\") " pod="kube-system/kube-apiserver-localhost" Jul 11 00:07:07.564096 kubelet[2482]: I0711 00:07:07.563970 2482 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/84b858ec27c8b2738b1d9ff9927e0dcb-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"84b858ec27c8b2738b1d9ff9927e0dcb\") " pod="kube-system/kube-controller-manager-localhost" Jul 11 00:07:07.795003 kubelet[2482]: E0711 00:07:07.794910 2482 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:07:07.796037 kubelet[2482]: E0711 00:07:07.795942 2482 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:07:07.796147 kubelet[2482]: E0711 00:07:07.796060 2482 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the 
applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:07:07.932166 sudo[2522]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Jul 11 00:07:07.932430 sudo[2522]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Jul 11 00:07:08.352910 sudo[2522]: pam_unix(sudo:session): session closed for user root Jul 11 00:07:08.359689 kubelet[2482]: I0711 00:07:08.357398 2482 apiserver.go:52] "Watching apiserver" Jul 11 00:07:08.365473 kubelet[2482]: I0711 00:07:08.365190 2482 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jul 11 00:07:08.405289 kubelet[2482]: E0711 00:07:08.404830 2482 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:07:08.405289 kubelet[2482]: E0711 00:07:08.404969 2482 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:07:08.405289 kubelet[2482]: I0711 00:07:08.405079 2482 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jul 11 00:07:08.409573 kubelet[2482]: E0711 00:07:08.409541 2482 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Jul 11 00:07:08.409866 kubelet[2482]: E0711 00:07:08.409849 2482 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:07:08.421599 kubelet[2482]: I0711 00:07:08.421389 2482 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.421358616 podStartE2EDuration="1.421358616s" 
podCreationTimestamp="2025-07-11 00:07:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-11 00:07:08.421267884 +0000 UTC m=+1.120198976" watchObservedRunningTime="2025-07-11 00:07:08.421358616 +0000 UTC m=+1.120289748" Jul 11 00:07:08.436237 kubelet[2482]: I0711 00:07:08.435848 2482 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=3.435832539 podStartE2EDuration="3.435832539s" podCreationTimestamp="2025-07-11 00:07:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-11 00:07:08.428856171 +0000 UTC m=+1.127787263" watchObservedRunningTime="2025-07-11 00:07:08.435832539 +0000 UTC m=+1.134763671" Jul 11 00:07:08.443976 kubelet[2482]: I0711 00:07:08.443896 2482 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.443883885 podStartE2EDuration="1.443883885s" podCreationTimestamp="2025-07-11 00:07:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-11 00:07:08.436099453 +0000 UTC m=+1.135030585" watchObservedRunningTime="2025-07-11 00:07:08.443883885 +0000 UTC m=+1.142815017" Jul 11 00:07:09.406279 kubelet[2482]: E0711 00:07:09.406015 2482 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:07:09.406279 kubelet[2482]: E0711 00:07:09.406220 2482 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:07:09.406992 kubelet[2482]: E0711 00:07:09.406880 2482 dns.go:153] "Nameserver 
limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:07:10.406899 kubelet[2482]: E0711 00:07:10.406863 2482 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:07:10.751877 sudo[1613]: pam_unix(sudo:session): session closed for user root Jul 11 00:07:10.754835 sshd[1610]: pam_unix(sshd:session): session closed for user core Jul 11 00:07:10.758432 systemd[1]: sshd@6-10.0.0.37:22-10.0.0.1:56382.service: Deactivated successfully. Jul 11 00:07:10.760935 systemd[1]: session-7.scope: Deactivated successfully. Jul 11 00:07:10.761210 systemd[1]: session-7.scope: Consumed 8.900s CPU time, 155.9M memory peak, 0B memory swap peak. Jul 11 00:07:10.761940 systemd-logind[1420]: Session 7 logged out. Waiting for processes to exit. Jul 11 00:07:10.762861 systemd-logind[1420]: Removed session 7. Jul 11 00:07:13.879828 kubelet[2482]: I0711 00:07:13.879688 2482 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jul 11 00:07:13.880786 containerd[1434]: time="2025-07-11T00:07:13.880496149Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jul 11 00:07:13.881060 kubelet[2482]: I0711 00:07:13.880659 2482 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jul 11 00:07:14.583482 systemd[1]: Created slice kubepods-besteffort-pod78a3c794_9c49_4fee_8829_490aaa4469fa.slice - libcontainer container kubepods-besteffort-pod78a3c794_9c49_4fee_8829_490aaa4469fa.slice. 
Jul 11 00:07:14.609695 kubelet[2482]: I0711 00:07:14.609657 2482 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/cf0342a8-0a8c-4e84-8e7a-d31e3271d1d5-etc-cni-netd\") pod \"cilium-c8v2d\" (UID: \"cf0342a8-0a8c-4e84-8e7a-d31e3271d1d5\") " pod="kube-system/cilium-c8v2d" Jul 11 00:07:14.609695 kubelet[2482]: I0711 00:07:14.609692 2482 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/78a3c794-9c49-4fee-8829-490aaa4469fa-xtables-lock\") pod \"kube-proxy-n99mk\" (UID: \"78a3c794-9c49-4fee-8829-490aaa4469fa\") " pod="kube-system/kube-proxy-n99mk" Jul 11 00:07:14.609830 kubelet[2482]: I0711 00:07:14.609715 2482 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/cf0342a8-0a8c-4e84-8e7a-d31e3271d1d5-cilium-cgroup\") pod \"cilium-c8v2d\" (UID: \"cf0342a8-0a8c-4e84-8e7a-d31e3271d1d5\") " pod="kube-system/cilium-c8v2d" Jul 11 00:07:14.609830 kubelet[2482]: I0711 00:07:14.609732 2482 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/cf0342a8-0a8c-4e84-8e7a-d31e3271d1d5-lib-modules\") pod \"cilium-c8v2d\" (UID: \"cf0342a8-0a8c-4e84-8e7a-d31e3271d1d5\") " pod="kube-system/cilium-c8v2d" Jul 11 00:07:14.609830 kubelet[2482]: I0711 00:07:14.609746 2482 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/cf0342a8-0a8c-4e84-8e7a-d31e3271d1d5-clustermesh-secrets\") pod \"cilium-c8v2d\" (UID: \"cf0342a8-0a8c-4e84-8e7a-d31e3271d1d5\") " pod="kube-system/cilium-c8v2d" Jul 11 00:07:14.609830 kubelet[2482]: I0711 00:07:14.609760 2482 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/cf0342a8-0a8c-4e84-8e7a-d31e3271d1d5-cilium-config-path\") pod \"cilium-c8v2d\" (UID: \"cf0342a8-0a8c-4e84-8e7a-d31e3271d1d5\") " pod="kube-system/cilium-c8v2d" Jul 11 00:07:14.609830 kubelet[2482]: I0711 00:07:14.609773 2482 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/cf0342a8-0a8c-4e84-8e7a-d31e3271d1d5-host-proc-sys-net\") pod \"cilium-c8v2d\" (UID: \"cf0342a8-0a8c-4e84-8e7a-d31e3271d1d5\") " pod="kube-system/cilium-c8v2d" Jul 11 00:07:14.609941 kubelet[2482]: I0711 00:07:14.609790 2482 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c8qj9\" (UniqueName: \"kubernetes.io/projected/cf0342a8-0a8c-4e84-8e7a-d31e3271d1d5-kube-api-access-c8qj9\") pod \"cilium-c8v2d\" (UID: \"cf0342a8-0a8c-4e84-8e7a-d31e3271d1d5\") " pod="kube-system/cilium-c8v2d" Jul 11 00:07:14.609941 kubelet[2482]: I0711 00:07:14.609808 2482 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/cf0342a8-0a8c-4e84-8e7a-d31e3271d1d5-cni-path\") pod \"cilium-c8v2d\" (UID: \"cf0342a8-0a8c-4e84-8e7a-d31e3271d1d5\") " pod="kube-system/cilium-c8v2d" Jul 11 00:07:14.609941 kubelet[2482]: I0711 00:07:14.609821 2482 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/cf0342a8-0a8c-4e84-8e7a-d31e3271d1d5-xtables-lock\") pod \"cilium-c8v2d\" (UID: \"cf0342a8-0a8c-4e84-8e7a-d31e3271d1d5\") " pod="kube-system/cilium-c8v2d" Jul 11 00:07:14.609941 kubelet[2482]: I0711 00:07:14.609846 2482 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: 
\"kubernetes.io/host-path/78a3c794-9c49-4fee-8829-490aaa4469fa-lib-modules\") pod \"kube-proxy-n99mk\" (UID: \"78a3c794-9c49-4fee-8829-490aaa4469fa\") " pod="kube-system/kube-proxy-n99mk" Jul 11 00:07:14.609941 kubelet[2482]: I0711 00:07:14.609863 2482 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bfq5j\" (UniqueName: \"kubernetes.io/projected/78a3c794-9c49-4fee-8829-490aaa4469fa-kube-api-access-bfq5j\") pod \"kube-proxy-n99mk\" (UID: \"78a3c794-9c49-4fee-8829-490aaa4469fa\") " pod="kube-system/kube-proxy-n99mk" Jul 11 00:07:14.609941 kubelet[2482]: I0711 00:07:14.609877 2482 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/cf0342a8-0a8c-4e84-8e7a-d31e3271d1d5-cilium-run\") pod \"cilium-c8v2d\" (UID: \"cf0342a8-0a8c-4e84-8e7a-d31e3271d1d5\") " pod="kube-system/cilium-c8v2d" Jul 11 00:07:14.610067 kubelet[2482]: I0711 00:07:14.609892 2482 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/cf0342a8-0a8c-4e84-8e7a-d31e3271d1d5-bpf-maps\") pod \"cilium-c8v2d\" (UID: \"cf0342a8-0a8c-4e84-8e7a-d31e3271d1d5\") " pod="kube-system/cilium-c8v2d" Jul 11 00:07:14.610067 kubelet[2482]: I0711 00:07:14.609905 2482 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/cf0342a8-0a8c-4e84-8e7a-d31e3271d1d5-hostproc\") pod \"cilium-c8v2d\" (UID: \"cf0342a8-0a8c-4e84-8e7a-d31e3271d1d5\") " pod="kube-system/cilium-c8v2d" Jul 11 00:07:14.610067 kubelet[2482]: I0711 00:07:14.609918 2482 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/cf0342a8-0a8c-4e84-8e7a-d31e3271d1d5-host-proc-sys-kernel\") pod \"cilium-c8v2d\" (UID: 
\"cf0342a8-0a8c-4e84-8e7a-d31e3271d1d5\") " pod="kube-system/cilium-c8v2d" Jul 11 00:07:14.610067 kubelet[2482]: I0711 00:07:14.609932 2482 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/cf0342a8-0a8c-4e84-8e7a-d31e3271d1d5-hubble-tls\") pod \"cilium-c8v2d\" (UID: \"cf0342a8-0a8c-4e84-8e7a-d31e3271d1d5\") " pod="kube-system/cilium-c8v2d" Jul 11 00:07:14.610067 kubelet[2482]: I0711 00:07:14.609946 2482 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/78a3c794-9c49-4fee-8829-490aaa4469fa-kube-proxy\") pod \"kube-proxy-n99mk\" (UID: \"78a3c794-9c49-4fee-8829-490aaa4469fa\") " pod="kube-system/kube-proxy-n99mk" Jul 11 00:07:14.610034 systemd[1]: Created slice kubepods-burstable-podcf0342a8_0a8c_4e84_8e7a_d31e3271d1d5.slice - libcontainer container kubepods-burstable-podcf0342a8_0a8c_4e84_8e7a_d31e3271d1d5.slice. 
Jul 11 00:07:14.908488 kubelet[2482]: E0711 00:07:14.908445 2482 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:07:14.909062 containerd[1434]: time="2025-07-11T00:07:14.909007586Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-n99mk,Uid:78a3c794-9c49-4fee-8829-490aaa4469fa,Namespace:kube-system,Attempt:0,}" Jul 11 00:07:14.912932 kubelet[2482]: E0711 00:07:14.912893 2482 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:07:14.913549 containerd[1434]: time="2025-07-11T00:07:14.913412773Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-c8v2d,Uid:cf0342a8-0a8c-4e84-8e7a-d31e3271d1d5,Namespace:kube-system,Attempt:0,}" Jul 11 00:07:14.930827 containerd[1434]: time="2025-07-11T00:07:14.930254531Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 11 00:07:14.930827 containerd[1434]: time="2025-07-11T00:07:14.930758496Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 11 00:07:14.930827 containerd[1434]: time="2025-07-11T00:07:14.930814221Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 11 00:07:14.930997 containerd[1434]: time="2025-07-11T00:07:14.930943472Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 11 00:07:14.942833 containerd[1434]: time="2025-07-11T00:07:14.942543371Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 11 00:07:14.942833 containerd[1434]: time="2025-07-11T00:07:14.942610856Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 11 00:07:14.942833 containerd[1434]: time="2025-07-11T00:07:14.942622177Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 11 00:07:14.942833 containerd[1434]: time="2025-07-11T00:07:14.942700944Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 11 00:07:14.948545 systemd[1]: Started cri-containerd-f807712e511b5f931166ddfc26ce21bcad744972c2a29069b5d7b70d33cd5256.scope - libcontainer container f807712e511b5f931166ddfc26ce21bcad744972c2a29069b5d7b70d33cd5256. Jul 11 00:07:14.962290 systemd[1]: Started cri-containerd-689ad7475e16bcb1ac17455cd921d9a50a6483e992a771f482954e8987bc22a1.scope - libcontainer container 689ad7475e16bcb1ac17455cd921d9a50a6483e992a771f482954e8987bc22a1. 
Jul 11 00:07:14.981435 containerd[1434]: time="2025-07-11T00:07:14.980088907Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-n99mk,Uid:78a3c794-9c49-4fee-8829-490aaa4469fa,Namespace:kube-system,Attempt:0,} returns sandbox id \"f807712e511b5f931166ddfc26ce21bcad744972c2a29069b5d7b70d33cd5256\"" Jul 11 00:07:14.982846 kubelet[2482]: E0711 00:07:14.982801 2482 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:07:14.987888 containerd[1434]: time="2025-07-11T00:07:14.987854549Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-c8v2d,Uid:cf0342a8-0a8c-4e84-8e7a-d31e3271d1d5,Namespace:kube-system,Attempt:0,} returns sandbox id \"689ad7475e16bcb1ac17455cd921d9a50a6483e992a771f482954e8987bc22a1\"" Jul 11 00:07:14.989598 kubelet[2482]: E0711 00:07:14.989511 2482 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:07:14.990846 containerd[1434]: time="2025-07-11T00:07:14.990737523Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Jul 11 00:07:14.991644 containerd[1434]: time="2025-07-11T00:07:14.991281090Z" level=info msg="CreateContainer within sandbox \"f807712e511b5f931166ddfc26ce21bcad744972c2a29069b5d7b70d33cd5256\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jul 11 00:07:15.013000 containerd[1434]: time="2025-07-11T00:07:15.011843566Z" level=info msg="CreateContainer within sandbox \"f807712e511b5f931166ddfc26ce21bcad744972c2a29069b5d7b70d33cd5256\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"97995e0be8664b72acea8c60f9b54aa27dfdf85f8e24eb56b17c124738957c72\"" Jul 11 00:07:15.014212 containerd[1434]: time="2025-07-11T00:07:15.014186120Z" 
level=info msg="StartContainer for \"97995e0be8664b72acea8c60f9b54aa27dfdf85f8e24eb56b17c124738957c72\"" Jul 11 00:07:15.027087 systemd[1]: Created slice kubepods-besteffort-pod35f62f86_2c45_4527_9daf_20678c88a94f.slice - libcontainer container kubepods-besteffort-pod35f62f86_2c45_4527_9daf_20678c88a94f.slice. Jul 11 00:07:15.052278 systemd[1]: Started cri-containerd-97995e0be8664b72acea8c60f9b54aa27dfdf85f8e24eb56b17c124738957c72.scope - libcontainer container 97995e0be8664b72acea8c60f9b54aa27dfdf85f8e24eb56b17c124738957c72. Jul 11 00:07:15.082757 containerd[1434]: time="2025-07-11T00:07:15.082695936Z" level=info msg="StartContainer for \"97995e0be8664b72acea8c60f9b54aa27dfdf85f8e24eb56b17c124738957c72\" returns successfully" Jul 11 00:07:15.113400 kubelet[2482]: I0711 00:07:15.113300 2482 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8jbnh\" (UniqueName: \"kubernetes.io/projected/35f62f86-2c45-4527-9daf-20678c88a94f-kube-api-access-8jbnh\") pod \"cilium-operator-6c4d7847fc-hpjwv\" (UID: \"35f62f86-2c45-4527-9daf-20678c88a94f\") " pod="kube-system/cilium-operator-6c4d7847fc-hpjwv" Jul 11 00:07:15.113400 kubelet[2482]: I0711 00:07:15.113343 2482 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/35f62f86-2c45-4527-9daf-20678c88a94f-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-hpjwv\" (UID: \"35f62f86-2c45-4527-9daf-20678c88a94f\") " pod="kube-system/cilium-operator-6c4d7847fc-hpjwv" Jul 11 00:07:15.152407 kubelet[2482]: E0711 00:07:15.150710 2482 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:07:15.333667 kubelet[2482]: E0711 00:07:15.333448 2482 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been 
omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:07:15.335080 containerd[1434]: time="2025-07-11T00:07:15.334924264Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-hpjwv,Uid:35f62f86-2c45-4527-9daf-20678c88a94f,Namespace:kube-system,Attempt:0,}" Jul 11 00:07:15.357985 containerd[1434]: time="2025-07-11T00:07:15.357905054Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 11 00:07:15.358499 containerd[1434]: time="2025-07-11T00:07:15.358346211Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 11 00:07:15.358499 containerd[1434]: time="2025-07-11T00:07:15.358368613Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 11 00:07:15.358499 containerd[1434]: time="2025-07-11T00:07:15.358456700Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 11 00:07:15.374272 systemd[1]: Started cri-containerd-06b92009a3c66c6d6355871e6bed3ffd5784b77c963347acc0e7be24d0f7dc30.scope - libcontainer container 06b92009a3c66c6d6355871e6bed3ffd5784b77c963347acc0e7be24d0f7dc30. 
Jul 11 00:07:15.402786 containerd[1434]: time="2025-07-11T00:07:15.402554486Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-hpjwv,Uid:35f62f86-2c45-4527-9daf-20678c88a94f,Namespace:kube-system,Attempt:0,} returns sandbox id \"06b92009a3c66c6d6355871e6bed3ffd5784b77c963347acc0e7be24d0f7dc30\"" Jul 11 00:07:15.403538 kubelet[2482]: E0711 00:07:15.403494 2482 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:07:15.417740 kubelet[2482]: E0711 00:07:15.417464 2482 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:07:15.417740 kubelet[2482]: E0711 00:07:15.417607 2482 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:07:15.438833 kubelet[2482]: I0711 00:07:15.438758 2482 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-n99mk" podStartSLOduration=1.438742414 podStartE2EDuration="1.438742414s" podCreationTimestamp="2025-07-11 00:07:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-11 00:07:15.429043368 +0000 UTC m=+8.127974500" watchObservedRunningTime="2025-07-11 00:07:15.438742414 +0000 UTC m=+8.137673546" Jul 11 00:07:16.419506 kubelet[2482]: E0711 00:07:16.419440 2482 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:07:18.961420 kubelet[2482]: E0711 00:07:18.961288 2482 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have 
been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:07:19.867368 kubelet[2482]: E0711 00:07:19.867332 2482 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:07:22.087802 update_engine[1421]: I20250711 00:07:22.087687 1421 update_attempter.cc:509] Updating boot flags... Jul 11 00:07:22.145176 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (2871) Jul 11 00:07:22.187245 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (2871) Jul 11 00:07:22.217162 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (2871) Jul 11 00:07:25.826954 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1086538493.mount: Deactivated successfully. Jul 11 00:07:27.277171 containerd[1434]: time="2025-07-11T00:07:27.277109362Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:07:27.280766 containerd[1434]: time="2025-07-11T00:07:27.280451753Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157646710" Jul 11 00:07:27.281296 containerd[1434]: time="2025-07-11T00:07:27.281207668Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:07:27.282906 containerd[1434]: time="2025-07-11T00:07:27.282871223Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id 
\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 12.292074335s" Jul 11 00:07:27.282906 containerd[1434]: time="2025-07-11T00:07:27.282904304Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" Jul 11 00:07:27.284403 containerd[1434]: time="2025-07-11T00:07:27.284333209Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Jul 11 00:07:27.298831 containerd[1434]: time="2025-07-11T00:07:27.298784864Z" level=info msg="CreateContainer within sandbox \"689ad7475e16bcb1ac17455cd921d9a50a6483e992a771f482954e8987bc22a1\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jul 11 00:07:27.324452 containerd[1434]: time="2025-07-11T00:07:27.324405784Z" level=info msg="CreateContainer within sandbox \"689ad7475e16bcb1ac17455cd921d9a50a6483e992a771f482954e8987bc22a1\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"6e22bd82c8e23a7b53e776b381f2bbc14eddc34fd7c2c9017da99ec81f9c14ec\"" Jul 11 00:07:27.325022 containerd[1434]: time="2025-07-11T00:07:27.324995491Z" level=info msg="StartContainer for \"6e22bd82c8e23a7b53e776b381f2bbc14eddc34fd7c2c9017da99ec81f9c14ec\"" Jul 11 00:07:27.354383 systemd[1]: Started cri-containerd-6e22bd82c8e23a7b53e776b381f2bbc14eddc34fd7c2c9017da99ec81f9c14ec.scope - libcontainer container 6e22bd82c8e23a7b53e776b381f2bbc14eddc34fd7c2c9017da99ec81f9c14ec. 
Jul 11 00:07:27.377724 containerd[1434]: time="2025-07-11T00:07:27.377674476Z" level=info msg="StartContainer for \"6e22bd82c8e23a7b53e776b381f2bbc14eddc34fd7c2c9017da99ec81f9c14ec\" returns successfully" Jul 11 00:07:27.413360 systemd[1]: cri-containerd-6e22bd82c8e23a7b53e776b381f2bbc14eddc34fd7c2c9017da99ec81f9c14ec.scope: Deactivated successfully. Jul 11 00:07:27.475997 kubelet[2482]: E0711 00:07:27.475920 2482 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:07:27.606465 containerd[1434]: time="2025-07-11T00:07:27.596311258Z" level=info msg="shim disconnected" id=6e22bd82c8e23a7b53e776b381f2bbc14eddc34fd7c2c9017da99ec81f9c14ec namespace=k8s.io Jul 11 00:07:27.606465 containerd[1434]: time="2025-07-11T00:07:27.606296350Z" level=warning msg="cleaning up after shim disconnected" id=6e22bd82c8e23a7b53e776b381f2bbc14eddc34fd7c2c9017da99ec81f9c14ec namespace=k8s.io Jul 11 00:07:27.606465 containerd[1434]: time="2025-07-11T00:07:27.606310991Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 11 00:07:27.620094 containerd[1434]: time="2025-07-11T00:07:27.615626493Z" level=warning msg="cleanup warnings time=\"2025-07-11T00:07:27Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Jul 11 00:07:28.322110 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6e22bd82c8e23a7b53e776b381f2bbc14eddc34fd7c2c9017da99ec81f9c14ec-rootfs.mount: Deactivated successfully. 
Jul 11 00:07:28.479632 kubelet[2482]: E0711 00:07:28.479595 2482 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:07:28.501215 containerd[1434]: time="2025-07-11T00:07:28.501161589Z" level=info msg="CreateContainer within sandbox \"689ad7475e16bcb1ac17455cd921d9a50a6483e992a771f482954e8987bc22a1\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jul 11 00:07:28.539040 containerd[1434]: time="2025-07-11T00:07:28.538977825Z" level=info msg="CreateContainer within sandbox \"689ad7475e16bcb1ac17455cd921d9a50a6483e992a771f482954e8987bc22a1\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"2a732d13f6b6314b7148262d09f23b6dbd33bfe276aea3f3c7ba044fa937cc01\"" Jul 11 00:07:28.539667 containerd[1434]: time="2025-07-11T00:07:28.539498688Z" level=info msg="StartContainer for \"2a732d13f6b6314b7148262d09f23b6dbd33bfe276aea3f3c7ba044fa937cc01\"" Jul 11 00:07:28.571406 systemd[1]: Started cri-containerd-2a732d13f6b6314b7148262d09f23b6dbd33bfe276aea3f3c7ba044fa937cc01.scope - libcontainer container 2a732d13f6b6314b7148262d09f23b6dbd33bfe276aea3f3c7ba044fa937cc01. Jul 11 00:07:28.613440 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jul 11 00:07:28.613672 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jul 11 00:07:28.613740 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Jul 11 00:07:28.623478 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jul 11 00:07:28.623709 systemd[1]: cri-containerd-2a732d13f6b6314b7148262d09f23b6dbd33bfe276aea3f3c7ba044fa937cc01.scope: Deactivated successfully. Jul 11 00:07:28.644146 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
Jul 11 00:07:28.653647 containerd[1434]: time="2025-07-11T00:07:28.653607025Z" level=info msg="StartContainer for \"2a732d13f6b6314b7148262d09f23b6dbd33bfe276aea3f3c7ba044fa937cc01\" returns successfully" Jul 11 00:07:28.862089 containerd[1434]: time="2025-07-11T00:07:28.861949879Z" level=info msg="shim disconnected" id=2a732d13f6b6314b7148262d09f23b6dbd33bfe276aea3f3c7ba044fa937cc01 namespace=k8s.io Jul 11 00:07:28.862089 containerd[1434]: time="2025-07-11T00:07:28.862005361Z" level=warning msg="cleaning up after shim disconnected" id=2a732d13f6b6314b7148262d09f23b6dbd33bfe276aea3f3c7ba044fa937cc01 namespace=k8s.io Jul 11 00:07:28.862089 containerd[1434]: time="2025-07-11T00:07:28.862013642Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 11 00:07:29.187611 containerd[1434]: time="2025-07-11T00:07:29.187557533Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:07:29.188031 containerd[1434]: time="2025-07-11T00:07:29.187996231Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17135306" Jul 11 00:07:29.188817 containerd[1434]: time="2025-07-11T00:07:29.188770903Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:07:29.190391 containerd[1434]: time="2025-07-11T00:07:29.190302407Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest 
\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 1.905932756s" Jul 11 00:07:29.190391 containerd[1434]: time="2025-07-11T00:07:29.190342848Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" Jul 11 00:07:29.194675 containerd[1434]: time="2025-07-11T00:07:29.194539262Z" level=info msg="CreateContainer within sandbox \"06b92009a3c66c6d6355871e6bed3ffd5784b77c963347acc0e7be24d0f7dc30\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Jul 11 00:07:29.208703 containerd[1434]: time="2025-07-11T00:07:29.208502240Z" level=info msg="CreateContainer within sandbox \"06b92009a3c66c6d6355871e6bed3ffd5784b77c963347acc0e7be24d0f7dc30\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"2ee28684b008d9c11cf483ffe9f1be4e4915660db86c63ed910f2475df3c24e7\"" Jul 11 00:07:29.209331 containerd[1434]: time="2025-07-11T00:07:29.209000700Z" level=info msg="StartContainer for \"2ee28684b008d9c11cf483ffe9f1be4e4915660db86c63ed910f2475df3c24e7\"" Jul 11 00:07:29.238317 systemd[1]: Started cri-containerd-2ee28684b008d9c11cf483ffe9f1be4e4915660db86c63ed910f2475df3c24e7.scope - libcontainer container 2ee28684b008d9c11cf483ffe9f1be4e4915660db86c63ed910f2475df3c24e7. Jul 11 00:07:29.262516 containerd[1434]: time="2025-07-11T00:07:29.262470912Z" level=info msg="StartContainer for \"2ee28684b008d9c11cf483ffe9f1be4e4915660db86c63ed910f2475df3c24e7\" returns successfully" Jul 11 00:07:29.323322 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2a732d13f6b6314b7148262d09f23b6dbd33bfe276aea3f3c7ba044fa937cc01-rootfs.mount: Deactivated successfully. 
Jul 11 00:07:29.485464 kubelet[2482]: E0711 00:07:29.485353 2482 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:07:29.488471 kubelet[2482]: E0711 00:07:29.488355 2482 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:07:29.492463 containerd[1434]: time="2025-07-11T00:07:29.492423385Z" level=info msg="CreateContainer within sandbox \"689ad7475e16bcb1ac17455cd921d9a50a6483e992a771f482954e8987bc22a1\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jul 11 00:07:29.497861 kubelet[2482]: I0711 00:07:29.497691 2482 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-hpjwv" podStartSLOduration=1.711044687 podStartE2EDuration="15.497675123s" podCreationTimestamp="2025-07-11 00:07:14 +0000 UTC" firstStartedPulling="2025-07-11 00:07:15.404433562 +0000 UTC m=+8.103364694" lastFinishedPulling="2025-07-11 00:07:29.191064038 +0000 UTC m=+21.889995130" observedRunningTime="2025-07-11 00:07:29.497269426 +0000 UTC m=+22.196200558" watchObservedRunningTime="2025-07-11 00:07:29.497675123 +0000 UTC m=+22.196606255" Jul 11 00:07:29.510561 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2249674831.mount: Deactivated successfully. 
Jul 11 00:07:29.526440 containerd[1434]: time="2025-07-11T00:07:29.526379350Z" level=info msg="CreateContainer within sandbox \"689ad7475e16bcb1ac17455cd921d9a50a6483e992a771f482954e8987bc22a1\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"f97ea73bac8f7f8cf93beaee50841a09d03d3690753dc41b32f3a6c027e0c409\"" Jul 11 00:07:29.527262 containerd[1434]: time="2025-07-11T00:07:29.527224145Z" level=info msg="StartContainer for \"f97ea73bac8f7f8cf93beaee50841a09d03d3690753dc41b32f3a6c027e0c409\"" Jul 11 00:07:29.550293 systemd[1]: Started cri-containerd-f97ea73bac8f7f8cf93beaee50841a09d03d3690753dc41b32f3a6c027e0c409.scope - libcontainer container f97ea73bac8f7f8cf93beaee50841a09d03d3690753dc41b32f3a6c027e0c409. Jul 11 00:07:29.577697 containerd[1434]: time="2025-07-11T00:07:29.577631430Z" level=info msg="StartContainer for \"f97ea73bac8f7f8cf93beaee50841a09d03d3690753dc41b32f3a6c027e0c409\" returns successfully" Jul 11 00:07:29.599009 systemd[1]: cri-containerd-f97ea73bac8f7f8cf93beaee50841a09d03d3690753dc41b32f3a6c027e0c409.scope: Deactivated successfully. Jul 11 00:07:29.693395 containerd[1434]: time="2025-07-11T00:07:29.693149289Z" level=info msg="shim disconnected" id=f97ea73bac8f7f8cf93beaee50841a09d03d3690753dc41b32f3a6c027e0c409 namespace=k8s.io Jul 11 00:07:29.693395 containerd[1434]: time="2025-07-11T00:07:29.693203732Z" level=warning msg="cleaning up after shim disconnected" id=f97ea73bac8f7f8cf93beaee50841a09d03d3690753dc41b32f3a6c027e0c409 namespace=k8s.io Jul 11 00:07:29.693395 containerd[1434]: time="2025-07-11T00:07:29.693216932Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 11 00:07:30.322303 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f97ea73bac8f7f8cf93beaee50841a09d03d3690753dc41b32f3a6c027e0c409-rootfs.mount: Deactivated successfully. 
Jul 11 00:07:30.491185 kubelet[2482]: E0711 00:07:30.491134 2482 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:07:30.491571 kubelet[2482]: E0711 00:07:30.491204 2482 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:07:30.503137 containerd[1434]: time="2025-07-11T00:07:30.503085264Z" level=info msg="CreateContainer within sandbox \"689ad7475e16bcb1ac17455cd921d9a50a6483e992a771f482954e8987bc22a1\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jul 11 00:07:30.525963 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2936679126.mount: Deactivated successfully. Jul 11 00:07:30.533490 containerd[1434]: time="2025-07-11T00:07:30.533434506Z" level=info msg="CreateContainer within sandbox \"689ad7475e16bcb1ac17455cd921d9a50a6483e992a771f482954e8987bc22a1\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"08d77f52bdbd59f92b52ad8bb5baceda83748d23cecde59082f7e538afcd0523\"" Jul 11 00:07:30.534071 containerd[1434]: time="2025-07-11T00:07:30.534028489Z" level=info msg="StartContainer for \"08d77f52bdbd59f92b52ad8bb5baceda83748d23cecde59082f7e538afcd0523\"" Jul 11 00:07:30.563374 systemd[1]: Started cri-containerd-08d77f52bdbd59f92b52ad8bb5baceda83748d23cecde59082f7e538afcd0523.scope - libcontainer container 08d77f52bdbd59f92b52ad8bb5baceda83748d23cecde59082f7e538afcd0523. Jul 11 00:07:30.582773 systemd[1]: cri-containerd-08d77f52bdbd59f92b52ad8bb5baceda83748d23cecde59082f7e538afcd0523.scope: Deactivated successfully. 
Jul 11 00:07:30.590452 containerd[1434]: time="2025-07-11T00:07:30.589152832Z" level=info msg="StartContainer for \"08d77f52bdbd59f92b52ad8bb5baceda83748d23cecde59082f7e538afcd0523\" returns successfully" Jul 11 00:07:30.611718 containerd[1434]: time="2025-07-11T00:07:30.611644802Z" level=info msg="shim disconnected" id=08d77f52bdbd59f92b52ad8bb5baceda83748d23cecde59082f7e538afcd0523 namespace=k8s.io Jul 11 00:07:30.611718 containerd[1434]: time="2025-07-11T00:07:30.611701364Z" level=warning msg="cleaning up after shim disconnected" id=08d77f52bdbd59f92b52ad8bb5baceda83748d23cecde59082f7e538afcd0523 namespace=k8s.io Jul 11 00:07:30.611718 containerd[1434]: time="2025-07-11T00:07:30.611711125Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 11 00:07:31.322413 systemd[1]: run-containerd-runc-k8s.io-08d77f52bdbd59f92b52ad8bb5baceda83748d23cecde59082f7e538afcd0523-runc.3SCmr4.mount: Deactivated successfully. Jul 11 00:07:31.322504 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-08d77f52bdbd59f92b52ad8bb5baceda83748d23cecde59082f7e538afcd0523-rootfs.mount: Deactivated successfully. 
Jul 11 00:07:31.495206 kubelet[2482]: E0711 00:07:31.495160 2482 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:07:31.501878 containerd[1434]: time="2025-07-11T00:07:31.501821612Z" level=info msg="CreateContainer within sandbox \"689ad7475e16bcb1ac17455cd921d9a50a6483e992a771f482954e8987bc22a1\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jul 11 00:07:31.524109 containerd[1434]: time="2025-07-11T00:07:31.523982692Z" level=info msg="CreateContainer within sandbox \"689ad7475e16bcb1ac17455cd921d9a50a6483e992a771f482954e8987bc22a1\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"7f3b42216747811976231c1b9b74e7df8ad5c7479e2ac5d5ae9c83100d61b04b\"" Jul 11 00:07:31.525665 containerd[1434]: time="2025-07-11T00:07:31.524486031Z" level=info msg="StartContainer for \"7f3b42216747811976231c1b9b74e7df8ad5c7479e2ac5d5ae9c83100d61b04b\"" Jul 11 00:07:31.564326 systemd[1]: Started cri-containerd-7f3b42216747811976231c1b9b74e7df8ad5c7479e2ac5d5ae9c83100d61b04b.scope - libcontainer container 7f3b42216747811976231c1b9b74e7df8ad5c7479e2ac5d5ae9c83100d61b04b. Jul 11 00:07:31.589222 containerd[1434]: time="2025-07-11T00:07:31.587581944Z" level=info msg="StartContainer for \"7f3b42216747811976231c1b9b74e7df8ad5c7479e2ac5d5ae9c83100d61b04b\" returns successfully" Jul 11 00:07:31.743392 kubelet[2482]: I0711 00:07:31.743188 2482 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Jul 11 00:07:31.787946 systemd[1]: Created slice kubepods-burstable-podf0e0d49c_0f67_492d_8895_dd2c9a3777a0.slice - libcontainer container kubepods-burstable-podf0e0d49c_0f67_492d_8895_dd2c9a3777a0.slice. Jul 11 00:07:31.794442 systemd[1]: Created slice kubepods-burstable-podecbc7283_57a3_4e20_ac9b_71fe546898b9.slice - libcontainer container kubepods-burstable-podecbc7283_57a3_4e20_ac9b_71fe546898b9.slice. 
Jul 11 00:07:31.838934 kubelet[2482]: I0711 00:07:31.838774 2482 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f0e0d49c-0f67-492d-8895-dd2c9a3777a0-config-volume\") pod \"coredns-674b8bbfcf-6vcdx\" (UID: \"f0e0d49c-0f67-492d-8895-dd2c9a3777a0\") " pod="kube-system/coredns-674b8bbfcf-6vcdx" Jul 11 00:07:31.838934 kubelet[2482]: I0711 00:07:31.838820 2482 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qd7mn\" (UniqueName: \"kubernetes.io/projected/f0e0d49c-0f67-492d-8895-dd2c9a3777a0-kube-api-access-qd7mn\") pod \"coredns-674b8bbfcf-6vcdx\" (UID: \"f0e0d49c-0f67-492d-8895-dd2c9a3777a0\") " pod="kube-system/coredns-674b8bbfcf-6vcdx" Jul 11 00:07:31.838934 kubelet[2482]: I0711 00:07:31.838842 2482 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ecbc7283-57a3-4e20-ac9b-71fe546898b9-config-volume\") pod \"coredns-674b8bbfcf-8pbbb\" (UID: \"ecbc7283-57a3-4e20-ac9b-71fe546898b9\") " pod="kube-system/coredns-674b8bbfcf-8pbbb" Jul 11 00:07:31.838934 kubelet[2482]: I0711 00:07:31.838869 2482 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qqck5\" (UniqueName: \"kubernetes.io/projected/ecbc7283-57a3-4e20-ac9b-71fe546898b9-kube-api-access-qqck5\") pod \"coredns-674b8bbfcf-8pbbb\" (UID: \"ecbc7283-57a3-4e20-ac9b-71fe546898b9\") " pod="kube-system/coredns-674b8bbfcf-8pbbb" Jul 11 00:07:32.092835 kubelet[2482]: E0711 00:07:32.092798 2482 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:07:32.094594 containerd[1434]: time="2025-07-11T00:07:32.094556786Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-674b8bbfcf-6vcdx,Uid:f0e0d49c-0f67-492d-8895-dd2c9a3777a0,Namespace:kube-system,Attempt:0,}" Jul 11 00:07:32.096808 kubelet[2482]: E0711 00:07:32.096780 2482 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:07:32.098360 containerd[1434]: time="2025-07-11T00:07:32.098313362Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-8pbbb,Uid:ecbc7283-57a3-4e20-ac9b-71fe546898b9,Namespace:kube-system,Attempt:0,}" Jul 11 00:07:32.499826 kubelet[2482]: E0711 00:07:32.499797 2482 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:07:32.519160 kubelet[2482]: I0711 00:07:32.517541 2482 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-c8v2d" podStartSLOduration=6.223722733 podStartE2EDuration="18.517525006s" podCreationTimestamp="2025-07-11 00:07:14 +0000 UTC" firstStartedPulling="2025-07-11 00:07:14.990318646 +0000 UTC m=+7.689249778" lastFinishedPulling="2025-07-11 00:07:27.284120919 +0000 UTC m=+19.983052051" observedRunningTime="2025-07-11 00:07:32.517145753 +0000 UTC m=+25.216077205" watchObservedRunningTime="2025-07-11 00:07:32.517525006 +0000 UTC m=+25.216456138" Jul 11 00:07:33.502037 kubelet[2482]: E0711 00:07:33.501971 2482 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:07:33.722920 systemd-networkd[1374]: cilium_host: Link UP Jul 11 00:07:33.723054 systemd-networkd[1374]: cilium_net: Link UP Jul 11 00:07:33.723203 systemd-networkd[1374]: cilium_net: Gained carrier Jul 11 00:07:33.723335 systemd-networkd[1374]: cilium_host: Gained carrier Jul 11 00:07:33.811366 
systemd-networkd[1374]: cilium_vxlan: Link UP Jul 11 00:07:33.811379 systemd-networkd[1374]: cilium_vxlan: Gained carrier Jul 11 00:07:33.967487 systemd-networkd[1374]: cilium_net: Gained IPv6LL Jul 11 00:07:34.103174 kernel: NET: Registered PF_ALG protocol family Jul 11 00:07:34.424470 systemd-networkd[1374]: cilium_host: Gained IPv6LL Jul 11 00:07:34.504135 kubelet[2482]: E0711 00:07:34.504074 2482 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:07:34.685659 systemd-networkd[1374]: lxc_health: Link UP Jul 11 00:07:34.694575 systemd-networkd[1374]: lxc_health: Gained carrier Jul 11 00:07:35.189854 systemd-networkd[1374]: lxc245db8ce7643: Link UP Jul 11 00:07:35.199748 systemd-networkd[1374]: lxcc82c6fcfa602: Link UP Jul 11 00:07:35.220165 kernel: eth0: renamed from tmpfa762 Jul 11 00:07:35.228287 kernel: eth0: renamed from tmp2204a Jul 11 00:07:35.236003 systemd-networkd[1374]: lxc245db8ce7643: Gained carrier Jul 11 00:07:35.236382 systemd-networkd[1374]: lxcc82c6fcfa602: Gained carrier Jul 11 00:07:35.505524 kubelet[2482]: E0711 00:07:35.505169 2482 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:07:35.512556 systemd-networkd[1374]: cilium_vxlan: Gained IPv6LL Jul 11 00:07:35.704466 systemd-networkd[1374]: lxc_health: Gained IPv6LL Jul 11 00:07:36.343555 systemd-networkd[1374]: lxc245db8ce7643: Gained IPv6LL Jul 11 00:07:36.506846 kubelet[2482]: E0711 00:07:36.506814 2482 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:07:37.047440 systemd-networkd[1374]: lxcc82c6fcfa602: Gained IPv6LL Jul 11 00:07:37.284444 systemd[1]: Started 
sshd@7-10.0.0.37:22-10.0.0.1:37260.service - OpenSSH per-connection server daemon (10.0.0.1:37260). Jul 11 00:07:37.322035 sshd[3735]: Accepted publickey for core from 10.0.0.1 port 37260 ssh2: RSA SHA256:GK2LEBRiSxxQSb7NJczWsRz9vp5Z0addujXbSKx/c/M Jul 11 00:07:37.323388 sshd[3735]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 00:07:37.327826 systemd-logind[1420]: New session 8 of user core. Jul 11 00:07:37.333439 systemd[1]: Started session-8.scope - Session 8 of User core. Jul 11 00:07:37.475033 sshd[3735]: pam_unix(sshd:session): session closed for user core Jul 11 00:07:37.480187 systemd[1]: sshd@7-10.0.0.37:22-10.0.0.1:37260.service: Deactivated successfully. Jul 11 00:07:37.482075 systemd[1]: session-8.scope: Deactivated successfully. Jul 11 00:07:37.483012 systemd-logind[1420]: Session 8 logged out. Waiting for processes to exit. Jul 11 00:07:37.483949 systemd-logind[1420]: Removed session 8. Jul 11 00:07:37.508548 kubelet[2482]: E0711 00:07:37.508433 2482 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:07:38.830480 containerd[1434]: time="2025-07-11T00:07:38.829789966Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 11 00:07:38.830982 containerd[1434]: time="2025-07-11T00:07:38.830291580Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 11 00:07:38.830982 containerd[1434]: time="2025-07-11T00:07:38.830568148Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 11 00:07:38.830982 containerd[1434]: time="2025-07-11T00:07:38.830589749Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 11 00:07:38.830982 containerd[1434]: time="2025-07-11T00:07:38.830741353Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 11 00:07:38.836591 containerd[1434]: time="2025-07-11T00:07:38.832234436Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 11 00:07:38.836591 containerd[1434]: time="2025-07-11T00:07:38.832270717Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 11 00:07:38.836591 containerd[1434]: time="2025-07-11T00:07:38.832536685Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 11 00:07:38.857270 systemd[1]: Started cri-containerd-2204a866060215209fce4634649f63cffa3ffb5c917ee0a48107a8d9dd076794.scope - libcontainer container 2204a866060215209fce4634649f63cffa3ffb5c917ee0a48107a8d9dd076794. Jul 11 00:07:38.858687 systemd[1]: Started cri-containerd-fa76294480d9a5bd3586a15ce7dbce38cadce0a53df135fde690f16cfb7d3b35.scope - libcontainer container fa76294480d9a5bd3586a15ce7dbce38cadce0a53df135fde690f16cfb7d3b35. 
Jul 11 00:07:38.869672 systemd-resolved[1307]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 11 00:07:38.869835 systemd-resolved[1307]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 11 00:07:38.888310 containerd[1434]: time="2025-07-11T00:07:38.888175329Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-8pbbb,Uid:ecbc7283-57a3-4e20-ac9b-71fe546898b9,Namespace:kube-system,Attempt:0,} returns sandbox id \"fa76294480d9a5bd3586a15ce7dbce38cadce0a53df135fde690f16cfb7d3b35\"" Jul 11 00:07:38.888778 kubelet[2482]: E0711 00:07:38.888745 2482 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:07:38.891916 containerd[1434]: time="2025-07-11T00:07:38.891515706Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-6vcdx,Uid:f0e0d49c-0f67-492d-8895-dd2c9a3777a0,Namespace:kube-system,Attempt:0,} returns sandbox id \"2204a866060215209fce4634649f63cffa3ffb5c917ee0a48107a8d9dd076794\"" Jul 11 00:07:38.892935 kubelet[2482]: E0711 00:07:38.892915 2482 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:07:38.894610 containerd[1434]: time="2025-07-11T00:07:38.894566194Z" level=info msg="CreateContainer within sandbox \"fa76294480d9a5bd3586a15ce7dbce38cadce0a53df135fde690f16cfb7d3b35\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 11 00:07:38.897073 containerd[1434]: time="2025-07-11T00:07:38.896982583Z" level=info msg="CreateContainer within sandbox \"2204a866060215209fce4634649f63cffa3ffb5c917ee0a48107a8d9dd076794\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 11 00:07:38.909769 containerd[1434]: 
time="2025-07-11T00:07:38.909567506Z" level=info msg="CreateContainer within sandbox \"fa76294480d9a5bd3586a15ce7dbce38cadce0a53df135fde690f16cfb7d3b35\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"a4fcb33107302c93decea76ed7bfad3a0f419d5f17da4968ed8698d66b82822e\"" Jul 11 00:07:38.910220 containerd[1434]: time="2025-07-11T00:07:38.910186684Z" level=info msg="StartContainer for \"a4fcb33107302c93decea76ed7bfad3a0f419d5f17da4968ed8698d66b82822e\"" Jul 11 00:07:38.912375 containerd[1434]: time="2025-07-11T00:07:38.912285425Z" level=info msg="CreateContainer within sandbox \"2204a866060215209fce4634649f63cffa3ffb5c917ee0a48107a8d9dd076794\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"e2cd21df439d27f2caaa06824acf981a726ee76a64c53b20a286ee4db3283614\"" Jul 11 00:07:38.913642 containerd[1434]: time="2025-07-11T00:07:38.912771479Z" level=info msg="StartContainer for \"e2cd21df439d27f2caaa06824acf981a726ee76a64c53b20a286ee4db3283614\"" Jul 11 00:07:38.946281 systemd[1]: Started cri-containerd-a4fcb33107302c93decea76ed7bfad3a0f419d5f17da4968ed8698d66b82822e.scope - libcontainer container a4fcb33107302c93decea76ed7bfad3a0f419d5f17da4968ed8698d66b82822e. Jul 11 00:07:38.947300 systemd[1]: Started cri-containerd-e2cd21df439d27f2caaa06824acf981a726ee76a64c53b20a286ee4db3283614.scope - libcontainer container e2cd21df439d27f2caaa06824acf981a726ee76a64c53b20a286ee4db3283614. 
Jul 11 00:07:38.990932 containerd[1434]: time="2025-07-11T00:07:38.990699006Z" level=info msg="StartContainer for \"e2cd21df439d27f2caaa06824acf981a726ee76a64c53b20a286ee4db3283614\" returns successfully" Jul 11 00:07:38.990932 containerd[1434]: time="2025-07-11T00:07:38.990784409Z" level=info msg="StartContainer for \"a4fcb33107302c93decea76ed7bfad3a0f419d5f17da4968ed8698d66b82822e\" returns successfully" Jul 11 00:07:39.512795 kubelet[2482]: E0711 00:07:39.512705 2482 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:07:39.516868 kubelet[2482]: E0711 00:07:39.516786 2482 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:07:39.523882 kubelet[2482]: I0711 00:07:39.523292 2482 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-8pbbb" podStartSLOduration=24.523277526 podStartE2EDuration="24.523277526s" podCreationTimestamp="2025-07-11 00:07:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-11 00:07:39.523049279 +0000 UTC m=+32.221980371" watchObservedRunningTime="2025-07-11 00:07:39.523277526 +0000 UTC m=+32.222208658" Jul 11 00:07:39.535780 kubelet[2482]: I0711 00:07:39.535719 2482 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-6vcdx" podStartSLOduration=24.535701432 podStartE2EDuration="24.535701432s" podCreationTimestamp="2025-07-11 00:07:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-11 00:07:39.535379503 +0000 UTC m=+32.234310635" watchObservedRunningTime="2025-07-11 00:07:39.535701432 +0000 UTC 
m=+32.234632564" Jul 11 00:07:40.518478 kubelet[2482]: E0711 00:07:40.518249 2482 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:07:40.518478 kubelet[2482]: E0711 00:07:40.518405 2482 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:07:41.519342 kubelet[2482]: E0711 00:07:41.519303 2482 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:07:41.519698 kubelet[2482]: E0711 00:07:41.519447 2482 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:07:42.492925 systemd[1]: Started sshd@8-10.0.0.37:22-10.0.0.1:37246.service - OpenSSH per-connection server daemon (10.0.0.1:37246). Jul 11 00:07:42.532121 sshd[3924]: Accepted publickey for core from 10.0.0.1 port 37246 ssh2: RSA SHA256:GK2LEBRiSxxQSb7NJczWsRz9vp5Z0addujXbSKx/c/M Jul 11 00:07:42.533905 sshd[3924]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 00:07:42.537932 systemd-logind[1420]: New session 9 of user core. Jul 11 00:07:42.547279 systemd[1]: Started session-9.scope - Session 9 of User core. Jul 11 00:07:42.662360 sshd[3924]: pam_unix(sshd:session): session closed for user core Jul 11 00:07:42.666045 systemd-logind[1420]: Session 9 logged out. Waiting for processes to exit. Jul 11 00:07:42.666310 systemd[1]: sshd@8-10.0.0.37:22-10.0.0.1:37246.service: Deactivated successfully. Jul 11 00:07:42.668611 systemd[1]: session-9.scope: Deactivated successfully. Jul 11 00:07:42.669777 systemd-logind[1420]: Removed session 9. 
Jul 11 00:07:47.676700 systemd[1]: Started sshd@9-10.0.0.37:22-10.0.0.1:37254.service - OpenSSH per-connection server daemon (10.0.0.1:37254). Jul 11 00:07:47.716058 sshd[3945]: Accepted publickey for core from 10.0.0.1 port 37254 ssh2: RSA SHA256:GK2LEBRiSxxQSb7NJczWsRz9vp5Z0addujXbSKx/c/M Jul 11 00:07:47.717387 sshd[3945]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 00:07:47.721855 systemd-logind[1420]: New session 10 of user core. Jul 11 00:07:47.729352 systemd[1]: Started session-10.scope - Session 10 of User core. Jul 11 00:07:47.850244 sshd[3945]: pam_unix(sshd:session): session closed for user core Jul 11 00:07:47.852960 systemd[1]: sshd@9-10.0.0.37:22-10.0.0.1:37254.service: Deactivated successfully. Jul 11 00:07:47.855007 systemd[1]: session-10.scope: Deactivated successfully. Jul 11 00:07:47.856616 systemd-logind[1420]: Session 10 logged out. Waiting for processes to exit. Jul 11 00:07:47.857698 systemd-logind[1420]: Removed session 10. Jul 11 00:07:52.871911 systemd[1]: Started sshd@10-10.0.0.37:22-10.0.0.1:40530.service - OpenSSH per-connection server daemon (10.0.0.1:40530). Jul 11 00:07:52.908470 sshd[3961]: Accepted publickey for core from 10.0.0.1 port 40530 ssh2: RSA SHA256:GK2LEBRiSxxQSb7NJczWsRz9vp5Z0addujXbSKx/c/M Jul 11 00:07:52.910203 sshd[3961]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 00:07:52.914679 systemd-logind[1420]: New session 11 of user core. Jul 11 00:07:52.921274 systemd[1]: Started session-11.scope - Session 11 of User core. Jul 11 00:07:53.029014 sshd[3961]: pam_unix(sshd:session): session closed for user core Jul 11 00:07:53.039538 systemd[1]: sshd@10-10.0.0.37:22-10.0.0.1:40530.service: Deactivated successfully. Jul 11 00:07:53.041344 systemd[1]: session-11.scope: Deactivated successfully. Jul 11 00:07:53.042760 systemd-logind[1420]: Session 11 logged out. Waiting for processes to exit. 
Jul 11 00:07:53.053605 systemd[1]: Started sshd@11-10.0.0.37:22-10.0.0.1:40540.service - OpenSSH per-connection server daemon (10.0.0.1:40540). Jul 11 00:07:53.054399 systemd-logind[1420]: Removed session 11. Jul 11 00:07:53.084096 sshd[3976]: Accepted publickey for core from 10.0.0.1 port 40540 ssh2: RSA SHA256:GK2LEBRiSxxQSb7NJczWsRz9vp5Z0addujXbSKx/c/M Jul 11 00:07:53.085411 sshd[3976]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 00:07:53.088961 systemd-logind[1420]: New session 12 of user core. Jul 11 00:07:53.095273 systemd[1]: Started session-12.scope - Session 12 of User core. Jul 11 00:07:53.257499 sshd[3976]: pam_unix(sshd:session): session closed for user core Jul 11 00:07:53.271019 systemd[1]: sshd@11-10.0.0.37:22-10.0.0.1:40540.service: Deactivated successfully. Jul 11 00:07:53.272575 systemd[1]: session-12.scope: Deactivated successfully. Jul 11 00:07:53.274169 systemd-logind[1420]: Session 12 logged out. Waiting for processes to exit. Jul 11 00:07:53.282411 systemd[1]: Started sshd@12-10.0.0.37:22-10.0.0.1:40550.service - OpenSSH per-connection server daemon (10.0.0.1:40550). Jul 11 00:07:53.285358 systemd-logind[1420]: Removed session 12. Jul 11 00:07:53.319640 sshd[3989]: Accepted publickey for core from 10.0.0.1 port 40550 ssh2: RSA SHA256:GK2LEBRiSxxQSb7NJczWsRz9vp5Z0addujXbSKx/c/M Jul 11 00:07:53.321017 sshd[3989]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 00:07:53.325397 systemd-logind[1420]: New session 13 of user core. Jul 11 00:07:53.332409 systemd[1]: Started session-13.scope - Session 13 of User core. Jul 11 00:07:53.447804 sshd[3989]: pam_unix(sshd:session): session closed for user core Jul 11 00:07:53.451768 systemd[1]: sshd@12-10.0.0.37:22-10.0.0.1:40550.service: Deactivated successfully. Jul 11 00:07:53.453445 systemd[1]: session-13.scope: Deactivated successfully. Jul 11 00:07:53.453974 systemd-logind[1420]: Session 13 logged out. 
Waiting for processes to exit. Jul 11 00:07:53.454751 systemd-logind[1420]: Removed session 13. Jul 11 00:07:58.461789 systemd[1]: Started sshd@13-10.0.0.37:22-10.0.0.1:40556.service - OpenSSH per-connection server daemon (10.0.0.1:40556). Jul 11 00:07:58.495736 sshd[4004]: Accepted publickey for core from 10.0.0.1 port 40556 ssh2: RSA SHA256:GK2LEBRiSxxQSb7NJczWsRz9vp5Z0addujXbSKx/c/M Jul 11 00:07:58.497095 sshd[4004]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 00:07:58.500630 systemd-logind[1420]: New session 14 of user core. Jul 11 00:07:58.508281 systemd[1]: Started session-14.scope - Session 14 of User core. Jul 11 00:07:58.613724 sshd[4004]: pam_unix(sshd:session): session closed for user core Jul 11 00:07:58.616962 systemd[1]: sshd@13-10.0.0.37:22-10.0.0.1:40556.service: Deactivated successfully. Jul 11 00:07:58.618587 systemd[1]: session-14.scope: Deactivated successfully. Jul 11 00:07:58.619469 systemd-logind[1420]: Session 14 logged out. Waiting for processes to exit. Jul 11 00:07:58.620473 systemd-logind[1420]: Removed session 14. Jul 11 00:08:03.627783 systemd[1]: Started sshd@14-10.0.0.37:22-10.0.0.1:33628.service - OpenSSH per-connection server daemon (10.0.0.1:33628). Jul 11 00:08:03.661787 sshd[4018]: Accepted publickey for core from 10.0.0.1 port 33628 ssh2: RSA SHA256:GK2LEBRiSxxQSb7NJczWsRz9vp5Z0addujXbSKx/c/M Jul 11 00:08:03.663108 sshd[4018]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 00:08:03.667407 systemd-logind[1420]: New session 15 of user core. Jul 11 00:08:03.681484 systemd[1]: Started session-15.scope - Session 15 of User core. Jul 11 00:08:03.799932 sshd[4018]: pam_unix(sshd:session): session closed for user core Jul 11 00:08:03.813687 systemd[1]: sshd@14-10.0.0.37:22-10.0.0.1:33628.service: Deactivated successfully. Jul 11 00:08:03.815328 systemd[1]: session-15.scope: Deactivated successfully. 
Jul 11 00:08:03.816785 systemd-logind[1420]: Session 15 logged out. Waiting for processes to exit. Jul 11 00:08:03.818235 systemd[1]: Started sshd@15-10.0.0.37:22-10.0.0.1:33636.service - OpenSSH per-connection server daemon (10.0.0.1:33636). Jul 11 00:08:03.819046 systemd-logind[1420]: Removed session 15. Jul 11 00:08:03.854521 sshd[4033]: Accepted publickey for core from 10.0.0.1 port 33636 ssh2: RSA SHA256:GK2LEBRiSxxQSb7NJczWsRz9vp5Z0addujXbSKx/c/M Jul 11 00:08:03.855940 sshd[4033]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 00:08:03.860189 systemd-logind[1420]: New session 16 of user core. Jul 11 00:08:03.869273 systemd[1]: Started session-16.scope - Session 16 of User core. Jul 11 00:08:04.072011 sshd[4033]: pam_unix(sshd:session): session closed for user core Jul 11 00:08:04.082811 systemd[1]: sshd@15-10.0.0.37:22-10.0.0.1:33636.service: Deactivated successfully. Jul 11 00:08:04.084412 systemd[1]: session-16.scope: Deactivated successfully. Jul 11 00:08:04.085687 systemd-logind[1420]: Session 16 logged out. Waiting for processes to exit. Jul 11 00:08:04.086921 systemd[1]: Started sshd@16-10.0.0.37:22-10.0.0.1:33646.service - OpenSSH per-connection server daemon (10.0.0.1:33646). Jul 11 00:08:04.087774 systemd-logind[1420]: Removed session 16. Jul 11 00:08:04.124352 sshd[4046]: Accepted publickey for core from 10.0.0.1 port 33646 ssh2: RSA SHA256:GK2LEBRiSxxQSb7NJczWsRz9vp5Z0addujXbSKx/c/M Jul 11 00:08:04.125926 sshd[4046]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 00:08:04.129966 systemd-logind[1420]: New session 17 of user core. Jul 11 00:08:04.137262 systemd[1]: Started session-17.scope - Session 17 of User core. Jul 11 00:08:04.881335 sshd[4046]: pam_unix(sshd:session): session closed for user core Jul 11 00:08:04.894293 systemd[1]: sshd@16-10.0.0.37:22-10.0.0.1:33646.service: Deactivated successfully. 
Jul 11 00:08:04.899226 systemd[1]: session-17.scope: Deactivated successfully. Jul 11 00:08:04.901497 systemd-logind[1420]: Session 17 logged out. Waiting for processes to exit. Jul 11 00:08:04.910811 systemd[1]: Started sshd@17-10.0.0.37:22-10.0.0.1:33660.service - OpenSSH per-connection server daemon (10.0.0.1:33660). Jul 11 00:08:04.916486 systemd-logind[1420]: Removed session 17. Jul 11 00:08:04.946884 sshd[4066]: Accepted publickey for core from 10.0.0.1 port 33660 ssh2: RSA SHA256:GK2LEBRiSxxQSb7NJczWsRz9vp5Z0addujXbSKx/c/M Jul 11 00:08:04.948482 sshd[4066]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 00:08:04.952532 systemd-logind[1420]: New session 18 of user core. Jul 11 00:08:04.961281 systemd[1]: Started session-18.scope - Session 18 of User core. Jul 11 00:08:05.183682 sshd[4066]: pam_unix(sshd:session): session closed for user core Jul 11 00:08:05.192375 systemd[1]: sshd@17-10.0.0.37:22-10.0.0.1:33660.service: Deactivated successfully. Jul 11 00:08:05.195351 systemd[1]: session-18.scope: Deactivated successfully. Jul 11 00:08:05.197860 systemd-logind[1420]: Session 18 logged out. Waiting for processes to exit. Jul 11 00:08:05.207399 systemd[1]: Started sshd@18-10.0.0.37:22-10.0.0.1:33666.service - OpenSSH per-connection server daemon (10.0.0.1:33666). Jul 11 00:08:05.208414 systemd-logind[1420]: Removed session 18. Jul 11 00:08:05.239375 sshd[4078]: Accepted publickey for core from 10.0.0.1 port 33666 ssh2: RSA SHA256:GK2LEBRiSxxQSb7NJczWsRz9vp5Z0addujXbSKx/c/M Jul 11 00:08:05.240773 sshd[4078]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 00:08:05.245233 systemd-logind[1420]: New session 19 of user core. Jul 11 00:08:05.254327 systemd[1]: Started session-19.scope - Session 19 of User core. 
Jul 11 00:08:05.365562 sshd[4078]: pam_unix(sshd:session): session closed for user core Jul 11 00:08:05.368927 systemd[1]: sshd@18-10.0.0.37:22-10.0.0.1:33666.service: Deactivated successfully. Jul 11 00:08:05.370727 systemd[1]: session-19.scope: Deactivated successfully. Jul 11 00:08:05.372910 systemd-logind[1420]: Session 19 logged out. Waiting for processes to exit. Jul 11 00:08:05.374179 systemd-logind[1420]: Removed session 19. Jul 11 00:08:10.377509 systemd[1]: Started sshd@19-10.0.0.37:22-10.0.0.1:33680.service - OpenSSH per-connection server daemon (10.0.0.1:33680). Jul 11 00:08:10.410723 sshd[4097]: Accepted publickey for core from 10.0.0.1 port 33680 ssh2: RSA SHA256:GK2LEBRiSxxQSb7NJczWsRz9vp5Z0addujXbSKx/c/M Jul 11 00:08:10.411929 sshd[4097]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 00:08:10.415362 systemd-logind[1420]: New session 20 of user core. Jul 11 00:08:10.426320 systemd[1]: Started session-20.scope - Session 20 of User core. Jul 11 00:08:10.531428 sshd[4097]: pam_unix(sshd:session): session closed for user core Jul 11 00:08:10.534062 systemd[1]: sshd@19-10.0.0.37:22-10.0.0.1:33680.service: Deactivated successfully. Jul 11 00:08:10.536729 systemd[1]: session-20.scope: Deactivated successfully. Jul 11 00:08:10.538009 systemd-logind[1420]: Session 20 logged out. Waiting for processes to exit. Jul 11 00:08:10.539075 systemd-logind[1420]: Removed session 20. Jul 11 00:08:15.541805 systemd[1]: Started sshd@20-10.0.0.37:22-10.0.0.1:33188.service - OpenSSH per-connection server daemon (10.0.0.1:33188). Jul 11 00:08:15.584634 sshd[4113]: Accepted publickey for core from 10.0.0.1 port 33188 ssh2: RSA SHA256:GK2LEBRiSxxQSb7NJczWsRz9vp5Z0addujXbSKx/c/M Jul 11 00:08:15.585937 sshd[4113]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 00:08:15.589894 systemd-logind[1420]: New session 21 of user core. 
Jul 11 00:08:15.598308 systemd[1]: Started session-21.scope - Session 21 of User core. Jul 11 00:08:15.702240 sshd[4113]: pam_unix(sshd:session): session closed for user core Jul 11 00:08:15.705101 systemd[1]: sshd@20-10.0.0.37:22-10.0.0.1:33188.service: Deactivated successfully. Jul 11 00:08:15.708011 systemd[1]: session-21.scope: Deactivated successfully. Jul 11 00:08:15.709846 systemd-logind[1420]: Session 21 logged out. Waiting for processes to exit. Jul 11 00:08:15.711162 systemd-logind[1420]: Removed session 21. Jul 11 00:08:18.389646 kubelet[2482]: E0711 00:08:18.389603 2482 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:08:20.712931 systemd[1]: Started sshd@21-10.0.0.37:22-10.0.0.1:33194.service - OpenSSH per-connection server daemon (10.0.0.1:33194). Jul 11 00:08:20.746599 sshd[4128]: Accepted publickey for core from 10.0.0.1 port 33194 ssh2: RSA SHA256:GK2LEBRiSxxQSb7NJczWsRz9vp5Z0addujXbSKx/c/M Jul 11 00:08:20.747868 sshd[4128]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 00:08:20.751931 systemd-logind[1420]: New session 22 of user core. Jul 11 00:08:20.768305 systemd[1]: Started session-22.scope - Session 22 of User core. Jul 11 00:08:20.873109 sshd[4128]: pam_unix(sshd:session): session closed for user core Jul 11 00:08:20.886800 systemd[1]: sshd@21-10.0.0.37:22-10.0.0.1:33194.service: Deactivated successfully. Jul 11 00:08:20.889517 systemd[1]: session-22.scope: Deactivated successfully. Jul 11 00:08:20.890843 systemd-logind[1420]: Session 22 logged out. Waiting for processes to exit. Jul 11 00:08:20.896376 systemd[1]: Started sshd@22-10.0.0.37:22-10.0.0.1:33210.service - OpenSSH per-connection server daemon (10.0.0.1:33210). Jul 11 00:08:20.897176 systemd-logind[1420]: Removed session 22. 
Jul 11 00:08:20.925974 sshd[4143]: Accepted publickey for core from 10.0.0.1 port 33210 ssh2: RSA SHA256:GK2LEBRiSxxQSb7NJczWsRz9vp5Z0addujXbSKx/c/M Jul 11 00:08:20.927249 sshd[4143]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 00:08:20.930996 systemd-logind[1420]: New session 23 of user core. Jul 11 00:08:20.937269 systemd[1]: Started session-23.scope - Session 23 of User core. Jul 11 00:08:23.111105 containerd[1434]: time="2025-07-11T00:08:23.110775995Z" level=info msg="StopContainer for \"2ee28684b008d9c11cf483ffe9f1be4e4915660db86c63ed910f2475df3c24e7\" with timeout 30 (s)" Jul 11 00:08:23.111653 containerd[1434]: time="2025-07-11T00:08:23.111206993Z" level=info msg="Stop container \"2ee28684b008d9c11cf483ffe9f1be4e4915660db86c63ed910f2475df3c24e7\" with signal terminated" Jul 11 00:08:23.121164 systemd[1]: cri-containerd-2ee28684b008d9c11cf483ffe9f1be4e4915660db86c63ed910f2475df3c24e7.scope: Deactivated successfully. Jul 11 00:08:23.141956 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2ee28684b008d9c11cf483ffe9f1be4e4915660db86c63ed910f2475df3c24e7-rootfs.mount: Deactivated successfully. 
Jul 11 00:08:23.149139 containerd[1434]: time="2025-07-11T00:08:23.149082735Z" level=info msg="StopContainer for \"7f3b42216747811976231c1b9b74e7df8ad5c7479e2ac5d5ae9c83100d61b04b\" with timeout 2 (s)" Jul 11 00:08:23.149856 containerd[1434]: time="2025-07-11T00:08:23.149332494Z" level=info msg="Stop container \"7f3b42216747811976231c1b9b74e7df8ad5c7479e2ac5d5ae9c83100d61b04b\" with signal terminated" Jul 11 00:08:23.149856 containerd[1434]: time="2025-07-11T00:08:23.149808172Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 11 00:08:23.151582 containerd[1434]: time="2025-07-11T00:08:23.151536524Z" level=info msg="shim disconnected" id=2ee28684b008d9c11cf483ffe9f1be4e4915660db86c63ed910f2475df3c24e7 namespace=k8s.io Jul 11 00:08:23.151582 containerd[1434]: time="2025-07-11T00:08:23.151581563Z" level=warning msg="cleaning up after shim disconnected" id=2ee28684b008d9c11cf483ffe9f1be4e4915660db86c63ed910f2475df3c24e7 namespace=k8s.io Jul 11 00:08:23.151708 containerd[1434]: time="2025-07-11T00:08:23.151590163Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 11 00:08:23.155845 systemd-networkd[1374]: lxc_health: Link DOWN Jul 11 00:08:23.155851 systemd-networkd[1374]: lxc_health: Lost carrier Jul 11 00:08:23.189657 systemd[1]: cri-containerd-7f3b42216747811976231c1b9b74e7df8ad5c7479e2ac5d5ae9c83100d61b04b.scope: Deactivated successfully. Jul 11 00:08:23.189957 systemd[1]: cri-containerd-7f3b42216747811976231c1b9b74e7df8ad5c7479e2ac5d5ae9c83100d61b04b.scope: Consumed 6.428s CPU time. 
Jul 11 00:08:23.195106 containerd[1434]: time="2025-07-11T00:08:23.194958879Z" level=info msg="StopContainer for \"2ee28684b008d9c11cf483ffe9f1be4e4915660db86c63ed910f2475df3c24e7\" returns successfully" Jul 11 00:08:23.196050 containerd[1434]: time="2025-07-11T00:08:23.195923515Z" level=info msg="StopPodSandbox for \"06b92009a3c66c6d6355871e6bed3ffd5784b77c963347acc0e7be24d0f7dc30\"" Jul 11 00:08:23.196050 containerd[1434]: time="2025-07-11T00:08:23.195954355Z" level=info msg="Container to stop \"2ee28684b008d9c11cf483ffe9f1be4e4915660db86c63ed910f2475df3c24e7\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 11 00:08:23.198016 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-06b92009a3c66c6d6355871e6bed3ffd5784b77c963347acc0e7be24d0f7dc30-shm.mount: Deactivated successfully. Jul 11 00:08:23.203529 systemd[1]: cri-containerd-06b92009a3c66c6d6355871e6bed3ffd5784b77c963347acc0e7be24d0f7dc30.scope: Deactivated successfully. Jul 11 00:08:23.213436 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7f3b42216747811976231c1b9b74e7df8ad5c7479e2ac5d5ae9c83100d61b04b-rootfs.mount: Deactivated successfully. Jul 11 00:08:23.217318 containerd[1434]: time="2025-07-11T00:08:23.217217215Z" level=info msg="shim disconnected" id=7f3b42216747811976231c1b9b74e7df8ad5c7479e2ac5d5ae9c83100d61b04b namespace=k8s.io Jul 11 00:08:23.217318 containerd[1434]: time="2025-07-11T00:08:23.217291254Z" level=warning msg="cleaning up after shim disconnected" id=7f3b42216747811976231c1b9b74e7df8ad5c7479e2ac5d5ae9c83100d61b04b namespace=k8s.io Jul 11 00:08:23.217318 containerd[1434]: time="2025-07-11T00:08:23.217300054Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 11 00:08:23.231066 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-06b92009a3c66c6d6355871e6bed3ffd5784b77c963347acc0e7be24d0f7dc30-rootfs.mount: Deactivated successfully. 
Jul 11 00:08:23.236556 containerd[1434]: time="2025-07-11T00:08:23.236506644Z" level=info msg="StopContainer for \"7f3b42216747811976231c1b9b74e7df8ad5c7479e2ac5d5ae9c83100d61b04b\" returns successfully" Jul 11 00:08:23.236974 containerd[1434]: time="2025-07-11T00:08:23.236940242Z" level=info msg="StopPodSandbox for \"689ad7475e16bcb1ac17455cd921d9a50a6483e992a771f482954e8987bc22a1\"" Jul 11 00:08:23.237023 containerd[1434]: time="2025-07-11T00:08:23.236979962Z" level=info msg="Container to stop \"2a732d13f6b6314b7148262d09f23b6dbd33bfe276aea3f3c7ba044fa937cc01\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 11 00:08:23.237023 containerd[1434]: time="2025-07-11T00:08:23.236994682Z" level=info msg="Container to stop \"7f3b42216747811976231c1b9b74e7df8ad5c7479e2ac5d5ae9c83100d61b04b\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 11 00:08:23.237023 containerd[1434]: time="2025-07-11T00:08:23.237005322Z" level=info msg="Container to stop \"6e22bd82c8e23a7b53e776b381f2bbc14eddc34fd7c2c9017da99ec81f9c14ec\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 11 00:08:23.237023 containerd[1434]: time="2025-07-11T00:08:23.237015282Z" level=info msg="Container to stop \"f97ea73bac8f7f8cf93beaee50841a09d03d3690753dc41b32f3a6c027e0c409\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 11 00:08:23.237166 containerd[1434]: time="2025-07-11T00:08:23.237025042Z" level=info msg="Container to stop \"08d77f52bdbd59f92b52ad8bb5baceda83748d23cecde59082f7e538afcd0523\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 11 00:08:23.237579 containerd[1434]: time="2025-07-11T00:08:23.237518599Z" level=info msg="shim disconnected" id=06b92009a3c66c6d6355871e6bed3ffd5784b77c963347acc0e7be24d0f7dc30 namespace=k8s.io Jul 11 00:08:23.237634 containerd[1434]: time="2025-07-11T00:08:23.237580559Z" level=warning msg="cleaning up after shim disconnected" 
id=06b92009a3c66c6d6355871e6bed3ffd5784b77c963347acc0e7be24d0f7dc30 namespace=k8s.io Jul 11 00:08:23.237634 containerd[1434]: time="2025-07-11T00:08:23.237595479Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 11 00:08:23.242600 systemd[1]: cri-containerd-689ad7475e16bcb1ac17455cd921d9a50a6483e992a771f482954e8987bc22a1.scope: Deactivated successfully. Jul 11 00:08:23.247590 containerd[1434]: time="2025-07-11T00:08:23.247544632Z" level=warning msg="cleanup warnings time=\"2025-07-11T00:08:23Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Jul 11 00:08:23.249487 containerd[1434]: time="2025-07-11T00:08:23.249436063Z" level=info msg="TearDown network for sandbox \"06b92009a3c66c6d6355871e6bed3ffd5784b77c963347acc0e7be24d0f7dc30\" successfully" Jul 11 00:08:23.249487 containerd[1434]: time="2025-07-11T00:08:23.249480383Z" level=info msg="StopPodSandbox for \"06b92009a3c66c6d6355871e6bed3ffd5784b77c963347acc0e7be24d0f7dc30\" returns successfully" Jul 11 00:08:23.272248 containerd[1434]: time="2025-07-11T00:08:23.271952797Z" level=info msg="shim disconnected" id=689ad7475e16bcb1ac17455cd921d9a50a6483e992a771f482954e8987bc22a1 namespace=k8s.io Jul 11 00:08:23.272248 containerd[1434]: time="2025-07-11T00:08:23.272235236Z" level=warning msg="cleaning up after shim disconnected" id=689ad7475e16bcb1ac17455cd921d9a50a6483e992a771f482954e8987bc22a1 namespace=k8s.io Jul 11 00:08:23.272248 containerd[1434]: time="2025-07-11T00:08:23.272246596Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 11 00:08:23.284702 containerd[1434]: time="2025-07-11T00:08:23.284651738Z" level=info msg="TearDown network for sandbox \"689ad7475e16bcb1ac17455cd921d9a50a6483e992a771f482954e8987bc22a1\" successfully" Jul 11 00:08:23.284702 containerd[1434]: time="2025-07-11T00:08:23.284687018Z" level=info msg="StopPodSandbox for 
\"689ad7475e16bcb1ac17455cd921d9a50a6483e992a771f482954e8987bc22a1\" returns successfully" Jul 11 00:08:23.355015 kubelet[2482]: I0711 00:08:23.354967 2482 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/cf0342a8-0a8c-4e84-8e7a-d31e3271d1d5-cilium-run\") pod \"cf0342a8-0a8c-4e84-8e7a-d31e3271d1d5\" (UID: \"cf0342a8-0a8c-4e84-8e7a-d31e3271d1d5\") " Jul 11 00:08:23.355015 kubelet[2482]: I0711 00:08:23.355009 2482 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/cf0342a8-0a8c-4e84-8e7a-d31e3271d1d5-hostproc\") pod \"cf0342a8-0a8c-4e84-8e7a-d31e3271d1d5\" (UID: \"cf0342a8-0a8c-4e84-8e7a-d31e3271d1d5\") " Jul 11 00:08:23.355461 kubelet[2482]: I0711 00:08:23.355032 2482 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/cf0342a8-0a8c-4e84-8e7a-d31e3271d1d5-cilium-cgroup\") pod \"cf0342a8-0a8c-4e84-8e7a-d31e3271d1d5\" (UID: \"cf0342a8-0a8c-4e84-8e7a-d31e3271d1d5\") " Jul 11 00:08:23.355461 kubelet[2482]: I0711 00:08:23.355065 2482 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/cf0342a8-0a8c-4e84-8e7a-d31e3271d1d5-clustermesh-secrets\") pod \"cf0342a8-0a8c-4e84-8e7a-d31e3271d1d5\" (UID: \"cf0342a8-0a8c-4e84-8e7a-d31e3271d1d5\") " Jul 11 00:08:23.355461 kubelet[2482]: I0711 00:08:23.355084 2482 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/cf0342a8-0a8c-4e84-8e7a-d31e3271d1d5-cni-path\") pod \"cf0342a8-0a8c-4e84-8e7a-d31e3271d1d5\" (UID: \"cf0342a8-0a8c-4e84-8e7a-d31e3271d1d5\") " Jul 11 00:08:23.355461 kubelet[2482]: I0711 00:08:23.355105 2482 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: 
\"kubernetes.io/configmap/cf0342a8-0a8c-4e84-8e7a-d31e3271d1d5-cilium-config-path\") pod \"cf0342a8-0a8c-4e84-8e7a-d31e3271d1d5\" (UID: \"cf0342a8-0a8c-4e84-8e7a-d31e3271d1d5\") " Jul 11 00:08:23.355461 kubelet[2482]: I0711 00:08:23.355134 2482 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/cf0342a8-0a8c-4e84-8e7a-d31e3271d1d5-hubble-tls\") pod \"cf0342a8-0a8c-4e84-8e7a-d31e3271d1d5\" (UID: \"cf0342a8-0a8c-4e84-8e7a-d31e3271d1d5\") " Jul 11 00:08:23.355461 kubelet[2482]: I0711 00:08:23.355151 2482 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8jbnh\" (UniqueName: \"kubernetes.io/projected/35f62f86-2c45-4527-9daf-20678c88a94f-kube-api-access-8jbnh\") pod \"35f62f86-2c45-4527-9daf-20678c88a94f\" (UID: \"35f62f86-2c45-4527-9daf-20678c88a94f\") " Jul 11 00:08:23.355603 kubelet[2482]: I0711 00:08:23.355168 2482 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c8qj9\" (UniqueName: \"kubernetes.io/projected/cf0342a8-0a8c-4e84-8e7a-d31e3271d1d5-kube-api-access-c8qj9\") pod \"cf0342a8-0a8c-4e84-8e7a-d31e3271d1d5\" (UID: \"cf0342a8-0a8c-4e84-8e7a-d31e3271d1d5\") " Jul 11 00:08:23.355603 kubelet[2482]: I0711 00:08:23.355181 2482 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/cf0342a8-0a8c-4e84-8e7a-d31e3271d1d5-host-proc-sys-kernel\") pod \"cf0342a8-0a8c-4e84-8e7a-d31e3271d1d5\" (UID: \"cf0342a8-0a8c-4e84-8e7a-d31e3271d1d5\") " Jul 11 00:08:23.355603 kubelet[2482]: I0711 00:08:23.355197 2482 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/cf0342a8-0a8c-4e84-8e7a-d31e3271d1d5-lib-modules\") pod \"cf0342a8-0a8c-4e84-8e7a-d31e3271d1d5\" (UID: \"cf0342a8-0a8c-4e84-8e7a-d31e3271d1d5\") " Jul 11 00:08:23.355603 kubelet[2482]: 
I0711 00:08:23.355211 2482 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/cf0342a8-0a8c-4e84-8e7a-d31e3271d1d5-etc-cni-netd\") pod \"cf0342a8-0a8c-4e84-8e7a-d31e3271d1d5\" (UID: \"cf0342a8-0a8c-4e84-8e7a-d31e3271d1d5\") " Jul 11 00:08:23.355603 kubelet[2482]: I0711 00:08:23.355224 2482 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/cf0342a8-0a8c-4e84-8e7a-d31e3271d1d5-host-proc-sys-net\") pod \"cf0342a8-0a8c-4e84-8e7a-d31e3271d1d5\" (UID: \"cf0342a8-0a8c-4e84-8e7a-d31e3271d1d5\") " Jul 11 00:08:23.355603 kubelet[2482]: I0711 00:08:23.355239 2482 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/cf0342a8-0a8c-4e84-8e7a-d31e3271d1d5-xtables-lock\") pod \"cf0342a8-0a8c-4e84-8e7a-d31e3271d1d5\" (UID: \"cf0342a8-0a8c-4e84-8e7a-d31e3271d1d5\") " Jul 11 00:08:23.355724 kubelet[2482]: I0711 00:08:23.355252 2482 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/cf0342a8-0a8c-4e84-8e7a-d31e3271d1d5-bpf-maps\") pod \"cf0342a8-0a8c-4e84-8e7a-d31e3271d1d5\" (UID: \"cf0342a8-0a8c-4e84-8e7a-d31e3271d1d5\") " Jul 11 00:08:23.355724 kubelet[2482]: I0711 00:08:23.355267 2482 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/35f62f86-2c45-4527-9daf-20678c88a94f-cilium-config-path\") pod \"35f62f86-2c45-4527-9daf-20678c88a94f\" (UID: \"35f62f86-2c45-4527-9daf-20678c88a94f\") " Jul 11 00:08:23.358206 kubelet[2482]: I0711 00:08:23.357923 2482 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cf0342a8-0a8c-4e84-8e7a-d31e3271d1d5-cilium-run" (OuterVolumeSpecName: "cilium-run") pod 
"cf0342a8-0a8c-4e84-8e7a-d31e3271d1d5" (UID: "cf0342a8-0a8c-4e84-8e7a-d31e3271d1d5"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 11 00:08:23.358206 kubelet[2482]: I0711 00:08:23.357923 2482 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cf0342a8-0a8c-4e84-8e7a-d31e3271d1d5-hostproc" (OuterVolumeSpecName: "hostproc") pod "cf0342a8-0a8c-4e84-8e7a-d31e3271d1d5" (UID: "cf0342a8-0a8c-4e84-8e7a-d31e3271d1d5"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 11 00:08:23.358206 kubelet[2482]: I0711 00:08:23.357923 2482 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cf0342a8-0a8c-4e84-8e7a-d31e3271d1d5-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "cf0342a8-0a8c-4e84-8e7a-d31e3271d1d5" (UID: "cf0342a8-0a8c-4e84-8e7a-d31e3271d1d5"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 11 00:08:23.367062 kubelet[2482]: I0711 00:08:23.366950 2482 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cf0342a8-0a8c-4e84-8e7a-d31e3271d1d5-kube-api-access-c8qj9" (OuterVolumeSpecName: "kube-api-access-c8qj9") pod "cf0342a8-0a8c-4e84-8e7a-d31e3271d1d5" (UID: "cf0342a8-0a8c-4e84-8e7a-d31e3271d1d5"). InnerVolumeSpecName "kube-api-access-c8qj9". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jul 11 00:08:23.368103 kubelet[2482]: I0711 00:08:23.368059 2482 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cf0342a8-0a8c-4e84-8e7a-d31e3271d1d5-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "cf0342a8-0a8c-4e84-8e7a-d31e3271d1d5" (UID: "cf0342a8-0a8c-4e84-8e7a-d31e3271d1d5"). InnerVolumeSpecName "hubble-tls". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jul 11 00:08:23.368150 kubelet[2482]: I0711 00:08:23.368126 2482 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cf0342a8-0a8c-4e84-8e7a-d31e3271d1d5-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "cf0342a8-0a8c-4e84-8e7a-d31e3271d1d5" (UID: "cf0342a8-0a8c-4e84-8e7a-d31e3271d1d5"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 11 00:08:23.368150 kubelet[2482]: I0711 00:08:23.368145 2482 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cf0342a8-0a8c-4e84-8e7a-d31e3271d1d5-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "cf0342a8-0a8c-4e84-8e7a-d31e3271d1d5" (UID: "cf0342a8-0a8c-4e84-8e7a-d31e3271d1d5"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 11 00:08:23.368199 kubelet[2482]: I0711 00:08:23.368162 2482 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cf0342a8-0a8c-4e84-8e7a-d31e3271d1d5-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "cf0342a8-0a8c-4e84-8e7a-d31e3271d1d5" (UID: "cf0342a8-0a8c-4e84-8e7a-d31e3271d1d5"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 11 00:08:23.368199 kubelet[2482]: I0711 00:08:23.368178 2482 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cf0342a8-0a8c-4e84-8e7a-d31e3271d1d5-cni-path" (OuterVolumeSpecName: "cni-path") pod "cf0342a8-0a8c-4e84-8e7a-d31e3271d1d5" (UID: "cf0342a8-0a8c-4e84-8e7a-d31e3271d1d5"). InnerVolumeSpecName "cni-path". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 11 00:08:23.369727 kubelet[2482]: I0711 00:08:23.368466 2482 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cf0342a8-0a8c-4e84-8e7a-d31e3271d1d5-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "cf0342a8-0a8c-4e84-8e7a-d31e3271d1d5" (UID: "cf0342a8-0a8c-4e84-8e7a-d31e3271d1d5"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 11 00:08:23.369727 kubelet[2482]: I0711 00:08:23.368508 2482 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cf0342a8-0a8c-4e84-8e7a-d31e3271d1d5-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "cf0342a8-0a8c-4e84-8e7a-d31e3271d1d5" (UID: "cf0342a8-0a8c-4e84-8e7a-d31e3271d1d5"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 11 00:08:23.369727 kubelet[2482]: I0711 00:08:23.368523 2482 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cf0342a8-0a8c-4e84-8e7a-d31e3271d1d5-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "cf0342a8-0a8c-4e84-8e7a-d31e3271d1d5" (UID: "cf0342a8-0a8c-4e84-8e7a-d31e3271d1d5"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 11 00:08:23.370300 kubelet[2482]: I0711 00:08:23.370252 2482 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/35f62f86-2c45-4527-9daf-20678c88a94f-kube-api-access-8jbnh" (OuterVolumeSpecName: "kube-api-access-8jbnh") pod "35f62f86-2c45-4527-9daf-20678c88a94f" (UID: "35f62f86-2c45-4527-9daf-20678c88a94f"). InnerVolumeSpecName "kube-api-access-8jbnh". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jul 11 00:08:23.371210 kubelet[2482]: I0711 00:08:23.371175 2482 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cf0342a8-0a8c-4e84-8e7a-d31e3271d1d5-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "cf0342a8-0a8c-4e84-8e7a-d31e3271d1d5" (UID: "cf0342a8-0a8c-4e84-8e7a-d31e3271d1d5"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jul 11 00:08:23.371535 kubelet[2482]: I0711 00:08:23.371503 2482 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cf0342a8-0a8c-4e84-8e7a-d31e3271d1d5-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "cf0342a8-0a8c-4e84-8e7a-d31e3271d1d5" (UID: "cf0342a8-0a8c-4e84-8e7a-d31e3271d1d5"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jul 11 00:08:23.371720 kubelet[2482]: I0711 00:08:23.371694 2482 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/35f62f86-2c45-4527-9daf-20678c88a94f-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "35f62f86-2c45-4527-9daf-20678c88a94f" (UID: "35f62f86-2c45-4527-9daf-20678c88a94f"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jul 11 00:08:23.402025 systemd[1]: Removed slice kubepods-besteffort-pod35f62f86_2c45_4527_9daf_20678c88a94f.slice - libcontainer container kubepods-besteffort-pod35f62f86_2c45_4527_9daf_20678c88a94f.slice. Jul 11 00:08:23.403200 systemd[1]: Removed slice kubepods-burstable-podcf0342a8_0a8c_4e84_8e7a_d31e3271d1d5.slice - libcontainer container kubepods-burstable-podcf0342a8_0a8c_4e84_8e7a_d31e3271d1d5.slice. Jul 11 00:08:23.403329 systemd[1]: kubepods-burstable-podcf0342a8_0a8c_4e84_8e7a_d31e3271d1d5.slice: Consumed 6.563s CPU time. 
Jul 11 00:08:23.455812 kubelet[2482]: I0711 00:08:23.455759 2482 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/cf0342a8-0a8c-4e84-8e7a-d31e3271d1d5-lib-modules\") on node \"localhost\" DevicePath \"\"" Jul 11 00:08:23.455812 kubelet[2482]: I0711 00:08:23.455794 2482 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/cf0342a8-0a8c-4e84-8e7a-d31e3271d1d5-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Jul 11 00:08:23.455812 kubelet[2482]: I0711 00:08:23.455804 2482 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/cf0342a8-0a8c-4e84-8e7a-d31e3271d1d5-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Jul 11 00:08:23.455812 kubelet[2482]: I0711 00:08:23.455813 2482 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/cf0342a8-0a8c-4e84-8e7a-d31e3271d1d5-xtables-lock\") on node \"localhost\" DevicePath \"\"" Jul 11 00:08:23.455812 kubelet[2482]: I0711 00:08:23.455822 2482 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/cf0342a8-0a8c-4e84-8e7a-d31e3271d1d5-bpf-maps\") on node \"localhost\" DevicePath \"\"" Jul 11 00:08:23.456030 kubelet[2482]: I0711 00:08:23.455830 2482 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/35f62f86-2c45-4527-9daf-20678c88a94f-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Jul 11 00:08:23.456030 kubelet[2482]: I0711 00:08:23.455839 2482 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/cf0342a8-0a8c-4e84-8e7a-d31e3271d1d5-cilium-run\") on node \"localhost\" DevicePath \"\"" Jul 11 00:08:23.456030 kubelet[2482]: I0711 00:08:23.455846 2482 reconciler_common.go:299] "Volume detached for volume 
\"hostproc\" (UniqueName: \"kubernetes.io/host-path/cf0342a8-0a8c-4e84-8e7a-d31e3271d1d5-hostproc\") on node \"localhost\" DevicePath \"\"" Jul 11 00:08:23.456030 kubelet[2482]: I0711 00:08:23.455853 2482 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/cf0342a8-0a8c-4e84-8e7a-d31e3271d1d5-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Jul 11 00:08:23.456030 kubelet[2482]: I0711 00:08:23.455861 2482 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/cf0342a8-0a8c-4e84-8e7a-d31e3271d1d5-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Jul 11 00:08:23.456030 kubelet[2482]: I0711 00:08:23.455868 2482 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/cf0342a8-0a8c-4e84-8e7a-d31e3271d1d5-cni-path\") on node \"localhost\" DevicePath \"\"" Jul 11 00:08:23.456030 kubelet[2482]: I0711 00:08:23.455875 2482 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/cf0342a8-0a8c-4e84-8e7a-d31e3271d1d5-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Jul 11 00:08:23.456030 kubelet[2482]: I0711 00:08:23.455883 2482 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/cf0342a8-0a8c-4e84-8e7a-d31e3271d1d5-hubble-tls\") on node \"localhost\" DevicePath \"\"" Jul 11 00:08:23.456243 kubelet[2482]: I0711 00:08:23.455891 2482 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-8jbnh\" (UniqueName: \"kubernetes.io/projected/35f62f86-2c45-4527-9daf-20678c88a94f-kube-api-access-8jbnh\") on node \"localhost\" DevicePath \"\"" Jul 11 00:08:23.456243 kubelet[2482]: I0711 00:08:23.455899 2482 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-c8qj9\" (UniqueName: 
\"kubernetes.io/projected/cf0342a8-0a8c-4e84-8e7a-d31e3271d1d5-kube-api-access-c8qj9\") on node \"localhost\" DevicePath \"\"" Jul 11 00:08:23.456243 kubelet[2482]: I0711 00:08:23.455907 2482 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/cf0342a8-0a8c-4e84-8e7a-d31e3271d1d5-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Jul 11 00:08:23.632025 kubelet[2482]: I0711 00:08:23.631996 2482 scope.go:117] "RemoveContainer" containerID="2ee28684b008d9c11cf483ffe9f1be4e4915660db86c63ed910f2475df3c24e7" Jul 11 00:08:23.634589 containerd[1434]: time="2025-07-11T00:08:23.634456213Z" level=info msg="RemoveContainer for \"2ee28684b008d9c11cf483ffe9f1be4e4915660db86c63ed910f2475df3c24e7\"" Jul 11 00:08:23.645425 containerd[1434]: time="2025-07-11T00:08:23.639605989Z" level=info msg="RemoveContainer for \"2ee28684b008d9c11cf483ffe9f1be4e4915660db86c63ed910f2475df3c24e7\" returns successfully" Jul 11 00:08:23.645595 kubelet[2482]: I0711 00:08:23.645502 2482 scope.go:117] "RemoveContainer" containerID="2ee28684b008d9c11cf483ffe9f1be4e4915660db86c63ed910f2475df3c24e7" Jul 11 00:08:23.645849 containerd[1434]: time="2025-07-11T00:08:23.645765320Z" level=error msg="ContainerStatus for \"2ee28684b008d9c11cf483ffe9f1be4e4915660db86c63ed910f2475df3c24e7\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"2ee28684b008d9c11cf483ffe9f1be4e4915660db86c63ed910f2475df3c24e7\": not found" Jul 11 00:08:23.652555 kubelet[2482]: E0711 00:08:23.652517 2482 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"2ee28684b008d9c11cf483ffe9f1be4e4915660db86c63ed910f2475df3c24e7\": not found" containerID="2ee28684b008d9c11cf483ffe9f1be4e4915660db86c63ed910f2475df3c24e7" Jul 11 00:08:23.652667 kubelet[2482]: I0711 00:08:23.652566 2482 pod_container_deletor.go:53] "DeleteContainer returned 
error" containerID={"Type":"containerd","ID":"2ee28684b008d9c11cf483ffe9f1be4e4915660db86c63ed910f2475df3c24e7"} err="failed to get container status \"2ee28684b008d9c11cf483ffe9f1be4e4915660db86c63ed910f2475df3c24e7\": rpc error: code = NotFound desc = an error occurred when try to find container \"2ee28684b008d9c11cf483ffe9f1be4e4915660db86c63ed910f2475df3c24e7\": not found" Jul 11 00:08:23.652667 kubelet[2482]: I0711 00:08:23.652604 2482 scope.go:117] "RemoveContainer" containerID="7f3b42216747811976231c1b9b74e7df8ad5c7479e2ac5d5ae9c83100d61b04b" Jul 11 00:08:23.653986 containerd[1434]: time="2025-07-11T00:08:23.653696322Z" level=info msg="RemoveContainer for \"7f3b42216747811976231c1b9b74e7df8ad5c7479e2ac5d5ae9c83100d61b04b\"" Jul 11 00:08:23.656323 containerd[1434]: time="2025-07-11T00:08:23.656289630Z" level=info msg="RemoveContainer for \"7f3b42216747811976231c1b9b74e7df8ad5c7479e2ac5d5ae9c83100d61b04b\" returns successfully" Jul 11 00:08:23.656635 kubelet[2482]: I0711 00:08:23.656614 2482 scope.go:117] "RemoveContainer" containerID="08d77f52bdbd59f92b52ad8bb5baceda83748d23cecde59082f7e538afcd0523" Jul 11 00:08:23.657640 containerd[1434]: time="2025-07-11T00:08:23.657611824Z" level=info msg="RemoveContainer for \"08d77f52bdbd59f92b52ad8bb5baceda83748d23cecde59082f7e538afcd0523\"" Jul 11 00:08:23.659792 containerd[1434]: time="2025-07-11T00:08:23.659758974Z" level=info msg="RemoveContainer for \"08d77f52bdbd59f92b52ad8bb5baceda83748d23cecde59082f7e538afcd0523\" returns successfully" Jul 11 00:08:23.660176 kubelet[2482]: I0711 00:08:23.660151 2482 scope.go:117] "RemoveContainer" containerID="f97ea73bac8f7f8cf93beaee50841a09d03d3690753dc41b32f3a6c027e0c409" Jul 11 00:08:23.661463 containerd[1434]: time="2025-07-11T00:08:23.661411926Z" level=info msg="RemoveContainer for \"f97ea73bac8f7f8cf93beaee50841a09d03d3690753dc41b32f3a6c027e0c409\"" Jul 11 00:08:23.663548 containerd[1434]: time="2025-07-11T00:08:23.663522356Z" level=info msg="RemoveContainer for 
\"f97ea73bac8f7f8cf93beaee50841a09d03d3690753dc41b32f3a6c027e0c409\" returns successfully" Jul 11 00:08:23.663764 kubelet[2482]: I0711 00:08:23.663729 2482 scope.go:117] "RemoveContainer" containerID="2a732d13f6b6314b7148262d09f23b6dbd33bfe276aea3f3c7ba044fa937cc01" Jul 11 00:08:23.664886 containerd[1434]: time="2025-07-11T00:08:23.664855830Z" level=info msg="RemoveContainer for \"2a732d13f6b6314b7148262d09f23b6dbd33bfe276aea3f3c7ba044fa937cc01\"" Jul 11 00:08:23.666788 containerd[1434]: time="2025-07-11T00:08:23.666761021Z" level=info msg="RemoveContainer for \"2a732d13f6b6314b7148262d09f23b6dbd33bfe276aea3f3c7ba044fa937cc01\" returns successfully" Jul 11 00:08:23.666927 kubelet[2482]: I0711 00:08:23.666905 2482 scope.go:117] "RemoveContainer" containerID="6e22bd82c8e23a7b53e776b381f2bbc14eddc34fd7c2c9017da99ec81f9c14ec" Jul 11 00:08:23.667882 containerd[1434]: time="2025-07-11T00:08:23.667856456Z" level=info msg="RemoveContainer for \"6e22bd82c8e23a7b53e776b381f2bbc14eddc34fd7c2c9017da99ec81f9c14ec\"" Jul 11 00:08:23.669789 containerd[1434]: time="2025-07-11T00:08:23.669763327Z" level=info msg="RemoveContainer for \"6e22bd82c8e23a7b53e776b381f2bbc14eddc34fd7c2c9017da99ec81f9c14ec\" returns successfully" Jul 11 00:08:23.669917 kubelet[2482]: I0711 00:08:23.669897 2482 scope.go:117] "RemoveContainer" containerID="7f3b42216747811976231c1b9b74e7df8ad5c7479e2ac5d5ae9c83100d61b04b" Jul 11 00:08:23.670399 kubelet[2482]: E0711 00:08:23.670269 2482 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"7f3b42216747811976231c1b9b74e7df8ad5c7479e2ac5d5ae9c83100d61b04b\": not found" containerID="7f3b42216747811976231c1b9b74e7df8ad5c7479e2ac5d5ae9c83100d61b04b" Jul 11 00:08:23.670399 kubelet[2482]: I0711 00:08:23.670296 2482 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"7f3b42216747811976231c1b9b74e7df8ad5c7479e2ac5d5ae9c83100d61b04b"} 
err="failed to get container status \"7f3b42216747811976231c1b9b74e7df8ad5c7479e2ac5d5ae9c83100d61b04b\": rpc error: code = NotFound desc = an error occurred when try to find container \"7f3b42216747811976231c1b9b74e7df8ad5c7479e2ac5d5ae9c83100d61b04b\": not found" Jul 11 00:08:23.670399 kubelet[2482]: I0711 00:08:23.670315 2482 scope.go:117] "RemoveContainer" containerID="08d77f52bdbd59f92b52ad8bb5baceda83748d23cecde59082f7e538afcd0523" Jul 11 00:08:23.670515 containerd[1434]: time="2025-07-11T00:08:23.670093045Z" level=error msg="ContainerStatus for \"7f3b42216747811976231c1b9b74e7df8ad5c7479e2ac5d5ae9c83100d61b04b\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"7f3b42216747811976231c1b9b74e7df8ad5c7479e2ac5d5ae9c83100d61b04b\": not found" Jul 11 00:08:23.670515 containerd[1434]: time="2025-07-11T00:08:23.670480643Z" level=error msg="ContainerStatus for \"08d77f52bdbd59f92b52ad8bb5baceda83748d23cecde59082f7e538afcd0523\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"08d77f52bdbd59f92b52ad8bb5baceda83748d23cecde59082f7e538afcd0523\": not found" Jul 11 00:08:23.670618 kubelet[2482]: E0711 00:08:23.670591 2482 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"08d77f52bdbd59f92b52ad8bb5baceda83748d23cecde59082f7e538afcd0523\": not found" containerID="08d77f52bdbd59f92b52ad8bb5baceda83748d23cecde59082f7e538afcd0523" Jul 11 00:08:23.670652 kubelet[2482]: I0711 00:08:23.670620 2482 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"08d77f52bdbd59f92b52ad8bb5baceda83748d23cecde59082f7e538afcd0523"} err="failed to get container status \"08d77f52bdbd59f92b52ad8bb5baceda83748d23cecde59082f7e538afcd0523\": rpc error: code = NotFound desc = an error occurred when try to find container 
\"08d77f52bdbd59f92b52ad8bb5baceda83748d23cecde59082f7e538afcd0523\": not found" Jul 11 00:08:23.670652 kubelet[2482]: I0711 00:08:23.670637 2482 scope.go:117] "RemoveContainer" containerID="f97ea73bac8f7f8cf93beaee50841a09d03d3690753dc41b32f3a6c027e0c409" Jul 11 00:08:23.670878 containerd[1434]: time="2025-07-11T00:08:23.670825802Z" level=error msg="ContainerStatus for \"f97ea73bac8f7f8cf93beaee50841a09d03d3690753dc41b32f3a6c027e0c409\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"f97ea73bac8f7f8cf93beaee50841a09d03d3690753dc41b32f3a6c027e0c409\": not found" Jul 11 00:08:23.670997 kubelet[2482]: E0711 00:08:23.670971 2482 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"f97ea73bac8f7f8cf93beaee50841a09d03d3690753dc41b32f3a6c027e0c409\": not found" containerID="f97ea73bac8f7f8cf93beaee50841a09d03d3690753dc41b32f3a6c027e0c409" Jul 11 00:08:23.671305 kubelet[2482]: I0711 00:08:23.670996 2482 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"f97ea73bac8f7f8cf93beaee50841a09d03d3690753dc41b32f3a6c027e0c409"} err="failed to get container status \"f97ea73bac8f7f8cf93beaee50841a09d03d3690753dc41b32f3a6c027e0c409\": rpc error: code = NotFound desc = an error occurred when try to find container \"f97ea73bac8f7f8cf93beaee50841a09d03d3690753dc41b32f3a6c027e0c409\": not found" Jul 11 00:08:23.671305 kubelet[2482]: I0711 00:08:23.671012 2482 scope.go:117] "RemoveContainer" containerID="2a732d13f6b6314b7148262d09f23b6dbd33bfe276aea3f3c7ba044fa937cc01" Jul 11 00:08:23.671509 containerd[1434]: time="2025-07-11T00:08:23.671167160Z" level=error msg="ContainerStatus for \"2a732d13f6b6314b7148262d09f23b6dbd33bfe276aea3f3c7ba044fa937cc01\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container 
\"2a732d13f6b6314b7148262d09f23b6dbd33bfe276aea3f3c7ba044fa937cc01\": not found" Jul 11 00:08:23.671570 kubelet[2482]: E0711 00:08:23.671370 2482 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"2a732d13f6b6314b7148262d09f23b6dbd33bfe276aea3f3c7ba044fa937cc01\": not found" containerID="2a732d13f6b6314b7148262d09f23b6dbd33bfe276aea3f3c7ba044fa937cc01" Jul 11 00:08:23.671570 kubelet[2482]: I0711 00:08:23.671397 2482 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"2a732d13f6b6314b7148262d09f23b6dbd33bfe276aea3f3c7ba044fa937cc01"} err="failed to get container status \"2a732d13f6b6314b7148262d09f23b6dbd33bfe276aea3f3c7ba044fa937cc01\": rpc error: code = NotFound desc = an error occurred when try to find container \"2a732d13f6b6314b7148262d09f23b6dbd33bfe276aea3f3c7ba044fa937cc01\": not found" Jul 11 00:08:23.671570 kubelet[2482]: I0711 00:08:23.671411 2482 scope.go:117] "RemoveContainer" containerID="6e22bd82c8e23a7b53e776b381f2bbc14eddc34fd7c2c9017da99ec81f9c14ec" Jul 11 00:08:23.672075 containerd[1434]: time="2025-07-11T00:08:23.672044676Z" level=error msg="ContainerStatus for \"6e22bd82c8e23a7b53e776b381f2bbc14eddc34fd7c2c9017da99ec81f9c14ec\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"6e22bd82c8e23a7b53e776b381f2bbc14eddc34fd7c2c9017da99ec81f9c14ec\": not found" Jul 11 00:08:23.672352 kubelet[2482]: E0711 00:08:23.672329 2482 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"6e22bd82c8e23a7b53e776b381f2bbc14eddc34fd7c2c9017da99ec81f9c14ec\": not found" containerID="6e22bd82c8e23a7b53e776b381f2bbc14eddc34fd7c2c9017da99ec81f9c14ec" Jul 11 00:08:23.672352 kubelet[2482]: I0711 00:08:23.672349 2482 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"containerd","ID":"6e22bd82c8e23a7b53e776b381f2bbc14eddc34fd7c2c9017da99ec81f9c14ec"} err="failed to get container status \"6e22bd82c8e23a7b53e776b381f2bbc14eddc34fd7c2c9017da99ec81f9c14ec\": rpc error: code = NotFound desc = an error occurred when try to find container \"6e22bd82c8e23a7b53e776b381f2bbc14eddc34fd7c2c9017da99ec81f9c14ec\": not found" Jul 11 00:08:24.126731 systemd[1]: var-lib-kubelet-pods-35f62f86\x2d2c45\x2d4527\x2d9daf\x2d20678c88a94f-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d8jbnh.mount: Deactivated successfully. Jul 11 00:08:24.126836 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-689ad7475e16bcb1ac17455cd921d9a50a6483e992a771f482954e8987bc22a1-rootfs.mount: Deactivated successfully. Jul 11 00:08:24.126886 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-689ad7475e16bcb1ac17455cd921d9a50a6483e992a771f482954e8987bc22a1-shm.mount: Deactivated successfully. Jul 11 00:08:24.126935 systemd[1]: var-lib-kubelet-pods-cf0342a8\x2d0a8c\x2d4e84\x2d8e7a\x2dd31e3271d1d5-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dc8qj9.mount: Deactivated successfully. Jul 11 00:08:24.126988 systemd[1]: var-lib-kubelet-pods-cf0342a8\x2d0a8c\x2d4e84\x2d8e7a\x2dd31e3271d1d5-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Jul 11 00:08:24.127047 systemd[1]: var-lib-kubelet-pods-cf0342a8\x2d0a8c\x2d4e84\x2d8e7a\x2dd31e3271d1d5-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Jul 11 00:08:25.065087 sshd[4143]: pam_unix(sshd:session): session closed for user core Jul 11 00:08:25.074955 systemd[1]: sshd@22-10.0.0.37:22-10.0.0.1:33210.service: Deactivated successfully. Jul 11 00:08:25.076618 systemd[1]: session-23.scope: Deactivated successfully. Jul 11 00:08:25.077391 systemd[1]: session-23.scope: Consumed 1.494s CPU time. Jul 11 00:08:25.078637 systemd-logind[1420]: Session 23 logged out. Waiting for processes to exit. 
Jul 11 00:08:25.083373 systemd[1]: Started sshd@23-10.0.0.37:22-10.0.0.1:49486.service - OpenSSH per-connection server daemon (10.0.0.1:49486). Jul 11 00:08:25.084192 systemd-logind[1420]: Removed session 23. Jul 11 00:08:25.121106 sshd[4305]: Accepted publickey for core from 10.0.0.1 port 49486 ssh2: RSA SHA256:GK2LEBRiSxxQSb7NJczWsRz9vp5Z0addujXbSKx/c/M Jul 11 00:08:25.122620 sshd[4305]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 00:08:25.126201 systemd-logind[1420]: New session 24 of user core. Jul 11 00:08:25.133283 systemd[1]: Started session-24.scope - Session 24 of User core. Jul 11 00:08:25.392269 kubelet[2482]: I0711 00:08:25.391445 2482 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="35f62f86-2c45-4527-9daf-20678c88a94f" path="/var/lib/kubelet/pods/35f62f86-2c45-4527-9daf-20678c88a94f/volumes" Jul 11 00:08:25.392269 kubelet[2482]: I0711 00:08:25.391825 2482 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cf0342a8-0a8c-4e84-8e7a-d31e3271d1d5" path="/var/lib/kubelet/pods/cf0342a8-0a8c-4e84-8e7a-d31e3271d1d5/volumes" Jul 11 00:08:25.927318 sshd[4305]: pam_unix(sshd:session): session closed for user core Jul 11 00:08:25.934184 systemd[1]: sshd@23-10.0.0.37:22-10.0.0.1:49486.service: Deactivated successfully. Jul 11 00:08:25.940089 systemd[1]: session-24.scope: Deactivated successfully. Jul 11 00:08:25.943610 systemd-logind[1420]: Session 24 logged out. Waiting for processes to exit. Jul 11 00:08:25.955426 systemd[1]: Started sshd@24-10.0.0.37:22-10.0.0.1:49494.service - OpenSSH per-connection server daemon (10.0.0.1:49494). Jul 11 00:08:25.960858 systemd-logind[1420]: Removed session 24. Jul 11 00:08:25.974161 systemd[1]: Created slice kubepods-burstable-podfe31098c_0cd7_4ac9_a12a_1c0e364719d5.slice - libcontainer container kubepods-burstable-podfe31098c_0cd7_4ac9_a12a_1c0e364719d5.slice. 
Jul 11 00:08:26.009882 sshd[4318]: Accepted publickey for core from 10.0.0.1 port 49494 ssh2: RSA SHA256:GK2LEBRiSxxQSb7NJczWsRz9vp5Z0addujXbSKx/c/M Jul 11 00:08:26.011365 sshd[4318]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 00:08:26.015267 systemd-logind[1420]: New session 25 of user core. Jul 11 00:08:26.025351 systemd[1]: Started session-25.scope - Session 25 of User core. Jul 11 00:08:26.073021 kubelet[2482]: I0711 00:08:26.072976 2482 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/fe31098c-0cd7-4ac9-a12a-1c0e364719d5-hubble-tls\") pod \"cilium-96zfv\" (UID: \"fe31098c-0cd7-4ac9-a12a-1c0e364719d5\") " pod="kube-system/cilium-96zfv" Jul 11 00:08:26.073021 kubelet[2482]: I0711 00:08:26.073025 2482 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/fe31098c-0cd7-4ac9-a12a-1c0e364719d5-cilium-config-path\") pod \"cilium-96zfv\" (UID: \"fe31098c-0cd7-4ac9-a12a-1c0e364719d5\") " pod="kube-system/cilium-96zfv" Jul 11 00:08:26.073195 kubelet[2482]: I0711 00:08:26.073059 2482 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/fe31098c-0cd7-4ac9-a12a-1c0e364719d5-hostproc\") pod \"cilium-96zfv\" (UID: \"fe31098c-0cd7-4ac9-a12a-1c0e364719d5\") " pod="kube-system/cilium-96zfv" Jul 11 00:08:26.073195 kubelet[2482]: I0711 00:08:26.073076 2482 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/fe31098c-0cd7-4ac9-a12a-1c0e364719d5-host-proc-sys-net\") pod \"cilium-96zfv\" (UID: \"fe31098c-0cd7-4ac9-a12a-1c0e364719d5\") " pod="kube-system/cilium-96zfv" Jul 11 00:08:26.073195 kubelet[2482]: I0711 00:08:26.073093 2482 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/fe31098c-0cd7-4ac9-a12a-1c0e364719d5-bpf-maps\") pod \"cilium-96zfv\" (UID: \"fe31098c-0cd7-4ac9-a12a-1c0e364719d5\") " pod="kube-system/cilium-96zfv" Jul 11 00:08:26.073195 kubelet[2482]: I0711 00:08:26.073109 2482 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/fe31098c-0cd7-4ac9-a12a-1c0e364719d5-cni-path\") pod \"cilium-96zfv\" (UID: \"fe31098c-0cd7-4ac9-a12a-1c0e364719d5\") " pod="kube-system/cilium-96zfv" Jul 11 00:08:26.073195 kubelet[2482]: I0711 00:08:26.073148 2482 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/fe31098c-0cd7-4ac9-a12a-1c0e364719d5-lib-modules\") pod \"cilium-96zfv\" (UID: \"fe31098c-0cd7-4ac9-a12a-1c0e364719d5\") " pod="kube-system/cilium-96zfv" Jul 11 00:08:26.073195 kubelet[2482]: I0711 00:08:26.073165 2482 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/fe31098c-0cd7-4ac9-a12a-1c0e364719d5-clustermesh-secrets\") pod \"cilium-96zfv\" (UID: \"fe31098c-0cd7-4ac9-a12a-1c0e364719d5\") " pod="kube-system/cilium-96zfv" Jul 11 00:08:26.073854 kubelet[2482]: I0711 00:08:26.073179 2482 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/fe31098c-0cd7-4ac9-a12a-1c0e364719d5-host-proc-sys-kernel\") pod \"cilium-96zfv\" (UID: \"fe31098c-0cd7-4ac9-a12a-1c0e364719d5\") " pod="kube-system/cilium-96zfv" Jul 11 00:08:26.073854 kubelet[2482]: I0711 00:08:26.073196 2482 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: 
\"kubernetes.io/host-path/fe31098c-0cd7-4ac9-a12a-1c0e364719d5-cilium-run\") pod \"cilium-96zfv\" (UID: \"fe31098c-0cd7-4ac9-a12a-1c0e364719d5\") " pod="kube-system/cilium-96zfv" Jul 11 00:08:26.073854 kubelet[2482]: I0711 00:08:26.073226 2482 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/fe31098c-0cd7-4ac9-a12a-1c0e364719d5-cilium-cgroup\") pod \"cilium-96zfv\" (UID: \"fe31098c-0cd7-4ac9-a12a-1c0e364719d5\") " pod="kube-system/cilium-96zfv" Jul 11 00:08:26.073854 kubelet[2482]: I0711 00:08:26.073240 2482 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/fe31098c-0cd7-4ac9-a12a-1c0e364719d5-xtables-lock\") pod \"cilium-96zfv\" (UID: \"fe31098c-0cd7-4ac9-a12a-1c0e364719d5\") " pod="kube-system/cilium-96zfv" Jul 11 00:08:26.073854 kubelet[2482]: I0711 00:08:26.073254 2482 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-99wh5\" (UniqueName: \"kubernetes.io/projected/fe31098c-0cd7-4ac9-a12a-1c0e364719d5-kube-api-access-99wh5\") pod \"cilium-96zfv\" (UID: \"fe31098c-0cd7-4ac9-a12a-1c0e364719d5\") " pod="kube-system/cilium-96zfv" Jul 11 00:08:26.073854 kubelet[2482]: I0711 00:08:26.073270 2482 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/fe31098c-0cd7-4ac9-a12a-1c0e364719d5-etc-cni-netd\") pod \"cilium-96zfv\" (UID: \"fe31098c-0cd7-4ac9-a12a-1c0e364719d5\") " pod="kube-system/cilium-96zfv" Jul 11 00:08:26.074378 kubelet[2482]: I0711 00:08:26.073390 2482 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/fe31098c-0cd7-4ac9-a12a-1c0e364719d5-cilium-ipsec-secrets\") pod \"cilium-96zfv\" (UID: 
\"fe31098c-0cd7-4ac9-a12a-1c0e364719d5\") " pod="kube-system/cilium-96zfv" Jul 11 00:08:26.075870 sshd[4318]: pam_unix(sshd:session): session closed for user core Jul 11 00:08:26.088877 systemd[1]: sshd@24-10.0.0.37:22-10.0.0.1:49494.service: Deactivated successfully. Jul 11 00:08:26.091609 systemd[1]: session-25.scope: Deactivated successfully. Jul 11 00:08:26.092890 systemd-logind[1420]: Session 25 logged out. Waiting for processes to exit. Jul 11 00:08:26.100649 systemd[1]: Started sshd@25-10.0.0.37:22-10.0.0.1:49508.service - OpenSSH per-connection server daemon (10.0.0.1:49508). Jul 11 00:08:26.101538 systemd-logind[1420]: Removed session 25. Jul 11 00:08:26.131970 sshd[4326]: Accepted publickey for core from 10.0.0.1 port 49508 ssh2: RSA SHA256:GK2LEBRiSxxQSb7NJczWsRz9vp5Z0addujXbSKx/c/M Jul 11 00:08:26.133302 sshd[4326]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 00:08:26.138126 systemd-logind[1420]: New session 26 of user core. Jul 11 00:08:26.146272 systemd[1]: Started session-26.scope - Session 26 of User core. Jul 11 00:08:26.284232 kubelet[2482]: E0711 00:08:26.283812 2482 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:08:26.284391 containerd[1434]: time="2025-07-11T00:08:26.284349054Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-96zfv,Uid:fe31098c-0cd7-4ac9-a12a-1c0e364719d5,Namespace:kube-system,Attempt:0,}" Jul 11 00:08:26.308042 containerd[1434]: time="2025-07-11T00:08:26.307904296Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 11 00:08:26.308042 containerd[1434]: time="2025-07-11T00:08:26.307971775Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 11 00:08:26.308042 containerd[1434]: time="2025-07-11T00:08:26.307985295Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 11 00:08:26.308341 containerd[1434]: time="2025-07-11T00:08:26.308078295Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 11 00:08:26.328384 systemd[1]: Started cri-containerd-8eac29cb55bdf53b0e2924c0c3d5507b0204e14024c870292309478df2591bd4.scope - libcontainer container 8eac29cb55bdf53b0e2924c0c3d5507b0204e14024c870292309478df2591bd4. Jul 11 00:08:26.348797 containerd[1434]: time="2025-07-11T00:08:26.348755519Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-96zfv,Uid:fe31098c-0cd7-4ac9-a12a-1c0e364719d5,Namespace:kube-system,Attempt:0,} returns sandbox id \"8eac29cb55bdf53b0e2924c0c3d5507b0204e14024c870292309478df2591bd4\"" Jul 11 00:08:26.349736 kubelet[2482]: E0711 00:08:26.349714 2482 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:08:26.356999 containerd[1434]: time="2025-07-11T00:08:26.356288174Z" level=info msg="CreateContainer within sandbox \"8eac29cb55bdf53b0e2924c0c3d5507b0204e14024c870292309478df2591bd4\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jul 11 00:08:26.370095 containerd[1434]: time="2025-07-11T00:08:26.369933089Z" level=info msg="CreateContainer within sandbox \"8eac29cb55bdf53b0e2924c0c3d5507b0204e14024c870292309478df2591bd4\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"50e5a4e8213cf5111c8e81e82de6cf6bcfafd73c89b1641910f8795847024f86\"" Jul 11 00:08:26.370690 containerd[1434]: time="2025-07-11T00:08:26.370520047Z" level=info msg="StartContainer for 
\"50e5a4e8213cf5111c8e81e82de6cf6bcfafd73c89b1641910f8795847024f86\"" Jul 11 00:08:26.407331 systemd[1]: Started cri-containerd-50e5a4e8213cf5111c8e81e82de6cf6bcfafd73c89b1641910f8795847024f86.scope - libcontainer container 50e5a4e8213cf5111c8e81e82de6cf6bcfafd73c89b1641910f8795847024f86. Jul 11 00:08:26.431894 containerd[1434]: time="2025-07-11T00:08:26.431850962Z" level=info msg="StartContainer for \"50e5a4e8213cf5111c8e81e82de6cf6bcfafd73c89b1641910f8795847024f86\" returns successfully" Jul 11 00:08:26.457239 systemd[1]: cri-containerd-50e5a4e8213cf5111c8e81e82de6cf6bcfafd73c89b1641910f8795847024f86.scope: Deactivated successfully. Jul 11 00:08:26.488031 containerd[1434]: time="2025-07-11T00:08:26.487892495Z" level=info msg="shim disconnected" id=50e5a4e8213cf5111c8e81e82de6cf6bcfafd73c89b1641910f8795847024f86 namespace=k8s.io Jul 11 00:08:26.488031 containerd[1434]: time="2025-07-11T00:08:26.487945495Z" level=warning msg="cleaning up after shim disconnected" id=50e5a4e8213cf5111c8e81e82de6cf6bcfafd73c89b1641910f8795847024f86 namespace=k8s.io Jul 11 00:08:26.488031 containerd[1434]: time="2025-07-11T00:08:26.487954375Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 11 00:08:26.497812 containerd[1434]: time="2025-07-11T00:08:26.497746102Z" level=warning msg="cleanup warnings time=\"2025-07-11T00:08:26Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Jul 11 00:08:26.629021 kubelet[2482]: E0711 00:08:26.628909 2482 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:08:26.634483 containerd[1434]: time="2025-07-11T00:08:26.634444806Z" level=info msg="CreateContainer within sandbox \"8eac29cb55bdf53b0e2924c0c3d5507b0204e14024c870292309478df2591bd4\" for container 
&ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jul 11 00:08:26.644315 containerd[1434]: time="2025-07-11T00:08:26.644200214Z" level=info msg="CreateContainer within sandbox \"8eac29cb55bdf53b0e2924c0c3d5507b0204e14024c870292309478df2591bd4\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"e76c771e47b54d8ee254ac9b6d8340cfda235d4a782e9333d89639316ec8eba4\"" Jul 11 00:08:26.647722 containerd[1434]: time="2025-07-11T00:08:26.647688602Z" level=info msg="StartContainer for \"e76c771e47b54d8ee254ac9b6d8340cfda235d4a782e9333d89639316ec8eba4\"" Jul 11 00:08:26.676329 systemd[1]: Started cri-containerd-e76c771e47b54d8ee254ac9b6d8340cfda235d4a782e9333d89639316ec8eba4.scope - libcontainer container e76c771e47b54d8ee254ac9b6d8340cfda235d4a782e9333d89639316ec8eba4. Jul 11 00:08:26.697203 containerd[1434]: time="2025-07-11T00:08:26.697156037Z" level=info msg="StartContainer for \"e76c771e47b54d8ee254ac9b6d8340cfda235d4a782e9333d89639316ec8eba4\" returns successfully" Jul 11 00:08:26.708869 systemd[1]: cri-containerd-e76c771e47b54d8ee254ac9b6d8340cfda235d4a782e9333d89639316ec8eba4.scope: Deactivated successfully. 
Jul 11 00:08:26.731757 containerd[1434]: time="2025-07-11T00:08:26.731696522Z" level=info msg="shim disconnected" id=e76c771e47b54d8ee254ac9b6d8340cfda235d4a782e9333d89639316ec8eba4 namespace=k8s.io Jul 11 00:08:26.731757 containerd[1434]: time="2025-07-11T00:08:26.731754202Z" level=warning msg="cleaning up after shim disconnected" id=e76c771e47b54d8ee254ac9b6d8340cfda235d4a782e9333d89639316ec8eba4 namespace=k8s.io Jul 11 00:08:26.731757 containerd[1434]: time="2025-07-11T00:08:26.731763082Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 11 00:08:27.444520 kubelet[2482]: E0711 00:08:27.444462 2482 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jul 11 00:08:27.632067 kubelet[2482]: E0711 00:08:27.632032 2482 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:08:27.635727 containerd[1434]: time="2025-07-11T00:08:27.635687857Z" level=info msg="CreateContainer within sandbox \"8eac29cb55bdf53b0e2924c0c3d5507b0204e14024c870292309478df2591bd4\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jul 11 00:08:27.742548 containerd[1434]: time="2025-07-11T00:08:27.742442306Z" level=info msg="CreateContainer within sandbox \"8eac29cb55bdf53b0e2924c0c3d5507b0204e14024c870292309478df2591bd4\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"2a30876e343d8d988e269abcdd91eada3b2340739568f39898a319dfc66a3456\"" Jul 11 00:08:27.743634 containerd[1434]: time="2025-07-11T00:08:27.743612663Z" level=info msg="StartContainer for \"2a30876e343d8d988e269abcdd91eada3b2340739568f39898a319dfc66a3456\"" Jul 11 00:08:27.769279 systemd[1]: Started cri-containerd-2a30876e343d8d988e269abcdd91eada3b2340739568f39898a319dfc66a3456.scope - libcontainer container 
2a30876e343d8d988e269abcdd91eada3b2340739568f39898a319dfc66a3456. Jul 11 00:08:27.797876 systemd[1]: cri-containerd-2a30876e343d8d988e269abcdd91eada3b2340739568f39898a319dfc66a3456.scope: Deactivated successfully. Jul 11 00:08:27.800808 containerd[1434]: time="2025-07-11T00:08:27.800686977Z" level=info msg="StartContainer for \"2a30876e343d8d988e269abcdd91eada3b2340739568f39898a319dfc66a3456\" returns successfully" Jul 11 00:08:27.825041 containerd[1434]: time="2025-07-11T00:08:27.824968946Z" level=info msg="shim disconnected" id=2a30876e343d8d988e269abcdd91eada3b2340739568f39898a319dfc66a3456 namespace=k8s.io Jul 11 00:08:27.825041 containerd[1434]: time="2025-07-11T00:08:27.825033026Z" level=warning msg="cleaning up after shim disconnected" id=2a30876e343d8d988e269abcdd91eada3b2340739568f39898a319dfc66a3456 namespace=k8s.io Jul 11 00:08:27.825041 containerd[1434]: time="2025-07-11T00:08:27.825041786Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 11 00:08:28.180365 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2a30876e343d8d988e269abcdd91eada3b2340739568f39898a319dfc66a3456-rootfs.mount: Deactivated successfully. 
Jul 11 00:08:28.635787 kubelet[2482]: E0711 00:08:28.635741 2482 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:08:28.642537 containerd[1434]: time="2025-07-11T00:08:28.642481352Z" level=info msg="CreateContainer within sandbox \"8eac29cb55bdf53b0e2924c0c3d5507b0204e14024c870292309478df2591bd4\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jul 11 00:08:28.654753 containerd[1434]: time="2025-07-11T00:08:28.654687202Z" level=info msg="CreateContainer within sandbox \"8eac29cb55bdf53b0e2924c0c3d5507b0204e14024c870292309478df2591bd4\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"70f9bf1a96d2c35f0c5d7a54c2402efd344c76f101a3e5feb3704f5db51ceedc\"" Jul 11 00:08:28.656251 containerd[1434]: time="2025-07-11T00:08:28.656213318Z" level=info msg="StartContainer for \"70f9bf1a96d2c35f0c5d7a54c2402efd344c76f101a3e5feb3704f5db51ceedc\"" Jul 11 00:08:28.692276 systemd[1]: Started cri-containerd-70f9bf1a96d2c35f0c5d7a54c2402efd344c76f101a3e5feb3704f5db51ceedc.scope - libcontainer container 70f9bf1a96d2c35f0c5d7a54c2402efd344c76f101a3e5feb3704f5db51ceedc. Jul 11 00:08:28.710528 systemd[1]: cri-containerd-70f9bf1a96d2c35f0c5d7a54c2402efd344c76f101a3e5feb3704f5db51ceedc.scope: Deactivated successfully. 
Jul 11 00:08:28.713485 containerd[1434]: time="2025-07-11T00:08:28.713389815Z" level=info msg="StartContainer for \"70f9bf1a96d2c35f0c5d7a54c2402efd344c76f101a3e5feb3704f5db51ceedc\" returns successfully" Jul 11 00:08:28.731876 containerd[1434]: time="2025-07-11T00:08:28.731806369Z" level=info msg="shim disconnected" id=70f9bf1a96d2c35f0c5d7a54c2402efd344c76f101a3e5feb3704f5db51ceedc namespace=k8s.io Jul 11 00:08:28.731876 containerd[1434]: time="2025-07-11T00:08:28.731859929Z" level=warning msg="cleaning up after shim disconnected" id=70f9bf1a96d2c35f0c5d7a54c2402efd344c76f101a3e5feb3704f5db51ceedc namespace=k8s.io Jul 11 00:08:28.731876 containerd[1434]: time="2025-07-11T00:08:28.731868489Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 11 00:08:28.836925 kubelet[2482]: I0711 00:08:28.836861 2482 setters.go:618] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-07-11T00:08:28Z","lastTransitionTime":"2025-07-11T00:08:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Jul 11 00:08:29.180389 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-70f9bf1a96d2c35f0c5d7a54c2402efd344c76f101a3e5feb3704f5db51ceedc-rootfs.mount: Deactivated successfully. 
Jul 11 00:08:29.640396 kubelet[2482]: E0711 00:08:29.640369 2482 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:08:29.645168 containerd[1434]: time="2025-07-11T00:08:29.644809070Z" level=info msg="CreateContainer within sandbox \"8eac29cb55bdf53b0e2924c0c3d5507b0204e14024c870292309478df2591bd4\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jul 11 00:08:29.659583 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1895288403.mount: Deactivated successfully. Jul 11 00:08:29.666432 containerd[1434]: time="2025-07-11T00:08:29.666110906Z" level=info msg="CreateContainer within sandbox \"8eac29cb55bdf53b0e2924c0c3d5507b0204e14024c870292309478df2591bd4\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"7a484aafd3cf0e77ad90b74bd2d6618a3572359744b128f6291bfcbc9a986295\"" Jul 11 00:08:29.667093 containerd[1434]: time="2025-07-11T00:08:29.666868944Z" level=info msg="StartContainer for \"7a484aafd3cf0e77ad90b74bd2d6618a3572359744b128f6291bfcbc9a986295\"" Jul 11 00:08:29.699824 systemd[1]: Started cri-containerd-7a484aafd3cf0e77ad90b74bd2d6618a3572359744b128f6291bfcbc9a986295.scope - libcontainer container 7a484aafd3cf0e77ad90b74bd2d6618a3572359744b128f6291bfcbc9a986295. 
Jul 11 00:08:29.726418 containerd[1434]: time="2025-07-11T00:08:29.726356379Z" level=info msg="StartContainer for \"7a484aafd3cf0e77ad90b74bd2d6618a3572359744b128f6291bfcbc9a986295\" returns successfully" Jul 11 00:08:30.002346 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce)) Jul 11 00:08:30.645054 kubelet[2482]: E0711 00:08:30.644970 2482 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:08:30.663436 kubelet[2482]: I0711 00:08:30.663370 2482 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-96zfv" podStartSLOduration=5.663354716 podStartE2EDuration="5.663354716s" podCreationTimestamp="2025-07-11 00:08:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-11 00:08:30.662505077 +0000 UTC m=+83.361436249" watchObservedRunningTime="2025-07-11 00:08:30.663354716 +0000 UTC m=+83.362285848" Jul 11 00:08:32.285792 kubelet[2482]: E0711 00:08:32.285759 2482 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:08:32.801506 systemd-networkd[1374]: lxc_health: Link UP Jul 11 00:08:32.807267 systemd-networkd[1374]: lxc_health: Gained carrier Jul 11 00:08:34.287034 kubelet[2482]: E0711 00:08:34.286510 2482 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:08:34.655110 kubelet[2482]: E0711 00:08:34.655073 2482 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:08:34.775282 systemd-networkd[1374]: 
lxc_health: Gained IPv6LL Jul 11 00:08:35.389657 kubelet[2482]: E0711 00:08:35.389600 2482 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:08:35.657363 kubelet[2482]: E0711 00:08:35.657162 2482 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:08:38.865645 sshd[4326]: pam_unix(sshd:session): session closed for user core Jul 11 00:08:38.868809 systemd[1]: sshd@25-10.0.0.37:22-10.0.0.1:49508.service: Deactivated successfully. Jul 11 00:08:38.870454 systemd[1]: session-26.scope: Deactivated successfully. Jul 11 00:08:38.871707 systemd-logind[1420]: Session 26 logged out. Waiting for processes to exit. Jul 11 00:08:38.873578 systemd-logind[1420]: Removed session 26. Jul 11 00:08:40.389889 kubelet[2482]: E0711 00:08:40.389850 2482 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"