Jul 14 22:05:47.883262 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Jul 14 22:05:47.883283 kernel: Linux version 6.6.97-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Mon Jul 14 20:26:44 -00 2025
Jul 14 22:05:47.883293 kernel: KASLR enabled
Jul 14 22:05:47.883298 kernel: efi: EFI v2.7 by EDK II
Jul 14 22:05:47.883304 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdba86018 ACPI 2.0=0xd9710018 RNG=0xd971e498 MEMRESERVE=0xd9b43d18
Jul 14 22:05:47.883310 kernel: random: crng init done
Jul 14 22:05:47.883317 kernel: ACPI: Early table checksum verification disabled
Jul 14 22:05:47.883323 kernel: ACPI: RSDP 0x00000000D9710018 000024 (v02 BOCHS )
Jul 14 22:05:47.883329 kernel: ACPI: XSDT 0x00000000D971FE98 000064 (v01 BOCHS BXPC 00000001 01000013)
Jul 14 22:05:47.883336 kernel: ACPI: FACP 0x00000000D971FA98 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Jul 14 22:05:47.883342 kernel: ACPI: DSDT 0x00000000D9717518 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jul 14 22:05:47.883348 kernel: ACPI: APIC 0x00000000D971FC18 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Jul 14 22:05:47.883354 kernel: ACPI: PPTT 0x00000000D971D898 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jul 14 22:05:47.883360 kernel: ACPI: GTDT 0x00000000D971E818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jul 14 22:05:47.883368 kernel: ACPI: MCFG 0x00000000D971E918 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 14 22:05:47.883375 kernel: ACPI: SPCR 0x00000000D971FF98 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jul 14 22:05:47.883382 kernel: ACPI: DBG2 0x00000000D971E418 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Jul 14 22:05:47.883388 kernel: ACPI: IORT 0x00000000D971E718 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jul 14 22:05:47.883394 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
Jul 14 22:05:47.883401 kernel: NUMA: Failed to initialise from firmware
Jul 14 22:05:47.883407 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
Jul 14 22:05:47.883414 kernel: NUMA: NODE_DATA [mem 0xdc957800-0xdc95cfff]
Jul 14 22:05:47.883420 kernel: Zone ranges:
Jul 14 22:05:47.883426 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff]
Jul 14 22:05:47.883432 kernel: DMA32 empty
Jul 14 22:05:47.883440 kernel: Normal empty
Jul 14 22:05:47.883446 kernel: Movable zone start for each node
Jul 14 22:05:47.883452 kernel: Early memory node ranges
Jul 14 22:05:47.883459 kernel: node 0: [mem 0x0000000040000000-0x00000000d976ffff]
Jul 14 22:05:47.883465 kernel: node 0: [mem 0x00000000d9770000-0x00000000d9b3ffff]
Jul 14 22:05:47.883471 kernel: node 0: [mem 0x00000000d9b40000-0x00000000dce1ffff]
Jul 14 22:05:47.883478 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff]
Jul 14 22:05:47.883484 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff]
Jul 14 22:05:47.883490 kernel: node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff]
Jul 14 22:05:47.883497 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff]
Jul 14 22:05:47.883503 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
Jul 14 22:05:47.883509 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
Jul 14 22:05:47.883517 kernel: psci: probing for conduit method from ACPI.
Jul 14 22:05:47.883523 kernel: psci: PSCIv1.1 detected in firmware.
Jul 14 22:05:47.883529 kernel: psci: Using standard PSCI v0.2 function IDs
Jul 14 22:05:47.883538 kernel: psci: Trusted OS migration not required
Jul 14 22:05:47.883553 kernel: psci: SMC Calling Convention v1.1
Jul 14 22:05:47.883561 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Jul 14 22:05:47.883569 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976
Jul 14 22:05:47.883576 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096
Jul 14 22:05:47.883583 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
Jul 14 22:05:47.883589 kernel: Detected PIPT I-cache on CPU0
Jul 14 22:05:47.883596 kernel: CPU features: detected: GIC system register CPU interface
Jul 14 22:05:47.883603 kernel: CPU features: detected: Hardware dirty bit management
Jul 14 22:05:47.883610 kernel: CPU features: detected: Spectre-v4
Jul 14 22:05:47.883616 kernel: CPU features: detected: Spectre-BHB
Jul 14 22:05:47.883623 kernel: CPU features: kernel page table isolation forced ON by KASLR
Jul 14 22:05:47.883630 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Jul 14 22:05:47.883637 kernel: CPU features: detected: ARM erratum 1418040
Jul 14 22:05:47.883644 kernel: CPU features: detected: SSBS not fully self-synchronizing
Jul 14 22:05:47.883651 kernel: alternatives: applying boot alternatives
Jul 14 22:05:47.883659 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=219fd31147cccfc1f4834c1854a4109714661cabce52e86d5c93000af393c45b
Jul 14 22:05:47.883666 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jul 14 22:05:47.883673 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jul 14 22:05:47.883680 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jul 14 22:05:47.883686 kernel: Fallback order for Node 0: 0
Jul 14 22:05:47.883693 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024
Jul 14 22:05:47.883700 kernel: Policy zone: DMA
Jul 14 22:05:47.883706 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jul 14 22:05:47.883714 kernel: software IO TLB: area num 4.
Jul 14 22:05:47.883721 kernel: software IO TLB: mapped [mem 0x00000000d2e00000-0x00000000d6e00000] (64MB)
Jul 14 22:05:47.883728 kernel: Memory: 2386400K/2572288K available (10304K kernel code, 2186K rwdata, 8108K rodata, 39424K init, 897K bss, 185888K reserved, 0K cma-reserved)
Jul 14 22:05:47.883735 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Jul 14 22:05:47.883742 kernel: rcu: Preemptible hierarchical RCU implementation.
Jul 14 22:05:47.883749 kernel: rcu: RCU event tracing is enabled.
Jul 14 22:05:47.883756 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Jul 14 22:05:47.883763 kernel: Trampoline variant of Tasks RCU enabled.
Jul 14 22:05:47.883769 kernel: Tracing variant of Tasks RCU enabled.
Jul 14 22:05:47.883776 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jul 14 22:05:47.883783 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Jul 14 22:05:47.883790 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Jul 14 22:05:47.883798 kernel: GICv3: 256 SPIs implemented
Jul 14 22:05:47.883804 kernel: GICv3: 0 Extended SPIs implemented
Jul 14 22:05:47.883811 kernel: Root IRQ handler: gic_handle_irq
Jul 14 22:05:47.883818 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Jul 14 22:05:47.883824 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Jul 14 22:05:47.883831 kernel: ITS [mem 0x08080000-0x0809ffff]
Jul 14 22:05:47.883838 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400c0000 (indirect, esz 8, psz 64K, shr 1)
Jul 14 22:05:47.883853 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400d0000 (flat, esz 8, psz 64K, shr 1)
Jul 14 22:05:47.883861 kernel: GICv3: using LPI property table @0x00000000400f0000
Jul 14 22:05:47.883867 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040100000
Jul 14 22:05:47.883874 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jul 14 22:05:47.883883 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 14 22:05:47.883889 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Jul 14 22:05:47.883896 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Jul 14 22:05:47.883903 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Jul 14 22:05:47.883910 kernel: arm-pv: using stolen time PV
Jul 14 22:05:47.883917 kernel: Console: colour dummy device 80x25
Jul 14 22:05:47.883924 kernel: ACPI: Core revision 20230628
Jul 14 22:05:47.883931 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Jul 14 22:05:47.883938 kernel: pid_max: default: 32768 minimum: 301
Jul 14 22:05:47.883945 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jul 14 22:05:47.883953 kernel: landlock: Up and running.
Jul 14 22:05:47.883960 kernel: SELinux: Initializing.
Jul 14 22:05:47.883967 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jul 14 22:05:47.883974 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jul 14 22:05:47.883981 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jul 14 22:05:47.883988 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jul 14 22:05:47.883995 kernel: rcu: Hierarchical SRCU implementation.
Jul 14 22:05:47.884002 kernel: rcu: Max phase no-delay instances is 400.
Jul 14 22:05:47.884008 kernel: Platform MSI: ITS@0x8080000 domain created
Jul 14 22:05:47.884016 kernel: PCI/MSI: ITS@0x8080000 domain created
Jul 14 22:05:47.884023 kernel: Remapping and enabling EFI services.
Jul 14 22:05:47.884030 kernel: smp: Bringing up secondary CPUs ...
Jul 14 22:05:47.884037 kernel: Detected PIPT I-cache on CPU1
Jul 14 22:05:47.884044 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Jul 14 22:05:47.884051 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040110000
Jul 14 22:05:47.884058 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 14 22:05:47.884064 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Jul 14 22:05:47.884071 kernel: Detected PIPT I-cache on CPU2
Jul 14 22:05:47.884078 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
Jul 14 22:05:47.884087 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040120000
Jul 14 22:05:47.884094 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 14 22:05:47.884104 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
Jul 14 22:05:47.884113 kernel: Detected PIPT I-cache on CPU3
Jul 14 22:05:47.884120 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
Jul 14 22:05:47.884128 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040130000
Jul 14 22:05:47.884135 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 14 22:05:47.884142 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
Jul 14 22:05:47.884150 kernel: smp: Brought up 1 node, 4 CPUs
Jul 14 22:05:47.884158 kernel: SMP: Total of 4 processors activated.
Jul 14 22:05:47.884165 kernel: CPU features: detected: 32-bit EL0 Support
Jul 14 22:05:47.884172 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Jul 14 22:05:47.884180 kernel: CPU features: detected: Common not Private translations
Jul 14 22:05:47.884187 kernel: CPU features: detected: CRC32 instructions
Jul 14 22:05:47.884194 kernel: CPU features: detected: Enhanced Virtualization Traps
Jul 14 22:05:47.884201 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Jul 14 22:05:47.884208 kernel: CPU features: detected: LSE atomic instructions
Jul 14 22:05:47.884217 kernel: CPU features: detected: Privileged Access Never
Jul 14 22:05:47.884224 kernel: CPU features: detected: RAS Extension Support
Jul 14 22:05:47.884231 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Jul 14 22:05:47.884239 kernel: CPU: All CPU(s) started at EL1
Jul 14 22:05:47.884246 kernel: alternatives: applying system-wide alternatives
Jul 14 22:05:47.884253 kernel: devtmpfs: initialized
Jul 14 22:05:47.884260 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jul 14 22:05:47.884268 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Jul 14 22:05:47.884275 kernel: pinctrl core: initialized pinctrl subsystem
Jul 14 22:05:47.884283 kernel: SMBIOS 3.0.0 present.
Jul 14 22:05:47.884291 kernel: DMI: QEMU KVM Virtual Machine, BIOS edk2-20230524-3.fc38 05/24/2023
Jul 14 22:05:47.884298 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jul 14 22:05:47.884305 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Jul 14 22:05:47.884313 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Jul 14 22:05:47.884320 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Jul 14 22:05:47.884327 kernel: audit: initializing netlink subsys (disabled)
Jul 14 22:05:47.884335 kernel: audit: type=2000 audit(0.024:1): state=initialized audit_enabled=0 res=1
Jul 14 22:05:47.884342 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jul 14 22:05:47.884350 kernel: cpuidle: using governor menu
Jul 14 22:05:47.884358 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Jul 14 22:05:47.884365 kernel: ASID allocator initialised with 32768 entries
Jul 14 22:05:47.884372 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jul 14 22:05:47.884380 kernel: Serial: AMBA PL011 UART driver
Jul 14 22:05:47.884387 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Jul 14 22:05:47.884394 kernel: Modules: 0 pages in range for non-PLT usage
Jul 14 22:05:47.884415 kernel: Modules: 509008 pages in range for PLT usage
Jul 14 22:05:47.884424 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jul 14 22:05:47.884433 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Jul 14 22:05:47.884440 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Jul 14 22:05:47.884448 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Jul 14 22:05:47.884455 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jul 14 22:05:47.884462 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Jul 14 22:05:47.884469 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Jul 14 22:05:47.884477 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Jul 14 22:05:47.884484 kernel: ACPI: Added _OSI(Module Device)
Jul 14 22:05:47.884491 kernel: ACPI: Added _OSI(Processor Device)
Jul 14 22:05:47.884500 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jul 14 22:05:47.884507 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jul 14 22:05:47.884514 kernel: ACPI: Interpreter enabled
Jul 14 22:05:47.884522 kernel: ACPI: Using GIC for interrupt routing
Jul 14 22:05:47.884529 kernel: ACPI: MCFG table detected, 1 entries
Jul 14 22:05:47.884536 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Jul 14 22:05:47.884547 kernel: printk: console [ttyAMA0] enabled
Jul 14 22:05:47.884556 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jul 14 22:05:47.884683 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jul 14 22:05:47.884756 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Jul 14 22:05:47.884819 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Jul 14 22:05:47.884948 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Jul 14 22:05:47.885012 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Jul 14 22:05:47.885022 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Jul 14 22:05:47.885030 kernel: PCI host bridge to bus 0000:00
Jul 14 22:05:47.885096 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Jul 14 22:05:47.885157 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Jul 14 22:05:47.885213 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Jul 14 22:05:47.885268 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jul 14 22:05:47.885348 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000
Jul 14 22:05:47.885420 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00
Jul 14 22:05:47.885485 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f]
Jul 14 22:05:47.885567 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff]
Jul 14 22:05:47.885640 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref]
Jul 14 22:05:47.885705 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref]
Jul 14 22:05:47.885769 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff]
Jul 14 22:05:47.885833 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f]
Jul 14 22:05:47.885903 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Jul 14 22:05:47.885960 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Jul 14 22:05:47.886020 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Jul 14 22:05:47.886029 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Jul 14 22:05:47.886037 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Jul 14 22:05:47.886045 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Jul 14 22:05:47.886052 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Jul 14 22:05:47.886059 kernel: iommu: Default domain type: Translated
Jul 14 22:05:47.886067 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Jul 14 22:05:47.886074 kernel: efivars: Registered efivars operations
Jul 14 22:05:47.886081 kernel: vgaarb: loaded
Jul 14 22:05:47.886090 kernel: clocksource: Switched to clocksource arch_sys_counter
Jul 14 22:05:47.886097 kernel: VFS: Disk quotas dquot_6.6.0
Jul 14 22:05:47.886105 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jul 14 22:05:47.886112 kernel: pnp: PnP ACPI init
Jul 14 22:05:47.886183 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Jul 14 22:05:47.886194 kernel: pnp: PnP ACPI: found 1 devices
Jul 14 22:05:47.886202 kernel: NET: Registered PF_INET protocol family
Jul 14 22:05:47.886209 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jul 14 22:05:47.886218 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jul 14 22:05:47.886226 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jul 14 22:05:47.886233 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jul 14 22:05:47.886240 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jul 14 22:05:47.886248 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jul 14 22:05:47.886255 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jul 14 22:05:47.886262 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jul 14 22:05:47.886270 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jul 14 22:05:47.886277 kernel: PCI: CLS 0 bytes, default 64
Jul 14 22:05:47.886286 kernel: kvm [1]: HYP mode not available
Jul 14 22:05:47.886293 kernel: Initialise system trusted keyrings
Jul 14 22:05:47.886300 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jul 14 22:05:47.886307 kernel: Key type asymmetric registered
Jul 14 22:05:47.886315 kernel: Asymmetric key parser 'x509' registered
Jul 14 22:05:47.886322 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Jul 14 22:05:47.886329 kernel: io scheduler mq-deadline registered
Jul 14 22:05:47.886337 kernel: io scheduler kyber registered
Jul 14 22:05:47.886344 kernel: io scheduler bfq registered
Jul 14 22:05:47.886353 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Jul 14 22:05:47.886361 kernel: ACPI: button: Power Button [PWRB]
Jul 14 22:05:47.886368 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Jul 14 22:05:47.886433 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
Jul 14 22:05:47.886442 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jul 14 22:05:47.886450 kernel: thunder_xcv, ver 1.0
Jul 14 22:05:47.886457 kernel: thunder_bgx, ver 1.0
Jul 14 22:05:47.886464 kernel: nicpf, ver 1.0
Jul 14 22:05:47.886471 kernel: nicvf, ver 1.0
Jul 14 22:05:47.886551 kernel: rtc-efi rtc-efi.0: registered as rtc0
Jul 14 22:05:47.886619 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-07-14T22:05:47 UTC (1752530747)
Jul 14 22:05:47.886629 kernel: hid: raw HID events driver (C) Jiri Kosina
Jul 14 22:05:47.886636 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available
Jul 14 22:05:47.886644 kernel: watchdog: Delayed init of the lockup detector failed: -19
Jul 14 22:05:47.886651 kernel: watchdog: Hard watchdog permanently disabled
Jul 14 22:05:47.886658 kernel: NET: Registered PF_INET6 protocol family
Jul 14 22:05:47.886666 kernel: Segment Routing with IPv6
Jul 14 22:05:47.886676 kernel: In-situ OAM (IOAM) with IPv6
Jul 14 22:05:47.886683 kernel: NET: Registered PF_PACKET protocol family
Jul 14 22:05:47.886690 kernel: Key type dns_resolver registered
Jul 14 22:05:47.886698 kernel: registered taskstats version 1
Jul 14 22:05:47.886705 kernel: Loading compiled-in X.509 certificates
Jul 14 22:05:47.886712 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.97-flatcar: 0878f879bf0f15203fd920e9f7d6346db298c301'
Jul 14 22:05:47.886719 kernel: Key type .fscrypt registered
Jul 14 22:05:47.886727 kernel: Key type fscrypt-provisioning registered
Jul 14 22:05:47.886734 kernel: ima: No TPM chip found, activating TPM-bypass!
Jul 14 22:05:47.886742 kernel: ima: Allocated hash algorithm: sha1
Jul 14 22:05:47.886750 kernel: ima: No architecture policies found
Jul 14 22:05:47.886757 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Jul 14 22:05:47.886765 kernel: clk: Disabling unused clocks
Jul 14 22:05:47.886772 kernel: Freeing unused kernel memory: 39424K
Jul 14 22:05:47.886779 kernel: Run /init as init process
Jul 14 22:05:47.886787 kernel: with arguments:
Jul 14 22:05:47.886794 kernel: /init
Jul 14 22:05:47.886801 kernel: with environment:
Jul 14 22:05:47.886810 kernel: HOME=/
Jul 14 22:05:47.886818 kernel: TERM=linux
Jul 14 22:05:47.886825 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jul 14 22:05:47.886855 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jul 14 22:05:47.886872 systemd[1]: Detected virtualization kvm.
Jul 14 22:05:47.886881 systemd[1]: Detected architecture arm64.
Jul 14 22:05:47.886888 systemd[1]: Running in initrd.
Jul 14 22:05:47.886899 systemd[1]: No hostname configured, using default hostname.
Jul 14 22:05:47.886907 systemd[1]: Hostname set to .
Jul 14 22:05:47.886915 systemd[1]: Initializing machine ID from VM UUID.
Jul 14 22:05:47.886923 systemd[1]: Queued start job for default target initrd.target.
Jul 14 22:05:47.886932 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 14 22:05:47.886940 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 14 22:05:47.886948 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jul 14 22:05:47.886956 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jul 14 22:05:47.886965 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jul 14 22:05:47.886974 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jul 14 22:05:47.886983 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jul 14 22:05:47.886992 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jul 14 22:05:47.886999 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 14 22:05:47.887007 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jul 14 22:05:47.887016 systemd[1]: Reached target paths.target - Path Units.
Jul 14 22:05:47.887025 systemd[1]: Reached target slices.target - Slice Units.
Jul 14 22:05:47.887033 systemd[1]: Reached target swap.target - Swaps.
Jul 14 22:05:47.887040 systemd[1]: Reached target timers.target - Timer Units.
Jul 14 22:05:47.887048 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jul 14 22:05:47.887056 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jul 14 22:05:47.887064 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jul 14 22:05:47.887071 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jul 14 22:05:47.887079 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jul 14 22:05:47.887087 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jul 14 22:05:47.887096 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 14 22:05:47.887104 systemd[1]: Reached target sockets.target - Socket Units.
Jul 14 22:05:47.887112 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jul 14 22:05:47.887120 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jul 14 22:05:47.887128 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jul 14 22:05:47.887136 systemd[1]: Starting systemd-fsck-usr.service...
Jul 14 22:05:47.887144 systemd[1]: Starting systemd-journald.service - Journal Service...
Jul 14 22:05:47.887152 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jul 14 22:05:47.887161 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 14 22:05:47.887169 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jul 14 22:05:47.887177 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 14 22:05:47.887185 systemd[1]: Finished systemd-fsck-usr.service.
Jul 14 22:05:47.887194 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jul 14 22:05:47.887203 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 14 22:05:47.887212 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jul 14 22:05:47.887238 systemd-journald[238]: Collecting audit messages is disabled.
Jul 14 22:05:47.887260 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jul 14 22:05:47.887270 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jul 14 22:05:47.887279 systemd-journald[238]: Journal started
Jul 14 22:05:47.887297 systemd-journald[238]: Runtime Journal (/run/log/journal/a643a64192a34b9e97bfd92d74015aea) is 5.9M, max 47.3M, 41.4M free.
Jul 14 22:05:47.874215 systemd-modules-load[239]: Inserted module 'overlay'
Jul 14 22:05:47.888921 systemd[1]: Started systemd-journald.service - Journal Service.
Jul 14 22:05:47.892793 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jul 14 22:05:47.893008 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jul 14 22:05:47.895566 kernel: Bridge firewalling registered
Jul 14 22:05:47.893421 systemd-modules-load[239]: Inserted module 'br_netfilter'
Jul 14 22:05:47.896382 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jul 14 22:05:47.898172 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 14 22:05:47.902056 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jul 14 22:05:47.909080 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jul 14 22:05:47.910010 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 14 22:05:47.912130 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jul 14 22:05:47.917643 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jul 14 22:05:47.925815 dracut-cmdline[275]: dracut-dracut-053
Jul 14 22:05:47.928710 dracut-cmdline[275]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=219fd31147cccfc1f4834c1854a4109714661cabce52e86d5c93000af393c45b
Jul 14 22:05:47.926980 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jul 14 22:05:47.954204 systemd-resolved[280]: Positive Trust Anchors:
Jul 14 22:05:47.954218 systemd-resolved[280]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jul 14 22:05:47.954252 systemd-resolved[280]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jul 14 22:05:47.958828 systemd-resolved[280]: Defaulting to hostname 'linux'.
Jul 14 22:05:47.959765 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jul 14 22:05:47.961817 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jul 14 22:05:47.997870 kernel: SCSI subsystem initialized
Jul 14 22:05:48.004859 kernel: Loading iSCSI transport class v2.0-870.
Jul 14 22:05:48.013868 kernel: iscsi: registered transport (tcp)
Jul 14 22:05:48.023927 kernel: iscsi: registered transport (qla4xxx)
Jul 14 22:05:48.023959 kernel: QLogic iSCSI HBA Driver
Jul 14 22:05:48.064893 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jul 14 22:05:48.072968 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jul 14 22:05:48.089154 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jul 14 22:05:48.089200 kernel: device-mapper: uevent: version 1.0.3
Jul 14 22:05:48.089211 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Jul 14 22:05:48.135862 kernel: raid6: neonx8 gen() 15691 MB/s
Jul 14 22:05:48.152860 kernel: raid6: neonx4 gen() 15571 MB/s
Jul 14 22:05:48.169858 kernel: raid6: neonx2 gen() 13177 MB/s
Jul 14 22:05:48.186858 kernel: raid6: neonx1 gen() 10417 MB/s
Jul 14 22:05:48.203859 kernel: raid6: int64x8 gen() 6919 MB/s
Jul 14 22:05:48.220860 kernel: raid6: int64x4 gen() 7286 MB/s
Jul 14 22:05:48.237858 kernel: raid6: int64x2 gen() 6093 MB/s
Jul 14 22:05:48.254857 kernel: raid6: int64x1 gen() 5036 MB/s
Jul 14 22:05:48.254877 kernel: raid6: using algorithm neonx8 gen() 15691 MB/s
Jul 14 22:05:48.271861 kernel: raid6: .... xor() 11885 MB/s, rmw enabled
Jul 14 22:05:48.271879 kernel: raid6: using neon recovery algorithm
Jul 14 22:05:48.276859 kernel: xor: measuring software checksum speed
Jul 14 22:05:48.276873 kernel: 8regs : 19831 MB/sec
Jul 14 22:05:48.276882 kernel: 32regs : 18452 MB/sec
Jul 14 22:05:48.278138 kernel: arm64_neon : 27105 MB/sec
Jul 14 22:05:48.278151 kernel: xor: using function: arm64_neon (27105 MB/sec)
Jul 14 22:05:48.327876 kernel: Btrfs loaded, zoned=no, fsverity=no
Jul 14 22:05:48.337257 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jul 14 22:05:48.349001 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 14 22:05:48.359129 systemd-udevd[461]: Using default interface naming scheme 'v255'.
Jul 14 22:05:48.362231 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 14 22:05:48.365982 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jul 14 22:05:48.378530 dracut-pre-trigger[467]: rd.md=0: removing MD RAID activation
Jul 14 22:05:48.402760 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jul 14 22:05:48.414032 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jul 14 22:05:48.455024 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 14 22:05:48.465510 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jul 14 22:05:48.474700 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jul 14 22:05:48.475936 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jul 14 22:05:48.476983 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 14 22:05:48.478457 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jul 14 22:05:48.488002 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jul 14 22:05:48.497723 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jul 14 22:05:48.501171 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues
Jul 14 22:05:48.501317 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Jul 14 22:05:48.504617 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jul 14 22:05:48.504652 kernel: GPT:9289727 != 19775487
Jul 14 22:05:48.504662 kernel: GPT:Alternate GPT header not at the end of the disk.
Jul 14 22:05:48.504129 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jul 14 22:05:48.508175 kernel: GPT:9289727 != 19775487
Jul 14 22:05:48.508192 kernel: GPT: Use GNU Parted to correct GPT errors.
Jul 14 22:05:48.508209 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jul 14 22:05:48.504235 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 14 22:05:48.506166 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jul 14 22:05:48.507630 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jul 14 22:05:48.507752 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jul 14 22:05:48.508912 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jul 14 22:05:48.515049 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 14 22:05:48.528784 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 14 22:05:48.531629 kernel: BTRFS: device fsid a239cc51-2249-4f1a-8861-421a0d84a369 devid 1 transid 36 /dev/vda3 scanned by (udev-worker) (507)
Jul 14 22:05:48.531656 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by (udev-worker) (509)
Jul 14 22:05:48.536486 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Jul 14 22:05:48.540740 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Jul 14 22:05:48.546911 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Jul 14 22:05:48.547774 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Jul 14 22:05:48.552692 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jul 14 22:05:48.571049 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jul 14 22:05:48.575026 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jul 14 22:05:48.577345 disk-uuid[553]: Primary Header is updated.
Jul 14 22:05:48.577345 disk-uuid[553]: Secondary Entries is updated.
Jul 14 22:05:48.577345 disk-uuid[553]: Secondary Header is updated.
Jul 14 22:05:48.579888 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jul 14 22:05:48.595689 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 14 22:05:49.589877 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jul 14 22:05:49.590204 disk-uuid[554]: The operation has completed successfully. Jul 14 22:05:49.611179 systemd[1]: disk-uuid.service: Deactivated successfully. Jul 14 22:05:49.611271 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jul 14 22:05:49.639056 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jul 14 22:05:49.641708 sh[576]: Success Jul 14 22:05:49.654877 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Jul 14 22:05:49.681702 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jul 14 22:05:49.696255 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jul 14 22:05:49.698870 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Jul 14 22:05:49.707481 kernel: BTRFS info (device dm-0): first mount of filesystem a239cc51-2249-4f1a-8861-421a0d84a369 Jul 14 22:05:49.707520 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Jul 14 22:05:49.707532 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Jul 14 22:05:49.707548 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jul 14 22:05:49.708856 kernel: BTRFS info (device dm-0): using free space tree Jul 14 22:05:49.711618 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jul 14 22:05:49.712669 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jul 14 22:05:49.720987 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jul 14 22:05:49.722187 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... 
Jul 14 22:05:49.729323 kernel: BTRFS info (device vda6): first mount of filesystem a813e27e-7b70-4c75-b1e9-ccef805dad93 Jul 14 22:05:49.729365 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Jul 14 22:05:49.729376 kernel: BTRFS info (device vda6): using free space tree Jul 14 22:05:49.731877 kernel: BTRFS info (device vda6): auto enabling async discard Jul 14 22:05:49.738602 systemd[1]: mnt-oem.mount: Deactivated successfully. Jul 14 22:05:49.739968 kernel: BTRFS info (device vda6): last unmount of filesystem a813e27e-7b70-4c75-b1e9-ccef805dad93 Jul 14 22:05:49.745470 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jul 14 22:05:49.753019 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jul 14 22:05:49.812574 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jul 14 22:05:49.820020 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jul 14 22:05:49.843270 systemd-networkd[768]: lo: Link UP Jul 14 22:05:49.843281 systemd-networkd[768]: lo: Gained carrier Jul 14 22:05:49.843999 systemd-networkd[768]: Enumeration completed Jul 14 22:05:49.844301 systemd[1]: Started systemd-networkd.service - Network Configuration. Jul 14 22:05:49.844455 systemd-networkd[768]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 14 22:05:49.844458 systemd-networkd[768]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 14 22:05:49.845303 systemd-networkd[768]: eth0: Link UP Jul 14 22:05:49.848486 ignition[669]: Ignition 2.19.0 Jul 14 22:05:49.845306 systemd-networkd[768]: eth0: Gained carrier Jul 14 22:05:49.848493 ignition[669]: Stage: fetch-offline Jul 14 22:05:49.845313 systemd-networkd[768]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Jul 14 22:05:49.848523 ignition[669]: no configs at "/usr/lib/ignition/base.d" Jul 14 22:05:49.845569 systemd[1]: Reached target network.target - Network. Jul 14 22:05:49.848531 ignition[669]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 14 22:05:49.848688 ignition[669]: parsed url from cmdline: "" Jul 14 22:05:49.848691 ignition[669]: no config URL provided Jul 14 22:05:49.848696 ignition[669]: reading system config file "/usr/lib/ignition/user.ign" Jul 14 22:05:49.848702 ignition[669]: no config at "/usr/lib/ignition/user.ign" Jul 14 22:05:49.848722 ignition[669]: op(1): [started] loading QEMU firmware config module Jul 14 22:05:49.848727 ignition[669]: op(1): executing: "modprobe" "qemu_fw_cfg" Jul 14 22:05:49.854428 ignition[669]: op(1): [finished] loading QEMU firmware config module Jul 14 22:05:49.876889 systemd-networkd[768]: eth0: DHCPv4 address 10.0.0.114/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jul 14 22:05:49.894613 ignition[669]: parsing config with SHA512: e01f8fea085e56293db8802d106a6c520fc17839492589301f1557a00f4a94a2381e1c7c39cf78b6270092ccbcbbf212f4cf12bafc1a2db4c0397bc6a42a7da9 Jul 14 22:05:49.898754 unknown[669]: fetched base config from "system" Jul 14 22:05:49.898763 unknown[669]: fetched user config from "qemu" Jul 14 22:05:49.900184 ignition[669]: fetch-offline: fetch-offline passed Jul 14 22:05:49.900282 ignition[669]: Ignition finished successfully Jul 14 22:05:49.902323 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jul 14 22:05:49.903310 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Jul 14 22:05:49.912043 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... 
Jul 14 22:05:49.922042 ignition[775]: Ignition 2.19.0 Jul 14 22:05:49.922051 ignition[775]: Stage: kargs Jul 14 22:05:49.922204 ignition[775]: no configs at "/usr/lib/ignition/base.d" Jul 14 22:05:49.922217 ignition[775]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 14 22:05:49.923129 ignition[775]: kargs: kargs passed Jul 14 22:05:49.923168 ignition[775]: Ignition finished successfully Jul 14 22:05:49.925623 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jul 14 22:05:49.937027 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jul 14 22:05:49.946509 ignition[783]: Ignition 2.19.0 Jul 14 22:05:49.946520 ignition[783]: Stage: disks Jul 14 22:05:49.946670 ignition[783]: no configs at "/usr/lib/ignition/base.d" Jul 14 22:05:49.946679 ignition[783]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 14 22:05:49.947627 ignition[783]: disks: disks passed Jul 14 22:05:49.947673 ignition[783]: Ignition finished successfully Jul 14 22:05:49.950184 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jul 14 22:05:49.951531 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jul 14 22:05:49.953948 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jul 14 22:05:49.955381 systemd[1]: Reached target local-fs.target - Local File Systems. Jul 14 22:05:49.956805 systemd[1]: Reached target sysinit.target - System Initialization. Jul 14 22:05:49.958111 systemd[1]: Reached target basic.target - Basic System. Jul 14 22:05:49.978005 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jul 14 22:05:49.986015 systemd-resolved[280]: Detected conflict on linux IN A 10.0.0.114 Jul 14 22:05:49.986030 systemd-resolved[280]: Hostname conflict, changing published hostname from 'linux' to 'linux8'. 
Jul 14 22:05:49.988078 systemd-fsck[793]: ROOT: clean, 14/553520 files, 52654/553472 blocks Jul 14 22:05:49.990462 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jul 14 22:05:49.992532 systemd[1]: Mounting sysroot.mount - /sysroot... Jul 14 22:05:50.035857 kernel: EXT4-fs (vda9): mounted filesystem a9f35e2f-e295-4589-8fb4-4b611a8bb71c r/w with ordered data mode. Quota mode: none. Jul 14 22:05:50.036632 systemd[1]: Mounted sysroot.mount - /sysroot. Jul 14 22:05:50.037707 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jul 14 22:05:50.046942 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jul 14 22:05:50.048805 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jul 14 22:05:50.049624 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Jul 14 22:05:50.049664 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jul 14 22:05:50.049686 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jul 14 22:05:50.054579 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jul 14 22:05:50.056004 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jul 14 22:05:50.059221 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by mount (801) Jul 14 22:05:50.059252 kernel: BTRFS info (device vda6): first mount of filesystem a813e27e-7b70-4c75-b1e9-ccef805dad93 Jul 14 22:05:50.059263 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Jul 14 22:05:50.059273 kernel: BTRFS info (device vda6): using free space tree Jul 14 22:05:50.062881 kernel: BTRFS info (device vda6): auto enabling async discard Jul 14 22:05:50.063705 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jul 14 22:05:50.106037 initrd-setup-root[826]: cut: /sysroot/etc/passwd: No such file or directory Jul 14 22:05:50.110227 initrd-setup-root[833]: cut: /sysroot/etc/group: No such file or directory Jul 14 22:05:50.113871 initrd-setup-root[840]: cut: /sysroot/etc/shadow: No such file or directory Jul 14 22:05:50.118011 initrd-setup-root[847]: cut: /sysroot/etc/gshadow: No such file or directory Jul 14 22:05:50.186571 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jul 14 22:05:50.202973 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jul 14 22:05:50.205097 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jul 14 22:05:50.209869 kernel: BTRFS info (device vda6): last unmount of filesystem a813e27e-7b70-4c75-b1e9-ccef805dad93 Jul 14 22:05:50.223243 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jul 14 22:05:50.225695 ignition[915]: INFO : Ignition 2.19.0 Jul 14 22:05:50.225695 ignition[915]: INFO : Stage: mount Jul 14 22:05:50.226899 ignition[915]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 14 22:05:50.226899 ignition[915]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 14 22:05:50.226899 ignition[915]: INFO : mount: mount passed Jul 14 22:05:50.226899 ignition[915]: INFO : Ignition finished successfully Jul 14 22:05:50.228148 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jul 14 22:05:50.236940 systemd[1]: Starting ignition-files.service - Ignition (files)... Jul 14 22:05:50.706399 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jul 14 22:05:50.715020 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... 
Jul 14 22:05:50.719881 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by mount (928)
Jul 14 22:05:50.721423 kernel: BTRFS info (device vda6): first mount of filesystem a813e27e-7b70-4c75-b1e9-ccef805dad93
Jul 14 22:05:50.721459 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Jul 14 22:05:50.721471 kernel: BTRFS info (device vda6): using free space tree
Jul 14 22:05:50.723876 kernel: BTRFS info (device vda6): auto enabling async discard
Jul 14 22:05:50.724918 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jul 14 22:05:50.739719 ignition[945]: INFO : Ignition 2.19.0
Jul 14 22:05:50.739719 ignition[945]: INFO : Stage: files
Jul 14 22:05:50.740863 ignition[945]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 14 22:05:50.740863 ignition[945]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 14 22:05:50.740863 ignition[945]: DEBUG : files: compiled without relabeling support, skipping
Jul 14 22:05:50.743280 ignition[945]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jul 14 22:05:50.743280 ignition[945]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jul 14 22:05:50.746079 ignition[945]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jul 14 22:05:50.747054 ignition[945]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jul 14 22:05:50.747054 ignition[945]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jul 14 22:05:50.746624 unknown[945]: wrote ssh authorized keys file for user: core
Jul 14 22:05:50.749675 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1"
Jul 14 22:05:50.749675 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1"
Jul 14 22:05:50.749675 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Jul 14 22:05:50.749675 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1
Jul 14 22:05:51.011067 systemd-networkd[768]: eth0: Gained IPv6LL
Jul 14 22:06:00.809821 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET error: Get "https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz": dial tcp 13.107.253.57:443: connect: connection refused
Jul 14 22:06:01.010198 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #2
Jul 14 22:06:11.123089 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Jul 14 22:06:11.255304 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Jul 14 22:06:11.255304 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Jul 14 22:06:11.255304 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1
Jul 14 22:06:31.626312 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET result: OK
Jul 14 22:06:31.730248 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Jul 14 22:06:31.731704 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/install.sh"
Jul 14 22:06:31.731704 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/install.sh"
Jul 14 22:06:31.731704 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nginx.yaml"
Jul 14 22:06:31.731704 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jul 14 22:06:31.731704 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jul 14 22:06:31.731704 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jul 14 22:06:31.731704 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jul 14 22:06:31.731704 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jul 14 22:06:31.731704 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jul 14 22:06:31.731704 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jul 14 22:06:31.731704 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw"
Jul 14 22:06:31.731704 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw"
Jul 14 22:06:31.731704 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(c): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw"
Jul 14 22:06:31.731704 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(c): GET https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-arm64.raw: attempt #1
Jul 14 22:06:52.014291 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(c): GET result: OK
Jul 14 22:06:52.307791 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(c): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw"
Jul 14 22:06:52.307791 ignition[945]: INFO : files: op(d): [started] processing unit "containerd.service"
Jul 14 22:06:52.310234 ignition[945]: INFO : files: op(d): op(e): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Jul 14 22:06:52.310234 ignition[945]: INFO : files: op(d): op(e): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Jul 14 22:06:52.310234 ignition[945]: INFO : files: op(d): [finished] processing unit "containerd.service"
Jul 14 22:06:52.310234 ignition[945]: INFO : files: op(f): [started] processing unit "prepare-helm.service"
Jul 14 22:06:52.310234 ignition[945]: INFO : files: op(f): op(10): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jul 14 22:06:52.310234 ignition[945]: INFO : files: op(f): op(10): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jul 14 22:06:52.310234 ignition[945]: INFO : files: op(f): [finished] processing unit "prepare-helm.service"
Jul 14 22:06:52.310234 ignition[945]: INFO : files: op(11): [started] processing unit "coreos-metadata.service"
Jul 14 22:06:52.310234 ignition[945]: INFO : files: op(11): op(12): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Jul 14 22:06:52.310234 ignition[945]: INFO : files: op(11): op(12): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Jul 14 22:06:52.310234 ignition[945]: INFO : files: op(11): [finished] processing unit "coreos-metadata.service"
Jul 14 22:06:52.310234 ignition[945]: INFO : files: op(13): [started] setting preset to disabled for "coreos-metadata.service"
Jul 14 22:06:52.340281 ignition[945]: INFO : files: op(13): op(14): [started] removing enablement symlink(s) for "coreos-metadata.service"
Jul 14 22:06:52.344893 ignition[945]: INFO : files: op(13): op(14): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Jul 14 22:06:52.345936 ignition[945]: INFO : files: op(13): [finished] setting preset to disabled for "coreos-metadata.service"
Jul 14 22:06:52.345936 ignition[945]: INFO : files: op(15): [started] setting preset to enabled for "prepare-helm.service"
Jul 14 22:06:52.345936 ignition[945]: INFO : files: op(15): [finished] setting preset to enabled for "prepare-helm.service"
Jul 14 22:06:52.345936 ignition[945]: INFO : files: createResultFile: createFiles: op(16): [started] writing file "/sysroot/etc/.ignition-result.json"
Jul 14 22:06:52.345936 ignition[945]: INFO : files: createResultFile: createFiles: op(16): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jul 14 22:06:52.345936 ignition[945]: INFO : files: files passed
Jul 14 22:06:52.345936 ignition[945]: INFO : Ignition finished successfully
Jul 14 22:06:52.346799 systemd[1]: Finished ignition-files.service - Ignition (files).
Jul 14 22:06:52.355048 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jul 14 22:06:52.357382 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jul 14 22:06:52.361321 systemd[1]: ignition-quench.service: Deactivated successfully.
Jul 14 22:06:52.362410 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jul 14 22:06:52.366076 initrd-setup-root-after-ignition[974]: grep: /sysroot/oem/oem-release: No such file or directory Jul 14 22:06:52.369045 initrd-setup-root-after-ignition[976]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jul 14 22:06:52.369045 initrd-setup-root-after-ignition[976]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jul 14 22:06:52.371173 initrd-setup-root-after-ignition[980]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jul 14 22:06:52.373298 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jul 14 22:06:52.374289 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jul 14 22:06:52.389088 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jul 14 22:06:52.410967 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jul 14 22:06:52.412902 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jul 14 22:06:52.413921 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jul 14 22:06:52.415246 systemd[1]: Reached target initrd.target - Initrd Default Target. Jul 14 22:06:52.416554 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jul 14 22:06:52.420363 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jul 14 22:06:52.434718 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jul 14 22:06:52.442008 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jul 14 22:06:52.450480 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jul 14 22:06:52.451467 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 14 22:06:52.453219 systemd[1]: Stopped target timers.target - Timer Units. 
Jul 14 22:06:52.454740 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jul 14 22:06:52.454895 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jul 14 22:06:52.457152 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jul 14 22:06:52.458893 systemd[1]: Stopped target basic.target - Basic System. Jul 14 22:06:52.460318 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jul 14 22:06:52.461708 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jul 14 22:06:52.463372 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jul 14 22:06:52.465014 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jul 14 22:06:52.466607 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jul 14 22:06:52.468262 systemd[1]: Stopped target sysinit.target - System Initialization. Jul 14 22:06:52.469905 systemd[1]: Stopped target local-fs.target - Local File Systems. Jul 14 22:06:52.471419 systemd[1]: Stopped target swap.target - Swaps. Jul 14 22:06:52.472721 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jul 14 22:06:52.472833 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jul 14 22:06:52.474897 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jul 14 22:06:52.476535 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jul 14 22:06:52.478125 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jul 14 22:06:52.480142 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jul 14 22:06:52.481127 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jul 14 22:06:52.481232 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jul 14 22:06:52.483909 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. 
Jul 14 22:06:52.484024 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jul 14 22:06:52.485771 systemd[1]: Stopped target paths.target - Path Units. Jul 14 22:06:52.487138 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jul 14 22:06:52.488804 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 14 22:06:52.489774 systemd[1]: Stopped target slices.target - Slice Units. Jul 14 22:06:52.491266 systemd[1]: Stopped target sockets.target - Socket Units. Jul 14 22:06:52.493177 systemd[1]: iscsid.socket: Deactivated successfully. Jul 14 22:06:52.493260 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jul 14 22:06:52.494520 systemd[1]: iscsiuio.socket: Deactivated successfully. Jul 14 22:06:52.494596 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jul 14 22:06:52.495923 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jul 14 22:06:52.496027 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jul 14 22:06:52.497453 systemd[1]: ignition-files.service: Deactivated successfully. Jul 14 22:06:52.497550 systemd[1]: Stopped ignition-files.service - Ignition (files). Jul 14 22:06:52.516957 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jul 14 22:06:52.520756 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jul 14 22:06:52.521617 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jul 14 22:06:52.521750 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jul 14 22:06:52.523336 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jul 14 22:06:52.523447 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jul 14 22:06:52.528677 systemd[1]: initrd-cleanup.service: Deactivated successfully. 
Jul 14 22:06:52.528767 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jul 14 22:06:52.531510 ignition[1001]: INFO : Ignition 2.19.0 Jul 14 22:06:52.531510 ignition[1001]: INFO : Stage: umount Jul 14 22:06:52.531510 ignition[1001]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 14 22:06:52.531510 ignition[1001]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 14 22:06:52.531510 ignition[1001]: INFO : umount: umount passed Jul 14 22:06:52.531510 ignition[1001]: INFO : Ignition finished successfully Jul 14 22:06:52.532469 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jul 14 22:06:52.532925 systemd[1]: ignition-mount.service: Deactivated successfully. Jul 14 22:06:52.534895 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jul 14 22:06:52.536123 systemd[1]: sysroot-boot.service: Deactivated successfully. Jul 14 22:06:52.536199 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jul 14 22:06:52.537834 systemd[1]: Stopped target network.target - Network. Jul 14 22:06:52.539940 systemd[1]: ignition-disks.service: Deactivated successfully. Jul 14 22:06:52.540005 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jul 14 22:06:52.540705 systemd[1]: ignition-kargs.service: Deactivated successfully. Jul 14 22:06:52.540741 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jul 14 22:06:52.541467 systemd[1]: ignition-setup.service: Deactivated successfully. Jul 14 22:06:52.541501 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jul 14 22:06:52.542766 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jul 14 22:06:52.542802 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jul 14 22:06:52.544188 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jul 14 22:06:52.544226 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. 
Jul 14 22:06:52.545743 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Jul 14 22:06:52.546951 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Jul 14 22:06:52.553683 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jul 14 22:06:52.553803 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Jul 14 22:06:52.554899 systemd-networkd[768]: eth0: DHCPv6 lease lost
Jul 14 22:06:52.556162 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jul 14 22:06:52.556211 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Jul 14 22:06:52.557671 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jul 14 22:06:52.557783 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Jul 14 22:06:52.559544 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jul 14 22:06:52.559621 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Jul 14 22:06:52.568957 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Jul 14 22:06:52.569610 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jul 14 22:06:52.569662 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jul 14 22:06:52.571082 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jul 14 22:06:52.571118 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jul 14 22:06:52.572438 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jul 14 22:06:52.572479 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Jul 14 22:06:52.574068 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 14 22:06:52.580428 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jul 14 22:06:52.580576 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 14 22:06:52.582251 systemd[1]: network-cleanup.service: Deactivated successfully.
Jul 14 22:06:52.582332 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Jul 14 22:06:52.583939 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jul 14 22:06:52.583994 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Jul 14 22:06:52.584773 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jul 14 22:06:52.584801 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 14 22:06:52.586012 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jul 14 22:06:52.586051 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Jul 14 22:06:52.587935 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jul 14 22:06:52.587973 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Jul 14 22:06:52.589902 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jul 14 22:06:52.589944 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 14 22:06:52.598984 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Jul 14 22:06:52.599804 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jul 14 22:06:52.599864 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 14 22:06:52.601550 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Jul 14 22:06:52.601589 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jul 14 22:06:52.602983 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jul 14 22:06:52.603018 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 14 22:06:52.604587 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jul 14 22:06:52.604625 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jul 14 22:06:52.606600 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jul 14 22:06:52.607925 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Jul 14 22:06:52.609904 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Jul 14 22:06:52.611526 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Jul 14 22:06:52.627738 systemd[1]: Switching root.
Jul 14 22:06:52.643438 systemd-journald[238]: Journal stopped
Jul 14 22:06:53.377952 systemd-journald[238]: Received SIGTERM from PID 1 (systemd).
Jul 14 22:06:53.378011 kernel: SELinux: policy capability network_peer_controls=1
Jul 14 22:06:53.378023 kernel: SELinux: policy capability open_perms=1
Jul 14 22:06:53.378034 kernel: SELinux: policy capability extended_socket_class=1
Jul 14 22:06:53.378047 kernel: SELinux: policy capability always_check_network=0
Jul 14 22:06:53.378057 kernel: SELinux: policy capability cgroup_seclabel=1
Jul 14 22:06:53.378068 kernel: SELinux: policy capability nnp_nosuid_transition=1
Jul 14 22:06:53.378078 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Jul 14 22:06:53.378088 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Jul 14 22:06:53.378098 kernel: audit: type=1403 audit(1752530812.865:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Jul 14 22:06:53.378112 systemd[1]: Successfully loaded SELinux policy in 32.922ms.
Jul 14 22:06:53.378136 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 9.399ms.
Jul 14 22:06:53.378150 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jul 14 22:06:53.378162 systemd[1]: Detected virtualization kvm.
Jul 14 22:06:53.378173 systemd[1]: Detected architecture arm64.
Jul 14 22:06:53.378184 systemd[1]: Detected first boot.
Jul 14 22:06:53.378195 systemd[1]: Initializing machine ID from VM UUID.
Jul 14 22:06:53.378206 zram_generator::config[1063]: No configuration found.
Jul 14 22:06:53.378218 systemd[1]: Populated /etc with preset unit settings.
Jul 14 22:06:53.378229 systemd[1]: Queued start job for default target multi-user.target.
Jul 14 22:06:53.378240 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Jul 14 22:06:53.378253 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Jul 14 22:06:53.378264 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Jul 14 22:06:53.378275 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Jul 14 22:06:53.378286 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Jul 14 22:06:53.378298 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Jul 14 22:06:53.378309 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Jul 14 22:06:53.378320 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Jul 14 22:06:53.378331 systemd[1]: Created slice user.slice - User and Session Slice.
Jul 14 22:06:53.378351 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 14 22:06:53.378364 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 14 22:06:53.378377 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Jul 14 22:06:53.378388 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Jul 14 22:06:53.378399 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Jul 14 22:06:53.378410 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jul 14 22:06:53.378421 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0...
Jul 14 22:06:53.378432 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 14 22:06:53.378443 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Jul 14 22:06:53.378456 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 14 22:06:53.378467 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jul 14 22:06:53.378479 systemd[1]: Reached target slices.target - Slice Units.
Jul 14 22:06:53.378490 systemd[1]: Reached target swap.target - Swaps.
Jul 14 22:06:53.378501 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Jul 14 22:06:53.378512 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Jul 14 22:06:53.378523 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jul 14 22:06:53.378535 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jul 14 22:06:53.378546 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jul 14 22:06:53.378558 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jul 14 22:06:53.378570 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 14 22:06:53.378581 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Jul 14 22:06:53.378592 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Jul 14 22:06:53.378603 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Jul 14 22:06:53.378616 systemd[1]: Mounting media.mount - External Media Directory...
Jul 14 22:06:53.378627 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Jul 14 22:06:53.378638 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Jul 14 22:06:53.378649 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Jul 14 22:06:53.378661 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Jul 14 22:06:53.378672 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 14 22:06:53.378684 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jul 14 22:06:53.378696 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Jul 14 22:06:53.378708 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jul 14 22:06:53.378725 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jul 14 22:06:53.378740 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jul 14 22:06:53.378756 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Jul 14 22:06:53.378774 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jul 14 22:06:53.378789 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Jul 14 22:06:53.378804 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
Jul 14 22:06:53.378816 systemd[1]: systemd-journald.service: (This warning is only shown for the first unit using IP firewalling.)
Jul 14 22:06:53.378834 systemd[1]: Starting systemd-journald.service - Journal Service...
Jul 14 22:06:53.378887 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jul 14 22:06:53.378900 kernel: fuse: init (API version 7.39)
Jul 14 22:06:53.378910 kernel: loop: module loaded
Jul 14 22:06:53.378921 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jul 14 22:06:53.378938 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Jul 14 22:06:53.378950 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jul 14 22:06:53.378979 systemd-journald[1137]: Collecting audit messages is disabled.
Jul 14 22:06:53.379001 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Jul 14 22:06:53.379014 systemd-journald[1137]: Journal started
Jul 14 22:06:53.379035 systemd-journald[1137]: Runtime Journal (/run/log/journal/a643a64192a34b9e97bfd92d74015aea) is 5.9M, max 47.3M, 41.4M free.
Jul 14 22:06:53.379877 kernel: ACPI: bus type drm_connector registered
Jul 14 22:06:53.386657 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Jul 14 22:06:53.386705 systemd[1]: Started systemd-journald.service - Journal Service.
Jul 14 22:06:53.388143 systemd[1]: Mounted media.mount - External Media Directory.
Jul 14 22:06:53.389256 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Jul 14 22:06:53.390198 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Jul 14 22:06:53.391093 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Jul 14 22:06:53.392076 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 14 22:06:53.393199 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jul 14 22:06:53.393367 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Jul 14 22:06:53.394454 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 14 22:06:53.394596 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jul 14 22:06:53.395634 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jul 14 22:06:53.395780 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jul 14 22:06:53.396785 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 14 22:06:53.396947 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jul 14 22:06:53.398071 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jul 14 22:06:53.398209 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Jul 14 22:06:53.399418 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 14 22:06:53.399606 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jul 14 22:06:53.400690 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jul 14 22:06:53.401833 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jul 14 22:06:53.403028 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Jul 14 22:06:53.413205 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jul 14 22:06:53.424000 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Jul 14 22:06:53.425954 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Jul 14 22:06:53.426775 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Jul 14 22:06:53.431045 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Jul 14 22:06:53.432835 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Jul 14 22:06:53.433659 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jul 14 22:06:53.438994 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Jul 14 22:06:53.439812 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jul 14 22:06:53.443126 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jul 14 22:06:53.451113 systemd-journald[1137]: Time spent on flushing to /var/log/journal/a643a64192a34b9e97bfd92d74015aea is 23.970ms for 850 entries.
Jul 14 22:06:53.451113 systemd-journald[1137]: System Journal (/var/log/journal/a643a64192a34b9e97bfd92d74015aea) is 8.0M, max 195.6M, 187.6M free.
Jul 14 22:06:53.508991 systemd-journald[1137]: Received client request to flush runtime journal.
Jul 14 22:06:53.446914 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jul 14 22:06:53.450131 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Jul 14 22:06:53.455081 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Jul 14 22:06:53.473435 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Jul 14 22:06:53.474576 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Jul 14 22:06:53.479703 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jul 14 22:06:53.480959 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Jul 14 22:06:53.484271 systemd-tmpfiles[1187]: ACLs are not supported, ignoring.
Jul 14 22:06:53.484281 systemd-tmpfiles[1187]: ACLs are not supported, ignoring.
Jul 14 22:06:53.485265 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 14 22:06:53.495136 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Jul 14 22:06:53.496302 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jul 14 22:06:53.501060 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Jul 14 22:06:53.506931 udevadm[1207]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Jul 14 22:06:53.514616 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Jul 14 22:06:53.528962 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Jul 14 22:06:53.539066 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jul 14 22:06:53.551957 systemd-tmpfiles[1218]: ACLs are not supported, ignoring.
Jul 14 22:06:53.551976 systemd-tmpfiles[1218]: ACLs are not supported, ignoring.
Jul 14 22:06:53.555881 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 14 22:06:53.882918 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Jul 14 22:06:53.895028 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 14 22:06:53.914013 systemd-udevd[1224]: Using default interface naming scheme 'v255'.
Jul 14 22:06:53.927314 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 14 22:06:53.936839 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jul 14 22:06:53.941807 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Jul 14 22:06:53.960547 systemd[1]: Found device dev-ttyAMA0.device - /dev/ttyAMA0.
Jul 14 22:06:53.988869 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (1234)
Jul 14 22:06:53.990263 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Jul 14 22:06:54.024431 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jul 14 22:06:54.057992 systemd-networkd[1231]: lo: Link UP
Jul 14 22:06:54.058005 systemd-networkd[1231]: lo: Gained carrier
Jul 14 22:06:54.058691 systemd-networkd[1231]: Enumeration completed
Jul 14 22:06:54.059137 systemd-networkd[1231]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 14 22:06:54.059141 systemd-networkd[1231]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jul 14 22:06:54.059765 systemd-networkd[1231]: eth0: Link UP
Jul 14 22:06:54.059768 systemd-networkd[1231]: eth0: Gained carrier
Jul 14 22:06:54.059780 systemd-networkd[1231]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 14 22:06:54.065093 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 14 22:06:54.066017 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jul 14 22:06:54.068451 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Jul 14 22:06:54.083507 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Jul 14 22:06:54.084948 systemd-networkd[1231]: eth0: DHCPv4 address 10.0.0.114/16, gateway 10.0.0.1 acquired from 10.0.0.1
Jul 14 22:06:54.085979 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Jul 14 22:06:54.105582 lvm[1263]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jul 14 22:06:54.115140 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 14 22:06:54.136355 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Jul 14 22:06:54.137504 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jul 14 22:06:54.149086 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Jul 14 22:06:54.152779 lvm[1270]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jul 14 22:06:54.178387 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Jul 14 22:06:54.179537 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jul 14 22:06:54.180499 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jul 14 22:06:54.180539 systemd[1]: Reached target local-fs.target - Local File Systems.
Jul 14 22:06:54.181286 systemd[1]: Reached target machines.target - Containers.
Jul 14 22:06:54.183003 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Jul 14 22:06:54.202022 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Jul 14 22:06:54.204164 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Jul 14 22:06:54.205029 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 14 22:06:54.205957 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Jul 14 22:06:54.207992 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Jul 14 22:06:54.209957 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Jul 14 22:06:54.211507 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Jul 14 22:06:54.218825 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Jul 14 22:06:54.225865 kernel: loop0: detected capacity change from 0 to 114328
Jul 14 22:06:54.234834 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Jul 14 22:06:54.235665 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Jul 14 22:06:54.241990 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Jul 14 22:06:54.279876 kernel: loop1: detected capacity change from 0 to 203944
Jul 14 22:06:54.318960 kernel: loop2: detected capacity change from 0 to 114432
Jul 14 22:06:54.354893 kernel: loop3: detected capacity change from 0 to 114328
Jul 14 22:06:54.363881 kernel: loop4: detected capacity change from 0 to 203944
Jul 14 22:06:54.370878 kernel: loop5: detected capacity change from 0 to 114432
Jul 14 22:06:54.374136 (sd-merge)[1290]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Jul 14 22:06:54.374530 (sd-merge)[1290]: Merged extensions into '/usr'.
Jul 14 22:06:54.385423 systemd[1]: Reloading requested from client PID 1278 ('systemd-sysext') (unit systemd-sysext.service)...
Jul 14 22:06:54.385440 systemd[1]: Reloading...
Jul 14 22:06:54.428971 zram_generator::config[1318]: No configuration found.
Jul 14 22:06:54.464189 ldconfig[1274]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Jul 14 22:06:54.519780 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 14 22:06:54.565070 systemd[1]: Reloading finished in 179 ms.
Jul 14 22:06:54.580874 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Jul 14 22:06:54.582053 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Jul 14 22:06:54.596076 systemd[1]: Starting ensure-sysext.service...
Jul 14 22:06:54.598069 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jul 14 22:06:54.603504 systemd[1]: Reloading requested from client PID 1359 ('systemctl') (unit ensure-sysext.service)...
Jul 14 22:06:54.603519 systemd[1]: Reloading...
Jul 14 22:06:54.615361 systemd-tmpfiles[1360]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Jul 14 22:06:54.615632 systemd-tmpfiles[1360]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Jul 14 22:06:54.616323 systemd-tmpfiles[1360]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Jul 14 22:06:54.616558 systemd-tmpfiles[1360]: ACLs are not supported, ignoring.
Jul 14 22:06:54.616648 systemd-tmpfiles[1360]: ACLs are not supported, ignoring.
Jul 14 22:06:54.619689 systemd-tmpfiles[1360]: Detected autofs mount point /boot during canonicalization of boot.
Jul 14 22:06:54.619703 systemd-tmpfiles[1360]: Skipping /boot
Jul 14 22:06:54.629916 systemd-tmpfiles[1360]: Detected autofs mount point /boot during canonicalization of boot.
Jul 14 22:06:54.629930 systemd-tmpfiles[1360]: Skipping /boot
Jul 14 22:06:54.642980 zram_generator::config[1389]: No configuration found.
Jul 14 22:06:54.748834 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 14 22:06:54.796059 systemd[1]: Reloading finished in 192 ms.
Jul 14 22:06:54.807630 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jul 14 22:06:54.822195 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Jul 14 22:06:54.824414 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Jul 14 22:06:54.826633 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Jul 14 22:06:54.830557 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jul 14 22:06:54.835055 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Jul 14 22:06:54.841156 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 14 22:06:54.844125 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jul 14 22:06:54.850152 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jul 14 22:06:54.854529 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jul 14 22:06:54.856198 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 14 22:06:54.857672 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Jul 14 22:06:54.859889 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 14 22:06:54.860030 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jul 14 22:06:54.863373 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 14 22:06:54.863567 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jul 14 22:06:54.864933 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 14 22:06:54.865167 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jul 14 22:06:54.871201 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 14 22:06:54.881233 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jul 14 22:06:54.885232 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jul 14 22:06:54.889187 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jul 14 22:06:54.891923 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 14 22:06:54.893991 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Jul 14 22:06:54.898785 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Jul 14 22:06:54.900481 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Jul 14 22:06:54.901825 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 14 22:06:54.902148 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jul 14 22:06:54.903396 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 14 22:06:54.903531 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jul 14 22:06:54.906811 augenrules[1472]: No rules
Jul 14 22:06:54.908653 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 14 22:06:54.908889 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jul 14 22:06:54.909294 systemd-resolved[1435]: Positive Trust Anchors:
Jul 14 22:06:54.912907 systemd-resolved[1435]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jul 14 22:06:54.912944 systemd-resolved[1435]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jul 14 22:06:54.913749 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Jul 14 22:06:54.916556 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 14 22:06:54.921454 systemd-resolved[1435]: Defaulting to hostname 'linux'.
Jul 14 22:06:54.936257 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jul 14 22:06:54.938148 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jul 14 22:06:54.939892 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jul 14 22:06:54.941680 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jul 14 22:06:54.942547 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 14 22:06:54.942688 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jul 14 22:06:54.943255 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jul 14 22:06:54.944643 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Jul 14 22:06:54.946054 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 14 22:06:54.946200 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 14 22:06:54.947651 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 14 22:06:54.947801 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jul 14 22:06:54.949113 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 14 22:06:54.949252 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 14 22:06:54.950532 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 14 22:06:54.950733 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 14 22:06:54.954377 systemd[1]: Finished ensure-sysext.service. Jul 14 22:06:54.957781 systemd[1]: Reached target network.target - Network. Jul 14 22:06:54.958674 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jul 14 22:06:54.959665 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 14 22:06:54.959735 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jul 14 22:06:54.973085 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jul 14 22:06:55.015367 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jul 14 22:06:55.016622 systemd-timesyncd[1504]: Contacted time server 10.0.0.1:123 (10.0.0.1). Jul 14 22:06:55.016671 systemd-timesyncd[1504]: Initial clock synchronization to Mon 2025-07-14 22:06:55.074963 UTC. Jul 14 22:06:55.016740 systemd[1]: Reached target sysinit.target - System Initialization. Jul 14 22:06:55.017595 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. 
Jul 14 22:06:55.018512 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jul 14 22:06:55.019406 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jul 14 22:06:55.020289 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jul 14 22:06:55.020319 systemd[1]: Reached target paths.target - Path Units. Jul 14 22:06:55.020950 systemd[1]: Reached target time-set.target - System Time Set. Jul 14 22:06:55.021815 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jul 14 22:06:55.022684 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jul 14 22:06:55.023602 systemd[1]: Reached target timers.target - Timer Units. Jul 14 22:06:55.025007 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jul 14 22:06:55.027255 systemd[1]: Starting docker.socket - Docker Socket for the API... Jul 14 22:06:55.029205 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jul 14 22:06:55.036930 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jul 14 22:06:55.037715 systemd[1]: Reached target sockets.target - Socket Units. Jul 14 22:06:55.038424 systemd[1]: Reached target basic.target - Basic System. Jul 14 22:06:55.039274 systemd[1]: System is tainted: cgroupsv1 Jul 14 22:06:55.039318 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jul 14 22:06:55.039347 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jul 14 22:06:55.040506 systemd[1]: Starting containerd.service - containerd container runtime... Jul 14 22:06:55.042240 systemd[1]: Starting dbus.service - D-Bus System Message Bus... 
Jul 14 22:06:55.043877 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jul 14 22:06:55.045993 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jul 14 22:06:55.048436 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jul 14 22:06:55.050023 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jul 14 22:06:55.053756 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jul 14 22:06:55.058809 jq[1510]: false Jul 14 22:06:55.063351 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jul 14 22:06:55.069229 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jul 14 22:06:55.070153 extend-filesystems[1512]: Found loop3 Jul 14 22:06:55.071005 extend-filesystems[1512]: Found loop4 Jul 14 22:06:55.071005 extend-filesystems[1512]: Found loop5 Jul 14 22:06:55.071005 extend-filesystems[1512]: Found vda Jul 14 22:06:55.071005 extend-filesystems[1512]: Found vda1 Jul 14 22:06:55.071005 extend-filesystems[1512]: Found vda2 Jul 14 22:06:55.071005 extend-filesystems[1512]: Found vda3 Jul 14 22:06:55.071005 extend-filesystems[1512]: Found usr Jul 14 22:06:55.071005 extend-filesystems[1512]: Found vda4 Jul 14 22:06:55.071005 extend-filesystems[1512]: Found vda6 Jul 14 22:06:55.071005 extend-filesystems[1512]: Found vda7 Jul 14 22:06:55.071005 extend-filesystems[1512]: Found vda9 Jul 14 22:06:55.071005 extend-filesystems[1512]: Checking size of /dev/vda9 Jul 14 22:06:55.075798 systemd[1]: Starting systemd-logind.service - User Login Management... Jul 14 22:06:55.081126 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jul 14 22:06:55.083260 systemd[1]: Starting update-engine.service - Update Engine... 
Jul 14 22:06:55.088240 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jul 14 22:06:55.093978 jq[1535]: true Jul 14 22:06:55.094879 extend-filesystems[1512]: Resized partition /dev/vda9 Jul 14 22:06:55.095429 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jul 14 22:06:55.095695 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jul 14 22:06:55.095982 systemd[1]: motdgen.service: Deactivated successfully. Jul 14 22:06:55.096175 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jul 14 22:06:55.100384 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jul 14 22:06:55.100601 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jul 14 22:06:55.111121 extend-filesystems[1538]: resize2fs 1.47.1 (20-May-2024) Jul 14 22:06:55.113710 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (1235) Jul 14 22:06:55.116943 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Jul 14 22:06:55.118115 (ntainerd)[1543]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jul 14 22:06:55.120443 jq[1542]: true Jul 14 22:06:55.138691 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jul 14 22:06:55.138513 dbus-daemon[1509]: [system] SELinux support is enabled Jul 14 22:06:55.143652 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jul 14 22:06:55.143678 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. 
Jul 14 22:06:55.147705 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jul 14 22:06:55.147735 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jul 14 22:06:55.149874 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Jul 14 22:06:55.153637 tar[1540]: linux-arm64/helm Jul 14 22:06:55.176677 systemd-logind[1523]: Watching system buttons on /dev/input/event0 (Power Button) Jul 14 22:06:55.180088 systemd-logind[1523]: New seat seat0. Jul 14 22:06:55.180784 extend-filesystems[1538]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jul 14 22:06:55.180784 extend-filesystems[1538]: old_desc_blocks = 1, new_desc_blocks = 1 Jul 14 22:06:55.180784 extend-filesystems[1538]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Jul 14 22:06:55.197734 extend-filesystems[1512]: Resized filesystem in /dev/vda9 Jul 14 22:06:55.184013 systemd[1]: extend-filesystems.service: Deactivated successfully. Jul 14 22:06:55.202100 update_engine[1531]: I20250714 22:06:55.180538 1531 main.cc:92] Flatcar Update Engine starting Jul 14 22:06:55.202100 update_engine[1531]: I20250714 22:06:55.193787 1531 update_check_scheduler.cc:74] Next update check in 3m57s Jul 14 22:06:55.184254 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jul 14 22:06:55.192290 systemd[1]: Started systemd-logind.service - User Login Management. Jul 14 22:06:55.200700 systemd[1]: Started update-engine.service - Update Engine. Jul 14 22:06:55.203742 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jul 14 22:06:55.210182 systemd[1]: Started locksmithd.service - Cluster reboot manager. 
Jul 14 22:06:55.212200 bash[1569]: Updated "/home/core/.ssh/authorized_keys" Jul 14 22:06:55.213826 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jul 14 22:06:55.218175 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Jul 14 22:06:55.266692 locksmithd[1578]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jul 14 22:06:55.330974 systemd-networkd[1231]: eth0: Gained IPv6LL Jul 14 22:06:55.339864 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jul 14 22:06:55.341416 systemd[1]: Reached target network-online.target - Network is Online. Jul 14 22:06:55.350135 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Jul 14 22:06:55.355072 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 14 22:06:55.357069 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jul 14 22:06:55.357803 containerd[1543]: time="2025-07-14T22:06:55.357699080Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Jul 14 22:06:55.384746 systemd[1]: coreos-metadata.service: Deactivated successfully. Jul 14 22:06:55.385039 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Jul 14 22:06:55.386319 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jul 14 22:06:55.394861 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jul 14 22:06:55.412082 containerd[1543]: time="2025-07-14T22:06:55.412030680Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jul 14 22:06:55.413631 containerd[1543]: time="2025-07-14T22:06:55.413591800Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." 
error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.97-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jul 14 22:06:55.413663 containerd[1543]: time="2025-07-14T22:06:55.413631960Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jul 14 22:06:55.413663 containerd[1543]: time="2025-07-14T22:06:55.413651120Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jul 14 22:06:55.413826 containerd[1543]: time="2025-07-14T22:06:55.413806560Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jul 14 22:06:55.413879 containerd[1543]: time="2025-07-14T22:06:55.413828400Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jul 14 22:06:55.413926 containerd[1543]: time="2025-07-14T22:06:55.413904760Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jul 14 22:06:55.413926 containerd[1543]: time="2025-07-14T22:06:55.413922840Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jul 14 22:06:55.414152 containerd[1543]: time="2025-07-14T22:06:55.414130040Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jul 14 22:06:55.414152 containerd[1543]: time="2025-07-14T22:06:55.414149840Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." 
type=io.containerd.snapshotter.v1 Jul 14 22:06:55.414202 containerd[1543]: time="2025-07-14T22:06:55.414163440Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jul 14 22:06:55.414202 containerd[1543]: time="2025-07-14T22:06:55.414173600Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jul 14 22:06:55.414269 containerd[1543]: time="2025-07-14T22:06:55.414253480Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jul 14 22:06:55.414489 containerd[1543]: time="2025-07-14T22:06:55.414465520Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jul 14 22:06:55.414614 containerd[1543]: time="2025-07-14T22:06:55.414596440Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jul 14 22:06:55.414641 containerd[1543]: time="2025-07-14T22:06:55.414612920Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jul 14 22:06:55.414710 containerd[1543]: time="2025-07-14T22:06:55.414694240Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jul 14 22:06:55.414755 containerd[1543]: time="2025-07-14T22:06:55.414741280Z" level=info msg="metadata content store policy set" policy=shared Jul 14 22:06:55.422287 containerd[1543]: time="2025-07-14T22:06:55.422257880Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jul 14 22:06:55.422345 containerd[1543]: time="2025-07-14T22:06:55.422309040Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." 
type=io.containerd.differ.v1 Jul 14 22:06:55.422345 containerd[1543]: time="2025-07-14T22:06:55.422325800Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jul 14 22:06:55.422387 containerd[1543]: time="2025-07-14T22:06:55.422347720Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jul 14 22:06:55.422387 containerd[1543]: time="2025-07-14T22:06:55.422373400Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jul 14 22:06:55.423229 containerd[1543]: time="2025-07-14T22:06:55.422539440Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jul 14 22:06:55.423229 containerd[1543]: time="2025-07-14T22:06:55.422918920Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jul 14 22:06:55.423229 containerd[1543]: time="2025-07-14T22:06:55.423030120Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jul 14 22:06:55.423229 containerd[1543]: time="2025-07-14T22:06:55.423046440Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jul 14 22:06:55.423229 containerd[1543]: time="2025-07-14T22:06:55.423058200Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jul 14 22:06:55.423229 containerd[1543]: time="2025-07-14T22:06:55.423072680Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jul 14 22:06:55.423229 containerd[1543]: time="2025-07-14T22:06:55.423085560Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." 
type=io.containerd.service.v1 Jul 14 22:06:55.423229 containerd[1543]: time="2025-07-14T22:06:55.423098520Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jul 14 22:06:55.423229 containerd[1543]: time="2025-07-14T22:06:55.423111880Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jul 14 22:06:55.423229 containerd[1543]: time="2025-07-14T22:06:55.423125720Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jul 14 22:06:55.423229 containerd[1543]: time="2025-07-14T22:06:55.423137440Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jul 14 22:06:55.423229 containerd[1543]: time="2025-07-14T22:06:55.423155560Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jul 14 22:06:55.423229 containerd[1543]: time="2025-07-14T22:06:55.423166760Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jul 14 22:06:55.423229 containerd[1543]: time="2025-07-14T22:06:55.423187600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jul 14 22:06:55.423493 containerd[1543]: time="2025-07-14T22:06:55.423200760Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jul 14 22:06:55.423493 containerd[1543]: time="2025-07-14T22:06:55.423212160Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jul 14 22:06:55.423493 containerd[1543]: time="2025-07-14T22:06:55.423223640Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." 
type=io.containerd.grpc.v1 Jul 14 22:06:55.423493 containerd[1543]: time="2025-07-14T22:06:55.423241880Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jul 14 22:06:55.423493 containerd[1543]: time="2025-07-14T22:06:55.423254520Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jul 14 22:06:55.423493 containerd[1543]: time="2025-07-14T22:06:55.423265760Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jul 14 22:06:55.423493 containerd[1543]: time="2025-07-14T22:06:55.423284040Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jul 14 22:06:55.423493 containerd[1543]: time="2025-07-14T22:06:55.423297480Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jul 14 22:06:55.423493 containerd[1543]: time="2025-07-14T22:06:55.423311800Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jul 14 22:06:55.423493 containerd[1543]: time="2025-07-14T22:06:55.423333160Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jul 14 22:06:55.423493 containerd[1543]: time="2025-07-14T22:06:55.423358480Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jul 14 22:06:55.423493 containerd[1543]: time="2025-07-14T22:06:55.423370800Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jul 14 22:06:55.423493 containerd[1543]: time="2025-07-14T22:06:55.423384920Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jul 14 22:06:55.423493 containerd[1543]: time="2025-07-14T22:06:55.423406080Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." 
type=io.containerd.grpc.v1 Jul 14 22:06:55.423493 containerd[1543]: time="2025-07-14T22:06:55.423420200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jul 14 22:06:55.423797 containerd[1543]: time="2025-07-14T22:06:55.423430240Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jul 14 22:06:55.423797 containerd[1543]: time="2025-07-14T22:06:55.423538440Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jul 14 22:06:55.423797 containerd[1543]: time="2025-07-14T22:06:55.423555720Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jul 14 22:06:55.423797 containerd[1543]: time="2025-07-14T22:06:55.423568680Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jul 14 22:06:55.423797 containerd[1543]: time="2025-07-14T22:06:55.423579640Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jul 14 22:06:55.423797 containerd[1543]: time="2025-07-14T22:06:55.423588720Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jul 14 22:06:55.423797 containerd[1543]: time="2025-07-14T22:06:55.423599760Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jul 14 22:06:55.423797 containerd[1543]: time="2025-07-14T22:06:55.423608520Z" level=info msg="NRI interface is disabled by configuration." Jul 14 22:06:55.423797 containerd[1543]: time="2025-07-14T22:06:55.423620120Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Jul 14 22:06:55.424797 containerd[1543]: time="2025-07-14T22:06:55.424186840Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 
DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jul 14 22:06:55.424797 containerd[1543]: time="2025-07-14T22:06:55.424484360Z" level=info msg="Connect containerd service" Jul 14 22:06:55.424797 containerd[1543]: time="2025-07-14T22:06:55.424535920Z" level=info msg="using legacy CRI server" Jul 14 22:06:55.424797 containerd[1543]: time="2025-07-14T22:06:55.424547240Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jul 14 22:06:55.424797 containerd[1543]: time="2025-07-14T22:06:55.424662720Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jul 14 22:06:55.426879 containerd[1543]: time="2025-07-14T22:06:55.426550800Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 14 22:06:55.426879 containerd[1543]: time="2025-07-14T22:06:55.426828080Z" level=info msg="Start subscribing containerd event" Jul 14 22:06:55.426879 containerd[1543]: time="2025-07-14T22:06:55.426883800Z" level=info msg="Start recovering state" Jul 14 22:06:55.427003 containerd[1543]: time="2025-07-14T22:06:55.426950680Z" level=info msg="Start event monitor" Jul 14 22:06:55.427003 containerd[1543]: time="2025-07-14T22:06:55.426962360Z" 
level=info msg="Start snapshots syncer" Jul 14 22:06:55.427003 containerd[1543]: time="2025-07-14T22:06:55.426970800Z" level=info msg="Start cni network conf syncer for default" Jul 14 22:06:55.427003 containerd[1543]: time="2025-07-14T22:06:55.426979600Z" level=info msg="Start streaming server" Jul 14 22:06:55.431246 containerd[1543]: time="2025-07-14T22:06:55.431195480Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jul 14 22:06:55.431320 containerd[1543]: time="2025-07-14T22:06:55.431269280Z" level=info msg=serving... address=/run/containerd/containerd.sock Jul 14 22:06:55.433299 containerd[1543]: time="2025-07-14T22:06:55.431382440Z" level=info msg="containerd successfully booted in 0.075264s" Jul 14 22:06:55.431491 systemd[1]: Started containerd.service - containerd container runtime. Jul 14 22:06:55.555995 tar[1540]: linux-arm64/LICENSE Jul 14 22:06:55.556109 tar[1540]: linux-arm64/README.md Jul 14 22:06:55.572251 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jul 14 22:06:55.995708 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 14 22:06:55.999798 (kubelet)[1627]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 14 22:06:56.446559 kubelet[1627]: E0714 22:06:56.446503 1627 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 14 22:06:56.448984 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 14 22:06:56.449167 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Jul 14 22:06:56.610731 sshd_keygen[1532]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jul 14 22:06:56.629506 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jul 14 22:06:56.639185 systemd[1]: Starting issuegen.service - Generate /run/issue... Jul 14 22:06:56.644553 systemd[1]: issuegen.service: Deactivated successfully. Jul 14 22:06:56.644798 systemd[1]: Finished issuegen.service - Generate /run/issue. Jul 14 22:06:56.647306 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jul 14 22:06:56.659323 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jul 14 22:06:56.661907 systemd[1]: Started getty@tty1.service - Getty on tty1. Jul 14 22:06:56.663798 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Jul 14 22:06:56.664901 systemd[1]: Reached target getty.target - Login Prompts. Jul 14 22:06:56.665661 systemd[1]: Reached target multi-user.target - Multi-User System. Jul 14 22:06:56.666774 systemd[1]: Startup finished in 1min 5.694s (kernel) + 3.832s (userspace) = 1min 9.526s. Jul 14 22:06:59.995895 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jul 14 22:07:00.015071 systemd[1]: Started sshd@0-10.0.0.114:22-10.0.0.1:45022.service - OpenSSH per-connection server daemon (10.0.0.1:45022). Jul 14 22:07:00.067465 sshd[1659]: Accepted publickey for core from 10.0.0.1 port 45022 ssh2: RSA SHA256:M1w9XMnl/I4XlZYWJshBUfaekchzCKWegQKD2Nlty/U Jul 14 22:07:00.069332 sshd[1659]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 14 22:07:00.076627 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jul 14 22:07:00.088103 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jul 14 22:07:00.089583 systemd-logind[1523]: New session 1 of user core. Jul 14 22:07:00.100469 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. 
Jul 14 22:07:00.113144 systemd[1]: Starting user@500.service - User Manager for UID 500... Jul 14 22:07:00.115875 (systemd)[1664]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jul 14 22:07:00.191901 systemd[1664]: Queued start job for default target default.target. Jul 14 22:07:00.192263 systemd[1664]: Created slice app.slice - User Application Slice. Jul 14 22:07:00.192293 systemd[1664]: Reached target paths.target - Paths. Jul 14 22:07:00.192308 systemd[1664]: Reached target timers.target - Timers. Jul 14 22:07:00.199049 systemd[1664]: Starting dbus.socket - D-Bus User Message Bus Socket... Jul 14 22:07:00.204617 systemd[1664]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jul 14 22:07:00.204677 systemd[1664]: Reached target sockets.target - Sockets. Jul 14 22:07:00.204689 systemd[1664]: Reached target basic.target - Basic System. Jul 14 22:07:00.204724 systemd[1664]: Reached target default.target - Main User Target. Jul 14 22:07:00.204747 systemd[1664]: Startup finished in 83ms. Jul 14 22:07:00.204993 systemd[1]: Started user@500.service - User Manager for UID 500. Jul 14 22:07:00.206339 systemd[1]: Started session-1.scope - Session 1 of User core. Jul 14 22:07:00.264127 systemd[1]: Started sshd@1-10.0.0.114:22-10.0.0.1:45034.service - OpenSSH per-connection server daemon (10.0.0.1:45034). Jul 14 22:07:00.303433 sshd[1677]: Accepted publickey for core from 10.0.0.1 port 45034 ssh2: RSA SHA256:M1w9XMnl/I4XlZYWJshBUfaekchzCKWegQKD2Nlty/U Jul 14 22:07:00.304797 sshd[1677]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 14 22:07:00.309062 systemd-logind[1523]: New session 2 of user core. Jul 14 22:07:00.317143 systemd[1]: Started session-2.scope - Session 2 of User core. 
Jul 14 22:07:00.369335 sshd[1677]: pam_unix(sshd:session): session closed for user core
Jul 14 22:07:00.379118 systemd[1]: Started sshd@2-10.0.0.114:22-10.0.0.1:45046.service - OpenSSH per-connection server daemon (10.0.0.1:45046).
Jul 14 22:07:00.379495 systemd[1]: sshd@1-10.0.0.114:22-10.0.0.1:45034.service: Deactivated successfully.
Jul 14 22:07:00.381807 systemd[1]: session-2.scope: Deactivated successfully.
Jul 14 22:07:00.381859 systemd-logind[1523]: Session 2 logged out. Waiting for processes to exit.
Jul 14 22:07:00.383264 systemd-logind[1523]: Removed session 2.
Jul 14 22:07:00.413666 sshd[1682]: Accepted publickey for core from 10.0.0.1 port 45046 ssh2: RSA SHA256:M1w9XMnl/I4XlZYWJshBUfaekchzCKWegQKD2Nlty/U
Jul 14 22:07:00.414805 sshd[1682]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 14 22:07:00.418884 systemd-logind[1523]: New session 3 of user core.
Jul 14 22:07:00.431094 systemd[1]: Started session-3.scope - Session 3 of User core.
Jul 14 22:07:00.479418 sshd[1682]: pam_unix(sshd:session): session closed for user core
Jul 14 22:07:00.494152 systemd[1]: Started sshd@3-10.0.0.114:22-10.0.0.1:45050.service - OpenSSH per-connection server daemon (10.0.0.1:45050).
Jul 14 22:07:00.494718 systemd[1]: sshd@2-10.0.0.114:22-10.0.0.1:45046.service: Deactivated successfully.
Jul 14 22:07:00.496476 systemd-logind[1523]: Session 3 logged out. Waiting for processes to exit.
Jul 14 22:07:00.497089 systemd[1]: session-3.scope: Deactivated successfully.
Jul 14 22:07:00.498475 systemd-logind[1523]: Removed session 3.
Jul 14 22:07:00.528256 sshd[1690]: Accepted publickey for core from 10.0.0.1 port 45050 ssh2: RSA SHA256:M1w9XMnl/I4XlZYWJshBUfaekchzCKWegQKD2Nlty/U
Jul 14 22:07:00.529690 sshd[1690]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 14 22:07:00.534293 systemd-logind[1523]: New session 4 of user core.
Jul 14 22:07:00.545144 systemd[1]: Started session-4.scope - Session 4 of User core.
Jul 14 22:07:00.596513 sshd[1690]: pam_unix(sshd:session): session closed for user core
Jul 14 22:07:00.610127 systemd[1]: Started sshd@4-10.0.0.114:22-10.0.0.1:45062.service - OpenSSH per-connection server daemon (10.0.0.1:45062).
Jul 14 22:07:00.610494 systemd[1]: sshd@3-10.0.0.114:22-10.0.0.1:45050.service: Deactivated successfully.
Jul 14 22:07:00.613252 systemd-logind[1523]: Session 4 logged out. Waiting for processes to exit.
Jul 14 22:07:00.613316 systemd[1]: session-4.scope: Deactivated successfully.
Jul 14 22:07:00.614613 systemd-logind[1523]: Removed session 4.
Jul 14 22:07:00.644050 sshd[1698]: Accepted publickey for core from 10.0.0.1 port 45062 ssh2: RSA SHA256:M1w9XMnl/I4XlZYWJshBUfaekchzCKWegQKD2Nlty/U
Jul 14 22:07:00.645287 sshd[1698]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 14 22:07:00.649243 systemd-logind[1523]: New session 5 of user core.
Jul 14 22:07:00.663250 systemd[1]: Started session-5.scope - Session 5 of User core.
Jul 14 22:07:00.732098 sudo[1705]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Jul 14 22:07:00.732370 sudo[1705]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jul 14 22:07:00.748015 sudo[1705]: pam_unix(sudo:session): session closed for user root
Jul 14 22:07:00.751544 sshd[1698]: pam_unix(sshd:session): session closed for user core
Jul 14 22:07:00.763205 systemd[1]: Started sshd@5-10.0.0.114:22-10.0.0.1:45078.service - OpenSSH per-connection server daemon (10.0.0.1:45078).
Jul 14 22:07:00.763592 systemd[1]: sshd@4-10.0.0.114:22-10.0.0.1:45062.service: Deactivated successfully.
Jul 14 22:07:00.765929 systemd[1]: session-5.scope: Deactivated successfully.
Jul 14 22:07:00.766099 systemd-logind[1523]: Session 5 logged out. Waiting for processes to exit.
Jul 14 22:07:00.767654 systemd-logind[1523]: Removed session 5.
Jul 14 22:07:00.797397 sshd[1707]: Accepted publickey for core from 10.0.0.1 port 45078 ssh2: RSA SHA256:M1w9XMnl/I4XlZYWJshBUfaekchzCKWegQKD2Nlty/U
Jul 14 22:07:00.798809 sshd[1707]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 14 22:07:00.803118 systemd-logind[1523]: New session 6 of user core.
Jul 14 22:07:00.810128 systemd[1]: Started session-6.scope - Session 6 of User core.
Jul 14 22:07:00.862495 sudo[1715]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Jul 14 22:07:00.863119 sudo[1715]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jul 14 22:07:00.866395 sudo[1715]: pam_unix(sudo:session): session closed for user root
Jul 14 22:07:00.870902 sudo[1714]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules
Jul 14 22:07:00.871164 sudo[1714]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jul 14 22:07:00.891384 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules...
Jul 14 22:07:00.892421 auditctl[1718]: No rules
Jul 14 22:07:00.892786 systemd[1]: audit-rules.service: Deactivated successfully.
Jul 14 22:07:00.893021 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules.
Jul 14 22:07:00.895225 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Jul 14 22:07:00.918257 augenrules[1737]: No rules
Jul 14 22:07:00.919484 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Jul 14 22:07:00.920803 sudo[1714]: pam_unix(sudo:session): session closed for user root
Jul 14 22:07:00.922311 sshd[1707]: pam_unix(sshd:session): session closed for user core
Jul 14 22:07:00.935138 systemd[1]: Started sshd@6-10.0.0.114:22-10.0.0.1:45086.service - OpenSSH per-connection server daemon (10.0.0.1:45086).
Jul 14 22:07:00.935568 systemd[1]: sshd@5-10.0.0.114:22-10.0.0.1:45078.service: Deactivated successfully.
Jul 14 22:07:00.937009 systemd[1]: session-6.scope: Deactivated successfully.
Jul 14 22:07:00.937582 systemd-logind[1523]: Session 6 logged out. Waiting for processes to exit.
Jul 14 22:07:00.938749 systemd-logind[1523]: Removed session 6.
Jul 14 22:07:00.969116 sshd[1744]: Accepted publickey for core from 10.0.0.1 port 45086 ssh2: RSA SHA256:M1w9XMnl/I4XlZYWJshBUfaekchzCKWegQKD2Nlty/U
Jul 14 22:07:00.970278 sshd[1744]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 14 22:07:00.974698 systemd-logind[1523]: New session 7 of user core.
Jul 14 22:07:00.988167 systemd[1]: Started session-7.scope - Session 7 of User core.
Jul 14 22:07:01.039155 sudo[1750]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Jul 14 22:07:01.039440 sudo[1750]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jul 14 22:07:01.369089 systemd[1]: Starting docker.service - Docker Application Container Engine...
Jul 14 22:07:01.369742 (dockerd)[1768]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Jul 14 22:07:01.627380 dockerd[1768]: time="2025-07-14T22:07:01.626957638Z" level=info msg="Starting up"
Jul 14 22:07:01.870493 dockerd[1768]: time="2025-07-14T22:07:01.870431748Z" level=info msg="Loading containers: start."
Jul 14 22:07:01.963868 kernel: Initializing XFRM netlink socket
Jul 14 22:07:02.026318 systemd-networkd[1231]: docker0: Link UP
Jul 14 22:07:02.044083 dockerd[1768]: time="2025-07-14T22:07:02.044045088Z" level=info msg="Loading containers: done."
Jul 14 22:07:02.055325 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2140105811-merged.mount: Deactivated successfully.
Jul 14 22:07:02.055685 dockerd[1768]: time="2025-07-14T22:07:02.055631181Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Jul 14 22:07:02.055760 dockerd[1768]: time="2025-07-14T22:07:02.055742304Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0
Jul 14 22:07:02.055879 dockerd[1768]: time="2025-07-14T22:07:02.055844839Z" level=info msg="Daemon has completed initialization"
Jul 14 22:07:02.084459 dockerd[1768]: time="2025-07-14T22:07:02.084293062Z" level=info msg="API listen on /run/docker.sock"
Jul 14 22:07:02.084646 systemd[1]: Started docker.service - Docker Application Container Engine.
Jul 14 22:07:06.663377 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Jul 14 22:07:06.677080 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 14 22:07:06.774987 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 14 22:07:06.778694 (kubelet)[1929]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jul 14 22:07:06.812946 kubelet[1929]: E0714 22:07:06.812882 1929 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jul 14 22:07:06.815685 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jul 14 22:07:06.815882 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jul 14 22:07:12.391566 containerd[1543]: time="2025-07-14T22:07:12.391525337Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.8\""
Jul 14 22:07:16.913437 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Jul 14 22:07:16.925031 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 14 22:07:17.026037 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 14 22:07:17.029667 (kubelet)[1953]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jul 14 22:07:17.060147 kubelet[1953]: E0714 22:07:17.060086 1953 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jul 14 22:07:17.062590 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jul 14 22:07:17.062767 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jul 14 22:07:23.191119 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1765761260.mount: Deactivated successfully.
Jul 14 22:07:24.071531 containerd[1543]: time="2025-07-14T22:07:24.071482085Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 14 22:07:24.072514 containerd[1543]: time="2025-07-14T22:07:24.072297867Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.8: active requests=0, bytes read=25554610"
Jul 14 22:07:24.073262 containerd[1543]: time="2025-07-14T22:07:24.073229109Z" level=info msg="ImageCreate event name:\"sha256:ef8fb1ea7c9599dbedea6f9d5589975ebc5bf4ec72f6be6acaaec59a723a09b3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 14 22:07:24.076038 containerd[1543]: time="2025-07-14T22:07:24.075993029Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:30090db6a7d53799163ce82dae9e8ddb645fd47db93f2ec9da0cc787fd825625\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 14 22:07:24.077193 containerd[1543]: time="2025-07-14T22:07:24.077166232Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.8\" with image id \"sha256:ef8fb1ea7c9599dbedea6f9d5589975ebc5bf4ec72f6be6acaaec59a723a09b3\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.8\", repo digest \"registry.k8s.io/kube-apiserver@sha256:30090db6a7d53799163ce82dae9e8ddb645fd47db93f2ec9da0cc787fd825625\", size \"25551408\" in 11.685599147s"
Jul 14 22:07:24.077411 containerd[1543]: time="2025-07-14T22:07:24.077271090Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.8\" returns image reference \"sha256:ef8fb1ea7c9599dbedea6f9d5589975ebc5bf4ec72f6be6acaaec59a723a09b3\""
Jul 14 22:07:24.078615 containerd[1543]: time="2025-07-14T22:07:24.078567076Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.8\""
Jul 14 22:07:25.064252 containerd[1543]: time="2025-07-14T22:07:25.064203781Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 14 22:07:25.064824 containerd[1543]: time="2025-07-14T22:07:25.064792190Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.8: active requests=0, bytes read=22458980"
Jul 14 22:07:25.065710 containerd[1543]: time="2025-07-14T22:07:25.065682846Z" level=info msg="ImageCreate event name:\"sha256:ea6e6085feca75547d0422ab0536fe0d18c9ff5831de7a9d6a707c968027bb6a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 14 22:07:25.068485 containerd[1543]: time="2025-07-14T22:07:25.068431503Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:29eaddc64792a689df48506e78bbc641d063ac8bb92d2e66ae2ad05977420747\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 14 22:07:25.069742 containerd[1543]: time="2025-07-14T22:07:25.069659930Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.8\" with image id \"sha256:ea6e6085feca75547d0422ab0536fe0d18c9ff5831de7a9d6a707c968027bb6a\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.8\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:29eaddc64792a689df48506e78bbc641d063ac8bb92d2e66ae2ad05977420747\", size \"23900539\" in 991.06133ms"
Jul 14 22:07:25.069742 containerd[1543]: time="2025-07-14T22:07:25.069692295Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.8\" returns image reference \"sha256:ea6e6085feca75547d0422ab0536fe0d18c9ff5831de7a9d6a707c968027bb6a\""
Jul 14 22:07:25.070300 containerd[1543]: time="2025-07-14T22:07:25.070118520Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.8\""
Jul 14 22:07:26.211743 containerd[1543]: time="2025-07-14T22:07:26.211658855Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 14 22:07:26.212334 containerd[1543]: time="2025-07-14T22:07:26.212261975Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.8: active requests=0, bytes read=17125815"
Jul 14 22:07:26.212755 containerd[1543]: time="2025-07-14T22:07:26.212712035Z" level=info msg="ImageCreate event name:\"sha256:1d2db6ef0dd2f3e08bdfcd46afde7b755b05192841f563d8df54b807daaa7d8d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 14 22:07:26.218965 containerd[1543]: time="2025-07-14T22:07:26.218921580Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:22994a2632e81059720480b9f6bdeb133b08d58492d0b36dfd6e9768b159b22a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 14 22:07:26.220046 containerd[1543]: time="2025-07-14T22:07:26.219938756Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.8\" with image id \"sha256:1d2db6ef0dd2f3e08bdfcd46afde7b755b05192841f563d8df54b807daaa7d8d\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.8\", repo digest \"registry.k8s.io/kube-scheduler@sha256:22994a2632e81059720480b9f6bdeb133b08d58492d0b36dfd6e9768b159b22a\", size \"18567392\" in 1.149789271s"
Jul 14 22:07:26.220046 containerd[1543]: time="2025-07-14T22:07:26.219974640Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.8\" returns image reference \"sha256:1d2db6ef0dd2f3e08bdfcd46afde7b755b05192841f563d8df54b807daaa7d8d\""
Jul 14 22:07:26.220561 containerd[1543]: time="2025-07-14T22:07:26.220488269Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.8\""
Jul 14 22:07:27.118397 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2966853587.mount: Deactivated successfully.
Jul 14 22:07:27.119386 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
Jul 14 22:07:27.133107 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 14 22:07:27.233869 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 14 22:07:27.237225 (kubelet)[2043]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jul 14 22:07:27.276604 kubelet[2043]: E0714 22:07:27.276042 2043 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jul 14 22:07:27.281156 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jul 14 22:07:27.281320 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jul 14 22:07:27.499540 containerd[1543]: time="2025-07-14T22:07:27.499493017Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 14 22:07:27.500440 containerd[1543]: time="2025-07-14T22:07:27.500407488Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.8: active requests=0, bytes read=26871919"
Jul 14 22:07:27.501137 containerd[1543]: time="2025-07-14T22:07:27.501073769Z" level=info msg="ImageCreate event name:\"sha256:c5361ece77e80334cd5fb082c0b678cb3244f5834ecacea1719ae6b38b465581\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 14 22:07:27.503227 containerd[1543]: time="2025-07-14T22:07:27.503136220Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:dd0c9a37670f209947b1ed880f06a2e93e1d41da78c037f52f94b13858769838\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 14 22:07:27.504142 containerd[1543]: time="2025-07-14T22:07:27.503698168Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.8\" with image id \"sha256:c5361ece77e80334cd5fb082c0b678cb3244f5834ecacea1719ae6b38b465581\", repo tag \"registry.k8s.io/kube-proxy:v1.31.8\", repo digest \"registry.k8s.io/kube-proxy@sha256:dd0c9a37670f209947b1ed880f06a2e93e1d41da78c037f52f94b13858769838\", size \"26870936\" in 1.283176615s"
Jul 14 22:07:27.504142 containerd[1543]: time="2025-07-14T22:07:27.503724251Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.8\" returns image reference \"sha256:c5361ece77e80334cd5fb082c0b678cb3244f5834ecacea1719ae6b38b465581\""
Jul 14 22:07:27.504227 containerd[1543]: time="2025-07-14T22:07:27.504159024Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\""
Jul 14 22:07:28.032292 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1820970081.mount: Deactivated successfully.
Jul 14 22:07:28.667069 containerd[1543]: time="2025-07-14T22:07:28.667021341Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 14 22:07:28.667984 containerd[1543]: time="2025-07-14T22:07:28.667780017Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=16951624"
Jul 14 22:07:28.668611 containerd[1543]: time="2025-07-14T22:07:28.668582894Z" level=info msg="ImageCreate event name:\"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 14 22:07:28.671624 containerd[1543]: time="2025-07-14T22:07:28.671574274Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 14 22:07:28.672929 containerd[1543]: time="2025-07-14T22:07:28.672903576Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"16948420\" in 1.168716148s"
Jul 14 22:07:28.673125 containerd[1543]: time="2025-07-14T22:07:28.673011061Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\""
Jul 14 22:07:28.674250 containerd[1543]: time="2025-07-14T22:07:28.674224677Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Jul 14 22:07:29.317903 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2974965434.mount: Deactivated successfully.
Jul 14 22:07:29.321842 containerd[1543]: time="2025-07-14T22:07:29.321796632Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 14 22:07:29.323083 containerd[1543]: time="2025-07-14T22:07:29.323047767Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268705"
Jul 14 22:07:29.323914 containerd[1543]: time="2025-07-14T22:07:29.323888644Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 14 22:07:29.326473 containerd[1543]: time="2025-07-14T22:07:29.326441876Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 14 22:07:29.327689 containerd[1543]: time="2025-07-14T22:07:29.327655810Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 653.389851ms"
Jul 14 22:07:29.327733 containerd[1543]: time="2025-07-14T22:07:29.327690691Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\""
Jul 14 22:07:29.328163 containerd[1543]: time="2025-07-14T22:07:29.328140231Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\""
Jul 14 22:07:29.832183 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2278374565.mount: Deactivated successfully.
Jul 14 22:07:31.145780 containerd[1543]: time="2025-07-14T22:07:31.145738057Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 14 22:07:31.146826 containerd[1543]: time="2025-07-14T22:07:31.146113152Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=66406467"
Jul 14 22:07:31.147429 containerd[1543]: time="2025-07-14T22:07:31.147387722Z" level=info msg="ImageCreate event name:\"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 14 22:07:31.151098 containerd[1543]: time="2025-07-14T22:07:31.151034506Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 14 22:07:31.152499 containerd[1543]: time="2025-07-14T22:07:31.152347917Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"66535646\" in 1.824176365s"
Jul 14 22:07:31.152499 containerd[1543]: time="2025-07-14T22:07:31.152385159Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\""
Jul 14 22:07:37.413372 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4.
Jul 14 22:07:37.428019 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 14 22:07:37.555932 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 14 22:07:37.559795 (kubelet)[2182]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jul 14 22:07:37.592360 kubelet[2182]: E0714 22:07:37.592287 2182 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jul 14 22:07:37.594599 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jul 14 22:07:37.594810 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jul 14 22:07:40.178437 update_engine[1531]: I20250714 22:07:40.178318 1531 update_attempter.cc:509] Updating boot flags...
Jul 14 22:07:40.373868 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (2199)
Jul 14 22:07:40.402778 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (2198)
Jul 14 22:07:45.715667 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 14 22:07:45.726043 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 14 22:07:45.746233 systemd[1]: Reloading requested from client PID 2229 ('systemctl') (unit session-7.scope)...
Jul 14 22:07:45.746252 systemd[1]: Reloading...
Jul 14 22:07:45.806881 zram_generator::config[2268]: No configuration found.
Jul 14 22:07:45.976356 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 14 22:07:46.030770 systemd[1]: Reloading finished in 284 ms.
Jul 14 22:07:46.069870 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Jul 14 22:07:46.069940 systemd[1]: kubelet.service: Failed with result 'signal'.
Jul 14 22:07:46.070186 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 14 22:07:46.072385 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 14 22:07:46.172319 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 14 22:07:46.175975 (kubelet)[2326]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Jul 14 22:07:46.209261 kubelet[2326]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jul 14 22:07:46.209261 kubelet[2326]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Jul 14 22:07:46.209261 kubelet[2326]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jul 14 22:07:46.209593 kubelet[2326]: I0714 22:07:46.209335 2326 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jul 14 22:07:47.147046 kubelet[2326]: I0714 22:07:47.146997 2326 server.go:491] "Kubelet version" kubeletVersion="v1.31.8"
Jul 14 22:07:47.147046 kubelet[2326]: I0714 22:07:47.147032 2326 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jul 14 22:07:47.147286 kubelet[2326]: I0714 22:07:47.147261 2326 server.go:934] "Client rotation is on, will bootstrap in background"
Jul 14 22:07:47.167622 kubelet[2326]: E0714 22:07:47.167583 2326 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.114:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.114:6443: connect: connection refused" logger="UnhandledError"
Jul 14 22:07:47.168896 kubelet[2326]: I0714 22:07:47.168787 2326 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jul 14 22:07:47.174540 kubelet[2326]: E0714 22:07:47.174507 2326 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Jul 14 22:07:47.174540 kubelet[2326]: I0714 22:07:47.174536 2326 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Jul 14 22:07:47.177942 kubelet[2326]: I0714 22:07:47.177920 2326 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Jul 14 22:07:47.178837 kubelet[2326]: I0714 22:07:47.178811 2326 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Jul 14 22:07:47.179002 kubelet[2326]: I0714 22:07:47.178967 2326 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jul 14 22:07:47.179150 kubelet[2326]: I0714 22:07:47.178996 2326 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1}
Jul 14 22:07:47.179232 kubelet[2326]: I0714 22:07:47.179157 2326 topology_manager.go:138] "Creating topology manager with none policy"
Jul 14 22:07:47.179232 kubelet[2326]: I0714 22:07:47.179170 2326 container_manager_linux.go:300] "Creating device plugin manager"
Jul 14 22:07:47.179409 kubelet[2326]: I0714 22:07:47.179386 2326 state_mem.go:36] "Initialized new in-memory state store"
Jul 14 22:07:47.183484 kubelet[2326]: I0714 22:07:47.183137 2326 kubelet.go:408] "Attempting to sync node with API server"
Jul 14 22:07:47.183484 kubelet[2326]: I0714 22:07:47.183167 2326 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests"
Jul 14 22:07:47.183484 kubelet[2326]: I0714 22:07:47.183187 2326 kubelet.go:314] "Adding apiserver pod source"
Jul 14 22:07:47.183484 kubelet[2326]: I0714 22:07:47.183264 2326 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jul 14 22:07:47.187592 kubelet[2326]: W0714 22:07:47.187524 2326 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.114:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.114:6443: connect: connection refused
Jul 14 22:07:47.187654 kubelet[2326]: E0714 22:07:47.187594 2326 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.114:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.114:6443: connect: connection refused" logger="UnhandledError"
Jul 14 22:07:47.187692 kubelet[2326]: W0714 22:07:47.187670 2326 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.114:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.114:6443: connect: connection refused
Jul 14 22:07:47.187713 kubelet[2326]: E0714 22:07:47.187697 2326 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.114:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.114:6443: connect: connection refused" logger="UnhandledError"
Jul 14 22:07:47.188866 kubelet[2326]: I0714 22:07:47.188167 2326 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Jul 14 22:07:47.188966 kubelet[2326]: I0714 22:07:47.188941 2326 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Jul 14 22:07:47.189133 kubelet[2326]: W0714 22:07:47.189117 2326 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Jul 14 22:07:47.190235 kubelet[2326]: I0714 22:07:47.190219 2326 server.go:1274] "Started kubelet"
Jul 14 22:07:47.192997 kubelet[2326]: I0714 22:07:47.192966 2326 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Jul 14 22:07:47.193924 kubelet[2326]: I0714 22:07:47.193229 2326 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jul 14 22:07:47.194693 kubelet[2326]: I0714 22:07:47.194671 2326 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jul 14 22:07:47.196192 kubelet[2326]: I0714 22:07:47.196168 2326 server.go:449] "Adding debug handlers to kubelet server"
Jul 14 22:07:47.196263 kubelet[2326]: I0714 22:07:47.196210 2326 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jul 14 22:07:47.198455 kubelet[2326]: I0714 22:07:47.198045 2326 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Jul 14 22:07:47.198592 kubelet[2326]: I0714
22:07:47.198568 2326 volume_manager.go:289] "Starting Kubelet Volume Manager" Jul 14 22:07:47.198679 kubelet[2326]: I0714 22:07:47.198662 2326 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Jul 14 22:07:47.198741 kubelet[2326]: I0714 22:07:47.198725 2326 reconciler.go:26] "Reconciler: start to sync state" Jul 14 22:07:47.199058 kubelet[2326]: W0714 22:07:47.199019 2326 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.114:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.114:6443: connect: connection refused Jul 14 22:07:47.199108 kubelet[2326]: E0714 22:07:47.199059 2326 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.114:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.114:6443: connect: connection refused" logger="UnhandledError" Jul 14 22:07:47.199354 kubelet[2326]: I0714 22:07:47.199303 2326 factory.go:221] Registration of the systemd container factory successfully Jul 14 22:07:47.199409 kubelet[2326]: I0714 22:07:47.199386 2326 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 14 22:07:47.201571 kubelet[2326]: E0714 22:07:47.200486 2326 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 14 22:07:47.201571 kubelet[2326]: E0714 22:07:47.200711 2326 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.114:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.114:6443: connect: connection refused" interval="200ms" Jul 14 22:07:47.202295 kubelet[2326]: I0714 22:07:47.202275 2326 factory.go:221] 
Registration of the containerd container factory successfully Jul 14 22:07:47.202489 kubelet[2326]: E0714 22:07:47.202466 2326 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 14 22:07:47.203021 kubelet[2326]: E0714 22:07:47.199252 2326 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.114:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.114:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18523d89d4c76899 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-07-14 22:07:47.190196377 +0000 UTC m=+1.011183525,LastTimestamp:2025-07-14 22:07:47.190196377 +0000 UTC m=+1.011183525,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jul 14 22:07:47.210240 kubelet[2326]: I0714 22:07:47.210130 2326 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jul 14 22:07:47.211237 kubelet[2326]: I0714 22:07:47.211218 2326 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jul 14 22:07:47.211319 kubelet[2326]: I0714 22:07:47.211311 2326 status_manager.go:217] "Starting to sync pod status with apiserver" Jul 14 22:07:47.211374 kubelet[2326]: I0714 22:07:47.211367 2326 kubelet.go:2321] "Starting kubelet main sync loop" Jul 14 22:07:47.211759 kubelet[2326]: E0714 22:07:47.211460 2326 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 14 22:07:47.218698 kubelet[2326]: I0714 22:07:47.218597 2326 cpu_manager.go:214] "Starting CPU manager" policy="none" Jul 14 22:07:47.218698 kubelet[2326]: I0714 22:07:47.218610 2326 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jul 14 22:07:47.218698 kubelet[2326]: I0714 22:07:47.218627 2326 state_mem.go:36] "Initialized new in-memory state store" Jul 14 22:07:47.218830 kubelet[2326]: W0714 22:07:47.218755 2326 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.114:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.114:6443: connect: connection refused Jul 14 22:07:47.218830 kubelet[2326]: E0714 22:07:47.218795 2326 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.114:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.114:6443: connect: connection refused" logger="UnhandledError" Jul 14 22:07:47.300673 kubelet[2326]: E0714 22:07:47.300614 2326 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 14 22:07:47.306721 kubelet[2326]: I0714 22:07:47.306636 2326 policy_none.go:49] "None policy: Start" Jul 14 22:07:47.307585 kubelet[2326]: I0714 22:07:47.307551 2326 memory_manager.go:170] "Starting memorymanager" policy="None" Jul 14 22:07:47.307585 
kubelet[2326]: I0714 22:07:47.307578 2326 state_mem.go:35] "Initializing new in-memory state store" Jul 14 22:07:47.311621 kubelet[2326]: E0714 22:07:47.311518 2326 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jul 14 22:07:47.312655 kubelet[2326]: I0714 22:07:47.311751 2326 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 14 22:07:47.312655 kubelet[2326]: I0714 22:07:47.311961 2326 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 14 22:07:47.312655 kubelet[2326]: I0714 22:07:47.311972 2326 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 14 22:07:47.312655 kubelet[2326]: I0714 22:07:47.312557 2326 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 14 22:07:47.313501 kubelet[2326]: E0714 22:07:47.313480 2326 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jul 14 22:07:47.401298 kubelet[2326]: E0714 22:07:47.401193 2326 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.114:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.114:6443: connect: connection refused" interval="400ms" Jul 14 22:07:47.414775 kubelet[2326]: I0714 22:07:47.413507 2326 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jul 14 22:07:47.414775 kubelet[2326]: E0714 22:07:47.413968 2326 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.114:6443/api/v1/nodes\": dial tcp 10.0.0.114:6443: connect: connection refused" node="localhost" Jul 14 22:07:47.600415 kubelet[2326]: I0714 22:07:47.600384 2326 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: 
\"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost" Jul 14 22:07:47.600523 kubelet[2326]: I0714 22:07:47.600440 2326 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/cb30b2bc51eb37d42df84282de80bd74-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"cb30b2bc51eb37d42df84282de80bd74\") " pod="kube-system/kube-apiserver-localhost" Jul 14 22:07:47.600523 kubelet[2326]: I0714 22:07:47.600459 2326 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/cb30b2bc51eb37d42df84282de80bd74-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"cb30b2bc51eb37d42df84282de80bd74\") " pod="kube-system/kube-apiserver-localhost" Jul 14 22:07:47.600523 kubelet[2326]: I0714 22:07:47.600478 2326 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/cb30b2bc51eb37d42df84282de80bd74-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"cb30b2bc51eb37d42df84282de80bd74\") " pod="kube-system/kube-apiserver-localhost" Jul 14 22:07:47.600523 kubelet[2326]: I0714 22:07:47.600495 2326 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost" Jul 14 22:07:47.600523 kubelet[2326]: I0714 22:07:47.600510 2326 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: 
\"kubernetes.io/host-path/0613557c150e4f35d1f3f822b5f32ff1-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"0613557c150e4f35d1f3f822b5f32ff1\") " pod="kube-system/kube-scheduler-localhost" Jul 14 22:07:47.600744 kubelet[2326]: I0714 22:07:47.600524 2326 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost" Jul 14 22:07:47.600744 kubelet[2326]: I0714 22:07:47.600539 2326 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost" Jul 14 22:07:47.600744 kubelet[2326]: I0714 22:07:47.600555 2326 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost" Jul 14 22:07:47.615412 kubelet[2326]: I0714 22:07:47.615375 2326 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jul 14 22:07:47.615750 kubelet[2326]: E0714 22:07:47.615720 2326 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.114:6443/api/v1/nodes\": dial tcp 10.0.0.114:6443: connect: connection refused" node="localhost" Jul 14 22:07:47.802278 kubelet[2326]: E0714 22:07:47.802230 2326 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://10.0.0.114:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.114:6443: connect: connection refused" interval="800ms" Jul 14 22:07:47.817494 kubelet[2326]: E0714 22:07:47.817459 2326 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:07:47.818261 kubelet[2326]: E0714 22:07:47.818210 2326 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:07:47.818736 containerd[1543]: time="2025-07-14T22:07:47.818445874Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:cb30b2bc51eb37d42df84282de80bd74,Namespace:kube-system,Attempt:0,}" Jul 14 22:07:47.818736 containerd[1543]: time="2025-07-14T22:07:47.818539715Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:0613557c150e4f35d1f3f822b5f32ff1,Namespace:kube-system,Attempt:0,}" Jul 14 22:07:47.819262 kubelet[2326]: E0714 22:07:47.819239 2326 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:07:47.819691 containerd[1543]: time="2025-07-14T22:07:47.819522212Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:d4a6b755cb4739fbca401212ebb82b6d,Namespace:kube-system,Attempt:0,}" Jul 14 22:07:48.016758 kubelet[2326]: I0714 22:07:48.016734 2326 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jul 14 22:07:48.017201 kubelet[2326]: E0714 22:07:48.017175 2326 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.114:6443/api/v1/nodes\": dial tcp 10.0.0.114:6443: connect: connection refused" 
node="localhost" Jul 14 22:07:48.347651 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4161938990.mount: Deactivated successfully. Jul 14 22:07:48.353010 containerd[1543]: time="2025-07-14T22:07:48.352173024Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 14 22:07:48.354208 containerd[1543]: time="2025-07-14T22:07:48.354084376Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jul 14 22:07:48.354835 containerd[1543]: time="2025-07-14T22:07:48.354798588Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 14 22:07:48.355711 containerd[1543]: time="2025-07-14T22:07:48.355682843Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 14 22:07:48.356517 containerd[1543]: time="2025-07-14T22:07:48.356313453Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jul 14 22:07:48.357291 containerd[1543]: time="2025-07-14T22:07:48.357082026Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 14 22:07:48.357827 containerd[1543]: time="2025-07-14T22:07:48.357783437Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269175" Jul 14 22:07:48.359553 containerd[1543]: time="2025-07-14T22:07:48.359525026Z" level=info msg="ImageCreate event 
name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 14 22:07:48.362924 containerd[1543]: time="2025-07-14T22:07:48.362842641Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 543.262868ms" Jul 14 22:07:48.365373 containerd[1543]: time="2025-07-14T22:07:48.365109559Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 546.499723ms" Jul 14 22:07:48.367675 containerd[1543]: time="2025-07-14T22:07:48.367625761Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 549.095125ms" Jul 14 22:07:48.526912 containerd[1543]: time="2025-07-14T22:07:48.526232191Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 14 22:07:48.526912 containerd[1543]: time="2025-07-14T22:07:48.526308952Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 14 22:07:48.526912 containerd[1543]: time="2025-07-14T22:07:48.526333433Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 14 22:07:48.526912 containerd[1543]: time="2025-07-14T22:07:48.526420674Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 14 22:07:48.526912 containerd[1543]: time="2025-07-14T22:07:48.526665958Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 14 22:07:48.526912 containerd[1543]: time="2025-07-14T22:07:48.526717079Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 14 22:07:48.526912 containerd[1543]: time="2025-07-14T22:07:48.526733039Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 14 22:07:48.526912 containerd[1543]: time="2025-07-14T22:07:48.526810681Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 14 22:07:48.530933 containerd[1543]: time="2025-07-14T22:07:48.530702865Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 14 22:07:48.531036 containerd[1543]: time="2025-07-14T22:07:48.530964149Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 14 22:07:48.531131 containerd[1543]: time="2025-07-14T22:07:48.531097032Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 14 22:07:48.531466 containerd[1543]: time="2025-07-14T22:07:48.531406557Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 14 22:07:48.564250 kubelet[2326]: W0714 22:07:48.564190 2326 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.114:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.114:6443: connect: connection refused Jul 14 22:07:48.564584 kubelet[2326]: E0714 22:07:48.564259 2326 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.114:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.114:6443: connect: connection refused" logger="UnhandledError" Jul 14 22:07:48.572908 containerd[1543]: time="2025-07-14T22:07:48.572783363Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:d4a6b755cb4739fbca401212ebb82b6d,Namespace:kube-system,Attempt:0,} returns sandbox id \"feebdc232f04009e3f079ac7d58b44a1ff1e9183ed2eedc54539d5f2dac18f2c\"" Jul 14 22:07:48.573978 containerd[1543]: time="2025-07-14T22:07:48.573939302Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:cb30b2bc51eb37d42df84282de80bd74,Namespace:kube-system,Attempt:0,} returns sandbox id \"6aee6312bd6b0976968815dc449aecfd5ccdd649e750b2199f35a4b937eeb536\"" Jul 14 22:07:48.574783 kubelet[2326]: E0714 22:07:48.574757 2326 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:07:48.575478 kubelet[2326]: E0714 22:07:48.575367 2326 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some 
nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:07:48.578463 containerd[1543]: time="2025-07-14T22:07:48.578380616Z" level=info msg="CreateContainer within sandbox \"feebdc232f04009e3f079ac7d58b44a1ff1e9183ed2eedc54539d5f2dac18f2c\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jul 14 22:07:48.578705 containerd[1543]: time="2025-07-14T22:07:48.578627700Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:0613557c150e4f35d1f3f822b5f32ff1,Namespace:kube-system,Attempt:0,} returns sandbox id \"7481e4d95191a1f4103f2ef1b66b2414370634653e291aaa652f6442df75d080\"" Jul 14 22:07:48.579065 containerd[1543]: time="2025-07-14T22:07:48.579039467Z" level=info msg="CreateContainer within sandbox \"6aee6312bd6b0976968815dc449aecfd5ccdd649e750b2199f35a4b937eeb536\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jul 14 22:07:48.579140 kubelet[2326]: E0714 22:07:48.579104 2326 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:07:48.580784 containerd[1543]: time="2025-07-14T22:07:48.580733535Z" level=info msg="CreateContainer within sandbox \"7481e4d95191a1f4103f2ef1b66b2414370634653e291aaa652f6442df75d080\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jul 14 22:07:48.589490 kubelet[2326]: W0714 22:07:48.589391 2326 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.114:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.114:6443: connect: connection refused Jul 14 22:07:48.589490 kubelet[2326]: E0714 22:07:48.589463 2326 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get 
\"https://10.0.0.114:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.114:6443: connect: connection refused" logger="UnhandledError" Jul 14 22:07:48.595199 containerd[1543]: time="2025-07-14T22:07:48.595163534Z" level=info msg="CreateContainer within sandbox \"feebdc232f04009e3f079ac7d58b44a1ff1e9183ed2eedc54539d5f2dac18f2c\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"1b20c99b8029c777453a5c6cbb1b67f16213347f3a4ac4bd7670b6cd80408a93\"" Jul 14 22:07:48.595887 containerd[1543]: time="2025-07-14T22:07:48.595841025Z" level=info msg="StartContainer for \"1b20c99b8029c777453a5c6cbb1b67f16213347f3a4ac4bd7670b6cd80408a93\"" Jul 14 22:07:48.598427 containerd[1543]: time="2025-07-14T22:07:48.598279226Z" level=info msg="CreateContainer within sandbox \"6aee6312bd6b0976968815dc449aecfd5ccdd649e750b2199f35a4b937eeb536\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"14d59ca7f0bd20841cf50b45b0a4a2320251b504bddfa9f275da1e00904428fb\"" Jul 14 22:07:48.599338 containerd[1543]: time="2025-07-14T22:07:48.598715033Z" level=info msg="StartContainer for \"14d59ca7f0bd20841cf50b45b0a4a2320251b504bddfa9f275da1e00904428fb\"" Jul 14 22:07:48.602153 containerd[1543]: time="2025-07-14T22:07:48.601062992Z" level=info msg="CreateContainer within sandbox \"7481e4d95191a1f4103f2ef1b66b2414370634653e291aaa652f6442df75d080\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"415a84a070616a6a44f251957cdc3ead88c97309f30660aa6c3b137caa679735\"" Jul 14 22:07:48.602153 containerd[1543]: time="2025-07-14T22:07:48.601485199Z" level=info msg="StartContainer for \"415a84a070616a6a44f251957cdc3ead88c97309f30660aa6c3b137caa679735\"" Jul 14 22:07:48.602803 kubelet[2326]: E0714 22:07:48.602772 2326 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://10.0.0.114:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.114:6443: connect: connection refused" interval="1.6s" Jul 14 22:07:48.664115 containerd[1543]: time="2025-07-14T22:07:48.664080397Z" level=info msg="StartContainer for \"1b20c99b8029c777453a5c6cbb1b67f16213347f3a4ac4bd7670b6cd80408a93\" returns successfully" Jul 14 22:07:48.664490 containerd[1543]: time="2025-07-14T22:07:48.664263600Z" level=info msg="StartContainer for \"415a84a070616a6a44f251957cdc3ead88c97309f30660aa6c3b137caa679735\" returns successfully" Jul 14 22:07:48.664725 containerd[1543]: time="2025-07-14T22:07:48.664267400Z" level=info msg="StartContainer for \"14d59ca7f0bd20841cf50b45b0a4a2320251b504bddfa9f275da1e00904428fb\" returns successfully" Jul 14 22:07:48.670165 kubelet[2326]: W0714 22:07:48.670116 2326 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.114:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.114:6443: connect: connection refused Jul 14 22:07:48.670348 kubelet[2326]: E0714 22:07:48.670326 2326 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.114:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.114:6443: connect: connection refused" logger="UnhandledError" Jul 14 22:07:48.780595 kubelet[2326]: W0714 22:07:48.780512 2326 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.114:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.114:6443: connect: connection refused Jul 14 22:07:48.780812 kubelet[2326]: E0714 22:07:48.780762 2326 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: 
failed to list *v1.Service: Get \"https://10.0.0.114:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.114:6443: connect: connection refused" logger="UnhandledError" Jul 14 22:07:48.818839 kubelet[2326]: I0714 22:07:48.818812 2326 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jul 14 22:07:48.819479 kubelet[2326]: E0714 22:07:48.819436 2326 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.114:6443/api/v1/nodes\": dial tcp 10.0.0.114:6443: connect: connection refused" node="localhost" Jul 14 22:07:49.224126 kubelet[2326]: E0714 22:07:49.224090 2326 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:07:49.226989 kubelet[2326]: E0714 22:07:49.226966 2326 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:07:49.234745 kubelet[2326]: E0714 22:07:49.234705 2326 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:07:50.236274 kubelet[2326]: E0714 22:07:50.236232 2326 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:07:50.348188 kubelet[2326]: E0714 22:07:50.348131 2326 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Jul 14 22:07:50.420512 kubelet[2326]: I0714 22:07:50.420477 2326 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jul 14 22:07:50.439084 kubelet[2326]: I0714 22:07:50.439043 2326 
kubelet_node_status.go:75] "Successfully registered node" node="localhost" Jul 14 22:07:50.439084 kubelet[2326]: E0714 22:07:50.439076 2326 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Jul 14 22:07:50.454728 kubelet[2326]: E0714 22:07:50.454684 2326 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 14 22:07:50.555133 kubelet[2326]: E0714 22:07:50.555018 2326 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 14 22:07:50.613681 kubelet[2326]: E0714 22:07:50.613654 2326 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:07:50.655747 kubelet[2326]: E0714 22:07:50.655692 2326 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 14 22:07:50.755933 kubelet[2326]: E0714 22:07:50.755890 2326 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 14 22:07:50.856489 kubelet[2326]: E0714 22:07:50.856372 2326 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 14 22:07:50.957080 kubelet[2326]: E0714 22:07:50.957040 2326 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 14 22:07:51.187195 kubelet[2326]: I0714 22:07:51.187113 2326 apiserver.go:52] "Watching apiserver" Jul 14 22:07:51.199668 kubelet[2326]: I0714 22:07:51.199619 2326 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Jul 14 22:07:51.245723 kubelet[2326]: E0714 22:07:51.245682 2326 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" is forbidden: no 
PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Jul 14 22:07:51.246089 kubelet[2326]: E0714 22:07:51.245866 2326 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:07:52.641135 systemd[1]: Reloading requested from client PID 2607 ('systemctl') (unit session-7.scope)... Jul 14 22:07:52.641152 systemd[1]: Reloading... Jul 14 22:07:52.701880 zram_generator::config[2647]: No configuration found. Jul 14 22:07:52.788941 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 14 22:07:52.848526 systemd[1]: Reloading finished in 207 ms. Jul 14 22:07:52.871950 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jul 14 22:07:52.893727 systemd[1]: kubelet.service: Deactivated successfully. Jul 14 22:07:52.894080 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 14 22:07:52.905109 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 14 22:07:53.007995 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 14 22:07:53.013433 (kubelet)[2698]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 14 22:07:53.055063 kubelet[2698]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 14 22:07:53.055063 kubelet[2698]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. 
Jul 14 22:07:53.055063 kubelet[2698]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 14 22:07:53.056121 kubelet[2698]: I0714 22:07:53.055136 2698 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 14 22:07:53.061409 kubelet[2698]: I0714 22:07:53.061285 2698 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Jul 14 22:07:53.061409 kubelet[2698]: I0714 22:07:53.061392 2698 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 14 22:07:53.061680 kubelet[2698]: I0714 22:07:53.061653 2698 server.go:934] "Client rotation is on, will bootstrap in background" Jul 14 22:07:53.063767 kubelet[2698]: I0714 22:07:53.063724 2698 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jul 14 22:07:53.065816 kubelet[2698]: I0714 22:07:53.065783 2698 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 14 22:07:53.069792 kubelet[2698]: E0714 22:07:53.069740 2698 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jul 14 22:07:53.069792 kubelet[2698]: I0714 22:07:53.069776 2698 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jul 14 22:07:53.072360 kubelet[2698]: I0714 22:07:53.072259 2698 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jul 14 22:07:53.072724 kubelet[2698]: I0714 22:07:53.072710 2698 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Jul 14 22:07:53.072830 kubelet[2698]: I0714 22:07:53.072806 2698 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 14 22:07:53.073008 kubelet[2698]: I0714 22:07:53.072830 2698 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicy
Options":null,"CgroupVersion":1} Jul 14 22:07:53.073091 kubelet[2698]: I0714 22:07:53.073016 2698 topology_manager.go:138] "Creating topology manager with none policy" Jul 14 22:07:53.073091 kubelet[2698]: I0714 22:07:53.073025 2698 container_manager_linux.go:300] "Creating device plugin manager" Jul 14 22:07:53.073091 kubelet[2698]: I0714 22:07:53.073058 2698 state_mem.go:36] "Initialized new in-memory state store" Jul 14 22:07:53.073152 kubelet[2698]: I0714 22:07:53.073145 2698 kubelet.go:408] "Attempting to sync node with API server" Jul 14 22:07:53.073181 kubelet[2698]: I0714 22:07:53.073158 2698 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 14 22:07:53.073181 kubelet[2698]: I0714 22:07:53.073175 2698 kubelet.go:314] "Adding apiserver pod source" Jul 14 22:07:53.073220 kubelet[2698]: I0714 22:07:53.073188 2698 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 14 22:07:53.076861 kubelet[2698]: I0714 22:07:53.074113 2698 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jul 14 22:07:53.076861 kubelet[2698]: I0714 22:07:53.076759 2698 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jul 14 22:07:53.077586 kubelet[2698]: I0714 22:07:53.077177 2698 server.go:1274] "Started kubelet" Jul 14 22:07:53.077586 kubelet[2698]: I0714 22:07:53.077429 2698 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jul 14 22:07:53.078342 kubelet[2698]: I0714 22:07:53.078287 2698 server.go:449] "Adding debug handlers to kubelet server" Jul 14 22:07:53.081930 kubelet[2698]: I0714 22:07:53.080671 2698 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 14 22:07:53.085312 kubelet[2698]: I0714 22:07:53.077504 2698 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 14 22:07:53.085518 kubelet[2698]: I0714 22:07:53.085482 2698 
server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 14 22:07:53.087232 kubelet[2698]: I0714 22:07:53.087205 2698 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 14 22:07:53.090450 kubelet[2698]: I0714 22:07:53.090294 2698 volume_manager.go:289] "Starting Kubelet Volume Manager" Jul 14 22:07:53.090726 kubelet[2698]: I0714 22:07:53.090704 2698 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Jul 14 22:07:53.090944 kubelet[2698]: I0714 22:07:53.090931 2698 reconciler.go:26] "Reconciler: start to sync state" Jul 14 22:07:53.093713 kubelet[2698]: E0714 22:07:53.093687 2698 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 14 22:07:53.094753 kubelet[2698]: I0714 22:07:53.094730 2698 factory.go:221] Registration of the systemd container factory successfully Jul 14 22:07:53.094880 kubelet[2698]: I0714 22:07:53.094839 2698 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 14 22:07:53.096671 kubelet[2698]: I0714 22:07:53.096632 2698 factory.go:221] Registration of the containerd container factory successfully Jul 14 22:07:53.104221 kubelet[2698]: I0714 22:07:53.104157 2698 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jul 14 22:07:53.105294 kubelet[2698]: I0714 22:07:53.105276 2698 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jul 14 22:07:53.105420 kubelet[2698]: I0714 22:07:53.105407 2698 status_manager.go:217] "Starting to sync pod status with apiserver" Jul 14 22:07:53.105655 kubelet[2698]: I0714 22:07:53.105642 2698 kubelet.go:2321] "Starting kubelet main sync loop" Jul 14 22:07:53.105796 kubelet[2698]: E0714 22:07:53.105777 2698 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 14 22:07:53.136219 kubelet[2698]: I0714 22:07:53.136190 2698 cpu_manager.go:214] "Starting CPU manager" policy="none" Jul 14 22:07:53.136219 kubelet[2698]: I0714 22:07:53.136211 2698 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jul 14 22:07:53.136357 kubelet[2698]: I0714 22:07:53.136234 2698 state_mem.go:36] "Initialized new in-memory state store" Jul 14 22:07:53.136420 kubelet[2698]: I0714 22:07:53.136404 2698 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jul 14 22:07:53.136463 kubelet[2698]: I0714 22:07:53.136420 2698 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jul 14 22:07:53.136463 kubelet[2698]: I0714 22:07:53.136444 2698 policy_none.go:49] "None policy: Start" Jul 14 22:07:53.136997 kubelet[2698]: I0714 22:07:53.136979 2698 memory_manager.go:170] "Starting memorymanager" policy="None" Jul 14 22:07:53.137885 kubelet[2698]: I0714 22:07:53.137143 2698 state_mem.go:35] "Initializing new in-memory state store" Jul 14 22:07:53.137885 kubelet[2698]: I0714 22:07:53.137310 2698 state_mem.go:75] "Updated machine memory state" Jul 14 22:07:53.138552 kubelet[2698]: I0714 22:07:53.138514 2698 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 14 22:07:53.138703 kubelet[2698]: I0714 22:07:53.138677 2698 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 14 22:07:53.138739 kubelet[2698]: I0714 22:07:53.138696 2698 container_log_manager.go:189] 
"Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 14 22:07:53.139764 kubelet[2698]: I0714 22:07:53.139735 2698 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 14 22:07:53.247378 kubelet[2698]: I0714 22:07:53.247320 2698 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jul 14 22:07:53.254588 kubelet[2698]: I0714 22:07:53.254542 2698 kubelet_node_status.go:111] "Node was previously registered" node="localhost" Jul 14 22:07:53.254686 kubelet[2698]: I0714 22:07:53.254668 2698 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Jul 14 22:07:53.292018 kubelet[2698]: I0714 22:07:53.291907 2698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0613557c150e4f35d1f3f822b5f32ff1-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"0613557c150e4f35d1f3f822b5f32ff1\") " pod="kube-system/kube-scheduler-localhost" Jul 14 22:07:53.392835 kubelet[2698]: I0714 22:07:53.392739 2698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/cb30b2bc51eb37d42df84282de80bd74-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"cb30b2bc51eb37d42df84282de80bd74\") " pod="kube-system/kube-apiserver-localhost" Jul 14 22:07:53.392835 kubelet[2698]: I0714 22:07:53.392783 2698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/cb30b2bc51eb37d42df84282de80bd74-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"cb30b2bc51eb37d42df84282de80bd74\") " pod="kube-system/kube-apiserver-localhost" Jul 14 22:07:53.392835 kubelet[2698]: I0714 22:07:53.392803 2698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: 
\"kubernetes.io/host-path/cb30b2bc51eb37d42df84282de80bd74-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"cb30b2bc51eb37d42df84282de80bd74\") " pod="kube-system/kube-apiserver-localhost" Jul 14 22:07:53.392835 kubelet[2698]: I0714 22:07:53.392868 2698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost" Jul 14 22:07:53.392835 kubelet[2698]: I0714 22:07:53.392911 2698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost" Jul 14 22:07:53.393315 kubelet[2698]: I0714 22:07:53.393225 2698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost" Jul 14 22:07:53.393315 kubelet[2698]: I0714 22:07:53.393265 2698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost" Jul 14 22:07:53.393315 kubelet[2698]: I0714 22:07:53.393282 2698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost" Jul 14 22:07:53.513104 kubelet[2698]: E0714 22:07:53.512762 2698 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:07:53.513104 kubelet[2698]: E0714 22:07:53.512762 2698 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:07:53.513104 kubelet[2698]: E0714 22:07:53.512885 2698 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:07:53.646434 sudo[2733]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Jul 14 22:07:53.646727 sudo[2733]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Jul 14 22:07:54.075167 kubelet[2698]: I0714 22:07:54.074178 2698 apiserver.go:52] "Watching apiserver" Jul 14 22:07:54.085804 sudo[2733]: pam_unix(sudo:session): session closed for user root Jul 14 22:07:54.091474 kubelet[2698]: I0714 22:07:54.091441 2698 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Jul 14 22:07:54.116346 kubelet[2698]: E0714 22:07:54.116113 2698 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:07:54.116346 kubelet[2698]: E0714 22:07:54.116222 2698 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 
1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:07:54.116346 kubelet[2698]: E0714 22:07:54.116281 2698 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:07:54.140599 kubelet[2698]: I0714 22:07:54.140392 2698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.140374948 podStartE2EDuration="1.140374948s" podCreationTimestamp="2025-07-14 22:07:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-14 22:07:54.133868424 +0000 UTC m=+1.116647059" watchObservedRunningTime="2025-07-14 22:07:54.140374948 +0000 UTC m=+1.123153583" Jul 14 22:07:54.149045 kubelet[2698]: I0714 22:07:54.148951 2698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.1489357789999999 podStartE2EDuration="1.148935779s" podCreationTimestamp="2025-07-14 22:07:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-14 22:07:54.147809204 +0000 UTC m=+1.130587839" watchObservedRunningTime="2025-07-14 22:07:54.148935779 +0000 UTC m=+1.131714414" Jul 14 22:07:54.149839 kubelet[2698]: I0714 22:07:54.149085 2698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.149079981 podStartE2EDuration="1.149079981s" podCreationTimestamp="2025-07-14 22:07:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-14 22:07:54.140712752 +0000 UTC m=+1.123491387" watchObservedRunningTime="2025-07-14 22:07:54.149079981 +0000 UTC m=+1.131858616" Jul 14 22:07:55.117533 kubelet[2698]: E0714 
22:07:55.117491 2698 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:07:55.616158 sudo[1750]: pam_unix(sudo:session): session closed for user root Jul 14 22:07:55.618216 sshd[1744]: pam_unix(sshd:session): session closed for user core Jul 14 22:07:55.622143 systemd-logind[1523]: Session 7 logged out. Waiting for processes to exit. Jul 14 22:07:55.622390 systemd[1]: sshd@6-10.0.0.114:22-10.0.0.1:45086.service: Deactivated successfully. Jul 14 22:07:55.624575 systemd[1]: session-7.scope: Deactivated successfully. Jul 14 22:07:55.625469 systemd-logind[1523]: Removed session 7. Jul 14 22:07:56.358239 kubelet[2698]: E0714 22:07:56.358162 2698 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:07:58.890165 kubelet[2698]: I0714 22:07:58.890118 2698 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jul 14 22:07:58.890528 containerd[1543]: time="2025-07-14T22:07:58.890428225Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Jul 14 22:07:58.890747 kubelet[2698]: I0714 22:07:58.890606 2698 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jul 14 22:07:59.731787 kubelet[2698]: I0714 22:07:59.731743 2698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c8b7914e-dc8e-42f4-a37a-8dc0f3872d5b-cilium-config-path\") pod \"cilium-pms66\" (UID: \"c8b7914e-dc8e-42f4-a37a-8dc0f3872d5b\") " pod="kube-system/cilium-pms66" Jul 14 22:07:59.732385 kubelet[2698]: I0714 22:07:59.731992 2698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c8b7914e-dc8e-42f4-a37a-8dc0f3872d5b-etc-cni-netd\") pod \"cilium-pms66\" (UID: \"c8b7914e-dc8e-42f4-a37a-8dc0f3872d5b\") " pod="kube-system/cilium-pms66" Jul 14 22:07:59.732385 kubelet[2698]: I0714 22:07:59.732015 2698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/c8b7914e-dc8e-42f4-a37a-8dc0f3872d5b-clustermesh-secrets\") pod \"cilium-pms66\" (UID: \"c8b7914e-dc8e-42f4-a37a-8dc0f3872d5b\") " pod="kube-system/cilium-pms66" Jul 14 22:07:59.732385 kubelet[2698]: I0714 22:07:59.732042 2698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/c8b7914e-dc8e-42f4-a37a-8dc0f3872d5b-host-proc-sys-net\") pod \"cilium-pms66\" (UID: \"c8b7914e-dc8e-42f4-a37a-8dc0f3872d5b\") " pod="kube-system/cilium-pms66" Jul 14 22:07:59.732385 kubelet[2698]: I0714 22:07:59.732059 2698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/c8b7914e-dc8e-42f4-a37a-8dc0f3872d5b-cni-path\") pod \"cilium-pms66\" (UID: 
\"c8b7914e-dc8e-42f4-a37a-8dc0f3872d5b\") " pod="kube-system/cilium-pms66" Jul 14 22:07:59.732385 kubelet[2698]: I0714 22:07:59.732074 2698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/bde9af22-0c4d-4b68-acff-7cdf83508095-kube-proxy\") pod \"kube-proxy-m9ksr\" (UID: \"bde9af22-0c4d-4b68-acff-7cdf83508095\") " pod="kube-system/kube-proxy-m9ksr" Jul 14 22:07:59.732385 kubelet[2698]: I0714 22:07:59.732089 2698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r9xvc\" (UniqueName: \"kubernetes.io/projected/c8b7914e-dc8e-42f4-a37a-8dc0f3872d5b-kube-api-access-r9xvc\") pod \"cilium-pms66\" (UID: \"c8b7914e-dc8e-42f4-a37a-8dc0f3872d5b\") " pod="kube-system/cilium-pms66" Jul 14 22:07:59.732566 kubelet[2698]: I0714 22:07:59.732104 2698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/c8b7914e-dc8e-42f4-a37a-8dc0f3872d5b-bpf-maps\") pod \"cilium-pms66\" (UID: \"c8b7914e-dc8e-42f4-a37a-8dc0f3872d5b\") " pod="kube-system/cilium-pms66" Jul 14 22:07:59.732566 kubelet[2698]: I0714 22:07:59.732123 2698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/c8b7914e-dc8e-42f4-a37a-8dc0f3872d5b-cilium-cgroup\") pod \"cilium-pms66\" (UID: \"c8b7914e-dc8e-42f4-a37a-8dc0f3872d5b\") " pod="kube-system/cilium-pms66" Jul 14 22:07:59.732566 kubelet[2698]: I0714 22:07:59.732138 2698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/c8b7914e-dc8e-42f4-a37a-8dc0f3872d5b-hubble-tls\") pod \"cilium-pms66\" (UID: \"c8b7914e-dc8e-42f4-a37a-8dc0f3872d5b\") " pod="kube-system/cilium-pms66" Jul 14 22:07:59.732566 kubelet[2698]: I0714 
22:07:59.732153 2698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/bde9af22-0c4d-4b68-acff-7cdf83508095-xtables-lock\") pod \"kube-proxy-m9ksr\" (UID: \"bde9af22-0c4d-4b68-acff-7cdf83508095\") " pod="kube-system/kube-proxy-m9ksr" Jul 14 22:07:59.732566 kubelet[2698]: I0714 22:07:59.732167 2698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ndqjg\" (UniqueName: \"kubernetes.io/projected/bde9af22-0c4d-4b68-acff-7cdf83508095-kube-api-access-ndqjg\") pod \"kube-proxy-m9ksr\" (UID: \"bde9af22-0c4d-4b68-acff-7cdf83508095\") " pod="kube-system/kube-proxy-m9ksr" Jul 14 22:07:59.732566 kubelet[2698]: I0714 22:07:59.732183 2698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c8b7914e-dc8e-42f4-a37a-8dc0f3872d5b-lib-modules\") pod \"cilium-pms66\" (UID: \"c8b7914e-dc8e-42f4-a37a-8dc0f3872d5b\") " pod="kube-system/cilium-pms66" Jul 14 22:07:59.732699 kubelet[2698]: I0714 22:07:59.732199 2698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/c8b7914e-dc8e-42f4-a37a-8dc0f3872d5b-host-proc-sys-kernel\") pod \"cilium-pms66\" (UID: \"c8b7914e-dc8e-42f4-a37a-8dc0f3872d5b\") " pod="kube-system/cilium-pms66" Jul 14 22:07:59.732699 kubelet[2698]: I0714 22:07:59.732216 2698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/c8b7914e-dc8e-42f4-a37a-8dc0f3872d5b-cilium-run\") pod \"cilium-pms66\" (UID: \"c8b7914e-dc8e-42f4-a37a-8dc0f3872d5b\") " pod="kube-system/cilium-pms66" Jul 14 22:07:59.732699 kubelet[2698]: I0714 22:07:59.732232 2698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/bde9af22-0c4d-4b68-acff-7cdf83508095-lib-modules\") pod \"kube-proxy-m9ksr\" (UID: \"bde9af22-0c4d-4b68-acff-7cdf83508095\") " pod="kube-system/kube-proxy-m9ksr" Jul 14 22:07:59.732699 kubelet[2698]: I0714 22:07:59.732248 2698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/c8b7914e-dc8e-42f4-a37a-8dc0f3872d5b-hostproc\") pod \"cilium-pms66\" (UID: \"c8b7914e-dc8e-42f4-a37a-8dc0f3872d5b\") " pod="kube-system/cilium-pms66" Jul 14 22:07:59.732699 kubelet[2698]: I0714 22:07:59.732264 2698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c8b7914e-dc8e-42f4-a37a-8dc0f3872d5b-xtables-lock\") pod \"cilium-pms66\" (UID: \"c8b7914e-dc8e-42f4-a37a-8dc0f3872d5b\") " pod="kube-system/cilium-pms66" Jul 14 22:07:59.832536 kubelet[2698]: I0714 22:07:59.832438 2698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gxvmv\" (UniqueName: \"kubernetes.io/projected/08cf2ef0-a5ad-4cef-bd0c-fd8c37d9d005-kube-api-access-gxvmv\") pod \"cilium-operator-5d85765b45-gt2v9\" (UID: \"08cf2ef0-a5ad-4cef-bd0c-fd8c37d9d005\") " pod="kube-system/cilium-operator-5d85765b45-gt2v9" Jul 14 22:07:59.832536 kubelet[2698]: I0714 22:07:59.832509 2698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/08cf2ef0-a5ad-4cef-bd0c-fd8c37d9d005-cilium-config-path\") pod \"cilium-operator-5d85765b45-gt2v9\" (UID: \"08cf2ef0-a5ad-4cef-bd0c-fd8c37d9d005\") " pod="kube-system/cilium-operator-5d85765b45-gt2v9" Jul 14 22:07:59.903922 kubelet[2698]: E0714 22:07:59.903710 2698 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been 
omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:07:59.993841 kubelet[2698]: E0714 22:07:59.993258 2698 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:07:59.995392 containerd[1543]: time="2025-07-14T22:07:59.994820181Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-m9ksr,Uid:bde9af22-0c4d-4b68-acff-7cdf83508095,Namespace:kube-system,Attempt:0,}" Jul 14 22:07:59.995392 containerd[1543]: time="2025-07-14T22:07:59.995377227Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-pms66,Uid:c8b7914e-dc8e-42f4-a37a-8dc0f3872d5b,Namespace:kube-system,Attempt:0,}" Jul 14 22:07:59.995773 kubelet[2698]: E0714 22:07:59.995057 2698 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:08:00.023035 containerd[1543]: time="2025-07-14T22:08:00.022941917Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 14 22:08:00.023134 containerd[1543]: time="2025-07-14T22:08:00.023046518Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 14 22:08:00.023134 containerd[1543]: time="2025-07-14T22:08:00.023080078Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 14 22:08:00.023529 containerd[1543]: time="2025-07-14T22:08:00.023477122Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 14 22:08:00.027335 containerd[1543]: time="2025-07-14T22:08:00.027254922Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 14 22:08:00.027924 containerd[1543]: time="2025-07-14T22:08:00.027878448Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 14 22:08:00.027924 containerd[1543]: time="2025-07-14T22:08:00.027926289Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 14 22:08:00.028318 containerd[1543]: time="2025-07-14T22:08:00.028033930Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 14 22:08:00.052768 kubelet[2698]: E0714 22:08:00.052713 2698 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:08:00.053292 containerd[1543]: time="2025-07-14T22:08:00.053250353Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-gt2v9,Uid:08cf2ef0-a5ad-4cef-bd0c-fd8c37d9d005,Namespace:kube-system,Attempt:0,}" Jul 14 22:08:00.065900 containerd[1543]: time="2025-07-14T22:08:00.065758404Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-pms66,Uid:c8b7914e-dc8e-42f4-a37a-8dc0f3872d5b,Namespace:kube-system,Attempt:0,} returns sandbox id \"90bb74cbf6fe882b179f3a0c81518b71f832a9affa16f759378dd13450f14eea\"" Jul 14 22:08:00.067479 kubelet[2698]: E0714 22:08:00.067446 2698 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:08:00.070016 containerd[1543]: time="2025-07-14T22:08:00.069658564Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Jul 14 22:08:00.073752 containerd[1543]: 
time="2025-07-14T22:08:00.073713607Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-m9ksr,Uid:bde9af22-0c4d-4b68-acff-7cdf83508095,Namespace:kube-system,Attempt:0,} returns sandbox id \"4970c999393dce3affb79dec615890e0f89efbf929df7136539c20faa92b9778\"" Jul 14 22:08:00.074370 kubelet[2698]: E0714 22:08:00.074345 2698 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:08:00.076403 containerd[1543]: time="2025-07-14T22:08:00.076327154Z" level=info msg="CreateContainer within sandbox \"4970c999393dce3affb79dec615890e0f89efbf929df7136539c20faa92b9778\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jul 14 22:08:00.109647 containerd[1543]: time="2025-07-14T22:08:00.109594341Z" level=info msg="CreateContainer within sandbox \"4970c999393dce3affb79dec615890e0f89efbf929df7136539c20faa92b9778\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"a593e858f8f2b131456fa496a4df881650980e3bc8d903ed676c30205f531d1c\"" Jul 14 22:08:00.111641 containerd[1543]: time="2025-07-14T22:08:00.111605882Z" level=info msg="StartContainer for \"a593e858f8f2b131456fa496a4df881650980e3bc8d903ed676c30205f531d1c\"" Jul 14 22:08:00.120277 containerd[1543]: time="2025-07-14T22:08:00.120191412Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 14 22:08:00.120277 containerd[1543]: time="2025-07-14T22:08:00.120256732Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 14 22:08:00.120376 containerd[1543]: time="2025-07-14T22:08:00.120284053Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 14 22:08:00.120450 containerd[1543]: time="2025-07-14T22:08:00.120386934Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 14 22:08:00.128533 kubelet[2698]: E0714 22:08:00.128495 2698 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:08:00.227498 containerd[1543]: time="2025-07-14T22:08:00.227257889Z" level=info msg="StartContainer for \"a593e858f8f2b131456fa496a4df881650980e3bc8d903ed676c30205f531d1c\" returns successfully" Jul 14 22:08:00.227498 containerd[1543]: time="2025-07-14T22:08:00.227350610Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-gt2v9,Uid:08cf2ef0-a5ad-4cef-bd0c-fd8c37d9d005,Namespace:kube-system,Attempt:0,} returns sandbox id \"c33b4bb04f33dc6eabc27918a9faf98d50eb0d039dac4545583807ee1ad26a36\"" Jul 14 22:08:00.228321 kubelet[2698]: E0714 22:08:00.228277 2698 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:08:01.131649 kubelet[2698]: E0714 22:08:01.131598 2698 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:08:01.135194 kubelet[2698]: E0714 22:08:01.135170 2698 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:08:01.144359 kubelet[2698]: I0714 22:08:01.144090 2698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-m9ksr" podStartSLOduration=2.144070172 podStartE2EDuration="2.144070172s" 
podCreationTimestamp="2025-07-14 22:07:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-14 22:08:01.142048831 +0000 UTC m=+8.124827466" watchObservedRunningTime="2025-07-14 22:08:01.144070172 +0000 UTC m=+8.126848807" Jul 14 22:08:02.141071 kubelet[2698]: E0714 22:08:02.141027 2698 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:08:02.690830 kubelet[2698]: E0714 22:08:02.690499 2698 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:08:03.138425 kubelet[2698]: E0714 22:08:03.138307 2698 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:08:04.258169 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount763762490.mount: Deactivated successfully. 
Jul 14 22:08:05.582390 containerd[1543]: time="2025-07-14T22:08:05.582310456Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157646710" Jul 14 22:08:05.585058 containerd[1543]: time="2025-07-14T22:08:05.585012800Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 5.515308595s" Jul 14 22:08:05.585058 containerd[1543]: time="2025-07-14T22:08:05.585055721Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" Jul 14 22:08:05.587698 containerd[1543]: time="2025-07-14T22:08:05.587603303Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Jul 14 22:08:05.588763 containerd[1543]: time="2025-07-14T22:08:05.588676393Z" level=info msg="CreateContainer within sandbox \"90bb74cbf6fe882b179f3a0c81518b71f832a9affa16f759378dd13450f14eea\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jul 14 22:08:05.600889 containerd[1543]: time="2025-07-14T22:08:05.600575220Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 22:08:05.601557 containerd[1543]: time="2025-07-14T22:08:05.601523669Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 22:08:05.619953 containerd[1543]: time="2025-07-14T22:08:05.619841473Z" level=info msg="CreateContainer within sandbox \"90bb74cbf6fe882b179f3a0c81518b71f832a9affa16f759378dd13450f14eea\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"949aa867bf633a5981a45da040dd8d0322901f8c060ee6c1c687a4cc06e59526\"" Jul 14 22:08:05.620528 containerd[1543]: time="2025-07-14T22:08:05.620489279Z" level=info msg="StartContainer for \"949aa867bf633a5981a45da040dd8d0322901f8c060ee6c1c687a4cc06e59526\"" Jul 14 22:08:05.669226 containerd[1543]: time="2025-07-14T22:08:05.669185957Z" level=info msg="StartContainer for \"949aa867bf633a5981a45da040dd8d0322901f8c060ee6c1c687a4cc06e59526\" returns successfully" Jul 14 22:08:05.884402 containerd[1543]: time="2025-07-14T22:08:05.877188186Z" level=info msg="shim disconnected" id=949aa867bf633a5981a45da040dd8d0322901f8c060ee6c1c687a4cc06e59526 namespace=k8s.io Jul 14 22:08:05.884402 containerd[1543]: time="2025-07-14T22:08:05.884229050Z" level=warning msg="cleaning up after shim disconnected" id=949aa867bf633a5981a45da040dd8d0322901f8c060ee6c1c687a4cc06e59526 namespace=k8s.io Jul 14 22:08:05.884402 containerd[1543]: time="2025-07-14T22:08:05.884245130Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 14 22:08:06.146593 kubelet[2698]: E0714 22:08:06.146471 2698 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:08:06.151046 containerd[1543]: time="2025-07-14T22:08:06.150910851Z" level=info msg="CreateContainer within sandbox \"90bb74cbf6fe882b179f3a0c81518b71f832a9affa16f759378dd13450f14eea\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jul 14 22:08:06.163443 containerd[1543]: time="2025-07-14T22:08:06.163345160Z" level=info msg="CreateContainer within sandbox 
\"90bb74cbf6fe882b179f3a0c81518b71f832a9affa16f759378dd13450f14eea\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"960808931da021a69119af1ae8cb79f1560de061fccdf90fc1257e134ea7d134\"" Jul 14 22:08:06.164736 containerd[1543]: time="2025-07-14T22:08:06.164141567Z" level=info msg="StartContainer for \"960808931da021a69119af1ae8cb79f1560de061fccdf90fc1257e134ea7d134\"" Jul 14 22:08:06.204384 containerd[1543]: time="2025-07-14T22:08:06.204343198Z" level=info msg="StartContainer for \"960808931da021a69119af1ae8cb79f1560de061fccdf90fc1257e134ea7d134\" returns successfully" Jul 14 22:08:06.220279 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jul 14 22:08:06.220646 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jul 14 22:08:06.220711 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Jul 14 22:08:06.228658 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jul 14 22:08:06.244404 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
Jul 14 22:08:06.247279 containerd[1543]: time="2025-07-14T22:08:06.247185893Z" level=info msg="shim disconnected" id=960808931da021a69119af1ae8cb79f1560de061fccdf90fc1257e134ea7d134 namespace=k8s.io Jul 14 22:08:06.247279 containerd[1543]: time="2025-07-14T22:08:06.247235814Z" level=warning msg="cleaning up after shim disconnected" id=960808931da021a69119af1ae8cb79f1560de061fccdf90fc1257e134ea7d134 namespace=k8s.io Jul 14 22:08:06.247279 containerd[1543]: time="2025-07-14T22:08:06.247245174Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 14 22:08:06.367931 kubelet[2698]: E0714 22:08:06.367888 2698 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:08:06.616563 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-949aa867bf633a5981a45da040dd8d0322901f8c060ee6c1c687a4cc06e59526-rootfs.mount: Deactivated successfully. Jul 14 22:08:06.731452 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount252852497.mount: Deactivated successfully. 
Jul 14 22:08:07.150152 kubelet[2698]: E0714 22:08:07.150118 2698 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:08:07.153204 containerd[1543]: time="2025-07-14T22:08:07.153167827Z" level=info msg="CreateContainer within sandbox \"90bb74cbf6fe882b179f3a0c81518b71f832a9affa16f759378dd13450f14eea\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jul 14 22:08:07.191248 containerd[1543]: time="2025-07-14T22:08:07.191186871Z" level=info msg="CreateContainer within sandbox \"90bb74cbf6fe882b179f3a0c81518b71f832a9affa16f759378dd13450f14eea\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"6a54c108b7b2f5ef7096cb2d9d7d4406266bd4fbea93f0bf63d1fad9e04e7ca9\"" Jul 14 22:08:07.192052 containerd[1543]: time="2025-07-14T22:08:07.192023718Z" level=info msg="StartContainer for \"6a54c108b7b2f5ef7096cb2d9d7d4406266bd4fbea93f0bf63d1fad9e04e7ca9\"" Jul 14 22:08:07.253484 containerd[1543]: time="2025-07-14T22:08:07.253434602Z" level=info msg="StartContainer for \"6a54c108b7b2f5ef7096cb2d9d7d4406266bd4fbea93f0bf63d1fad9e04e7ca9\" returns successfully" Jul 14 22:08:07.338610 containerd[1543]: time="2025-07-14T22:08:07.338397127Z" level=info msg="shim disconnected" id=6a54c108b7b2f5ef7096cb2d9d7d4406266bd4fbea93f0bf63d1fad9e04e7ca9 namespace=k8s.io Jul 14 22:08:07.338610 containerd[1543]: time="2025-07-14T22:08:07.338453847Z" level=warning msg="cleaning up after shim disconnected" id=6a54c108b7b2f5ef7096cb2d9d7d4406266bd4fbea93f0bf63d1fad9e04e7ca9 namespace=k8s.io Jul 14 22:08:07.338610 containerd[1543]: time="2025-07-14T22:08:07.338461967Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 14 22:08:07.349715 containerd[1543]: time="2025-07-14T22:08:07.349550022Z" level=warning msg="cleanup warnings time=\"2025-07-14T22:08:07Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not 
terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Jul 14 22:08:07.378258 containerd[1543]: time="2025-07-14T22:08:07.377441339Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 22:08:07.378258 containerd[1543]: time="2025-07-14T22:08:07.378222906Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17135306" Jul 14 22:08:07.378715 containerd[1543]: time="2025-07-14T22:08:07.378679910Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 22:08:07.380472 containerd[1543]: time="2025-07-14T22:08:07.380429645Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 1.792784701s" Jul 14 22:08:07.380472 containerd[1543]: time="2025-07-14T22:08:07.380469485Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" Jul 14 22:08:07.395342 containerd[1543]: time="2025-07-14T22:08:07.395303372Z" level=info msg="CreateContainer within sandbox \"c33b4bb04f33dc6eabc27918a9faf98d50eb0d039dac4545583807ee1ad26a36\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Jul 14 22:08:07.408641 
containerd[1543]: time="2025-07-14T22:08:07.408190242Z" level=info msg="CreateContainer within sandbox \"c33b4bb04f33dc6eabc27918a9faf98d50eb0d039dac4545583807ee1ad26a36\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"c2ffebf549fb0026ba9b26e1e3fd1238085ceb9d588dcd8362d3f652b51403d1\"" Jul 14 22:08:07.409807 containerd[1543]: time="2025-07-14T22:08:07.409767255Z" level=info msg="StartContainer for \"c2ffebf549fb0026ba9b26e1e3fd1238085ceb9d588dcd8362d3f652b51403d1\"" Jul 14 22:08:07.451491 containerd[1543]: time="2025-07-14T22:08:07.451449691Z" level=info msg="StartContainer for \"c2ffebf549fb0026ba9b26e1e3fd1238085ceb9d588dcd8362d3f652b51403d1\" returns successfully" Jul 14 22:08:08.160399 kubelet[2698]: E0714 22:08:08.160360 2698 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:08:08.166689 kubelet[2698]: E0714 22:08:08.166637 2698 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:08:08.170064 containerd[1543]: time="2025-07-14T22:08:08.169892622Z" level=info msg="CreateContainer within sandbox \"90bb74cbf6fe882b179f3a0c81518b71f832a9affa16f759378dd13450f14eea\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jul 14 22:08:08.172644 kubelet[2698]: I0714 22:08:08.171574 2698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-5d85765b45-gt2v9" podStartSLOduration=2.012548672 podStartE2EDuration="9.171555915s" podCreationTimestamp="2025-07-14 22:07:59 +0000 UTC" firstStartedPulling="2025-07-14 22:08:00.228764145 +0000 UTC m=+7.211542740" lastFinishedPulling="2025-07-14 22:08:07.387771348 +0000 UTC m=+14.370549983" observedRunningTime="2025-07-14 22:08:08.171554155 +0000 UTC m=+15.154332790" 
watchObservedRunningTime="2025-07-14 22:08:08.171555915 +0000 UTC m=+15.154334550" Jul 14 22:08:08.211283 containerd[1543]: time="2025-07-14T22:08:08.211221765Z" level=info msg="CreateContainer within sandbox \"90bb74cbf6fe882b179f3a0c81518b71f832a9affa16f759378dd13450f14eea\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"cbe02411a47d5840a83c4d44936d25157e2c9ce5617526bcae6b666439418ac5\"" Jul 14 22:08:08.214047 containerd[1543]: time="2025-07-14T22:08:08.213941508Z" level=info msg="StartContainer for \"cbe02411a47d5840a83c4d44936d25157e2c9ce5617526bcae6b666439418ac5\"" Jul 14 22:08:08.265239 containerd[1543]: time="2025-07-14T22:08:08.265190734Z" level=info msg="StartContainer for \"cbe02411a47d5840a83c4d44936d25157e2c9ce5617526bcae6b666439418ac5\" returns successfully" Jul 14 22:08:08.297093 containerd[1543]: time="2025-07-14T22:08:08.297016759Z" level=info msg="shim disconnected" id=cbe02411a47d5840a83c4d44936d25157e2c9ce5617526bcae6b666439418ac5 namespace=k8s.io Jul 14 22:08:08.297093 containerd[1543]: time="2025-07-14T22:08:08.297075279Z" level=warning msg="cleaning up after shim disconnected" id=cbe02411a47d5840a83c4d44936d25157e2c9ce5617526bcae6b666439418ac5 namespace=k8s.io Jul 14 22:08:08.297093 containerd[1543]: time="2025-07-14T22:08:08.297084600Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 14 22:08:08.616700 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-cbe02411a47d5840a83c4d44936d25157e2c9ce5617526bcae6b666439418ac5-rootfs.mount: Deactivated successfully. 
Jul 14 22:08:09.171074 kubelet[2698]: E0714 22:08:09.171029 2698 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:08:09.171471 kubelet[2698]: E0714 22:08:09.171136 2698 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:08:09.175881 containerd[1543]: time="2025-07-14T22:08:09.174073661Z" level=info msg="CreateContainer within sandbox \"90bb74cbf6fe882b179f3a0c81518b71f832a9affa16f759378dd13450f14eea\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jul 14 22:08:09.196288 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3335427245.mount: Deactivated successfully. Jul 14 22:08:09.197915 containerd[1543]: time="2025-07-14T22:08:09.197871294Z" level=info msg="CreateContainer within sandbox \"90bb74cbf6fe882b179f3a0c81518b71f832a9affa16f759378dd13450f14eea\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"8eee75f40e295d2985e122a67094acf1cde877e2a00806d7dd4b4591f59ce8c2\"" Jul 14 22:08:09.198447 containerd[1543]: time="2025-07-14T22:08:09.198414858Z" level=info msg="StartContainer for \"8eee75f40e295d2985e122a67094acf1cde877e2a00806d7dd4b4591f59ce8c2\"" Jul 14 22:08:09.258298 containerd[1543]: time="2025-07-14T22:08:09.258045463Z" level=info msg="StartContainer for \"8eee75f40e295d2985e122a67094acf1cde877e2a00806d7dd4b4591f59ce8c2\" returns successfully" Jul 14 22:08:09.380168 kubelet[2698]: I0714 22:08:09.380126 2698 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Jul 14 22:08:09.504284 kubelet[2698]: I0714 22:08:09.502529 2698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: 
\"kubernetes.io/configmap/58e591f3-8e23-437b-ba6a-cefcbbcfad4f-config-volume\") pod \"coredns-7c65d6cfc9-p59l8\" (UID: \"58e591f3-8e23-437b-ba6a-cefcbbcfad4f\") " pod="kube-system/coredns-7c65d6cfc9-p59l8" Jul 14 22:08:09.504284 kubelet[2698]: I0714 22:08:09.502574 2698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4bhgl\" (UniqueName: \"kubernetes.io/projected/64dae2f1-7195-4c3a-a954-09e9e5312dbd-kube-api-access-4bhgl\") pod \"coredns-7c65d6cfc9-llkr4\" (UID: \"64dae2f1-7195-4c3a-a954-09e9e5312dbd\") " pod="kube-system/coredns-7c65d6cfc9-llkr4" Jul 14 22:08:09.504284 kubelet[2698]: I0714 22:08:09.502598 2698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r2p5q\" (UniqueName: \"kubernetes.io/projected/58e591f3-8e23-437b-ba6a-cefcbbcfad4f-kube-api-access-r2p5q\") pod \"coredns-7c65d6cfc9-p59l8\" (UID: \"58e591f3-8e23-437b-ba6a-cefcbbcfad4f\") " pod="kube-system/coredns-7c65d6cfc9-p59l8" Jul 14 22:08:09.504284 kubelet[2698]: I0714 22:08:09.502667 2698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/64dae2f1-7195-4c3a-a954-09e9e5312dbd-config-volume\") pod \"coredns-7c65d6cfc9-llkr4\" (UID: \"64dae2f1-7195-4c3a-a954-09e9e5312dbd\") " pod="kube-system/coredns-7c65d6cfc9-llkr4" Jul 14 22:08:09.719204 kubelet[2698]: E0714 22:08:09.719166 2698 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:08:09.720233 containerd[1543]: time="2025-07-14T22:08:09.720077215Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-llkr4,Uid:64dae2f1-7195-4c3a-a954-09e9e5312dbd,Namespace:kube-system,Attempt:0,}" Jul 14 22:08:09.721858 kubelet[2698]: E0714 22:08:09.721820 2698 dns.go:153] "Nameserver 
limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:08:09.722516 containerd[1543]: time="2025-07-14T22:08:09.722437195Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-p59l8,Uid:58e591f3-8e23-437b-ba6a-cefcbbcfad4f,Namespace:kube-system,Attempt:0,}" Jul 14 22:08:10.180065 kubelet[2698]: E0714 22:08:10.180036 2698 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:08:10.195882 kubelet[2698]: I0714 22:08:10.195496 2698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-pms66" podStartSLOduration=5.676918375 podStartE2EDuration="11.195472881s" podCreationTimestamp="2025-07-14 22:07:59 +0000 UTC" firstStartedPulling="2025-07-14 22:08:00.068893996 +0000 UTC m=+7.051672631" lastFinishedPulling="2025-07-14 22:08:05.587448502 +0000 UTC m=+12.570227137" observedRunningTime="2025-07-14 22:08:10.194981877 +0000 UTC m=+17.177760552" watchObservedRunningTime="2025-07-14 22:08:10.195472881 +0000 UTC m=+17.178251516" Jul 14 22:08:11.181127 kubelet[2698]: E0714 22:08:11.181095 2698 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:08:11.454869 systemd-networkd[1231]: cilium_host: Link UP Jul 14 22:08:11.455011 systemd-networkd[1231]: cilium_net: Link UP Jul 14 22:08:11.455057 systemd-networkd[1231]: cilium_net: Gained carrier Jul 14 22:08:11.455219 systemd-networkd[1231]: cilium_host: Gained carrier Jul 14 22:08:11.455363 systemd-networkd[1231]: cilium_host: Gained IPv6LL Jul 14 22:08:11.536254 systemd-networkd[1231]: cilium_vxlan: Link UP Jul 14 22:08:11.536261 systemd-networkd[1231]: cilium_vxlan: Gained carrier Jul 14 22:08:11.811137 
systemd-networkd[1231]: cilium_net: Gained IPv6LL Jul 14 22:08:11.828881 kernel: NET: Registered PF_ALG protocol family Jul 14 22:08:12.182924 kubelet[2698]: E0714 22:08:12.182900 2698 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:08:12.385719 systemd-networkd[1231]: lxc_health: Link UP Jul 14 22:08:12.390670 systemd-networkd[1231]: lxc_health: Gained carrier Jul 14 22:08:12.868835 systemd-networkd[1231]: lxc1ce25adc8205: Link UP Jul 14 22:08:12.896969 kernel: eth0: renamed from tmp2a9f8 Jul 14 22:08:12.901273 systemd-networkd[1231]: lxc3f90b6bf093e: Link UP Jul 14 22:08:12.909867 kernel: eth0: renamed from tmp5eb52 Jul 14 22:08:12.924375 systemd-networkd[1231]: lxc3f90b6bf093e: Gained carrier Jul 14 22:08:12.933016 systemd-networkd[1231]: lxc1ce25adc8205: Gained carrier Jul 14 22:08:13.283196 systemd-networkd[1231]: cilium_vxlan: Gained IPv6LL Jul 14 22:08:13.539251 systemd-networkd[1231]: lxc_health: Gained IPv6LL Jul 14 22:08:14.008086 kubelet[2698]: E0714 22:08:14.007555 2698 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:08:14.243297 systemd-networkd[1231]: lxc3f90b6bf093e: Gained IPv6LL Jul 14 22:08:14.564114 systemd-networkd[1231]: lxc1ce25adc8205: Gained IPv6LL Jul 14 22:08:16.455865 containerd[1543]: time="2025-07-14T22:08:16.455640197Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 14 22:08:16.455865 containerd[1543]: time="2025-07-14T22:08:16.455705437Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 14 22:08:16.455865 containerd[1543]: time="2025-07-14T22:08:16.455716437Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 14 22:08:16.456307 containerd[1543]: time="2025-07-14T22:08:16.455918839Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 14 22:08:16.456937 containerd[1543]: time="2025-07-14T22:08:16.456759324Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 14 22:08:16.457597 containerd[1543]: time="2025-07-14T22:08:16.456875325Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 14 22:08:16.457597 containerd[1543]: time="2025-07-14T22:08:16.456897605Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 14 22:08:16.458145 containerd[1543]: time="2025-07-14T22:08:16.458064654Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 14 22:08:16.477699 systemd-resolved[1435]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Jul 14 22:08:16.483104 systemd-resolved[1435]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Jul 14 22:08:16.498318 containerd[1543]: time="2025-07-14T22:08:16.498262217Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-p59l8,Uid:58e591f3-8e23-437b-ba6a-cefcbbcfad4f,Namespace:kube-system,Attempt:0,} returns sandbox id \"2a9f8c8b3f9c31d368866de910672c7847d13e85c447632eb56c1e76ed62d595\""
Jul 14 22:08:16.499133 kubelet[2698]: E0714 22:08:16.499108 2698 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 14 22:08:16.502168 containerd[1543]: time="2025-07-14T22:08:16.502066964Z" level=info msg="CreateContainer within sandbox \"2a9f8c8b3f9c31d368866de910672c7847d13e85c447632eb56c1e76ed62d595\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Jul 14 22:08:16.503961 containerd[1543]: time="2025-07-14T22:08:16.503811336Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-llkr4,Uid:64dae2f1-7195-4c3a-a954-09e9e5312dbd,Namespace:kube-system,Attempt:0,} returns sandbox id \"5eb5289ba17ef3a41598734f7f5c97a36ad30dd2ab2a6cc21a77850c648644b7\""
Jul 14 22:08:16.504584 kubelet[2698]: E0714 22:08:16.504553 2698 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 14 22:08:16.508792 containerd[1543]: time="2025-07-14T22:08:16.508748771Z" level=info msg="CreateContainer within sandbox \"5eb5289ba17ef3a41598734f7f5c97a36ad30dd2ab2a6cc21a77850c648644b7\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Jul 14 22:08:16.516768 containerd[1543]: time="2025-07-14T22:08:16.516716067Z" level=info msg="CreateContainer within sandbox \"2a9f8c8b3f9c31d368866de910672c7847d13e85c447632eb56c1e76ed62d595\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"563d2aadfc71e2cacc63b8d77d4ae5676dec9bb53c84d01dd229179c739be02e\""
Jul 14 22:08:16.517526 containerd[1543]: time="2025-07-14T22:08:16.517497753Z" level=info msg="StartContainer for \"563d2aadfc71e2cacc63b8d77d4ae5676dec9bb53c84d01dd229179c739be02e\""
Jul 14 22:08:16.523300 containerd[1543]: time="2025-07-14T22:08:16.523258553Z" level=info msg="CreateContainer within sandbox \"5eb5289ba17ef3a41598734f7f5c97a36ad30dd2ab2a6cc21a77850c648644b7\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"8b17736b7f531b565e5d2e5615d69826341b04d916e6cf647d1343718e019965\""
Jul 14 22:08:16.524787 containerd[1543]: time="2025-07-14T22:08:16.523906958Z" level=info msg="StartContainer for \"8b17736b7f531b565e5d2e5615d69826341b04d916e6cf647d1343718e019965\""
Jul 14 22:08:16.571757 containerd[1543]: time="2025-07-14T22:08:16.571688815Z" level=info msg="StartContainer for \"563d2aadfc71e2cacc63b8d77d4ae5676dec9bb53c84d01dd229179c739be02e\" returns successfully"
Jul 14 22:08:16.571968 containerd[1543]: time="2025-07-14T22:08:16.571688855Z" level=info msg="StartContainer for \"8b17736b7f531b565e5d2e5615d69826341b04d916e6cf647d1343718e019965\" returns successfully"
Jul 14 22:08:17.232780 kubelet[2698]: E0714 22:08:17.231408 2698 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 14 22:08:17.237312 kubelet[2698]: E0714 22:08:17.236736 2698 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 14 22:08:17.247563 kubelet[2698]: I0714 22:08:17.247489 2698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-llkr4" podStartSLOduration=18.246600987 podStartE2EDuration="18.246600987s" podCreationTimestamp="2025-07-14 22:07:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-14 22:08:17.246439146 +0000 UTC m=+24.229217781" watchObservedRunningTime="2025-07-14 22:08:17.246600987 +0000 UTC m=+24.229379662"
Jul 14 22:08:17.395878 kubelet[2698]: I0714 22:08:17.394617 2698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-p59l8" podStartSLOduration=18.394599214 podStartE2EDuration="18.394599214s" podCreationTimestamp="2025-07-14 22:07:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-14 22:08:17.283365922 +0000 UTC m=+24.266144557" watchObservedRunningTime="2025-07-14 22:08:17.394599214 +0000 UTC m=+24.377377849"
Jul 14 22:08:18.237841 kubelet[2698]: E0714 22:08:18.237688 2698 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 14 22:08:18.237841 kubelet[2698]: E0714 22:08:18.237760 2698 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 14 22:08:19.239386 kubelet[2698]: E0714 22:08:19.239148 2698 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 14 22:08:19.239386 kubelet[2698]: E0714 22:08:19.239228 2698 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 14 22:08:20.435543 kubelet[2698]: I0714 22:08:20.435495 2698 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jul 14 22:08:20.436171 kubelet[2698]: E0714 22:08:20.435949 2698 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 14 22:08:21.242912 kubelet[2698]: E0714 22:08:21.242824 2698 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 14 22:08:51.316140 systemd[1]: Started sshd@7-10.0.0.114:22-10.0.0.1:51092.service - OpenSSH per-connection server daemon (10.0.0.1:51092).
Jul 14 22:08:51.357148 sshd[4102]: Accepted publickey for core from 10.0.0.1 port 51092 ssh2: RSA SHA256:M1w9XMnl/I4XlZYWJshBUfaekchzCKWegQKD2Nlty/U
Jul 14 22:08:51.358804 sshd[4102]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 14 22:08:51.362381 systemd-logind[1523]: New session 8 of user core.
Jul 14 22:08:51.374131 systemd[1]: Started session-8.scope - Session 8 of User core.
Jul 14 22:08:51.491973 sshd[4102]: pam_unix(sshd:session): session closed for user core
Jul 14 22:08:51.495332 systemd-logind[1523]: Session 8 logged out. Waiting for processes to exit.
Jul 14 22:08:51.495438 systemd[1]: sshd@7-10.0.0.114:22-10.0.0.1:51092.service: Deactivated successfully.
Jul 14 22:08:51.498965 systemd[1]: session-8.scope: Deactivated successfully.
Jul 14 22:08:51.500071 systemd-logind[1523]: Removed session 8.
Jul 14 22:08:56.506057 systemd[1]: Started sshd@8-10.0.0.114:22-10.0.0.1:45376.service - OpenSSH per-connection server daemon (10.0.0.1:45376).
Jul 14 22:08:56.539946 sshd[4121]: Accepted publickey for core from 10.0.0.1 port 45376 ssh2: RSA SHA256:M1w9XMnl/I4XlZYWJshBUfaekchzCKWegQKD2Nlty/U
Jul 14 22:08:56.541095 sshd[4121]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 14 22:08:56.544637 systemd-logind[1523]: New session 9 of user core.
Jul 14 22:08:56.551181 systemd[1]: Started session-9.scope - Session 9 of User core.
Jul 14 22:08:56.657711 sshd[4121]: pam_unix(sshd:session): session closed for user core
Jul 14 22:08:56.660819 systemd[1]: sshd@8-10.0.0.114:22-10.0.0.1:45376.service: Deactivated successfully.
Jul 14 22:08:56.662688 systemd-logind[1523]: Session 9 logged out. Waiting for processes to exit.
Jul 14 22:08:56.662762 systemd[1]: session-9.scope: Deactivated successfully.
Jul 14 22:08:56.663571 systemd-logind[1523]: Removed session 9.
Jul 14 22:09:01.673160 systemd[1]: Started sshd@9-10.0.0.114:22-10.0.0.1:45388.service - OpenSSH per-connection server daemon (10.0.0.1:45388).
Jul 14 22:09:01.707273 sshd[4139]: Accepted publickey for core from 10.0.0.1 port 45388 ssh2: RSA SHA256:M1w9XMnl/I4XlZYWJshBUfaekchzCKWegQKD2Nlty/U
Jul 14 22:09:01.708520 sshd[4139]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 14 22:09:01.712230 systemd-logind[1523]: New session 10 of user core.
Jul 14 22:09:01.723082 systemd[1]: Started session-10.scope - Session 10 of User core.
Jul 14 22:09:01.831729 sshd[4139]: pam_unix(sshd:session): session closed for user core
Jul 14 22:09:01.834907 systemd[1]: sshd@9-10.0.0.114:22-10.0.0.1:45388.service: Deactivated successfully.
Jul 14 22:09:01.836829 systemd-logind[1523]: Session 10 logged out. Waiting for processes to exit.
Jul 14 22:09:01.836886 systemd[1]: session-10.scope: Deactivated successfully.
Jul 14 22:09:01.838495 systemd-logind[1523]: Removed session 10.
Jul 14 22:09:06.842087 systemd[1]: Started sshd@10-10.0.0.114:22-10.0.0.1:42904.service - OpenSSH per-connection server daemon (10.0.0.1:42904).
Jul 14 22:09:06.881968 sshd[4155]: Accepted publickey for core from 10.0.0.1 port 42904 ssh2: RSA SHA256:M1w9XMnl/I4XlZYWJshBUfaekchzCKWegQKD2Nlty/U
Jul 14 22:09:06.883391 sshd[4155]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 14 22:09:06.887679 systemd-logind[1523]: New session 11 of user core.
Jul 14 22:09:06.895121 systemd[1]: Started session-11.scope - Session 11 of User core.
Jul 14 22:09:07.034982 sshd[4155]: pam_unix(sshd:session): session closed for user core
Jul 14 22:09:07.038535 systemd[1]: sshd@10-10.0.0.114:22-10.0.0.1:42904.service: Deactivated successfully.
Jul 14 22:09:07.043582 systemd[1]: session-11.scope: Deactivated successfully.
Jul 14 22:09:07.043732 systemd-logind[1523]: Session 11 logged out. Waiting for processes to exit.
Jul 14 22:09:07.046480 systemd-logind[1523]: Removed session 11.
Jul 14 22:09:12.048129 systemd[1]: Started sshd@11-10.0.0.114:22-10.0.0.1:42912.service - OpenSSH per-connection server daemon (10.0.0.1:42912).
Jul 14 22:09:12.084912 sshd[4171]: Accepted publickey for core from 10.0.0.1 port 42912 ssh2: RSA SHA256:M1w9XMnl/I4XlZYWJshBUfaekchzCKWegQKD2Nlty/U
Jul 14 22:09:12.086213 sshd[4171]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 14 22:09:12.091991 systemd-logind[1523]: New session 12 of user core.
Jul 14 22:09:12.104150 systemd[1]: Started session-12.scope - Session 12 of User core.
Jul 14 22:09:12.108061 kubelet[2698]: E0714 22:09:12.107209 2698 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 14 22:09:12.221730 sshd[4171]: pam_unix(sshd:session): session closed for user core
Jul 14 22:09:12.241105 systemd[1]: Started sshd@12-10.0.0.114:22-10.0.0.1:42918.service - OpenSSH per-connection server daemon (10.0.0.1:42918).
Jul 14 22:09:12.241496 systemd[1]: sshd@11-10.0.0.114:22-10.0.0.1:42912.service: Deactivated successfully.
Jul 14 22:09:12.244811 systemd-logind[1523]: Session 12 logged out. Waiting for processes to exit.
Jul 14 22:09:12.245397 systemd[1]: session-12.scope: Deactivated successfully.
Jul 14 22:09:12.247504 systemd-logind[1523]: Removed session 12.
Jul 14 22:09:12.282582 sshd[4185]: Accepted publickey for core from 10.0.0.1 port 42918 ssh2: RSA SHA256:M1w9XMnl/I4XlZYWJshBUfaekchzCKWegQKD2Nlty/U
Jul 14 22:09:12.284150 sshd[4185]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 14 22:09:12.288930 systemd-logind[1523]: New session 13 of user core.
Jul 14 22:09:12.299078 systemd[1]: Started session-13.scope - Session 13 of User core.
Jul 14 22:09:12.463036 sshd[4185]: pam_unix(sshd:session): session closed for user core
Jul 14 22:09:12.472119 systemd[1]: Started sshd@13-10.0.0.114:22-10.0.0.1:59128.service - OpenSSH per-connection server daemon (10.0.0.1:59128).
Jul 14 22:09:12.472498 systemd[1]: sshd@12-10.0.0.114:22-10.0.0.1:42918.service: Deactivated successfully.
Jul 14 22:09:12.482656 systemd[1]: session-13.scope: Deactivated successfully.
Jul 14 22:09:12.486470 systemd-logind[1523]: Session 13 logged out. Waiting for processes to exit.
Jul 14 22:09:12.495233 systemd-logind[1523]: Removed session 13.
Jul 14 22:09:12.534169 sshd[4200]: Accepted publickey for core from 10.0.0.1 port 59128 ssh2: RSA SHA256:M1w9XMnl/I4XlZYWJshBUfaekchzCKWegQKD2Nlty/U
Jul 14 22:09:12.535427 sshd[4200]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 14 22:09:12.539635 systemd-logind[1523]: New session 14 of user core.
Jul 14 22:09:12.551125 systemd[1]: Started session-14.scope - Session 14 of User core.
Jul 14 22:09:12.664788 sshd[4200]: pam_unix(sshd:session): session closed for user core
Jul 14 22:09:12.668762 systemd-logind[1523]: Session 14 logged out. Waiting for processes to exit.
Jul 14 22:09:12.669342 systemd[1]: sshd@13-10.0.0.114:22-10.0.0.1:59128.service: Deactivated successfully.
Jul 14 22:09:12.671359 systemd[1]: session-14.scope: Deactivated successfully.
Jul 14 22:09:12.672136 systemd-logind[1523]: Removed session 14.
Jul 14 22:09:17.680098 systemd[1]: Started sshd@14-10.0.0.114:22-10.0.0.1:59144.service - OpenSSH per-connection server daemon (10.0.0.1:59144).
Jul 14 22:09:17.726274 sshd[4219]: Accepted publickey for core from 10.0.0.1 port 59144 ssh2: RSA SHA256:M1w9XMnl/I4XlZYWJshBUfaekchzCKWegQKD2Nlty/U
Jul 14 22:09:17.728702 sshd[4219]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 14 22:09:17.732784 systemd-logind[1523]: New session 15 of user core.
Jul 14 22:09:17.742086 systemd[1]: Started session-15.scope - Session 15 of User core.
Jul 14 22:09:17.868753 sshd[4219]: pam_unix(sshd:session): session closed for user core
Jul 14 22:09:17.872161 systemd[1]: sshd@14-10.0.0.114:22-10.0.0.1:59144.service: Deactivated successfully.
Jul 14 22:09:17.874106 systemd[1]: session-15.scope: Deactivated successfully.
Jul 14 22:09:17.874130 systemd-logind[1523]: Session 15 logged out. Waiting for processes to exit.
Jul 14 22:09:17.875239 systemd-logind[1523]: Removed session 15.
Jul 14 22:09:22.106803 kubelet[2698]: E0714 22:09:22.106765 2698 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 14 22:09:22.884082 systemd[1]: Started sshd@15-10.0.0.114:22-10.0.0.1:52412.service - OpenSSH per-connection server daemon (10.0.0.1:52412).
Jul 14 22:09:22.919665 sshd[4234]: Accepted publickey for core from 10.0.0.1 port 52412 ssh2: RSA SHA256:M1w9XMnl/I4XlZYWJshBUfaekchzCKWegQKD2Nlty/U
Jul 14 22:09:22.921002 sshd[4234]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 14 22:09:22.924323 systemd-logind[1523]: New session 16 of user core.
Jul 14 22:09:22.931122 systemd[1]: Started session-16.scope - Session 16 of User core.
Jul 14 22:09:23.039515 sshd[4234]: pam_unix(sshd:session): session closed for user core
Jul 14 22:09:23.051076 systemd[1]: Started sshd@16-10.0.0.114:22-10.0.0.1:52422.service - OpenSSH per-connection server daemon (10.0.0.1:52422).
Jul 14 22:09:23.051997 systemd[1]: sshd@15-10.0.0.114:22-10.0.0.1:52412.service: Deactivated successfully.
Jul 14 22:09:23.053818 systemd[1]: session-16.scope: Deactivated successfully.
Jul 14 22:09:23.056134 systemd-logind[1523]: Session 16 logged out. Waiting for processes to exit.
Jul 14 22:09:23.057639 systemd-logind[1523]: Removed session 16.
Jul 14 22:09:23.085677 sshd[4247]: Accepted publickey for core from 10.0.0.1 port 52422 ssh2: RSA SHA256:M1w9XMnl/I4XlZYWJshBUfaekchzCKWegQKD2Nlty/U
Jul 14 22:09:23.086868 sshd[4247]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 14 22:09:23.090774 systemd-logind[1523]: New session 17 of user core.
Jul 14 22:09:23.100100 systemd[1]: Started session-17.scope - Session 17 of User core.
Jul 14 22:09:25.109868 kubelet[2698]: E0714 22:09:25.107681 2698 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 14 22:09:30.106497 kubelet[2698]: E0714 22:09:30.106459 2698 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 14 22:09:33.106718 kubelet[2698]: E0714 22:09:33.106654 2698 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 14 22:09:33.312972 sshd[4247]: pam_unix(sshd:session): session closed for user core
Jul 14 22:09:33.323131 systemd[1]: Started sshd@17-10.0.0.114:22-10.0.0.1:51196.service - OpenSSH per-connection server daemon (10.0.0.1:51196).
Jul 14 22:09:33.323550 systemd[1]: sshd@16-10.0.0.114:22-10.0.0.1:52422.service: Deactivated successfully.
Jul 14 22:09:33.327053 systemd[1]: session-17.scope: Deactivated successfully.
Jul 14 22:09:33.328174 systemd-logind[1523]: Session 17 logged out. Waiting for processes to exit.
Jul 14 22:09:33.330646 systemd-logind[1523]: Removed session 17.
Jul 14 22:09:33.388169 sshd[4262]: Accepted publickey for core from 10.0.0.1 port 51196 ssh2: RSA SHA256:M1w9XMnl/I4XlZYWJshBUfaekchzCKWegQKD2Nlty/U
Jul 14 22:09:33.390685 sshd[4262]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 14 22:09:33.396332 systemd-logind[1523]: New session 18 of user core.
Jul 14 22:09:33.405098 systemd[1]: Started session-18.scope - Session 18 of User core.
Jul 14 22:09:34.109710 kubelet[2698]: E0714 22:09:34.107179 2698 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 14 22:09:36.106686 kubelet[2698]: E0714 22:09:36.106575 2698 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 14 22:09:36.107091 kubelet[2698]: E0714 22:09:36.106773 2698 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 14 22:09:54.660976 sshd[4262]: pam_unix(sshd:session): session closed for user core
Jul 14 22:09:54.673232 systemd[1]: Started sshd@18-10.0.0.114:22-10.0.0.1:55586.service - OpenSSH per-connection server daemon (10.0.0.1:55586).
Jul 14 22:09:54.675762 systemd[1]: sshd@17-10.0.0.114:22-10.0.0.1:51196.service: Deactivated successfully.
Jul 14 22:09:54.680585 systemd[1]: session-18.scope: Deactivated successfully.
Jul 14 22:09:54.683792 systemd-logind[1523]: Session 18 logged out. Waiting for processes to exit.
Jul 14 22:09:54.685219 systemd-logind[1523]: Removed session 18.
Jul 14 22:09:54.714279 sshd[4286]: Accepted publickey for core from 10.0.0.1 port 55586 ssh2: RSA SHA256:M1w9XMnl/I4XlZYWJshBUfaekchzCKWegQKD2Nlty/U
Jul 14 22:09:54.715647 sshd[4286]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 14 22:09:54.719649 systemd-logind[1523]: New session 19 of user core.
Jul 14 22:09:54.729108 systemd[1]: Started session-19.scope - Session 19 of User core.
Jul 14 22:09:54.948241 sshd[4286]: pam_unix(sshd:session): session closed for user core
Jul 14 22:09:54.963134 systemd[1]: Started sshd@19-10.0.0.114:22-10.0.0.1:55600.service - OpenSSH per-connection server daemon (10.0.0.1:55600).
Jul 14 22:09:54.963888 systemd[1]: sshd@18-10.0.0.114:22-10.0.0.1:55586.service: Deactivated successfully.
Jul 14 22:09:54.965426 systemd[1]: session-19.scope: Deactivated successfully.
Jul 14 22:09:54.966953 systemd-logind[1523]: Session 19 logged out. Waiting for processes to exit.
Jul 14 22:09:54.968008 systemd-logind[1523]: Removed session 19.
Jul 14 22:09:54.999548 sshd[4300]: Accepted publickey for core from 10.0.0.1 port 55600 ssh2: RSA SHA256:M1w9XMnl/I4XlZYWJshBUfaekchzCKWegQKD2Nlty/U
Jul 14 22:09:55.000823 sshd[4300]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 14 22:09:55.004927 systemd-logind[1523]: New session 20 of user core.
Jul 14 22:09:55.016120 systemd[1]: Started session-20.scope - Session 20 of User core.
Jul 14 22:09:55.123905 sshd[4300]: pam_unix(sshd:session): session closed for user core
Jul 14 22:09:55.127744 systemd[1]: sshd@19-10.0.0.114:22-10.0.0.1:55600.service: Deactivated successfully.
Jul 14 22:09:55.128177 systemd-logind[1523]: Session 20 logged out. Waiting for processes to exit.
Jul 14 22:09:55.129669 systemd[1]: session-20.scope: Deactivated successfully.
Jul 14 22:09:55.131018 systemd-logind[1523]: Removed session 20.
Jul 14 22:10:00.139345 systemd[1]: Started sshd@20-10.0.0.114:22-10.0.0.1:55616.service - OpenSSH per-connection server daemon (10.0.0.1:55616).
Jul 14 22:10:00.172791 sshd[4322]: Accepted publickey for core from 10.0.0.1 port 55616 ssh2: RSA SHA256:M1w9XMnl/I4XlZYWJshBUfaekchzCKWegQKD2Nlty/U
Jul 14 22:10:00.173931 sshd[4322]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 14 22:10:00.177391 systemd-logind[1523]: New session 21 of user core.
Jul 14 22:10:00.187076 systemd[1]: Started session-21.scope - Session 21 of User core.
Jul 14 22:10:00.289677 sshd[4322]: pam_unix(sshd:session): session closed for user core
Jul 14 22:10:00.293101 systemd[1]: sshd@20-10.0.0.114:22-10.0.0.1:55616.service: Deactivated successfully.
Jul 14 22:10:00.295736 systemd[1]: session-21.scope: Deactivated successfully.
Jul 14 22:10:00.295811 systemd-logind[1523]: Session 21 logged out. Waiting for processes to exit.
Jul 14 22:10:00.297500 systemd-logind[1523]: Removed session 21.
Jul 14 22:10:05.310106 systemd[1]: Started sshd@21-10.0.0.114:22-10.0.0.1:41194.service - OpenSSH per-connection server daemon (10.0.0.1:41194).
Jul 14 22:10:05.346714 sshd[4340]: Accepted publickey for core from 10.0.0.1 port 41194 ssh2: RSA SHA256:M1w9XMnl/I4XlZYWJshBUfaekchzCKWegQKD2Nlty/U
Jul 14 22:10:05.348069 sshd[4340]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 14 22:10:05.351767 systemd-logind[1523]: New session 22 of user core.
Jul 14 22:10:05.359194 systemd[1]: Started session-22.scope - Session 22 of User core.
Jul 14 22:10:05.463423 sshd[4340]: pam_unix(sshd:session): session closed for user core
Jul 14 22:10:05.467503 systemd[1]: sshd@21-10.0.0.114:22-10.0.0.1:41194.service: Deactivated successfully.
Jul 14 22:10:05.469376 systemd-logind[1523]: Session 22 logged out. Waiting for processes to exit.
Jul 14 22:10:05.470000 systemd[1]: session-22.scope: Deactivated successfully.
Jul 14 22:10:05.470446 systemd-logind[1523]: Removed session 22.
Jul 14 22:10:10.474068 systemd[1]: Started sshd@22-10.0.0.114:22-10.0.0.1:41200.service - OpenSSH per-connection server daemon (10.0.0.1:41200).
Jul 14 22:10:10.511667 sshd[4355]: Accepted publickey for core from 10.0.0.1 port 41200 ssh2: RSA SHA256:M1w9XMnl/I4XlZYWJshBUfaekchzCKWegQKD2Nlty/U
Jul 14 22:10:10.512876 sshd[4355]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 14 22:10:10.516958 systemd-logind[1523]: New session 23 of user core.
Jul 14 22:10:10.525081 systemd[1]: Started session-23.scope - Session 23 of User core.
Jul 14 22:10:10.638635 sshd[4355]: pam_unix(sshd:session): session closed for user core
Jul 14 22:10:10.657091 systemd[1]: Started sshd@23-10.0.0.114:22-10.0.0.1:41208.service - OpenSSH per-connection server daemon (10.0.0.1:41208).
Jul 14 22:10:10.657454 systemd[1]: sshd@22-10.0.0.114:22-10.0.0.1:41200.service: Deactivated successfully.
Jul 14 22:10:10.659453 systemd-logind[1523]: Session 23 logged out. Waiting for processes to exit.
Jul 14 22:10:10.660002 systemd[1]: session-23.scope: Deactivated successfully.
Jul 14 22:10:10.661547 systemd-logind[1523]: Removed session 23.
Jul 14 22:10:10.691426 sshd[4369]: Accepted publickey for core from 10.0.0.1 port 41208 ssh2: RSA SHA256:M1w9XMnl/I4XlZYWJshBUfaekchzCKWegQKD2Nlty/U
Jul 14 22:10:10.692632 sshd[4369]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 14 22:10:10.696671 systemd-logind[1523]: New session 24 of user core.
Jul 14 22:10:10.704152 systemd[1]: Started session-24.scope - Session 24 of User core.
Jul 14 22:10:12.441247 containerd[1543]: time="2025-07-14T22:10:12.440396273Z" level=info msg="StopContainer for \"c2ffebf549fb0026ba9b26e1e3fd1238085ceb9d588dcd8362d3f652b51403d1\" with timeout 30 (s)"
Jul 14 22:10:12.442786 containerd[1543]: time="2025-07-14T22:10:12.442654122Z" level=info msg="Stop container \"c2ffebf549fb0026ba9b26e1e3fd1238085ceb9d588dcd8362d3f652b51403d1\" with signal terminated"
Jul 14 22:10:12.475097 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c2ffebf549fb0026ba9b26e1e3fd1238085ceb9d588dcd8362d3f652b51403d1-rootfs.mount: Deactivated successfully.
Jul 14 22:10:12.477271 containerd[1543]: time="2025-07-14T22:10:12.477218182Z" level=info msg="shim disconnected" id=c2ffebf549fb0026ba9b26e1e3fd1238085ceb9d588dcd8362d3f652b51403d1 namespace=k8s.io
Jul 14 22:10:12.477271 containerd[1543]: time="2025-07-14T22:10:12.477268142Z" level=warning msg="cleaning up after shim disconnected" id=c2ffebf549fb0026ba9b26e1e3fd1238085ceb9d588dcd8362d3f652b51403d1 namespace=k8s.io
Jul 14 22:10:12.477415 containerd[1543]: time="2025-07-14T22:10:12.477277382Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 14 22:10:12.479768 containerd[1543]: time="2025-07-14T22:10:12.479720352Z" level=info msg="StopContainer for \"8eee75f40e295d2985e122a67094acf1cde877e2a00806d7dd4b4591f59ce8c2\" with timeout 2 (s)"
Jul 14 22:10:12.480047 containerd[1543]: time="2025-07-14T22:10:12.480011433Z" level=info msg="Stop container \"8eee75f40e295d2985e122a67094acf1cde877e2a00806d7dd4b4591f59ce8c2\" with signal terminated"
Jul 14 22:10:12.485397 systemd-networkd[1231]: lxc_health: Link DOWN
Jul 14 22:10:12.485403 systemd-networkd[1231]: lxc_health: Lost carrier
Jul 14 22:10:12.504231 containerd[1543]: time="2025-07-14T22:10:12.504166170Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jul 14 22:10:12.529176 containerd[1543]: time="2025-07-14T22:10:12.529133471Z" level=info msg="StopContainer for \"c2ffebf549fb0026ba9b26e1e3fd1238085ceb9d588dcd8362d3f652b51403d1\" returns successfully"
Jul 14 22:10:12.529741 containerd[1543]: time="2025-07-14T22:10:12.529717113Z" level=info msg="StopPodSandbox for \"c33b4bb04f33dc6eabc27918a9faf98d50eb0d039dac4545583807ee1ad26a36\""
Jul 14 22:10:12.529807 containerd[1543]: time="2025-07-14T22:10:12.529758833Z" level=info msg="Container to stop \"c2ffebf549fb0026ba9b26e1e3fd1238085ceb9d588dcd8362d3f652b51403d1\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jul 14 22:10:12.531642 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-c33b4bb04f33dc6eabc27918a9faf98d50eb0d039dac4545583807ee1ad26a36-shm.mount: Deactivated successfully.
Jul 14 22:10:12.542774 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8eee75f40e295d2985e122a67094acf1cde877e2a00806d7dd4b4591f59ce8c2-rootfs.mount: Deactivated successfully.
Jul 14 22:10:12.546810 containerd[1543]: time="2025-07-14T22:10:12.546744862Z" level=info msg="shim disconnected" id=8eee75f40e295d2985e122a67094acf1cde877e2a00806d7dd4b4591f59ce8c2 namespace=k8s.io
Jul 14 22:10:12.546810 containerd[1543]: time="2025-07-14T22:10:12.546796742Z" level=warning msg="cleaning up after shim disconnected" id=8eee75f40e295d2985e122a67094acf1cde877e2a00806d7dd4b4591f59ce8c2 namespace=k8s.io
Jul 14 22:10:12.546810 containerd[1543]: time="2025-07-14T22:10:12.546805382Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 14 22:10:12.561646 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c33b4bb04f33dc6eabc27918a9faf98d50eb0d039dac4545583807ee1ad26a36-rootfs.mount: Deactivated successfully.
Jul 14 22:10:12.565680 containerd[1543]: time="2025-07-14T22:10:12.565614018Z" level=info msg="shim disconnected" id=c33b4bb04f33dc6eabc27918a9faf98d50eb0d039dac4545583807ee1ad26a36 namespace=k8s.io
Jul 14 22:10:12.565680 containerd[1543]: time="2025-07-14T22:10:12.565675338Z" level=warning msg="cleaning up after shim disconnected" id=c33b4bb04f33dc6eabc27918a9faf98d50eb0d039dac4545583807ee1ad26a36 namespace=k8s.io
Jul 14 22:10:12.565830 containerd[1543]: time="2025-07-14T22:10:12.565694018Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 14 22:10:12.566822 containerd[1543]: time="2025-07-14T22:10:12.566776743Z" level=info msg="StopContainer for \"8eee75f40e295d2985e122a67094acf1cde877e2a00806d7dd4b4591f59ce8c2\" returns successfully"
Jul 14 22:10:12.567870 containerd[1543]: time="2025-07-14T22:10:12.567653346Z" level=info msg="StopPodSandbox for \"90bb74cbf6fe882b179f3a0c81518b71f832a9affa16f759378dd13450f14eea\""
Jul 14 22:10:12.567870 containerd[1543]: time="2025-07-14T22:10:12.567708147Z" level=info msg="Container to stop \"949aa867bf633a5981a45da040dd8d0322901f8c060ee6c1c687a4cc06e59526\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jul 14 22:10:12.567870 containerd[1543]: time="2025-07-14T22:10:12.567723147Z" level=info msg="Container to stop \"8eee75f40e295d2985e122a67094acf1cde877e2a00806d7dd4b4591f59ce8c2\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jul 14 22:10:12.567870 containerd[1543]: time="2025-07-14T22:10:12.567733827Z" level=info msg="Container to stop \"960808931da021a69119af1ae8cb79f1560de061fccdf90fc1257e134ea7d134\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jul 14 22:10:12.567870 containerd[1543]: time="2025-07-14T22:10:12.567747707Z" level=info msg="Container to stop \"6a54c108b7b2f5ef7096cb2d9d7d4406266bd4fbea93f0bf63d1fad9e04e7ca9\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jul 14 22:10:12.567870 containerd[1543]: time="2025-07-14T22:10:12.567756947Z" level=info msg="Container to stop \"cbe02411a47d5840a83c4d44936d25157e2c9ce5617526bcae6b666439418ac5\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jul 14 22:10:12.578567 containerd[1543]: time="2025-07-14T22:10:12.578487990Z" level=info msg="TearDown network for sandbox \"c33b4bb04f33dc6eabc27918a9faf98d50eb0d039dac4545583807ee1ad26a36\" successfully"
Jul 14 22:10:12.578567 containerd[1543]: time="2025-07-14T22:10:12.578558470Z" level=info msg="StopPodSandbox for \"c33b4bb04f33dc6eabc27918a9faf98d50eb0d039dac4545583807ee1ad26a36\" returns successfully"
Jul 14 22:10:12.608582 containerd[1543]: time="2025-07-14T22:10:12.608468711Z" level=info msg="shim disconnected" id=90bb74cbf6fe882b179f3a0c81518b71f832a9affa16f759378dd13450f14eea namespace=k8s.io
Jul 14 22:10:12.608582 containerd[1543]: time="2025-07-14T22:10:12.608523711Z" level=warning msg="cleaning up after shim disconnected" id=90bb74cbf6fe882b179f3a0c81518b71f832a9affa16f759378dd13450f14eea namespace=k8s.io
Jul 14 22:10:12.608582 containerd[1543]: time="2025-07-14T22:10:12.608538831Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 14 22:10:12.621442 containerd[1543]: time="2025-07-14T22:10:12.621398563Z" level=info msg="TearDown network for sandbox \"90bb74cbf6fe882b179f3a0c81518b71f832a9affa16f759378dd13450f14eea\" successfully"
Jul 14 22:10:12.621442 containerd[1543]: time="2025-07-14T22:10:12.621432203Z" level=info msg="StopPodSandbox for \"90bb74cbf6fe882b179f3a0c81518b71f832a9affa16f759378dd13450f14eea\" returns successfully"
Jul 14 22:10:12.649844 kubelet[2698]: I0714 22:10:12.649795 2698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gxvmv\" (UniqueName: \"kubernetes.io/projected/08cf2ef0-a5ad-4cef-bd0c-fd8c37d9d005-kube-api-access-gxvmv\") pod \"08cf2ef0-a5ad-4cef-bd0c-fd8c37d9d005\" (UID: \"08cf2ef0-a5ad-4cef-bd0c-fd8c37d9d005\") "
Jul 14 22:10:12.649844 kubelet[2698]: I0714 22:10:12.649842 2698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/08cf2ef0-a5ad-4cef-bd0c-fd8c37d9d005-cilium-config-path\") pod \"08cf2ef0-a5ad-4cef-bd0c-fd8c37d9d005\" (UID: \"08cf2ef0-a5ad-4cef-bd0c-fd8c37d9d005\") "
Jul 14 22:10:12.657182 kubelet[2698]: I0714 22:10:12.657138 2698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/08cf2ef0-a5ad-4cef-bd0c-fd8c37d9d005-kube-api-access-gxvmv" (OuterVolumeSpecName: "kube-api-access-gxvmv") pod "08cf2ef0-a5ad-4cef-bd0c-fd8c37d9d005" (UID: "08cf2ef0-a5ad-4cef-bd0c-fd8c37d9d005"). InnerVolumeSpecName "kube-api-access-gxvmv". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jul 14 22:10:12.657419 kubelet[2698]: I0714 22:10:12.657386 2698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/08cf2ef0-a5ad-4cef-bd0c-fd8c37d9d005-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "08cf2ef0-a5ad-4cef-bd0c-fd8c37d9d005" (UID: "08cf2ef0-a5ad-4cef-bd0c-fd8c37d9d005"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jul 14 22:10:12.750552 kubelet[2698]: I0714 22:10:12.750300 2698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/c8b7914e-dc8e-42f4-a37a-8dc0f3872d5b-bpf-maps\") pod \"c8b7914e-dc8e-42f4-a37a-8dc0f3872d5b\" (UID: \"c8b7914e-dc8e-42f4-a37a-8dc0f3872d5b\") "
Jul 14 22:10:12.750552 kubelet[2698]: I0714 22:10:12.750346 2698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/c8b7914e-dc8e-42f4-a37a-8dc0f3872d5b-hubble-tls\") pod \"c8b7914e-dc8e-42f4-a37a-8dc0f3872d5b\" (UID: \"c8b7914e-dc8e-42f4-a37a-8dc0f3872d5b\") "
Jul 14 22:10:12.750552 kubelet[2698]: I0714 22:10:12.750361 2698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c8b7914e-dc8e-42f4-a37a-8dc0f3872d5b-lib-modules\") pod \"c8b7914e-dc8e-42f4-a37a-8dc0f3872d5b\" (UID: \"c8b7914e-dc8e-42f4-a37a-8dc0f3872d5b\") "
Jul 14 22:10:12.750552 kubelet[2698]: I0714 22:10:12.750381 2698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c8b7914e-dc8e-42f4-a37a-8dc0f3872d5b-cilium-config-path\") pod \"c8b7914e-dc8e-42f4-a37a-8dc0f3872d5b\" (UID: \"c8b7914e-dc8e-42f4-a37a-8dc0f3872d5b\") "
Jul 14 22:10:12.750552 kubelet[2698]: I0714 22:10:12.750404 2698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c8b7914e-dc8e-42f4-a37a-8dc0f3872d5b-xtables-lock\") pod \"c8b7914e-dc8e-42f4-a37a-8dc0f3872d5b\" (UID: \"c8b7914e-dc8e-42f4-a37a-8dc0f3872d5b\") "
Jul 14 22:10:12.750552 kubelet[2698]: I0714 22:10:12.750418 2698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/c8b7914e-dc8e-42f4-a37a-8dc0f3872d5b-cilium-cgroup\") pod \"c8b7914e-dc8e-42f4-a37a-8dc0f3872d5b\" (UID: \"c8b7914e-dc8e-42f4-a37a-8dc0f3872d5b\") "
Jul 14 22:10:12.750786 kubelet[2698]: I0714 22:10:12.750417 2698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c8b7914e-dc8e-42f4-a37a-8dc0f3872d5b-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "c8b7914e-dc8e-42f4-a37a-8dc0f3872d5b" (UID: "c8b7914e-dc8e-42f4-a37a-8dc0f3872d5b"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jul 14 22:10:12.750786 kubelet[2698]: I0714 22:10:12.750438 2698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r9xvc\" (UniqueName: \"kubernetes.io/projected/c8b7914e-dc8e-42f4-a37a-8dc0f3872d5b-kube-api-access-r9xvc\") pod \"c8b7914e-dc8e-42f4-a37a-8dc0f3872d5b\" (UID: \"c8b7914e-dc8e-42f4-a37a-8dc0f3872d5b\") "
Jul 14 22:10:12.750786 kubelet[2698]: I0714 22:10:12.750452 2698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/c8b7914e-dc8e-42f4-a37a-8dc0f3872d5b-cni-path\") pod \"c8b7914e-dc8e-42f4-a37a-8dc0f3872d5b\" (UID: \"c8b7914e-dc8e-42f4-a37a-8dc0f3872d5b\") "
Jul 14 22:10:12.750786 kubelet[2698]: I0714 22:10:12.750467 2698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/c8b7914e-dc8e-42f4-a37a-8dc0f3872d5b-host-proc-sys-net\") pod \"c8b7914e-dc8e-42f4-a37a-8dc0f3872d5b\" (UID: \"c8b7914e-dc8e-42f4-a37a-8dc0f3872d5b\") "
Jul 14 22:10:12.750786 kubelet[2698]: I0714 22:10:12.750468 2698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c8b7914e-dc8e-42f4-a37a-8dc0f3872d5b-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "c8b7914e-dc8e-42f4-a37a-8dc0f3872d5b" (UID: "c8b7914e-dc8e-42f4-a37a-8dc0f3872d5b"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jul 14 22:10:12.750927 kubelet[2698]: I0714 22:10:12.750484 2698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c8b7914e-dc8e-42f4-a37a-8dc0f3872d5b-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "c8b7914e-dc8e-42f4-a37a-8dc0f3872d5b" (UID: "c8b7914e-dc8e-42f4-a37a-8dc0f3872d5b"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jul 14 22:10:12.752896 kubelet[2698]: I0714 22:10:12.751771 2698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/c8b7914e-dc8e-42f4-a37a-8dc0f3872d5b-cilium-run\") pod \"c8b7914e-dc8e-42f4-a37a-8dc0f3872d5b\" (UID: \"c8b7914e-dc8e-42f4-a37a-8dc0f3872d5b\") "
Jul 14 22:10:12.752896 kubelet[2698]: I0714 22:10:12.751826 2698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/c8b7914e-dc8e-42f4-a37a-8dc0f3872d5b-hostproc\") pod \"c8b7914e-dc8e-42f4-a37a-8dc0f3872d5b\" (UID: \"c8b7914e-dc8e-42f4-a37a-8dc0f3872d5b\") "
Jul 14 22:10:12.752896 kubelet[2698]: I0714 22:10:12.751859 2698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c8b7914e-dc8e-42f4-a37a-8dc0f3872d5b-etc-cni-netd\") pod \"c8b7914e-dc8e-42f4-a37a-8dc0f3872d5b\" (UID: \"c8b7914e-dc8e-42f4-a37a-8dc0f3872d5b\") "
Jul 14 22:10:12.752896 kubelet[2698]: I0714 22:10:12.751880 2698 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/c8b7914e-dc8e-42f4-a37a-8dc0f3872d5b-clustermesh-secrets\") pod \"c8b7914e-dc8e-42f4-a37a-8dc0f3872d5b\" (UID: \"c8b7914e-dc8e-42f4-a37a-8dc0f3872d5b\") "
Jul 14 22:10:12.752896 kubelet[2698]: I0714 22:10:12.751896 2698 reconciler_common.go:159]
"operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/c8b7914e-dc8e-42f4-a37a-8dc0f3872d5b-host-proc-sys-kernel\") pod \"c8b7914e-dc8e-42f4-a37a-8dc0f3872d5b\" (UID: \"c8b7914e-dc8e-42f4-a37a-8dc0f3872d5b\") " Jul 14 22:10:12.752896 kubelet[2698]: I0714 22:10:12.751938 2698 reconciler_common.go:293] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/c8b7914e-dc8e-42f4-a37a-8dc0f3872d5b-bpf-maps\") on node \"localhost\" DevicePath \"\"" Jul 14 22:10:12.752896 kubelet[2698]: I0714 22:10:12.751954 2698 reconciler_common.go:293] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c8b7914e-dc8e-42f4-a37a-8dc0f3872d5b-lib-modules\") on node \"localhost\" DevicePath \"\"" Jul 14 22:10:12.753137 kubelet[2698]: I0714 22:10:12.751968 2698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gxvmv\" (UniqueName: \"kubernetes.io/projected/08cf2ef0-a5ad-4cef-bd0c-fd8c37d9d005-kube-api-access-gxvmv\") on node \"localhost\" DevicePath \"\"" Jul 14 22:10:12.753137 kubelet[2698]: I0714 22:10:12.751977 2698 reconciler_common.go:293] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/08cf2ef0-a5ad-4cef-bd0c-fd8c37d9d005-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Jul 14 22:10:12.753137 kubelet[2698]: I0714 22:10:12.752002 2698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c8b7914e-dc8e-42f4-a37a-8dc0f3872d5b-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "c8b7914e-dc8e-42f4-a37a-8dc0f3872d5b" (UID: "c8b7914e-dc8e-42f4-a37a-8dc0f3872d5b"). InnerVolumeSpecName "host-proc-sys-kernel". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 14 22:10:12.753137 kubelet[2698]: I0714 22:10:12.752021 2698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c8b7914e-dc8e-42f4-a37a-8dc0f3872d5b-cni-path" (OuterVolumeSpecName: "cni-path") pod "c8b7914e-dc8e-42f4-a37a-8dc0f3872d5b" (UID: "c8b7914e-dc8e-42f4-a37a-8dc0f3872d5b"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 14 22:10:12.753137 kubelet[2698]: I0714 22:10:12.752035 2698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c8b7914e-dc8e-42f4-a37a-8dc0f3872d5b-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "c8b7914e-dc8e-42f4-a37a-8dc0f3872d5b" (UID: "c8b7914e-dc8e-42f4-a37a-8dc0f3872d5b"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 14 22:10:12.753247 kubelet[2698]: I0714 22:10:12.752050 2698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c8b7914e-dc8e-42f4-a37a-8dc0f3872d5b-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "c8b7914e-dc8e-42f4-a37a-8dc0f3872d5b" (UID: "c8b7914e-dc8e-42f4-a37a-8dc0f3872d5b"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 14 22:10:12.753247 kubelet[2698]: I0714 22:10:12.752065 2698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c8b7914e-dc8e-42f4-a37a-8dc0f3872d5b-hostproc" (OuterVolumeSpecName: "hostproc") pod "c8b7914e-dc8e-42f4-a37a-8dc0f3872d5b" (UID: "c8b7914e-dc8e-42f4-a37a-8dc0f3872d5b"). InnerVolumeSpecName "hostproc". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 14 22:10:12.753247 kubelet[2698]: I0714 22:10:12.752078 2698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c8b7914e-dc8e-42f4-a37a-8dc0f3872d5b-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "c8b7914e-dc8e-42f4-a37a-8dc0f3872d5b" (UID: "c8b7914e-dc8e-42f4-a37a-8dc0f3872d5b"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 14 22:10:12.753247 kubelet[2698]: I0714 22:10:12.752312 2698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c8b7914e-dc8e-42f4-a37a-8dc0f3872d5b-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "c8b7914e-dc8e-42f4-a37a-8dc0f3872d5b" (UID: "c8b7914e-dc8e-42f4-a37a-8dc0f3872d5b"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jul 14 22:10:12.753247 kubelet[2698]: I0714 22:10:12.752351 2698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c8b7914e-dc8e-42f4-a37a-8dc0f3872d5b-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "c8b7914e-dc8e-42f4-a37a-8dc0f3872d5b" (UID: "c8b7914e-dc8e-42f4-a37a-8dc0f3872d5b"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 14 22:10:12.753378 kubelet[2698]: I0714 22:10:12.752645 2698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c8b7914e-dc8e-42f4-a37a-8dc0f3872d5b-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "c8b7914e-dc8e-42f4-a37a-8dc0f3872d5b" (UID: "c8b7914e-dc8e-42f4-a37a-8dc0f3872d5b"). InnerVolumeSpecName "hubble-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jul 14 22:10:12.753483 kubelet[2698]: I0714 22:10:12.753457 2698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c8b7914e-dc8e-42f4-a37a-8dc0f3872d5b-kube-api-access-r9xvc" (OuterVolumeSpecName: "kube-api-access-r9xvc") pod "c8b7914e-dc8e-42f4-a37a-8dc0f3872d5b" (UID: "c8b7914e-dc8e-42f4-a37a-8dc0f3872d5b"). InnerVolumeSpecName "kube-api-access-r9xvc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jul 14 22:10:12.754101 kubelet[2698]: I0714 22:10:12.754063 2698 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c8b7914e-dc8e-42f4-a37a-8dc0f3872d5b-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "c8b7914e-dc8e-42f4-a37a-8dc0f3872d5b" (UID: "c8b7914e-dc8e-42f4-a37a-8dc0f3872d5b"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jul 14 22:10:12.854870 kubelet[2698]: I0714 22:10:12.852465 2698 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r9xvc\" (UniqueName: \"kubernetes.io/projected/c8b7914e-dc8e-42f4-a37a-8dc0f3872d5b-kube-api-access-r9xvc\") on node \"localhost\" DevicePath \"\"" Jul 14 22:10:12.854870 kubelet[2698]: I0714 22:10:12.852504 2698 reconciler_common.go:293] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/c8b7914e-dc8e-42f4-a37a-8dc0f3872d5b-cni-path\") on node \"localhost\" DevicePath \"\"" Jul 14 22:10:12.854870 kubelet[2698]: I0714 22:10:12.852515 2698 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/c8b7914e-dc8e-42f4-a37a-8dc0f3872d5b-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Jul 14 22:10:12.854870 kubelet[2698]: I0714 22:10:12.852526 2698 reconciler_common.go:293] "Volume detached for volume \"cilium-run\" (UniqueName: 
\"kubernetes.io/host-path/c8b7914e-dc8e-42f4-a37a-8dc0f3872d5b-cilium-run\") on node \"localhost\" DevicePath \"\"" Jul 14 22:10:12.854870 kubelet[2698]: I0714 22:10:12.852558 2698 reconciler_common.go:293] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/c8b7914e-dc8e-42f4-a37a-8dc0f3872d5b-hostproc\") on node \"localhost\" DevicePath \"\"" Jul 14 22:10:12.854870 kubelet[2698]: I0714 22:10:12.852567 2698 reconciler_common.go:293] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c8b7914e-dc8e-42f4-a37a-8dc0f3872d5b-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Jul 14 22:10:12.854870 kubelet[2698]: I0714 22:10:12.852574 2698 reconciler_common.go:293] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/c8b7914e-dc8e-42f4-a37a-8dc0f3872d5b-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Jul 14 22:10:12.854870 kubelet[2698]: I0714 22:10:12.852582 2698 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/c8b7914e-dc8e-42f4-a37a-8dc0f3872d5b-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Jul 14 22:10:12.855179 kubelet[2698]: I0714 22:10:12.852591 2698 reconciler_common.go:293] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c8b7914e-dc8e-42f4-a37a-8dc0f3872d5b-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Jul 14 22:10:12.855179 kubelet[2698]: I0714 22:10:12.852599 2698 reconciler_common.go:293] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/c8b7914e-dc8e-42f4-a37a-8dc0f3872d5b-hubble-tls\") on node \"localhost\" DevicePath \"\"" Jul 14 22:10:12.855179 kubelet[2698]: I0714 22:10:12.852607 2698 reconciler_common.go:293] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c8b7914e-dc8e-42f4-a37a-8dc0f3872d5b-xtables-lock\") on node \"localhost\" 
DevicePath \"\"" Jul 14 22:10:12.855179 kubelet[2698]: I0714 22:10:12.852628 2698 reconciler_common.go:293] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/c8b7914e-dc8e-42f4-a37a-8dc0f3872d5b-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Jul 14 22:10:13.174546 kubelet[2698]: E0714 22:10:13.174506 2698 kubelet.go:2902] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jul 14 22:10:13.453092 kubelet[2698]: I0714 22:10:13.452992 2698 scope.go:117] "RemoveContainer" containerID="8eee75f40e295d2985e122a67094acf1cde877e2a00806d7dd4b4591f59ce8c2" Jul 14 22:10:13.455138 containerd[1543]: time="2025-07-14T22:10:13.454896248Z" level=info msg="RemoveContainer for \"8eee75f40e295d2985e122a67094acf1cde877e2a00806d7dd4b4591f59ce8c2\"" Jul 14 22:10:13.458474 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-90bb74cbf6fe882b179f3a0c81518b71f832a9affa16f759378dd13450f14eea-rootfs.mount: Deactivated successfully. Jul 14 22:10:13.462033 containerd[1543]: time="2025-07-14T22:10:13.461254554Z" level=info msg="RemoveContainer for \"8eee75f40e295d2985e122a67094acf1cde877e2a00806d7dd4b4591f59ce8c2\" returns successfully" Jul 14 22:10:13.458626 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-90bb74cbf6fe882b179f3a0c81518b71f832a9affa16f759378dd13450f14eea-shm.mount: Deactivated successfully. Jul 14 22:10:13.458726 systemd[1]: var-lib-kubelet-pods-08cf2ef0\x2da5ad\x2d4cef\x2dbd0c\x2dfd8c37d9d005-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dgxvmv.mount: Deactivated successfully. Jul 14 22:10:13.458807 systemd[1]: var-lib-kubelet-pods-c8b7914e\x2ddc8e\x2d42f4\x2da37a\x2d8dc0f3872d5b-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dr9xvc.mount: Deactivated successfully. 
Jul 14 22:10:13.458907 systemd[1]: var-lib-kubelet-pods-c8b7914e\x2ddc8e\x2d42f4\x2da37a\x2d8dc0f3872d5b-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Jul 14 22:10:13.458987 systemd[1]: var-lib-kubelet-pods-c8b7914e\x2ddc8e\x2d42f4\x2da37a\x2d8dc0f3872d5b-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Jul 14 22:10:13.462554 kubelet[2698]: I0714 22:10:13.462529 2698 scope.go:117] "RemoveContainer" containerID="cbe02411a47d5840a83c4d44936d25157e2c9ce5617526bcae6b666439418ac5" Jul 14 22:10:13.464734 containerd[1543]: time="2025-07-14T22:10:13.464235326Z" level=info msg="RemoveContainer for \"cbe02411a47d5840a83c4d44936d25157e2c9ce5617526bcae6b666439418ac5\"" Jul 14 22:10:13.467261 containerd[1543]: time="2025-07-14T22:10:13.467187498Z" level=info msg="RemoveContainer for \"cbe02411a47d5840a83c4d44936d25157e2c9ce5617526bcae6b666439418ac5\" returns successfully" Jul 14 22:10:13.468464 kubelet[2698]: I0714 22:10:13.467880 2698 scope.go:117] "RemoveContainer" containerID="6a54c108b7b2f5ef7096cb2d9d7d4406266bd4fbea93f0bf63d1fad9e04e7ca9" Jul 14 22:10:13.471129 containerd[1543]: time="2025-07-14T22:10:13.470886953Z" level=info msg="RemoveContainer for \"6a54c108b7b2f5ef7096cb2d9d7d4406266bd4fbea93f0bf63d1fad9e04e7ca9\"" Jul 14 22:10:13.474580 containerd[1543]: time="2025-07-14T22:10:13.474245246Z" level=info msg="RemoveContainer for \"6a54c108b7b2f5ef7096cb2d9d7d4406266bd4fbea93f0bf63d1fad9e04e7ca9\" returns successfully" Jul 14 22:10:13.474677 kubelet[2698]: I0714 22:10:13.474565 2698 scope.go:117] "RemoveContainer" containerID="960808931da021a69119af1ae8cb79f1560de061fccdf90fc1257e134ea7d134" Jul 14 22:10:13.476660 containerd[1543]: time="2025-07-14T22:10:13.476621696Z" level=info msg="RemoveContainer for \"960808931da021a69119af1ae8cb79f1560de061fccdf90fc1257e134ea7d134\"" Jul 14 22:10:13.478815 containerd[1543]: time="2025-07-14T22:10:13.478700384Z" level=info msg="RemoveContainer for 
\"960808931da021a69119af1ae8cb79f1560de061fccdf90fc1257e134ea7d134\" returns successfully" Jul 14 22:10:13.478906 kubelet[2698]: I0714 22:10:13.478842 2698 scope.go:117] "RemoveContainer" containerID="949aa867bf633a5981a45da040dd8d0322901f8c060ee6c1c687a4cc06e59526" Jul 14 22:10:13.479865 containerd[1543]: time="2025-07-14T22:10:13.479775869Z" level=info msg="RemoveContainer for \"949aa867bf633a5981a45da040dd8d0322901f8c060ee6c1c687a4cc06e59526\"" Jul 14 22:10:13.482145 containerd[1543]: time="2025-07-14T22:10:13.482108078Z" level=info msg="RemoveContainer for \"949aa867bf633a5981a45da040dd8d0322901f8c060ee6c1c687a4cc06e59526\" returns successfully" Jul 14 22:10:13.482288 kubelet[2698]: I0714 22:10:13.482262 2698 scope.go:117] "RemoveContainer" containerID="8eee75f40e295d2985e122a67094acf1cde877e2a00806d7dd4b4591f59ce8c2" Jul 14 22:10:13.482619 containerd[1543]: time="2025-07-14T22:10:13.482526640Z" level=error msg="ContainerStatus for \"8eee75f40e295d2985e122a67094acf1cde877e2a00806d7dd4b4591f59ce8c2\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"8eee75f40e295d2985e122a67094acf1cde877e2a00806d7dd4b4591f59ce8c2\": not found" Jul 14 22:10:13.489641 kubelet[2698]: E0714 22:10:13.489606 2698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"8eee75f40e295d2985e122a67094acf1cde877e2a00806d7dd4b4591f59ce8c2\": not found" containerID="8eee75f40e295d2985e122a67094acf1cde877e2a00806d7dd4b4591f59ce8c2" Jul 14 22:10:13.489744 kubelet[2698]: I0714 22:10:13.489656 2698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"8eee75f40e295d2985e122a67094acf1cde877e2a00806d7dd4b4591f59ce8c2"} err="failed to get container status \"8eee75f40e295d2985e122a67094acf1cde877e2a00806d7dd4b4591f59ce8c2\": rpc error: code = NotFound desc = an error occurred when try to find container 
\"8eee75f40e295d2985e122a67094acf1cde877e2a00806d7dd4b4591f59ce8c2\": not found" Jul 14 22:10:13.489744 kubelet[2698]: I0714 22:10:13.489740 2698 scope.go:117] "RemoveContainer" containerID="cbe02411a47d5840a83c4d44936d25157e2c9ce5617526bcae6b666439418ac5" Jul 14 22:10:13.489997 containerd[1543]: time="2025-07-14T22:10:13.489955230Z" level=error msg="ContainerStatus for \"cbe02411a47d5840a83c4d44936d25157e2c9ce5617526bcae6b666439418ac5\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"cbe02411a47d5840a83c4d44936d25157e2c9ce5617526bcae6b666439418ac5\": not found" Jul 14 22:10:13.490223 kubelet[2698]: E0714 22:10:13.490112 2698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"cbe02411a47d5840a83c4d44936d25157e2c9ce5617526bcae6b666439418ac5\": not found" containerID="cbe02411a47d5840a83c4d44936d25157e2c9ce5617526bcae6b666439418ac5" Jul 14 22:10:13.490223 kubelet[2698]: I0714 22:10:13.490140 2698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"cbe02411a47d5840a83c4d44936d25157e2c9ce5617526bcae6b666439418ac5"} err="failed to get container status \"cbe02411a47d5840a83c4d44936d25157e2c9ce5617526bcae6b666439418ac5\": rpc error: code = NotFound desc = an error occurred when try to find container \"cbe02411a47d5840a83c4d44936d25157e2c9ce5617526bcae6b666439418ac5\": not found" Jul 14 22:10:13.490223 kubelet[2698]: I0714 22:10:13.490156 2698 scope.go:117] "RemoveContainer" containerID="6a54c108b7b2f5ef7096cb2d9d7d4406266bd4fbea93f0bf63d1fad9e04e7ca9" Jul 14 22:10:13.490328 containerd[1543]: time="2025-07-14T22:10:13.490288631Z" level=error msg="ContainerStatus for \"6a54c108b7b2f5ef7096cb2d9d7d4406266bd4fbea93f0bf63d1fad9e04e7ca9\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container 
\"6a54c108b7b2f5ef7096cb2d9d7d4406266bd4fbea93f0bf63d1fad9e04e7ca9\": not found" Jul 14 22:10:13.490434 kubelet[2698]: E0714 22:10:13.490393 2698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"6a54c108b7b2f5ef7096cb2d9d7d4406266bd4fbea93f0bf63d1fad9e04e7ca9\": not found" containerID="6a54c108b7b2f5ef7096cb2d9d7d4406266bd4fbea93f0bf63d1fad9e04e7ca9" Jul 14 22:10:13.490434 kubelet[2698]: I0714 22:10:13.490418 2698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"6a54c108b7b2f5ef7096cb2d9d7d4406266bd4fbea93f0bf63d1fad9e04e7ca9"} err="failed to get container status \"6a54c108b7b2f5ef7096cb2d9d7d4406266bd4fbea93f0bf63d1fad9e04e7ca9\": rpc error: code = NotFound desc = an error occurred when try to find container \"6a54c108b7b2f5ef7096cb2d9d7d4406266bd4fbea93f0bf63d1fad9e04e7ca9\": not found" Jul 14 22:10:13.490434 kubelet[2698]: I0714 22:10:13.490435 2698 scope.go:117] "RemoveContainer" containerID="960808931da021a69119af1ae8cb79f1560de061fccdf90fc1257e134ea7d134" Jul 14 22:10:13.490572 containerd[1543]: time="2025-07-14T22:10:13.490548752Z" level=error msg="ContainerStatus for \"960808931da021a69119af1ae8cb79f1560de061fccdf90fc1257e134ea7d134\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"960808931da021a69119af1ae8cb79f1560de061fccdf90fc1257e134ea7d134\": not found" Jul 14 22:10:13.490673 kubelet[2698]: E0714 22:10:13.490653 2698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"960808931da021a69119af1ae8cb79f1560de061fccdf90fc1257e134ea7d134\": not found" containerID="960808931da021a69119af1ae8cb79f1560de061fccdf90fc1257e134ea7d134" Jul 14 22:10:13.490713 kubelet[2698]: I0714 22:10:13.490681 2698 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"containerd","ID":"960808931da021a69119af1ae8cb79f1560de061fccdf90fc1257e134ea7d134"} err="failed to get container status \"960808931da021a69119af1ae8cb79f1560de061fccdf90fc1257e134ea7d134\": rpc error: code = NotFound desc = an error occurred when try to find container \"960808931da021a69119af1ae8cb79f1560de061fccdf90fc1257e134ea7d134\": not found" Jul 14 22:10:13.490713 kubelet[2698]: I0714 22:10:13.490701 2698 scope.go:117] "RemoveContainer" containerID="949aa867bf633a5981a45da040dd8d0322901f8c060ee6c1c687a4cc06e59526" Jul 14 22:10:13.490857 containerd[1543]: time="2025-07-14T22:10:13.490827033Z" level=error msg="ContainerStatus for \"949aa867bf633a5981a45da040dd8d0322901f8c060ee6c1c687a4cc06e59526\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"949aa867bf633a5981a45da040dd8d0322901f8c060ee6c1c687a4cc06e59526\": not found" Jul 14 22:10:13.490950 kubelet[2698]: E0714 22:10:13.490934 2698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"949aa867bf633a5981a45da040dd8d0322901f8c060ee6c1c687a4cc06e59526\": not found" containerID="949aa867bf633a5981a45da040dd8d0322901f8c060ee6c1c687a4cc06e59526" Jul 14 22:10:13.490994 kubelet[2698]: I0714 22:10:13.490953 2698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"949aa867bf633a5981a45da040dd8d0322901f8c060ee6c1c687a4cc06e59526"} err="failed to get container status \"949aa867bf633a5981a45da040dd8d0322901f8c060ee6c1c687a4cc06e59526\": rpc error: code = NotFound desc = an error occurred when try to find container \"949aa867bf633a5981a45da040dd8d0322901f8c060ee6c1c687a4cc06e59526\": not found" Jul 14 22:10:13.490994 kubelet[2698]: I0714 22:10:13.490965 2698 scope.go:117] "RemoveContainer" containerID="c2ffebf549fb0026ba9b26e1e3fd1238085ceb9d588dcd8362d3f652b51403d1" Jul 14 22:10:13.491768 containerd[1543]: 
time="2025-07-14T22:10:13.491737757Z" level=info msg="RemoveContainer for \"c2ffebf549fb0026ba9b26e1e3fd1238085ceb9d588dcd8362d3f652b51403d1\"" Jul 14 22:10:13.493948 containerd[1543]: time="2025-07-14T22:10:13.493912686Z" level=info msg="RemoveContainer for \"c2ffebf549fb0026ba9b26e1e3fd1238085ceb9d588dcd8362d3f652b51403d1\" returns successfully" Jul 14 22:10:13.494129 kubelet[2698]: I0714 22:10:13.494096 2698 scope.go:117] "RemoveContainer" containerID="c2ffebf549fb0026ba9b26e1e3fd1238085ceb9d588dcd8362d3f652b51403d1" Jul 14 22:10:13.494357 containerd[1543]: time="2025-07-14T22:10:13.494311367Z" level=error msg="ContainerStatus for \"c2ffebf549fb0026ba9b26e1e3fd1238085ceb9d588dcd8362d3f652b51403d1\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"c2ffebf549fb0026ba9b26e1e3fd1238085ceb9d588dcd8362d3f652b51403d1\": not found" Jul 14 22:10:13.494520 kubelet[2698]: E0714 22:10:13.494467 2698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"c2ffebf549fb0026ba9b26e1e3fd1238085ceb9d588dcd8362d3f652b51403d1\": not found" containerID="c2ffebf549fb0026ba9b26e1e3fd1238085ceb9d588dcd8362d3f652b51403d1" Jul 14 22:10:13.494520 kubelet[2698]: I0714 22:10:13.494494 2698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"c2ffebf549fb0026ba9b26e1e3fd1238085ceb9d588dcd8362d3f652b51403d1"} err="failed to get container status \"c2ffebf549fb0026ba9b26e1e3fd1238085ceb9d588dcd8362d3f652b51403d1\": rpc error: code = NotFound desc = an error occurred when try to find container \"c2ffebf549fb0026ba9b26e1e3fd1238085ceb9d588dcd8362d3f652b51403d1\": not found" Jul 14 22:10:14.402822 sshd[4369]: pam_unix(sshd:session): session closed for user core Jul 14 22:10:14.417099 systemd[1]: Started sshd@24-10.0.0.114:22-10.0.0.1:53274.service - OpenSSH per-connection server daemon (10.0.0.1:53274). 
Jul 14 22:10:14.417487 systemd[1]: sshd@23-10.0.0.114:22-10.0.0.1:41208.service: Deactivated successfully. Jul 14 22:10:14.420273 systemd[1]: session-24.scope: Deactivated successfully. Jul 14 22:10:14.420458 systemd-logind[1523]: Session 24 logged out. Waiting for processes to exit. Jul 14 22:10:14.422273 systemd-logind[1523]: Removed session 24. Jul 14 22:10:14.454833 sshd[4537]: Accepted publickey for core from 10.0.0.1 port 53274 ssh2: RSA SHA256:M1w9XMnl/I4XlZYWJshBUfaekchzCKWegQKD2Nlty/U Jul 14 22:10:14.456062 sshd[4537]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 14 22:10:14.460616 systemd-logind[1523]: New session 25 of user core. Jul 14 22:10:14.476140 systemd[1]: Started session-25.scope - Session 25 of User core. Jul 14 22:10:15.109133 kubelet[2698]: I0714 22:10:15.108211 2698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="08cf2ef0-a5ad-4cef-bd0c-fd8c37d9d005" path="/var/lib/kubelet/pods/08cf2ef0-a5ad-4cef-bd0c-fd8c37d9d005/volumes" Jul 14 22:10:15.109133 kubelet[2698]: I0714 22:10:15.108597 2698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c8b7914e-dc8e-42f4-a37a-8dc0f3872d5b" path="/var/lib/kubelet/pods/c8b7914e-dc8e-42f4-a37a-8dc0f3872d5b/volumes" Jul 14 22:10:15.115416 kubelet[2698]: I0714 22:10:15.115375 2698 setters.go:600] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-07-14T22:10:15Z","lastTransitionTime":"2025-07-14T22:10:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Jul 14 22:10:15.615075 sshd[4537]: pam_unix(sshd:session): session closed for user core Jul 14 22:10:15.624310 systemd[1]: Started sshd@25-10.0.0.114:22-10.0.0.1:53278.service - OpenSSH per-connection server daemon (10.0.0.1:53278). 
Jul 14 22:10:15.630249 systemd[1]: sshd@24-10.0.0.114:22-10.0.0.1:53274.service: Deactivated successfully. Jul 14 22:10:15.635010 kubelet[2698]: E0714 22:10:15.634968 2698 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="c8b7914e-dc8e-42f4-a37a-8dc0f3872d5b" containerName="apply-sysctl-overwrites" Jul 14 22:10:15.635010 kubelet[2698]: E0714 22:10:15.635004 2698 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="c8b7914e-dc8e-42f4-a37a-8dc0f3872d5b" containerName="mount-bpf-fs" Jul 14 22:10:15.635010 kubelet[2698]: E0714 22:10:15.635013 2698 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="c8b7914e-dc8e-42f4-a37a-8dc0f3872d5b" containerName="clean-cilium-state" Jul 14 22:10:15.635010 kubelet[2698]: E0714 22:10:15.635019 2698 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="c8b7914e-dc8e-42f4-a37a-8dc0f3872d5b" containerName="cilium-agent" Jul 14 22:10:15.635191 kubelet[2698]: E0714 22:10:15.635025 2698 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="c8b7914e-dc8e-42f4-a37a-8dc0f3872d5b" containerName="mount-cgroup" Jul 14 22:10:15.635191 kubelet[2698]: E0714 22:10:15.635030 2698 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="08cf2ef0-a5ad-4cef-bd0c-fd8c37d9d005" containerName="cilium-operator" Jul 14 22:10:15.635191 kubelet[2698]: I0714 22:10:15.635053 2698 memory_manager.go:354] "RemoveStaleState removing state" podUID="c8b7914e-dc8e-42f4-a37a-8dc0f3872d5b" containerName="cilium-agent" Jul 14 22:10:15.635191 kubelet[2698]: I0714 22:10:15.635059 2698 memory_manager.go:354] "RemoveStaleState removing state" podUID="08cf2ef0-a5ad-4cef-bd0c-fd8c37d9d005" containerName="cilium-operator" Jul 14 22:10:15.637475 systemd[1]: session-25.scope: Deactivated successfully. Jul 14 22:10:15.642966 systemd-logind[1523]: Session 25 logged out. Waiting for processes to exit. Jul 14 22:10:15.656284 systemd-logind[1523]: Removed session 25. 
Jul 14 22:10:15.669473 kubelet[2698]: I0714 22:10:15.669436 2698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/bae69ad7-c720-46de-a4a7-a119b6353633-cni-path\") pod \"cilium-g9djf\" (UID: \"bae69ad7-c720-46de-a4a7-a119b6353633\") " pod="kube-system/cilium-g9djf" Jul 14 22:10:15.669473 kubelet[2698]: I0714 22:10:15.669474 2698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/bae69ad7-c720-46de-a4a7-a119b6353633-xtables-lock\") pod \"cilium-g9djf\" (UID: \"bae69ad7-c720-46de-a4a7-a119b6353633\") " pod="kube-system/cilium-g9djf" Jul 14 22:10:15.671009 kubelet[2698]: I0714 22:10:15.669498 2698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/bae69ad7-c720-46de-a4a7-a119b6353633-bpf-maps\") pod \"cilium-g9djf\" (UID: \"bae69ad7-c720-46de-a4a7-a119b6353633\") " pod="kube-system/cilium-g9djf" Jul 14 22:10:15.671009 kubelet[2698]: I0714 22:10:15.669513 2698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/bae69ad7-c720-46de-a4a7-a119b6353633-hostproc\") pod \"cilium-g9djf\" (UID: \"bae69ad7-c720-46de-a4a7-a119b6353633\") " pod="kube-system/cilium-g9djf" Jul 14 22:10:15.671009 kubelet[2698]: I0714 22:10:15.669566 2698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/bae69ad7-c720-46de-a4a7-a119b6353633-cilium-config-path\") pod \"cilium-g9djf\" (UID: \"bae69ad7-c720-46de-a4a7-a119b6353633\") " pod="kube-system/cilium-g9djf" Jul 14 22:10:15.671009 kubelet[2698]: I0714 22:10:15.669605 2698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/bae69ad7-c720-46de-a4a7-a119b6353633-lib-modules\") pod \"cilium-g9djf\" (UID: \"bae69ad7-c720-46de-a4a7-a119b6353633\") " pod="kube-system/cilium-g9djf" Jul 14 22:10:15.671009 kubelet[2698]: I0714 22:10:15.669630 2698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/bae69ad7-c720-46de-a4a7-a119b6353633-hubble-tls\") pod \"cilium-g9djf\" (UID: \"bae69ad7-c720-46de-a4a7-a119b6353633\") " pod="kube-system/cilium-g9djf" Jul 14 22:10:15.671009 kubelet[2698]: I0714 22:10:15.669647 2698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/bae69ad7-c720-46de-a4a7-a119b6353633-host-proc-sys-net\") pod \"cilium-g9djf\" (UID: \"bae69ad7-c720-46de-a4a7-a119b6353633\") " pod="kube-system/cilium-g9djf" Jul 14 22:10:15.671151 kubelet[2698]: I0714 22:10:15.669662 2698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/bae69ad7-c720-46de-a4a7-a119b6353633-host-proc-sys-kernel\") pod \"cilium-g9djf\" (UID: \"bae69ad7-c720-46de-a4a7-a119b6353633\") " pod="kube-system/cilium-g9djf" Jul 14 22:10:15.671151 kubelet[2698]: I0714 22:10:15.669676 2698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/bae69ad7-c720-46de-a4a7-a119b6353633-clustermesh-secrets\") pod \"cilium-g9djf\" (UID: \"bae69ad7-c720-46de-a4a7-a119b6353633\") " pod="kube-system/cilium-g9djf" Jul 14 22:10:15.671151 kubelet[2698]: I0714 22:10:15.669692 2698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: 
\"kubernetes.io/host-path/bae69ad7-c720-46de-a4a7-a119b6353633-etc-cni-netd\") pod \"cilium-g9djf\" (UID: \"bae69ad7-c720-46de-a4a7-a119b6353633\") " pod="kube-system/cilium-g9djf" Jul 14 22:10:15.671151 kubelet[2698]: I0714 22:10:15.669713 2698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/bae69ad7-c720-46de-a4a7-a119b6353633-cilium-ipsec-secrets\") pod \"cilium-g9djf\" (UID: \"bae69ad7-c720-46de-a4a7-a119b6353633\") " pod="kube-system/cilium-g9djf" Jul 14 22:10:15.671151 kubelet[2698]: I0714 22:10:15.669732 2698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gnrkx\" (UniqueName: \"kubernetes.io/projected/bae69ad7-c720-46de-a4a7-a119b6353633-kube-api-access-gnrkx\") pod \"cilium-g9djf\" (UID: \"bae69ad7-c720-46de-a4a7-a119b6353633\") " pod="kube-system/cilium-g9djf" Jul 14 22:10:15.672964 kubelet[2698]: I0714 22:10:15.672011 2698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/bae69ad7-c720-46de-a4a7-a119b6353633-cilium-cgroup\") pod \"cilium-g9djf\" (UID: \"bae69ad7-c720-46de-a4a7-a119b6353633\") " pod="kube-system/cilium-g9djf" Jul 14 22:10:15.672964 kubelet[2698]: I0714 22:10:15.672061 2698 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/bae69ad7-c720-46de-a4a7-a119b6353633-cilium-run\") pod \"cilium-g9djf\" (UID: \"bae69ad7-c720-46de-a4a7-a119b6353633\") " pod="kube-system/cilium-g9djf" Jul 14 22:10:15.688555 sshd[4551]: Accepted publickey for core from 10.0.0.1 port 53278 ssh2: RSA SHA256:M1w9XMnl/I4XlZYWJshBUfaekchzCKWegQKD2Nlty/U Jul 14 22:10:15.690772 sshd[4551]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 14 22:10:15.694504 systemd-logind[1523]: 
New session 26 of user core. Jul 14 22:10:15.703205 systemd[1]: Started session-26.scope - Session 26 of User core. Jul 14 22:10:15.753997 sshd[4551]: pam_unix(sshd:session): session closed for user core Jul 14 22:10:15.766133 systemd[1]: Started sshd@26-10.0.0.114:22-10.0.0.1:53286.service - OpenSSH per-connection server daemon (10.0.0.1:53286). Jul 14 22:10:15.766537 systemd[1]: sshd@25-10.0.0.114:22-10.0.0.1:53278.service: Deactivated successfully. Jul 14 22:10:15.768606 systemd-logind[1523]: Session 26 logged out. Waiting for processes to exit. Jul 14 22:10:15.769173 systemd[1]: session-26.scope: Deactivated successfully. Jul 14 22:10:15.770975 systemd-logind[1523]: Removed session 26. Jul 14 22:10:15.808273 sshd[4560]: Accepted publickey for core from 10.0.0.1 port 53286 ssh2: RSA SHA256:M1w9XMnl/I4XlZYWJshBUfaekchzCKWegQKD2Nlty/U Jul 14 22:10:15.809797 sshd[4560]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 14 22:10:15.818084 systemd-logind[1523]: New session 27 of user core. Jul 14 22:10:15.827241 systemd[1]: Started session-27.scope - Session 27 of User core. Jul 14 22:10:15.941451 kubelet[2698]: E0714 22:10:15.941395 2698 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:10:15.942978 containerd[1543]: time="2025-07-14T22:10:15.942765887Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-g9djf,Uid:bae69ad7-c720-46de-a4a7-a119b6353633,Namespace:kube-system,Attempt:0,}" Jul 14 22:10:15.966223 containerd[1543]: time="2025-07-14T22:10:15.966080342Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 14 22:10:15.966223 containerd[1543]: time="2025-07-14T22:10:15.966191102Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 14 22:10:15.966223 containerd[1543]: time="2025-07-14T22:10:15.966213742Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 14 22:10:15.966539 containerd[1543]: time="2025-07-14T22:10:15.966489943Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 14 22:10:16.001000 containerd[1543]: time="2025-07-14T22:10:16.000959843Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-g9djf,Uid:bae69ad7-c720-46de-a4a7-a119b6353633,Namespace:kube-system,Attempt:0,} returns sandbox id \"9a0c887f36a86b599f8c246993d778190b3daec54bef9f5a792010c91da7db6c\"" Jul 14 22:10:16.001840 kubelet[2698]: E0714 22:10:16.001811 2698 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:10:16.006996 containerd[1543]: time="2025-07-14T22:10:16.006816907Z" level=info msg="CreateContainer within sandbox \"9a0c887f36a86b599f8c246993d778190b3daec54bef9f5a792010c91da7db6c\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jul 14 22:10:16.018894 containerd[1543]: time="2025-07-14T22:10:16.018752356Z" level=info msg="CreateContainer within sandbox \"9a0c887f36a86b599f8c246993d778190b3daec54bef9f5a792010c91da7db6c\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"993685fd52551c8734ea0245367607bab80bf22217c92edcb4ac581a7b5e7d54\"" Jul 14 22:10:16.020110 containerd[1543]: time="2025-07-14T22:10:16.019347078Z" level=info msg="StartContainer for \"993685fd52551c8734ea0245367607bab80bf22217c92edcb4ac581a7b5e7d54\"" Jul 14 22:10:16.063694 containerd[1543]: time="2025-07-14T22:10:16.063643818Z" level=info msg="StartContainer for \"993685fd52551c8734ea0245367607bab80bf22217c92edcb4ac581a7b5e7d54\" returns 
successfully" Jul 14 22:10:16.105882 containerd[1543]: time="2025-07-14T22:10:16.105729709Z" level=info msg="shim disconnected" id=993685fd52551c8734ea0245367607bab80bf22217c92edcb4ac581a7b5e7d54 namespace=k8s.io Jul 14 22:10:16.105882 containerd[1543]: time="2025-07-14T22:10:16.105780830Z" level=warning msg="cleaning up after shim disconnected" id=993685fd52551c8734ea0245367607bab80bf22217c92edcb4ac581a7b5e7d54 namespace=k8s.io Jul 14 22:10:16.105882 containerd[1543]: time="2025-07-14T22:10:16.105789190Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 14 22:10:16.465811 kubelet[2698]: E0714 22:10:16.465772 2698 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:10:16.469500 containerd[1543]: time="2025-07-14T22:10:16.469283068Z" level=info msg="CreateContainer within sandbox \"9a0c887f36a86b599f8c246993d778190b3daec54bef9f5a792010c91da7db6c\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jul 14 22:10:16.483733 containerd[1543]: time="2025-07-14T22:10:16.482948083Z" level=info msg="CreateContainer within sandbox \"9a0c887f36a86b599f8c246993d778190b3daec54bef9f5a792010c91da7db6c\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"3b21365950d04bd1c778aaf67cbc3375f7740d818867ff562147c0e1d01adac9\"" Jul 14 22:10:16.484322 containerd[1543]: time="2025-07-14T22:10:16.484297529Z" level=info msg="StartContainer for \"3b21365950d04bd1c778aaf67cbc3375f7740d818867ff562147c0e1d01adac9\"" Jul 14 22:10:16.539964 containerd[1543]: time="2025-07-14T22:10:16.539905235Z" level=info msg="StartContainer for \"3b21365950d04bd1c778aaf67cbc3375f7740d818867ff562147c0e1d01adac9\" returns successfully" Jul 14 22:10:16.574421 containerd[1543]: time="2025-07-14T22:10:16.574348375Z" level=info msg="shim disconnected" id=3b21365950d04bd1c778aaf67cbc3375f7740d818867ff562147c0e1d01adac9 
namespace=k8s.io Jul 14 22:10:16.574712 containerd[1543]: time="2025-07-14T22:10:16.574641896Z" level=warning msg="cleaning up after shim disconnected" id=3b21365950d04bd1c778aaf67cbc3375f7740d818867ff562147c0e1d01adac9 namespace=k8s.io Jul 14 22:10:16.574712 containerd[1543]: time="2025-07-14T22:10:16.574659656Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 14 22:10:17.468547 kubelet[2698]: E0714 22:10:17.468496 2698 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:10:17.471505 containerd[1543]: time="2025-07-14T22:10:17.471182906Z" level=info msg="CreateContainer within sandbox \"9a0c887f36a86b599f8c246993d778190b3daec54bef9f5a792010c91da7db6c\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jul 14 22:10:17.484317 containerd[1543]: time="2025-07-14T22:10:17.484207519Z" level=info msg="CreateContainer within sandbox \"9a0c887f36a86b599f8c246993d778190b3daec54bef9f5a792010c91da7db6c\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"326ec115546a1ab370c52c0d99e6267ae021884c61c7142e86fa9e1542f0f61b\"" Jul 14 22:10:17.485024 containerd[1543]: time="2025-07-14T22:10:17.484861682Z" level=info msg="StartContainer for \"326ec115546a1ab370c52c0d99e6267ae021884c61c7142e86fa9e1542f0f61b\"" Jul 14 22:10:17.540475 containerd[1543]: time="2025-07-14T22:10:17.540432468Z" level=info msg="StartContainer for \"326ec115546a1ab370c52c0d99e6267ae021884c61c7142e86fa9e1542f0f61b\" returns successfully" Jul 14 22:10:17.559541 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-326ec115546a1ab370c52c0d99e6267ae021884c61c7142e86fa9e1542f0f61b-rootfs.mount: Deactivated successfully. 
Jul 14 22:10:17.563215 containerd[1543]: time="2025-07-14T22:10:17.563160121Z" level=info msg="shim disconnected" id=326ec115546a1ab370c52c0d99e6267ae021884c61c7142e86fa9e1542f0f61b namespace=k8s.io Jul 14 22:10:17.563215 containerd[1543]: time="2025-07-14T22:10:17.563214201Z" level=warning msg="cleaning up after shim disconnected" id=326ec115546a1ab370c52c0d99e6267ae021884c61c7142e86fa9e1542f0f61b namespace=k8s.io Jul 14 22:10:17.563351 containerd[1543]: time="2025-07-14T22:10:17.563222681Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 14 22:10:18.175871 kubelet[2698]: E0714 22:10:18.175452 2698 kubelet.go:2902] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jul 14 22:10:18.472812 kubelet[2698]: E0714 22:10:18.472500 2698 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:10:18.474599 containerd[1543]: time="2025-07-14T22:10:18.474514838Z" level=info msg="CreateContainer within sandbox \"9a0c887f36a86b599f8c246993d778190b3daec54bef9f5a792010c91da7db6c\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jul 14 22:10:18.497839 containerd[1543]: time="2025-07-14T22:10:18.497786213Z" level=info msg="CreateContainer within sandbox \"9a0c887f36a86b599f8c246993d778190b3daec54bef9f5a792010c91da7db6c\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"88e939e7a6c740a7a9e9726c679394e4bd5f885d593372b75bb45f587f4025e6\"" Jul 14 22:10:18.498593 containerd[1543]: time="2025-07-14T22:10:18.498365095Z" level=info msg="StartContainer for \"88e939e7a6c740a7a9e9726c679394e4bd5f885d593372b75bb45f587f4025e6\"" Jul 14 22:10:18.566368 containerd[1543]: time="2025-07-14T22:10:18.566317412Z" level=info msg="StartContainer for 
\"88e939e7a6c740a7a9e9726c679394e4bd5f885d593372b75bb45f587f4025e6\" returns successfully" Jul 14 22:10:18.579968 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-88e939e7a6c740a7a9e9726c679394e4bd5f885d593372b75bb45f587f4025e6-rootfs.mount: Deactivated successfully. Jul 14 22:10:18.585751 containerd[1543]: time="2025-07-14T22:10:18.585596011Z" level=info msg="shim disconnected" id=88e939e7a6c740a7a9e9726c679394e4bd5f885d593372b75bb45f587f4025e6 namespace=k8s.io Jul 14 22:10:18.585751 containerd[1543]: time="2025-07-14T22:10:18.585655571Z" level=warning msg="cleaning up after shim disconnected" id=88e939e7a6c740a7a9e9726c679394e4bd5f885d593372b75bb45f587f4025e6 namespace=k8s.io Jul 14 22:10:18.585751 containerd[1543]: time="2025-07-14T22:10:18.585666211Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 14 22:10:19.479342 kubelet[2698]: E0714 22:10:19.476638 2698 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:10:19.486999 containerd[1543]: time="2025-07-14T22:10:19.482070674Z" level=info msg="CreateContainer within sandbox \"9a0c887f36a86b599f8c246993d778190b3daec54bef9f5a792010c91da7db6c\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jul 14 22:10:19.496867 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1467514107.mount: Deactivated successfully. 
Jul 14 22:10:19.498274 containerd[1543]: time="2025-07-14T22:10:19.498221700Z" level=info msg="CreateContainer within sandbox \"9a0c887f36a86b599f8c246993d778190b3daec54bef9f5a792010c91da7db6c\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"c93dc6faa50c1ec240332bb182ea2fd92a397cdb518b64298da0596319748bf3\"" Jul 14 22:10:19.499231 containerd[1543]: time="2025-07-14T22:10:19.499181704Z" level=info msg="StartContainer for \"c93dc6faa50c1ec240332bb182ea2fd92a397cdb518b64298da0596319748bf3\"" Jul 14 22:10:19.556001 containerd[1543]: time="2025-07-14T22:10:19.555176933Z" level=info msg="StartContainer for \"c93dc6faa50c1ec240332bb182ea2fd92a397cdb518b64298da0596319748bf3\" returns successfully" Jul 14 22:10:19.821077 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce)) Jul 14 22:10:20.479969 kubelet[2698]: E0714 22:10:20.479926 2698 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:10:20.494019 kubelet[2698]: I0714 22:10:20.493957 2698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-g9djf" podStartSLOduration=5.493938736 podStartE2EDuration="5.493938736s" podCreationTimestamp="2025-07-14 22:10:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-14 22:10:20.493473574 +0000 UTC m=+147.476252209" watchObservedRunningTime="2025-07-14 22:10:20.493938736 +0000 UTC m=+147.476717371" Jul 14 22:10:21.942878 kubelet[2698]: E0714 22:10:21.942806 2698 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:10:22.166183 systemd[1]: run-containerd-runc-k8s.io-c93dc6faa50c1ec240332bb182ea2fd92a397cdb518b64298da0596319748bf3-runc.iun2yw.mount: 
Deactivated successfully. Jul 14 22:10:31.107081 kubelet[2698]: E0714 22:10:31.107041 2698 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:10:37.107140 kubelet[2698]: E0714 22:10:37.107100 2698 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:10:38.106996 kubelet[2698]: E0714 22:10:38.106919 2698 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:10:41.107796 kubelet[2698]: E0714 22:10:41.106662 2698 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:10:45.942472 kubelet[2698]: E0714 22:10:45.942438 2698 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:10:49.106547 kubelet[2698]: E0714 22:10:49.106501 2698 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:10:52.106434 kubelet[2698]: E0714 22:10:52.106401 2698 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:10:52.165142 update_engine[1531]: I20250714 22:10:52.164283 1531 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Jul 14 22:10:52.165142 update_engine[1531]: I20250714 22:10:52.164327 1531 prefs.cc:52] certificate-report-to-send-download not present in 
/var/lib/update_engine/prefs Jul 14 22:10:52.165142 update_engine[1531]: I20250714 22:10:52.164562 1531 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Jul 14 22:10:52.165142 update_engine[1531]: I20250714 22:10:52.164938 1531 omaha_request_params.cc:62] Current group set to lts Jul 14 22:10:52.165142 update_engine[1531]: I20250714 22:10:52.165030 1531 update_attempter.cc:499] Already updated boot flags. Skipping. Jul 14 22:10:52.165142 update_engine[1531]: I20250714 22:10:52.165038 1531 update_attempter.cc:643] Scheduling an action processor start. Jul 14 22:10:52.165142 update_engine[1531]: I20250714 22:10:52.165053 1531 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Jul 14 22:10:52.167955 update_engine[1531]: I20250714 22:10:52.167902 1531 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Jul 14 22:10:52.168048 update_engine[1531]: I20250714 22:10:52.168002 1531 omaha_request_action.cc:271] Posting an Omaha request to disabled Jul 14 22:10:52.168048 update_engine[1531]: I20250714 22:10:52.168011 1531 omaha_request_action.cc:272] Request: Jul 14 22:10:52.168048 update_engine[1531]: Jul 14 22:10:52.168048 update_engine[1531]: Jul 14 22:10:52.168048 update_engine[1531]: Jul 14 22:10:52.168048 update_engine[1531]: Jul 14 22:10:52.168048 update_engine[1531]: Jul 14 22:10:52.168048 update_engine[1531]: Jul 14 22:10:52.168048 update_engine[1531]: Jul 14 22:10:52.168048 update_engine[1531]: Jul 14 22:10:52.168048 update_engine[1531]: I20250714 22:10:52.168018 1531 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jul 14 22:10:52.171472 locksmithd[1578]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Jul 14 22:10:52.172051 update_engine[1531]: I20250714 22:10:52.172021 1531 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jul 14 22:10:52.172332 update_engine[1531]: I20250714 22:10:52.172312 1531 
libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jul 14 22:10:53.103180 containerd[1543]: time="2025-07-14T22:10:53.100033786Z" level=info msg="StopPodSandbox for \"c33b4bb04f33dc6eabc27918a9faf98d50eb0d039dac4545583807ee1ad26a36\"" Jul 14 22:10:53.104120 containerd[1543]: time="2025-07-14T22:10:53.103261720Z" level=info msg="TearDown network for sandbox \"c33b4bb04f33dc6eabc27918a9faf98d50eb0d039dac4545583807ee1ad26a36\" successfully" Jul 14 22:10:53.104120 containerd[1543]: time="2025-07-14T22:10:53.103282000Z" level=info msg="StopPodSandbox for \"c33b4bb04f33dc6eabc27918a9faf98d50eb0d039dac4545583807ee1ad26a36\" returns successfully" Jul 14 22:10:53.104120 containerd[1543]: time="2025-07-14T22:10:53.103896683Z" level=info msg="RemovePodSandbox for \"c33b4bb04f33dc6eabc27918a9faf98d50eb0d039dac4545583807ee1ad26a36\"" Jul 14 22:10:53.104120 containerd[1543]: time="2025-07-14T22:10:53.103932603Z" level=info msg="Forcibly stopping sandbox \"c33b4bb04f33dc6eabc27918a9faf98d50eb0d039dac4545583807ee1ad26a36\"" Jul 14 22:10:53.104120 containerd[1543]: time="2025-07-14T22:10:53.103993403Z" level=info msg="TearDown network for sandbox \"c33b4bb04f33dc6eabc27918a9faf98d50eb0d039dac4545583807ee1ad26a36\" successfully" Jul 14 22:10:53.109430 containerd[1543]: time="2025-07-14T22:10:53.109385066Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"c33b4bb04f33dc6eabc27918a9faf98d50eb0d039dac4545583807ee1ad26a36\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jul 14 22:10:53.109510 containerd[1543]: time="2025-07-14T22:10:53.109443186Z" level=info msg="RemovePodSandbox \"c33b4bb04f33dc6eabc27918a9faf98d50eb0d039dac4545583807ee1ad26a36\" returns successfully" Jul 14 22:10:53.110596 containerd[1543]: time="2025-07-14T22:10:53.110401990Z" level=info msg="StopPodSandbox for \"90bb74cbf6fe882b179f3a0c81518b71f832a9affa16f759378dd13450f14eea\"" Jul 14 22:10:53.110596 containerd[1543]: time="2025-07-14T22:10:53.110466671Z" level=info msg="TearDown network for sandbox \"90bb74cbf6fe882b179f3a0c81518b71f832a9affa16f759378dd13450f14eea\" successfully" Jul 14 22:10:53.110596 containerd[1543]: time="2025-07-14T22:10:53.110476111Z" level=info msg="StopPodSandbox for \"90bb74cbf6fe882b179f3a0c81518b71f832a9affa16f759378dd13450f14eea\" returns successfully" Jul 14 22:10:53.110769 containerd[1543]: time="2025-07-14T22:10:53.110737032Z" level=info msg="RemovePodSandbox for \"90bb74cbf6fe882b179f3a0c81518b71f832a9affa16f759378dd13450f14eea\"" Jul 14 22:10:53.110769 containerd[1543]: time="2025-07-14T22:10:53.110760832Z" level=info msg="Forcibly stopping sandbox \"90bb74cbf6fe882b179f3a0c81518b71f832a9affa16f759378dd13450f14eea\"" Jul 14 22:10:53.110834 containerd[1543]: time="2025-07-14T22:10:53.110798792Z" level=info msg="TearDown network for sandbox \"90bb74cbf6fe882b179f3a0c81518b71f832a9affa16f759378dd13450f14eea\" successfully" Jul 14 22:10:53.117040 containerd[1543]: time="2025-07-14T22:10:53.116987618Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"90bb74cbf6fe882b179f3a0c81518b71f832a9affa16f759378dd13450f14eea\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jul 14 22:10:53.117118 containerd[1543]: time="2025-07-14T22:10:53.117046338Z" level=info msg="RemovePodSandbox \"90bb74cbf6fe882b179f3a0c81518b71f832a9affa16f759378dd13450f14eea\" returns successfully" Jul 14 22:10:53.725915 systemd[1]: run-containerd-runc-k8s.io-c93dc6faa50c1ec240332bb182ea2fd92a397cdb518b64298da0596319748bf3-runc.PBywep.mount: Deactivated successfully. Jul 14 22:10:53.776121 kubelet[2698]: E0714 22:10:53.776080 2698 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:56010->127.0.0.1:38097: write tcp 127.0.0.1:56010->127.0.0.1:38097: write: broken pipe Jul 14 22:10:54.708555 systemd-networkd[1231]: lxc_health: Link UP Jul 14 22:10:54.715466 systemd-networkd[1231]: lxc_health: Gained carrier Jul 14 22:10:55.943727 kubelet[2698]: E0714 22:10:55.943422 2698 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:10:56.291453 systemd-networkd[1231]: lxc_health: Gained IPv6LL Jul 14 22:10:56.551006 kubelet[2698]: E0714 22:10:56.550514 2698 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:10:57.150611 update_engine[1531]: E20250714 22:10:57.150542 1531 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jul 14 22:10:57.151012 update_engine[1531]: I20250714 22:10:57.150677 1531 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Jul 14 22:11:02.211604 sshd[4560]: pam_unix(sshd:session): session closed for user core Jul 14 22:11:02.214179 systemd[1]: sshd@26-10.0.0.114:22-10.0.0.1:53286.service: Deactivated successfully. Jul 14 22:11:02.217050 systemd-logind[1523]: Session 27 logged out. Waiting for processes to exit. Jul 14 22:11:02.217947 systemd[1]: session-27.scope: Deactivated successfully. 
Jul 14 22:11:02.219392 systemd-logind[1523]: Removed session 27.