Jul 12 00:10:21.908509 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Jul 12 00:10:21.908530 kernel: Linux version 6.6.96-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.43 p3) 2.43.1) #1 SMP PREEMPT Fri Jul 11 22:29:52 -00 2025
Jul 12 00:10:21.908540 kernel: KASLR enabled
Jul 12 00:10:21.908546 kernel: efi: EFI v2.7 by EDK II
Jul 12 00:10:21.908551 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdbbae018 ACPI 2.0=0xd9b43018 RNG=0xd9b43a18 MEMRESERVE=0xd9b40218
Jul 12 00:10:21.908557 kernel: random: crng init done
Jul 12 00:10:21.908564 kernel: secureboot: Secure boot disabled
Jul 12 00:10:21.908569 kernel: ACPI: Early table checksum verification disabled
Jul 12 00:10:21.908575 kernel: ACPI: RSDP 0x00000000D9B43018 000024 (v02 BOCHS )
Jul 12 00:10:21.908583 kernel: ACPI: XSDT 0x00000000D9B43F18 000064 (v01 BOCHS BXPC 00000001 01000013)
Jul 12 00:10:21.908589 kernel: ACPI: FACP 0x00000000D9B43B18 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Jul 12 00:10:21.908594 kernel: ACPI: DSDT 0x00000000D9B41018 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jul 12 00:10:21.908600 kernel: ACPI: APIC 0x00000000D9B43C98 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Jul 12 00:10:21.908606 kernel: ACPI: PPTT 0x00000000D9B43098 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jul 12 00:10:21.908613 kernel: ACPI: GTDT 0x00000000D9B43818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jul 12 00:10:21.908621 kernel: ACPI: MCFG 0x00000000D9B43A98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 12 00:10:21.908627 kernel: ACPI: SPCR 0x00000000D9B43918 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jul 12 00:10:21.908633 kernel: ACPI: DBG2 0x00000000D9B43998 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Jul 12 00:10:21.908639 kernel: ACPI: IORT 0x00000000D9B43198 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jul 12 00:10:21.908645 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
Jul 12 00:10:21.908651 kernel: NUMA: Failed to initialise from firmware
Jul 12 00:10:21.908658 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
Jul 12 00:10:21.908664 kernel: NUMA: NODE_DATA [mem 0xdc958800-0xdc95dfff]
Jul 12 00:10:21.908670 kernel: Zone ranges:
Jul 12 00:10:21.908676 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff]
Jul 12 00:10:21.908683 kernel: DMA32 empty
Jul 12 00:10:21.908689 kernel: Normal empty
Jul 12 00:10:21.908695 kernel: Movable zone start for each node
Jul 12 00:10:21.908701 kernel: Early memory node ranges
Jul 12 00:10:21.908707 kernel: node 0: [mem 0x0000000040000000-0x00000000d967ffff]
Jul 12 00:10:21.908714 kernel: node 0: [mem 0x00000000d9680000-0x00000000d968ffff]
Jul 12 00:10:21.908720 kernel: node 0: [mem 0x00000000d9690000-0x00000000d976ffff]
Jul 12 00:10:21.908726 kernel: node 0: [mem 0x00000000d9770000-0x00000000d9b3ffff]
Jul 12 00:10:21.908732 kernel: node 0: [mem 0x00000000d9b40000-0x00000000dce1ffff]
Jul 12 00:10:21.908738 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff]
Jul 12 00:10:21.908744 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff]
Jul 12 00:10:21.908750 kernel: node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff]
Jul 12 00:10:21.908757 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff]
Jul 12 00:10:21.908763 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
Jul 12 00:10:21.908770 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
Jul 12 00:10:21.908779 kernel: psci: probing for conduit method from ACPI.
Jul 12 00:10:21.908785 kernel: psci: PSCIv1.1 detected in firmware.
Jul 12 00:10:21.908792 kernel: psci: Using standard PSCI v0.2 function IDs
Jul 12 00:10:21.908800 kernel: psci: Trusted OS migration not required
Jul 12 00:10:21.908806 kernel: psci: SMC Calling Convention v1.1
Jul 12 00:10:21.908813 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Jul 12 00:10:21.908819 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976
Jul 12 00:10:21.908826 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096
Jul 12 00:10:21.908832 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
Jul 12 00:10:21.908839 kernel: Detected PIPT I-cache on CPU0
Jul 12 00:10:21.908845 kernel: CPU features: detected: GIC system register CPU interface
Jul 12 00:10:21.908858 kernel: CPU features: detected: Hardware dirty bit management
Jul 12 00:10:21.908866 kernel: CPU features: detected: Spectre-v4
Jul 12 00:10:21.908889 kernel: CPU features: detected: Spectre-BHB
Jul 12 00:10:21.908898 kernel: CPU features: kernel page table isolation forced ON by KASLR
Jul 12 00:10:21.908905 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Jul 12 00:10:21.908911 kernel: CPU features: detected: ARM erratum 1418040
Jul 12 00:10:21.908918 kernel: CPU features: detected: SSBS not fully self-synchronizing
Jul 12 00:10:21.908924 kernel: alternatives: applying boot alternatives
Jul 12 00:10:21.908932 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=adc4ba1ad7e0b99fe6e3cbb6e6cc39706890fbb6c462e92a648376904967703c
Jul 12 00:10:21.908939 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jul 12 00:10:21.908946 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jul 12 00:10:21.908953 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jul 12 00:10:21.908959 kernel: Fallback order for Node 0: 0
Jul 12 00:10:21.908967 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024
Jul 12 00:10:21.908974 kernel: Policy zone: DMA
Jul 12 00:10:21.908981 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jul 12 00:10:21.908987 kernel: software IO TLB: area num 4.
Jul 12 00:10:21.908994 kernel: software IO TLB: mapped [mem 0x00000000d2e00000-0x00000000d6e00000] (64MB)
Jul 12 00:10:21.909000 kernel: Memory: 2387476K/2572288K available (10368K kernel code, 2186K rwdata, 8104K rodata, 38336K init, 897K bss, 184812K reserved, 0K cma-reserved)
Jul 12 00:10:21.909007 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Jul 12 00:10:21.909013 kernel: rcu: Preemptible hierarchical RCU implementation.
Jul 12 00:10:21.909020 kernel: rcu: RCU event tracing is enabled.
Jul 12 00:10:21.909027 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Jul 12 00:10:21.909034 kernel: Trampoline variant of Tasks RCU enabled.
Jul 12 00:10:21.909040 kernel: Tracing variant of Tasks RCU enabled.
Jul 12 00:10:21.909048 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jul 12 00:10:21.909055 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Jul 12 00:10:21.909061 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Jul 12 00:10:21.909067 kernel: GICv3: 256 SPIs implemented
Jul 12 00:10:21.909074 kernel: GICv3: 0 Extended SPIs implemented
Jul 12 00:10:21.909080 kernel: Root IRQ handler: gic_handle_irq
Jul 12 00:10:21.909087 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Jul 12 00:10:21.909094 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Jul 12 00:10:21.909100 kernel: ITS [mem 0x08080000-0x0809ffff]
Jul 12 00:10:21.909107 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400c0000 (indirect, esz 8, psz 64K, shr 1)
Jul 12 00:10:21.909113 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400d0000 (flat, esz 8, psz 64K, shr 1)
Jul 12 00:10:21.909121 kernel: GICv3: using LPI property table @0x00000000400f0000
Jul 12 00:10:21.909128 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040100000
Jul 12 00:10:21.909134 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jul 12 00:10:21.909141 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 12 00:10:21.909147 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Jul 12 00:10:21.909154 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Jul 12 00:10:21.909160 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Jul 12 00:10:21.909167 kernel: arm-pv: using stolen time PV
Jul 12 00:10:21.909173 kernel: Console: colour dummy device 80x25
Jul 12 00:10:21.909180 kernel: ACPI: Core revision 20230628
Jul 12 00:10:21.909187 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Jul 12 00:10:21.909195 kernel: pid_max: default: 32768 minimum: 301
Jul 12 00:10:21.909201 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jul 12 00:10:21.909208 kernel: landlock: Up and running.
Jul 12 00:10:21.909215 kernel: SELinux: Initializing.
Jul 12 00:10:21.909221 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jul 12 00:10:21.909228 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jul 12 00:10:21.909235 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jul 12 00:10:21.909241 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jul 12 00:10:21.909248 kernel: rcu: Hierarchical SRCU implementation.
Jul 12 00:10:21.909256 kernel: rcu: Max phase no-delay instances is 400.
Jul 12 00:10:21.909263 kernel: Platform MSI: ITS@0x8080000 domain created
Jul 12 00:10:21.909270 kernel: PCI/MSI: ITS@0x8080000 domain created
Jul 12 00:10:21.909277 kernel: Remapping and enabling EFI services.
Jul 12 00:10:21.909283 kernel: smp: Bringing up secondary CPUs ...
Jul 12 00:10:21.909290 kernel: Detected PIPT I-cache on CPU1
Jul 12 00:10:21.909297 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Jul 12 00:10:21.909304 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040110000
Jul 12 00:10:21.909310 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 12 00:10:21.909318 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Jul 12 00:10:21.909325 kernel: Detected PIPT I-cache on CPU2
Jul 12 00:10:21.909337 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
Jul 12 00:10:21.909345 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040120000
Jul 12 00:10:21.909352 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 12 00:10:21.909359 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
Jul 12 00:10:21.909366 kernel: Detected PIPT I-cache on CPU3
Jul 12 00:10:21.909373 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
Jul 12 00:10:21.909381 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040130000
Jul 12 00:10:21.909389 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 12 00:10:21.909396 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
Jul 12 00:10:21.909403 kernel: smp: Brought up 1 node, 4 CPUs
Jul 12 00:10:21.909411 kernel: SMP: Total of 4 processors activated.
Jul 12 00:10:21.909418 kernel: CPU features: detected: 32-bit EL0 Support
Jul 12 00:10:21.909425 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Jul 12 00:10:21.909432 kernel: CPU features: detected: Common not Private translations
Jul 12 00:10:21.909439 kernel: CPU features: detected: CRC32 instructions
Jul 12 00:10:21.909447 kernel: CPU features: detected: Enhanced Virtualization Traps
Jul 12 00:10:21.909454 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Jul 12 00:10:21.909461 kernel: CPU features: detected: LSE atomic instructions
Jul 12 00:10:21.909468 kernel: CPU features: detected: Privileged Access Never
Jul 12 00:10:21.909475 kernel: CPU features: detected: RAS Extension Support
Jul 12 00:10:21.909482 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Jul 12 00:10:21.909488 kernel: CPU: All CPU(s) started at EL1
Jul 12 00:10:21.909496 kernel: alternatives: applying system-wide alternatives
Jul 12 00:10:21.909502 kernel: devtmpfs: initialized
Jul 12 00:10:21.909510 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jul 12 00:10:21.909518 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Jul 12 00:10:21.909525 kernel: pinctrl core: initialized pinctrl subsystem
Jul 12 00:10:21.909532 kernel: SMBIOS 3.0.0 present.
Jul 12 00:10:21.909539 kernel: DMI: QEMU KVM Virtual Machine, BIOS unknown 02/02/2022
Jul 12 00:10:21.909546 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jul 12 00:10:21.909552 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Jul 12 00:10:21.909559 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Jul 12 00:10:21.909567 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Jul 12 00:10:21.909575 kernel: audit: initializing netlink subsys (disabled)
Jul 12 00:10:21.909582 kernel: audit: type=2000 audit(0.018:1): state=initialized audit_enabled=0 res=1
Jul 12 00:10:21.909589 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jul 12 00:10:21.909596 kernel: cpuidle: using governor menu
Jul 12 00:10:21.909603 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Jul 12 00:10:21.909610 kernel: ASID allocator initialised with 32768 entries
Jul 12 00:10:21.909617 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jul 12 00:10:21.909624 kernel: Serial: AMBA PL011 UART driver
Jul 12 00:10:21.909631 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Jul 12 00:10:21.909638 kernel: Modules: 0 pages in range for non-PLT usage
Jul 12 00:10:21.909650 kernel: Modules: 509264 pages in range for PLT usage
Jul 12 00:10:21.909658 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jul 12 00:10:21.909669 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Jul 12 00:10:21.909679 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Jul 12 00:10:21.909688 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Jul 12 00:10:21.909697 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jul 12 00:10:21.909705 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Jul 12 00:10:21.909712 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Jul 12 00:10:21.909719 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Jul 12 00:10:21.909727 kernel: ACPI: Added _OSI(Module Device)
Jul 12 00:10:21.909734 kernel: ACPI: Added _OSI(Processor Device)
Jul 12 00:10:21.909741 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jul 12 00:10:21.909748 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jul 12 00:10:21.909762 kernel: ACPI: Interpreter enabled
Jul 12 00:10:21.909769 kernel: ACPI: Using GIC for interrupt routing
Jul 12 00:10:21.909776 kernel: ACPI: MCFG table detected, 1 entries
Jul 12 00:10:21.909783 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Jul 12 00:10:21.909790 kernel: printk: console [ttyAMA0] enabled
Jul 12 00:10:21.909799 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jul 12 00:10:21.909949 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jul 12 00:10:21.910027 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Jul 12 00:10:21.910092 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Jul 12 00:10:21.910154 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Jul 12 00:10:21.910228 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Jul 12 00:10:21.910239 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Jul 12 00:10:21.910253 kernel: PCI host bridge to bus 0000:00
Jul 12 00:10:21.910324 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Jul 12 00:10:21.910384 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Jul 12 00:10:21.910443 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Jul 12 00:10:21.910500 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jul 12 00:10:21.910579 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000
Jul 12 00:10:21.910657 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00
Jul 12 00:10:21.910724 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f]
Jul 12 00:10:21.910789 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff]
Jul 12 00:10:21.910864 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref]
Jul 12 00:10:21.910951 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref]
Jul 12 00:10:21.911020 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff]
Jul 12 00:10:21.911089 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f]
Jul 12 00:10:21.911155 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Jul 12 00:10:21.911215 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Jul 12 00:10:21.911274 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Jul 12 00:10:21.911283 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Jul 12 00:10:21.911291 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Jul 12 00:10:21.911298 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Jul 12 00:10:21.911305 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Jul 12 00:10:21.911312 kernel: iommu: Default domain type: Translated
Jul 12 00:10:21.911321 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Jul 12 00:10:21.911328 kernel: efivars: Registered efivars operations
Jul 12 00:10:21.911335 kernel: vgaarb: loaded
Jul 12 00:10:21.911342 kernel: clocksource: Switched to clocksource arch_sys_counter
Jul 12 00:10:21.911348 kernel: VFS: Disk quotas dquot_6.6.0
Jul 12 00:10:21.911356 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jul 12 00:10:21.911363 kernel: pnp: PnP ACPI init
Jul 12 00:10:21.911440 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Jul 12 00:10:21.911450 kernel: pnp: PnP ACPI: found 1 devices
Jul 12 00:10:21.911459 kernel: NET: Registered PF_INET protocol family
Jul 12 00:10:21.911467 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jul 12 00:10:21.911474 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jul 12 00:10:21.911482 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jul 12 00:10:21.911489 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jul 12 00:10:21.911496 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jul 12 00:10:21.911504 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jul 12 00:10:21.911525 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jul 12 00:10:21.911534 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jul 12 00:10:21.911541 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jul 12 00:10:21.911549 kernel: PCI: CLS 0 bytes, default 64
Jul 12 00:10:21.911556 kernel: kvm [1]: HYP mode not available
Jul 12 00:10:21.911563 kernel: Initialise system trusted keyrings
Jul 12 00:10:21.911570 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jul 12 00:10:21.911577 kernel: Key type asymmetric registered
Jul 12 00:10:21.911584 kernel: Asymmetric key parser 'x509' registered
Jul 12 00:10:21.911591 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Jul 12 00:10:21.911598 kernel: io scheduler mq-deadline registered
Jul 12 00:10:21.911606 kernel: io scheduler kyber registered
Jul 12 00:10:21.911613 kernel: io scheduler bfq registered
Jul 12 00:10:21.911621 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Jul 12 00:10:21.911628 kernel: ACPI: button: Power Button [PWRB]
Jul 12 00:10:21.911635 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Jul 12 00:10:21.911702 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
Jul 12 00:10:21.911711 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jul 12 00:10:21.911718 kernel: thunder_xcv, ver 1.0
Jul 12 00:10:21.911725 kernel: thunder_bgx, ver 1.0
Jul 12 00:10:21.911734 kernel: nicpf, ver 1.0
Jul 12 00:10:21.911741 kernel: nicvf, ver 1.0
Jul 12 00:10:21.911813 kernel: rtc-efi rtc-efi.0: registered as rtc0
Jul 12 00:10:21.911899 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-07-12T00:10:21 UTC (1752279021)
Jul 12 00:10:21.911909 kernel: hid: raw HID events driver (C) Jiri Kosina
Jul 12 00:10:21.911916 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available
Jul 12 00:10:21.911924 kernel: watchdog: Delayed init of the lockup detector failed: -19
Jul 12 00:10:21.911931 kernel: watchdog: Hard watchdog permanently disabled
Jul 12 00:10:21.911940 kernel: NET: Registered PF_INET6 protocol family
Jul 12 00:10:21.911947 kernel: Segment Routing with IPv6
Jul 12 00:10:21.911954 kernel: In-situ OAM (IOAM) with IPv6
Jul 12 00:10:21.911961 kernel: NET: Registered PF_PACKET protocol family
Jul 12 00:10:21.911968 kernel: Key type dns_resolver registered
Jul 12 00:10:21.911975 kernel: registered taskstats version 1
Jul 12 00:10:21.911982 kernel: Loading compiled-in X.509 certificates
Jul 12 00:10:21.911989 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.96-flatcar: 5e210e579240b0a06f627a5c4af1974a7efe4cdb'
Jul 12 00:10:21.911996 kernel: Key type .fscrypt registered
Jul 12 00:10:21.912005 kernel: Key type fscrypt-provisioning registered
Jul 12 00:10:21.912012 kernel: ima: No TPM chip found, activating TPM-bypass!
Jul 12 00:10:21.912019 kernel: ima: Allocated hash algorithm: sha1
Jul 12 00:10:21.912026 kernel: ima: No architecture policies found
Jul 12 00:10:21.912033 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Jul 12 00:10:21.912040 kernel: clk: Disabling unused clocks
Jul 12 00:10:21.912047 kernel: Freeing unused kernel memory: 38336K
Jul 12 00:10:21.912054 kernel: Run /init as init process
Jul 12 00:10:21.912061 kernel: with arguments:
Jul 12 00:10:21.912070 kernel: /init
Jul 12 00:10:21.912077 kernel: with environment:
Jul 12 00:10:21.912084 kernel: HOME=/
Jul 12 00:10:21.912090 kernel: TERM=linux
Jul 12 00:10:21.912097 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jul 12 00:10:21.912105 systemd[1]: Successfully made /usr/ read-only.
Jul 12 00:10:21.912115 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Jul 12 00:10:21.912125 systemd[1]: Detected virtualization kvm.
Jul 12 00:10:21.912132 systemd[1]: Detected architecture arm64.
Jul 12 00:10:21.912140 systemd[1]: Running in initrd.
Jul 12 00:10:21.912147 systemd[1]: No hostname configured, using default hostname.
Jul 12 00:10:21.912155 systemd[1]: Hostname set to .
Jul 12 00:10:21.912163 systemd[1]: Initializing machine ID from VM UUID.
Jul 12 00:10:21.912170 systemd[1]: Queued start job for default target initrd.target.
Jul 12 00:10:21.912178 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 12 00:10:21.912187 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 12 00:10:21.912195 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jul 12 00:10:21.912203 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jul 12 00:10:21.912211 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jul 12 00:10:21.912219 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jul 12 00:10:21.912228 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jul 12 00:10:21.912236 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jul 12 00:10:21.912245 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 12 00:10:21.912253 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jul 12 00:10:21.912260 systemd[1]: Reached target paths.target - Path Units.
Jul 12 00:10:21.912268 systemd[1]: Reached target slices.target - Slice Units.
Jul 12 00:10:21.912276 systemd[1]: Reached target swap.target - Swaps.
Jul 12 00:10:21.912283 systemd[1]: Reached target timers.target - Timer Units.
Jul 12 00:10:21.912291 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jul 12 00:10:21.912298 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jul 12 00:10:21.912306 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jul 12 00:10:21.912315 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Jul 12 00:10:21.912322 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jul 12 00:10:21.912330 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jul 12 00:10:21.912338 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 12 00:10:21.912346 systemd[1]: Reached target sockets.target - Socket Units.
Jul 12 00:10:21.912353 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jul 12 00:10:21.912361 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jul 12 00:10:21.912368 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jul 12 00:10:21.912378 systemd[1]: Starting systemd-fsck-usr.service...
Jul 12 00:10:21.912385 systemd[1]: Starting systemd-journald.service - Journal Service...
Jul 12 00:10:21.912393 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jul 12 00:10:21.912401 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 12 00:10:21.912408 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jul 12 00:10:21.912416 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 12 00:10:21.912425 systemd[1]: Finished systemd-fsck-usr.service.
Jul 12 00:10:21.912433 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jul 12 00:10:21.912446 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 12 00:10:21.912454 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jul 12 00:10:21.912462 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jul 12 00:10:21.912486 systemd-journald[238]: Collecting audit messages is disabled.
Jul 12 00:10:21.912506 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jul 12 00:10:21.912513 kernel: Bridge firewalling registered
Jul 12 00:10:21.912521 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jul 12 00:10:21.912529 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jul 12 00:10:21.912538 systemd-journald[238]: Journal started
Jul 12 00:10:21.912557 systemd-journald[238]: Runtime Journal (/run/log/journal/2dc6a3f4c9514a6fbb9d2dcc90b300b1) is 5.9M, max 47.3M, 41.4M free.
Jul 12 00:10:21.891128 systemd-modules-load[239]: Inserted module 'overlay'
Jul 12 00:10:21.910209 systemd-modules-load[239]: Inserted module 'br_netfilter'
Jul 12 00:10:21.915986 systemd[1]: Started systemd-journald.service - Journal Service.
Jul 12 00:10:21.919370 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jul 12 00:10:21.920720 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jul 12 00:10:21.921887 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 12 00:10:21.923924 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 12 00:10:21.926948 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jul 12 00:10:21.929943 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jul 12 00:10:21.946086 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jul 12 00:10:21.948702 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jul 12 00:10:21.955500 dracut-cmdline[271]: dracut-dracut-053
Jul 12 00:10:21.957968 dracut-cmdline[271]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=adc4ba1ad7e0b99fe6e3cbb6e6cc39706890fbb6c462e92a648376904967703c
Jul 12 00:10:21.986553 systemd-resolved[278]: Positive Trust Anchors:
Jul 12 00:10:21.986573 systemd-resolved[278]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jul 12 00:10:21.986603 systemd-resolved[278]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jul 12 00:10:21.991181 systemd-resolved[278]: Defaulting to hostname 'linux'.
Jul 12 00:10:21.995982 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jul 12 00:10:21.996836 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jul 12 00:10:22.026912 kernel: SCSI subsystem initialized
Jul 12 00:10:22.030900 kernel: Loading iSCSI transport class v2.0-870.
Jul 12 00:10:22.037898 kernel: iscsi: registered transport (tcp)
Jul 12 00:10:22.050896 kernel: iscsi: registered transport (qla4xxx)
Jul 12 00:10:22.050911 kernel: QLogic iSCSI HBA Driver
Jul 12 00:10:22.091342 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jul 12 00:10:22.106054 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jul 12 00:10:22.123242 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jul 12 00:10:22.123308 kernel: device-mapper: uevent: version 1.0.3
Jul 12 00:10:22.123321 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Jul 12 00:10:22.167895 kernel: raid6: neonx8 gen() 14251 MB/s
Jul 12 00:10:22.184894 kernel: raid6: neonx4 gen() 15647 MB/s
Jul 12 00:10:22.201898 kernel: raid6: neonx2 gen() 12566 MB/s
Jul 12 00:10:22.218896 kernel: raid6: neonx1 gen() 10368 MB/s
Jul 12 00:10:22.235894 kernel: raid6: int64x8 gen() 6771 MB/s
Jul 12 00:10:22.252892 kernel: raid6: int64x4 gen() 7311 MB/s
Jul 12 00:10:22.269898 kernel: raid6: int64x2 gen() 5685 MB/s
Jul 12 00:10:22.286894 kernel: raid6: int64x1 gen() 4995 MB/s
Jul 12 00:10:22.286909 kernel: raid6: using algorithm neonx4 gen() 15647 MB/s
Jul 12 00:10:22.303896 kernel: raid6: .... xor() 12436 MB/s, rmw enabled
Jul 12 00:10:22.303909 kernel: raid6: using neon recovery algorithm
Jul 12 00:10:22.308896 kernel: xor: measuring software checksum speed
Jul 12 00:10:22.308917 kernel: 8regs : 21613 MB/sec
Jul 12 00:10:22.310188 kernel: 32regs : 19754 MB/sec
Jul 12 00:10:22.310202 kernel: arm64_neon : 27974 MB/sec
Jul 12 00:10:22.310211 kernel: xor: using function: arm64_neon (27974 MB/sec)
Jul 12 00:10:22.360897 kernel: Btrfs loaded, zoned=no, fsverity=no
Jul 12 00:10:22.372965 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jul 12 00:10:22.386044 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 12 00:10:22.399585 systemd-udevd[460]: Using default interface naming scheme 'v255'.
Jul 12 00:10:22.403443 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 12 00:10:22.416061 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jul 12 00:10:22.428670 dracut-pre-trigger[465]: rd.md=0: removing MD RAID activation
Jul 12 00:10:22.461911 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jul 12 00:10:22.475073 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jul 12 00:10:22.520311 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 12 00:10:22.532093 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jul 12 00:10:22.543930 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jul 12 00:10:22.545109 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jul 12 00:10:22.546677 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 12 00:10:22.548544 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jul 12 00:10:22.562057 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jul 12 00:10:22.571318 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jul 12 00:10:22.579893 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues
Jul 12 00:10:22.580906 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Jul 12 00:10:22.588151 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jul 12 00:10:22.588222 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 12 00:10:22.594599 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jul 12 00:10:22.594620 kernel: GPT:9289727 != 19775487
Jul 12 00:10:22.594629 kernel: GPT:Alternate GPT header not at the end of the disk.
Jul 12 00:10:22.594639 kernel: GPT:9289727 != 19775487
Jul 12 00:10:22.594647 kernel: GPT: Use GNU Parted to correct GPT errors.
Jul 12 00:10:22.594656 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jul 12 00:10:22.589406 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jul 12 00:10:22.593671 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jul 12 00:10:22.593786 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jul 12 00:10:22.596557 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jul 12 00:10:22.604051 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 12 00:10:22.614009 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (508)
Jul 12 00:10:22.615124 kernel: BTRFS: device fsid efd17141-91f9-4035-ad86-74f7e29ff0e8 devid 1 transid 36 /dev/vda3 scanned by (udev-worker) (517)
Jul 12 00:10:22.624161 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Jul 12 00:10:22.625229 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 12 00:10:22.633219 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Jul 12 00:10:22.648989 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jul 12 00:10:22.654804 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Jul 12 00:10:22.655709 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Jul 12 00:10:22.671014 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jul 12 00:10:22.672974 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jul 12 00:10:22.677697 disk-uuid[552]: Primary Header is updated.
Jul 12 00:10:22.677697 disk-uuid[552]: Secondary Entries is updated.
Jul 12 00:10:22.677697 disk-uuid[552]: Secondary Header is updated.
Jul 12 00:10:22.683904 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jul 12 00:10:22.689061 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 12 00:10:23.692918 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jul 12 00:10:23.693590 disk-uuid[553]: The operation has completed successfully.
Jul 12 00:10:23.715076 systemd[1]: disk-uuid.service: Deactivated successfully.
Jul 12 00:10:23.715967 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jul 12 00:10:23.756044 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jul 12 00:10:23.758950 sh[574]: Success
Jul 12 00:10:23.778097 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Jul 12 00:10:23.816241 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jul 12 00:10:23.817218 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jul 12 00:10:23.819695 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jul 12 00:10:23.839172 kernel: BTRFS info (device dm-0): first mount of filesystem efd17141-91f9-4035-ad86-74f7e29ff0e8
Jul 12 00:10:23.839228 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Jul 12 00:10:23.839238 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Jul 12 00:10:23.839903 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jul 12 00:10:23.840895 kernel: BTRFS info (device dm-0): using free space tree
Jul 12 00:10:23.845166 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jul 12 00:10:23.846314 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jul 12 00:10:23.856063 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jul 12 00:10:23.857606 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jul 12 00:10:23.871368 kernel: BTRFS info (device vda6): first mount of filesystem 4655e16e-9009-448d-b51d-e03cc33fa270
Jul 12 00:10:23.871423 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Jul 12 00:10:23.871433 kernel: BTRFS info (device vda6): using free space tree
Jul 12 00:10:23.873910 kernel: BTRFS info (device vda6): auto enabling async discard
Jul 12 00:10:23.878950 kernel: BTRFS info (device vda6): last unmount of filesystem 4655e16e-9009-448d-b51d-e03cc33fa270
Jul 12 00:10:23.883505 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jul 12 00:10:23.891143 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jul 12 00:10:23.980952 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jul 12 00:10:23.998146 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jul 12 00:10:24.068383 systemd-networkd[764]: lo: Link UP
Jul 12 00:10:24.068395 systemd-networkd[764]: lo: Gained carrier
Jul 12 00:10:24.069437 systemd-networkd[764]: Enumeration completed
Jul 12 00:10:24.069555 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jul 12 00:10:24.070269 systemd-networkd[764]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 12 00:10:24.070273 systemd-networkd[764]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jul 12 00:10:24.070911 systemd[1]: Reached target network.target - Network.
Jul 12 00:10:24.071219 systemd-networkd[764]: eth0: Link UP
Jul 12 00:10:24.071222 systemd-networkd[764]: eth0: Gained carrier
Jul 12 00:10:24.071230 systemd-networkd[764]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 12 00:10:24.086087 ignition[660]: Ignition 2.20.0
Jul 12 00:10:24.086098 ignition[660]: Stage: fetch-offline
Jul 12 00:10:24.086144 ignition[660]: no configs at "/usr/lib/ignition/base.d"
Jul 12 00:10:24.086154 ignition[660]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 12 00:10:24.086481 ignition[660]: parsed url from cmdline: ""
Jul 12 00:10:24.086485 ignition[660]: no config URL provided
Jul 12 00:10:24.086490 ignition[660]: reading system config file "/usr/lib/ignition/user.ign"
Jul 12 00:10:24.086497 ignition[660]: no config at "/usr/lib/ignition/user.ign"
Jul 12 00:10:24.086521 ignition[660]: op(1): [started] loading QEMU firmware config module
Jul 12 00:10:24.086526 ignition[660]: op(1): executing: "modprobe" "qemu_fw_cfg"
Jul 12 00:10:24.097048 ignition[660]: op(1): [finished] loading QEMU firmware config module
Jul 12 00:10:24.105945 systemd-networkd[764]: eth0: DHCPv4 address 10.0.0.137/16, gateway 10.0.0.1 acquired from 10.0.0.1
Jul 12 00:10:24.135701 ignition[660]: parsing config with SHA512: b6c551c1696da061364e1455deb834f11a8f374dc5d84aa4c60d7c12a4c56f20ca214993712fa699df7e3815c37ba64699e30aaac0740ab0de39e9e5551583db
Jul 12 00:10:24.140810 unknown[660]: fetched base config from "system"
Jul 12 00:10:24.140820 unknown[660]: fetched user config from "qemu"
Jul 12 00:10:24.141337 ignition[660]: fetch-offline: fetch-offline passed
Jul 12 00:10:24.142204 ignition[660]: Ignition finished successfully
Jul 12 00:10:24.143933 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jul 12 00:10:24.145021 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Jul 12 00:10:24.156088 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jul 12 00:10:24.168687 ignition[773]: Ignition 2.20.0
Jul 12 00:10:24.168696 ignition[773]: Stage: kargs
Jul 12 00:10:24.168871 ignition[773]: no configs at "/usr/lib/ignition/base.d"
Jul 12 00:10:24.168897 ignition[773]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 12 00:10:24.169768 ignition[773]: kargs: kargs passed
Jul 12 00:10:24.169811 ignition[773]: Ignition finished successfully
Jul 12 00:10:24.171797 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jul 12 00:10:24.179078 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jul 12 00:10:24.188451 ignition[782]: Ignition 2.20.0
Jul 12 00:10:24.188462 ignition[782]: Stage: disks
Jul 12 00:10:24.188634 ignition[782]: no configs at "/usr/lib/ignition/base.d"
Jul 12 00:10:24.188644 ignition[782]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 12 00:10:24.189555 ignition[782]: disks: disks passed
Jul 12 00:10:24.189601 ignition[782]: Ignition finished successfully
Jul 12 00:10:24.192823 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jul 12 00:10:24.194511 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jul 12 00:10:24.196152 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jul 12 00:10:24.197032 systemd[1]: Reached target local-fs.target - Local File Systems.
Jul 12 00:10:24.198962 systemd[1]: Reached target sysinit.target - System Initialization.
Jul 12 00:10:24.199895 systemd[1]: Reached target basic.target - Basic System.
Jul 12 00:10:24.216112 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jul 12 00:10:24.226534 systemd-fsck[793]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Jul 12 00:10:24.229652 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jul 12 00:10:24.231866 systemd[1]: Mounting sysroot.mount - /sysroot...
Jul 12 00:10:24.275123 kernel: EXT4-fs (vda9): mounted filesystem ce5906be-3b69-42c2-8b1d-4576a0749077 r/w with ordered data mode. Quota mode: none.
Jul 12 00:10:24.275627 systemd[1]: Mounted sysroot.mount - /sysroot.
Jul 12 00:10:24.276699 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jul 12 00:10:24.290970 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jul 12 00:10:24.292544 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jul 12 00:10:24.293567 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Jul 12 00:10:24.293612 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jul 12 00:10:24.298897 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (801)
Jul 12 00:10:24.293638 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jul 12 00:10:24.299281 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jul 12 00:10:24.302848 kernel: BTRFS info (device vda6): first mount of filesystem 4655e16e-9009-448d-b51d-e03cc33fa270
Jul 12 00:10:24.302868 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Jul 12 00:10:24.302888 kernel: BTRFS info (device vda6): using free space tree
Jul 12 00:10:24.302714 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jul 12 00:10:24.305110 kernel: BTRFS info (device vda6): auto enabling async discard
Jul 12 00:10:24.306239 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jul 12 00:10:24.344838 initrd-setup-root[825]: cut: /sysroot/etc/passwd: No such file or directory
Jul 12 00:10:24.348958 initrd-setup-root[832]: cut: /sysroot/etc/group: No such file or directory
Jul 12 00:10:24.351831 initrd-setup-root[839]: cut: /sysroot/etc/shadow: No such file or directory
Jul 12 00:10:24.355069 initrd-setup-root[846]: cut: /sysroot/etc/gshadow: No such file or directory
Jul 12 00:10:24.422353 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jul 12 00:10:24.442018 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jul 12 00:10:24.444383 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jul 12 00:10:24.447897 kernel: BTRFS info (device vda6): last unmount of filesystem 4655e16e-9009-448d-b51d-e03cc33fa270
Jul 12 00:10:24.461491 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jul 12 00:10:24.466046 ignition[914]: INFO : Ignition 2.20.0
Jul 12 00:10:24.466046 ignition[914]: INFO : Stage: mount
Jul 12 00:10:24.467328 ignition[914]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 12 00:10:24.467328 ignition[914]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 12 00:10:24.467328 ignition[914]: INFO : mount: mount passed
Jul 12 00:10:24.467328 ignition[914]: INFO : Ignition finished successfully
Jul 12 00:10:24.468351 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jul 12 00:10:24.476985 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jul 12 00:10:25.008425 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jul 12 00:10:25.018064 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jul 12 00:10:25.025580 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (927)
Jul 12 00:10:25.025623 kernel: BTRFS info (device vda6): first mount of filesystem 4655e16e-9009-448d-b51d-e03cc33fa270
Jul 12 00:10:25.025634 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Jul 12 00:10:25.026260 kernel: BTRFS info (device vda6): using free space tree
Jul 12 00:10:25.028893 kernel: BTRFS info (device vda6): auto enabling async discard
Jul 12 00:10:25.030232 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jul 12 00:10:25.046596 ignition[944]: INFO : Ignition 2.20.0
Jul 12 00:10:25.046596 ignition[944]: INFO : Stage: files
Jul 12 00:10:25.047885 ignition[944]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 12 00:10:25.047885 ignition[944]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 12 00:10:25.047885 ignition[944]: DEBUG : files: compiled without relabeling support, skipping
Jul 12 00:10:25.050523 ignition[944]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jul 12 00:10:25.050523 ignition[944]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jul 12 00:10:25.052496 ignition[944]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jul 12 00:10:25.052496 ignition[944]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jul 12 00:10:25.052496 ignition[944]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jul 12 00:10:25.052496 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz"
Jul 12 00:10:25.052496 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-arm64.tar.gz: attempt #1
Jul 12 00:10:25.051165 unknown[944]: wrote ssh authorized keys file for user: core
Jul 12 00:10:25.166441 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Jul 12 00:10:25.581011 systemd-networkd[764]: eth0: Gained IPv6LL
Jul 12 00:10:27.137009 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz"
Jul 12 00:10:27.137009 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Jul 12 00:10:27.139973 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1
Jul 12 00:10:27.477923 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Jul 12 00:10:27.571306 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Jul 12 00:10:27.572759 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Jul 12 00:10:27.572759 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Jul 12 00:10:27.572759 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Jul 12 00:10:27.572759 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jul 12 00:10:27.572759 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jul 12 00:10:27.572759 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jul 12 00:10:27.572759 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jul 12 00:10:27.572759 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jul 12 00:10:27.572759 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jul 12 00:10:27.572759 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jul 12 00:10:27.572759 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
Jul 12 00:10:27.572759 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
Jul 12 00:10:27.572759 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
Jul 12 00:10:27.572759 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-arm64.raw: attempt #1
Jul 12 00:10:27.852762 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Jul 12 00:10:28.352517 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
Jul 12 00:10:28.352517 ignition[944]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Jul 12 00:10:28.355545 ignition[944]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jul 12 00:10:28.355545 ignition[944]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jul 12 00:10:28.355545 ignition[944]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Jul 12 00:10:28.355545 ignition[944]: INFO : files: op(e): [started] processing unit "coreos-metadata.service"
Jul 12 00:10:28.355545 ignition[944]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Jul 12 00:10:28.355545 ignition[944]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Jul 12 00:10:28.355545 ignition[944]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service"
Jul 12 00:10:28.355545 ignition[944]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service"
Jul 12 00:10:28.379974 ignition[944]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service"
Jul 12 00:10:28.383528 ignition[944]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Jul 12 00:10:28.385699 ignition[944]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service"
Jul 12 00:10:28.385699 ignition[944]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service"
Jul 12 00:10:28.385699 ignition[944]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service"
Jul 12 00:10:28.385699 ignition[944]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json"
Jul 12 00:10:28.385699 ignition[944]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jul 12 00:10:28.385699 ignition[944]: INFO : files: files passed
Jul 12 00:10:28.385699 ignition[944]: INFO : Ignition finished successfully
Jul 12 00:10:28.386436 systemd[1]: Finished ignition-files.service - Ignition (files).
Jul 12 00:10:28.398077 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jul 12 00:10:28.400611 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jul 12 00:10:28.402170 systemd[1]: ignition-quench.service: Deactivated successfully.
Jul 12 00:10:28.402264 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jul 12 00:10:28.408660 initrd-setup-root-after-ignition[973]: grep: /sysroot/oem/oem-release: No such file or directory
Jul 12 00:10:28.411372 initrd-setup-root-after-ignition[975]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jul 12 00:10:28.411372 initrd-setup-root-after-ignition[975]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jul 12 00:10:28.413931 initrd-setup-root-after-ignition[979]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jul 12 00:10:28.413758 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jul 12 00:10:28.415455 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jul 12 00:10:28.427109 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jul 12 00:10:28.447469 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jul 12 00:10:28.447579 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jul 12 00:10:28.449507 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jul 12 00:10:28.450304 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jul 12 00:10:28.451836 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jul 12 00:10:28.462063 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jul 12 00:10:28.475274 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jul 12 00:10:28.477795 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jul 12 00:10:28.489594 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jul 12 00:10:28.490680 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 12 00:10:28.492357 systemd[1]: Stopped target timers.target - Timer Units.
Jul 12 00:10:28.493746 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jul 12 00:10:28.493895 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jul 12 00:10:28.495896 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jul 12 00:10:28.497512 systemd[1]: Stopped target basic.target - Basic System.
Jul 12 00:10:28.498766 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jul 12 00:10:28.500078 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jul 12 00:10:28.501504 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jul 12 00:10:28.502921 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jul 12 00:10:28.504424 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jul 12 00:10:28.505949 systemd[1]: Stopped target sysinit.target - System Initialization.
Jul 12 00:10:28.507447 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jul 12 00:10:28.508680 systemd[1]: Stopped target swap.target - Swaps.
Jul 12 00:10:28.509842 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jul 12 00:10:28.509989 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jul 12 00:10:28.511703 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jul 12 00:10:28.513198 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 12 00:10:28.514656 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jul 12 00:10:28.514737 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 12 00:10:28.516280 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jul 12 00:10:28.516406 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jul 12 00:10:28.518734 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jul 12 00:10:28.518862 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jul 12 00:10:28.520461 systemd[1]: Stopped target paths.target - Path Units.
Jul 12 00:10:28.521646 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jul 12 00:10:28.524915 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 12 00:10:28.525975 systemd[1]: Stopped target slices.target - Slice Units.
Jul 12 00:10:28.527585 systemd[1]: Stopped target sockets.target - Socket Units.
Jul 12 00:10:28.528723 systemd[1]: iscsid.socket: Deactivated successfully.
Jul 12 00:10:28.528807 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jul 12 00:10:28.529942 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jul 12 00:10:28.530020 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jul 12 00:10:28.531249 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jul 12 00:10:28.531364 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jul 12 00:10:28.532701 systemd[1]: ignition-files.service: Deactivated successfully.
Jul 12 00:10:28.532802 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jul 12 00:10:28.548088 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jul 12 00:10:28.549525 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jul 12 00:10:28.550183 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jul 12 00:10:28.550301 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 12 00:10:28.551621 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jul 12 00:10:28.551713 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jul 12 00:10:28.557505 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jul 12 00:10:28.558385 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jul 12 00:10:28.561118 ignition[999]: INFO : Ignition 2.20.0
Jul 12 00:10:28.561118 ignition[999]: INFO : Stage: umount
Jul 12 00:10:28.561118 ignition[999]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 12 00:10:28.561118 ignition[999]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 12 00:10:28.561118 ignition[999]: INFO : umount: umount passed
Jul 12 00:10:28.566238 ignition[999]: INFO : Ignition finished successfully
Jul 12 00:10:28.562946 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jul 12 00:10:28.564264 systemd[1]: ignition-mount.service: Deactivated successfully.
Jul 12 00:10:28.565936 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jul 12 00:10:28.567297 systemd[1]: Stopped target network.target - Network.
Jul 12 00:10:28.568220 systemd[1]: ignition-disks.service: Deactivated successfully.
Jul 12 00:10:28.568305 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jul 12 00:10:28.569584 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jul 12 00:10:28.569643 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jul 12 00:10:28.571784 systemd[1]: ignition-setup.service: Deactivated successfully.
Jul 12 00:10:28.571848 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Jul 12 00:10:28.573060 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Jul 12 00:10:28.573100 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Jul 12 00:10:28.575084 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Jul 12 00:10:28.577112 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Jul 12 00:10:28.579643 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jul 12 00:10:28.579747 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Jul 12 00:10:28.583725 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Jul 12 00:10:28.584041 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jul 12 00:10:28.584082 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Jul 12 00:10:28.586516 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Jul 12 00:10:28.586699 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jul 12 00:10:28.586790 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Jul 12 00:10:28.589012 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Jul 12 00:10:28.589482 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jul 12 00:10:28.589559 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Jul 12 00:10:28.598019 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Jul 12 00:10:28.599498 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jul 12 00:10:28.599571 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jul 12 00:10:28.601297 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jul 12 00:10:28.601343 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jul 12 00:10:28.603646 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jul 12 00:10:28.603694 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Jul 12 00:10:28.604691 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 12 00:10:28.608410 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Jul 12 00:10:28.615727 systemd[1]: network-cleanup.service: Deactivated successfully.
Jul 12 00:10:28.615955 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Jul 12 00:10:28.625602 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jul 12 00:10:28.625746 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 12 00:10:28.628205 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jul 12 00:10:28.628253 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Jul 12 00:10:28.629178 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jul 12 00:10:28.629215 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 12 00:10:28.630602 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jul 12 00:10:28.630648 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Jul 12 00:10:28.633115 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jul 12 00:10:28.633164 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Jul 12 00:10:28.635212 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jul 12 00:10:28.635265 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 12 00:10:28.642034 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Jul 12 00:10:28.642785 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jul 12 00:10:28.642849 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 12 00:10:28.645164 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Jul 12 00:10:28.645205 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jul 12 00:10:28.646819 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jul 12 00:10:28.646866 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 12 00:10:28.648480 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jul 12 00:10:28.648524 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jul 12 00:10:28.651217 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jul 12 00:10:28.651305 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Jul 12 00:10:28.652550 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jul 12 00:10:28.652649 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Jul 12 00:10:28.654615 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Jul 12 00:10:28.655934 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jul 12 00:10:28.655995 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Jul 12 00:10:28.663041 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Jul 12 00:10:28.669776 systemd[1]: Switching root.
Jul 12 00:10:28.701989 systemd-journald[238]: Journal stopped
Jul 12 00:10:29.500183 systemd-journald[238]: Received SIGTERM from PID 1 (systemd).
Jul 12 00:10:29.500243 kernel: SELinux: policy capability network_peer_controls=1
Jul 12 00:10:29.500255 kernel: SELinux: policy capability open_perms=1
Jul 12 00:10:29.500269 kernel: SELinux: policy capability extended_socket_class=1
Jul 12 00:10:29.500280 kernel: SELinux: policy capability always_check_network=0
Jul 12 00:10:29.500291 kernel: SELinux: policy capability cgroup_seclabel=1
Jul 12 00:10:29.500303 kernel: SELinux: policy capability nnp_nosuid_transition=1
Jul 12 00:10:29.500314 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Jul 12 00:10:29.500328 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Jul 12 00:10:29.500338 kernel: audit: type=1403 audit(1752279028.862:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Jul 12 00:10:29.500349 systemd[1]: Successfully loaded SELinux policy in 31.951ms.
Jul 12 00:10:29.500368 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 9.587ms.
Jul 12 00:10:29.500380 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Jul 12 00:10:29.500391 systemd[1]: Detected virtualization kvm.
Jul 12 00:10:29.500401 systemd[1]: Detected architecture arm64.
Jul 12 00:10:29.500411 systemd[1]: Detected first boot.
Jul 12 00:10:29.500422 systemd[1]: Initializing machine ID from VM UUID.
Jul 12 00:10:29.500433 zram_generator::config[1047]: No configuration found.
Jul 12 00:10:29.500445 kernel: NET: Registered PF_VSOCK protocol family
Jul 12 00:10:29.500454 systemd[1]: Populated /etc with preset unit settings.
Jul 12 00:10:29.500466 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Jul 12 00:10:29.500476 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Jul 12 00:10:29.500487 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Jul 12 00:10:29.500497 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Jul 12 00:10:29.500508 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Jul 12 00:10:29.500520 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Jul 12 00:10:29.500531 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Jul 12 00:10:29.500541 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Jul 12 00:10:29.500551 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Jul 12 00:10:29.500562 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Jul 12 00:10:29.500573 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Jul 12 00:10:29.500583 systemd[1]: Created slice user.slice - User and Session Slice.
Jul 12 00:10:29.500595 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 12 00:10:29.500605 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 12 00:10:29.500618 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Jul 12 00:10:29.500628 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Jul 12 00:10:29.500639 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Jul 12 00:10:29.500650 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jul 12 00:10:29.500660 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0...
Jul 12 00:10:29.500670 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 12 00:10:29.500681 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Jul 12 00:10:29.500694 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Jul 12 00:10:29.500704 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Jul 12 00:10:29.500715 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Jul 12 00:10:29.500725 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 12 00:10:29.500736 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jul 12 00:10:29.500746 systemd[1]: Reached target slices.target - Slice Units.
Jul 12 00:10:29.500756 systemd[1]: Reached target swap.target - Swaps.
Jul 12 00:10:29.500767 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Jul 12 00:10:29.500777 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Jul 12 00:10:29.500789 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Jul 12 00:10:29.500799 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jul 12 00:10:29.500810 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jul 12 00:10:29.500821 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 12 00:10:29.500838 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Jul 12 00:10:29.500852 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Jul 12 00:10:29.500862 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Jul 12 00:10:29.500873 systemd[1]: Mounting media.mount - External Media Directory...
Jul 12 00:10:29.500891 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Jul 12 00:10:29.500904 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Jul 12 00:10:29.500915 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Jul 12 00:10:29.500926 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jul 12 00:10:29.500937 systemd[1]: Reached target machines.target - Containers.
Jul 12 00:10:29.500947 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Jul 12 00:10:29.500958 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 12 00:10:29.500969 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jul 12 00:10:29.500980 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Jul 12 00:10:29.500992 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jul 12 00:10:29.501004 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jul 12 00:10:29.501014 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jul 12 00:10:29.501025 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Jul 12 00:10:29.501036 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jul 12 00:10:29.501046 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Jul 12 00:10:29.501057 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Jul 12 00:10:29.501069 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Jul 12 00:10:29.501080 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Jul 12 00:10:29.501092 systemd[1]: Stopped systemd-fsck-usr.service.
Jul 12 00:10:29.501103 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jul 12 00:10:29.501114 kernel: fuse: init (API version 7.39)
Jul 12 00:10:29.501123 systemd[1]: Starting systemd-journald.service - Journal Service...
Jul 12 00:10:29.501133 kernel: ACPI: bus type drm_connector registered
Jul 12 00:10:29.501143 kernel: loop: module loaded
Jul 12 00:10:29.501153 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jul 12 00:10:29.501164 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jul 12 00:10:29.501174 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Jul 12 00:10:29.501186 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Jul 12 00:10:29.501197 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jul 12 00:10:29.501208 systemd[1]: verity-setup.service: Deactivated successfully.
Jul 12 00:10:29.501219 systemd[1]: Stopped verity-setup.service.
Jul 12 00:10:29.501249 systemd-journald[1112]: Collecting audit messages is disabled.
Jul 12 00:10:29.501274 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Jul 12 00:10:29.501285 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Jul 12 00:10:29.501296 systemd-journald[1112]: Journal started
Jul 12 00:10:29.501319 systemd-journald[1112]: Runtime Journal (/run/log/journal/2dc6a3f4c9514a6fbb9d2dcc90b300b1) is 5.9M, max 47.3M, 41.4M free.
Jul 12 00:10:29.297247 systemd[1]: Queued start job for default target multi-user.target.
Jul 12 00:10:29.313847 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Jul 12 00:10:29.314263 systemd[1]: systemd-journald.service: Deactivated successfully.
Jul 12 00:10:29.504464 systemd[1]: Started systemd-journald.service - Journal Service.
Jul 12 00:10:29.505283 systemd[1]: Mounted media.mount - External Media Directory.
Jul 12 00:10:29.506266 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Jul 12 00:10:29.507296 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Jul 12 00:10:29.508353 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Jul 12 00:10:29.509471 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Jul 12 00:10:29.511917 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 12 00:10:29.513188 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jul 12 00:10:29.513371 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Jul 12 00:10:29.514603 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 12 00:10:29.514792 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jul 12 00:10:29.516008 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jul 12 00:10:29.516171 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jul 12 00:10:29.517273 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 12 00:10:29.517442 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jul 12 00:10:29.518648 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jul 12 00:10:29.518806 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Jul 12 00:10:29.520003 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 12 00:10:29.520185 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jul 12 00:10:29.521415 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jul 12 00:10:29.524315 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jul 12 00:10:29.525597 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Jul 12 00:10:29.526983 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Jul 12 00:10:29.539841 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jul 12 00:10:29.546001 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Jul 12 00:10:29.547959 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Jul 12 00:10:29.548975 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Jul 12 00:10:29.549027 systemd[1]: Reached target local-fs.target - Local File Systems.
Jul 12 00:10:29.550739 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Jul 12 00:10:29.552940 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Jul 12 00:10:29.554904 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Jul 12 00:10:29.555801 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 12 00:10:29.557231 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Jul 12 00:10:29.562063 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Jul 12 00:10:29.563702 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jul 12 00:10:29.564757 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Jul 12 00:10:29.565888 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jul 12 00:10:29.566901 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jul 12 00:10:29.574608 systemd-journald[1112]: Time spent on flushing to /var/log/journal/2dc6a3f4c9514a6fbb9d2dcc90b300b1 is 22.174ms for 869 entries.
Jul 12 00:10:29.574608 systemd-journald[1112]: System Journal (/var/log/journal/2dc6a3f4c9514a6fbb9d2dcc90b300b1) is 8M, max 195.6M, 187.6M free.
Jul 12 00:10:29.610950 systemd-journald[1112]: Received client request to flush runtime journal.
Jul 12 00:10:29.611008 kernel: loop0: detected capacity change from 0 to 113512
Jul 12 00:10:29.572177 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Jul 12 00:10:29.575705 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jul 12 00:10:29.579292 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 12 00:10:29.580461 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Jul 12 00:10:29.583075 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Jul 12 00:10:29.584555 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Jul 12 00:10:29.590664 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Jul 12 00:10:29.593670 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Jul 12 00:10:29.603067 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Jul 12 00:10:29.606384 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Jul 12 00:10:29.609920 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jul 12 00:10:29.612816 systemd-tmpfiles[1166]: ACLs are not supported, ignoring.
Jul 12 00:10:29.612840 systemd-tmpfiles[1166]: ACLs are not supported, ignoring.
Jul 12 00:10:29.614418 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Jul 12 00:10:29.617906 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Jul 12 00:10:29.622198 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jul 12 00:10:29.632114 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Jul 12 00:10:29.637909 kernel: loop1: detected capacity change from 0 to 207008
Jul 12 00:10:29.637374 udevadm[1175]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Jul 12 00:10:29.639928 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Jul 12 00:10:29.658674 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Jul 12 00:10:29.671396 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jul 12 00:10:29.674900 kernel: loop2: detected capacity change from 0 to 123192
Jul 12 00:10:29.684219 systemd-tmpfiles[1188]: ACLs are not supported, ignoring.
Jul 12 00:10:29.684549 systemd-tmpfiles[1188]: ACLs are not supported, ignoring.
Jul 12 00:10:29.689160 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 12 00:10:29.721983 kernel: loop3: detected capacity change from 0 to 113512
Jul 12 00:10:29.727019 kernel: loop4: detected capacity change from 0 to 207008
Jul 12 00:10:29.733136 kernel: loop5: detected capacity change from 0 to 123192
Jul 12 00:10:29.736145 (sd-merge)[1193]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Jul 12 00:10:29.736537 (sd-merge)[1193]: Merged extensions into '/usr'.
Jul 12 00:10:29.742399 systemd[1]: Reload requested from client PID 1164 ('systemd-sysext') (unit systemd-sysext.service)...
Jul 12 00:10:29.742415 systemd[1]: Reloading...
Jul 12 00:10:29.802947 zram_generator::config[1218]: No configuration found.
Jul 12 00:10:29.873124 ldconfig[1159]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Jul 12 00:10:29.893020 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 12 00:10:29.943399 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Jul 12 00:10:29.943510 systemd[1]: Reloading finished in 200 ms.
Jul 12 00:10:29.960786 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Jul 12 00:10:29.962164 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Jul 12 00:10:29.976334 systemd[1]: Starting ensure-sysext.service...
Jul 12 00:10:29.978066 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jul 12 00:10:29.990524 systemd[1]: Reload requested from client PID 1256 ('systemctl') (unit ensure-sysext.service)...
Jul 12 00:10:29.990542 systemd[1]: Reloading...
Jul 12 00:10:29.996272 systemd-tmpfiles[1257]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Jul 12 00:10:29.996478 systemd-tmpfiles[1257]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Jul 12 00:10:29.997216 systemd-tmpfiles[1257]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Jul 12 00:10:29.997423 systemd-tmpfiles[1257]: ACLs are not supported, ignoring.
Jul 12 00:10:29.997475 systemd-tmpfiles[1257]: ACLs are not supported, ignoring.
Jul 12 00:10:30.000531 systemd-tmpfiles[1257]: Detected autofs mount point /boot during canonicalization of boot.
Jul 12 00:10:30.000545 systemd-tmpfiles[1257]: Skipping /boot
Jul 12 00:10:30.010025 systemd-tmpfiles[1257]: Detected autofs mount point /boot during canonicalization of boot.
Jul 12 00:10:30.010042 systemd-tmpfiles[1257]: Skipping /boot
Jul 12 00:10:30.041901 zram_generator::config[1289]: No configuration found.
Jul 12 00:10:30.124632 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 12 00:10:30.174811 systemd[1]: Reloading finished in 183 ms.
Jul 12 00:10:30.190709 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Jul 12 00:10:30.209926 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jul 12 00:10:30.217706 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Jul 12 00:10:30.220930 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Jul 12 00:10:30.223175 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Jul 12 00:10:30.227186 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jul 12 00:10:30.233731 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 12 00:10:30.237777 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Jul 12 00:10:30.242151 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 12 00:10:30.243480 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jul 12 00:10:30.245971 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jul 12 00:10:30.251583 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jul 12 00:10:30.252751 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 12 00:10:30.252900 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jul 12 00:10:30.255591 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Jul 12 00:10:30.257535 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 12 00:10:30.258952 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jul 12 00:10:30.261294 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Jul 12 00:10:30.263204 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 12 00:10:30.263364 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jul 12 00:10:30.264638 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 12 00:10:30.264789 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jul 12 00:10:30.278425 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 12 00:10:30.279200 systemd-udevd[1328]: Using default interface naming scheme 'v255'.
Jul 12 00:10:30.291256 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jul 12 00:10:30.296268 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jul 12 00:10:30.300325 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jul 12 00:10:30.301334 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 12 00:10:30.301510 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jul 12 00:10:30.304208 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Jul 12 00:10:30.309574 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Jul 12 00:10:30.312470 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 12 00:10:30.312654 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jul 12 00:10:30.314119 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 12 00:10:30.314347 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jul 12 00:10:30.315835 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 12 00:10:30.316025 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jul 12 00:10:30.321220 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Jul 12 00:10:30.324621 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Jul 12 00:10:30.326621 augenrules[1363]: No rules
Jul 12 00:10:30.329533 systemd[1]: audit-rules.service: Deactivated successfully.
Jul 12 00:10:30.329722 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Jul 12 00:10:30.334139 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Jul 12 00:10:30.335606 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 12 00:10:30.343225 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 12 00:10:30.353271 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jul 12 00:10:30.356078 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jul 12 00:10:30.359060 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jul 12 00:10:30.360974 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jul 12 00:10:30.362094 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 12 00:10:30.362142 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jul 12 00:10:30.364088 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jul 12 00:10:30.367291 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jul 12 00:10:30.369229 systemd[1]: Finished ensure-sysext.service.
Jul 12 00:10:30.370542 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 12 00:10:30.370704 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jul 12 00:10:30.371918 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jul 12 00:10:30.374099 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jul 12 00:10:30.375203 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 12 00:10:30.375355 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jul 12 00:10:30.381764 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 12 00:10:30.382043 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jul 12 00:10:30.393804 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jul 12 00:10:30.393912 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jul 12 00:10:30.396348 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Jul 12 00:10:30.398386 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped.
Jul 12 00:10:30.412987 systemd-resolved[1325]: Positive Trust Anchors:
Jul 12 00:10:30.413005 systemd-resolved[1325]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jul 12 00:10:30.413042 systemd-resolved[1325]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jul 12 00:10:30.419674 systemd-resolved[1325]: Defaulting to hostname 'linux'.
Jul 12 00:10:30.421347 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jul 12 00:10:30.422485 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jul 12 00:10:30.445914 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (1389)
Jul 12 00:10:30.473164 systemd-networkd[1399]: lo: Link UP
Jul 12 00:10:30.473172 systemd-networkd[1399]: lo: Gained carrier
Jul 12 00:10:30.474150 systemd-networkd[1399]: Enumeration completed
Jul 12 00:10:30.474256 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jul 12 00:10:30.474645 systemd-networkd[1399]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 12 00:10:30.474654 systemd-networkd[1399]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jul 12 00:10:30.475860 systemd[1]: Reached target network.target - Network.
Jul 12 00:10:30.478845 systemd-networkd[1399]: eth0: Link UP
Jul 12 00:10:30.478854 systemd-networkd[1399]: eth0: Gained carrier
Jul 12 00:10:30.478890 systemd-networkd[1399]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 12 00:10:30.482308 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Jul 12 00:10:30.484683 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Jul 12 00:10:30.485773 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Jul 12 00:10:30.487448 systemd[1]: Reached target time-set.target - System Time Set.
Jul 12 00:10:30.492078 systemd-networkd[1399]: eth0: DHCPv4 address 10.0.0.137/16, gateway 10.0.0.1 acquired from 10.0.0.1
Jul 12 00:10:30.492365 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jul 12 00:10:30.493446 systemd-timesyncd[1406]: Network configuration changed, trying to establish connection.
Jul 12 00:10:30.495359 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Jul 12 00:10:30.497027 systemd-timesyncd[1406]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Jul 12 00:10:30.497083 systemd-timesyncd[1406]: Initial clock synchronization to Sat 2025-07-12 00:10:30.779645 UTC.
Jul 12 00:10:30.498617 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
Jul 12 00:10:30.514621 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Jul 12 00:10:30.558184 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 12 00:10:30.573407 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Jul 12 00:10:30.576946 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Jul 12 00:10:30.589967 lvm[1427]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jul 12 00:10:30.605481 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 12 00:10:30.624496 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Jul 12 00:10:30.625677 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jul 12 00:10:30.626580 systemd[1]: Reached target sysinit.target - System Initialization.
Jul 12 00:10:30.627522 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Jul 12 00:10:30.628439 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Jul 12 00:10:30.629500 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Jul 12 00:10:30.630370 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Jul 12 00:10:30.631330 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Jul 12 00:10:30.632251 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Jul 12 00:10:30.632283 systemd[1]: Reached target paths.target - Path Units.
Jul 12 00:10:30.632918 systemd[1]: Reached target timers.target - Timer Units.
Jul 12 00:10:30.634504 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Jul 12 00:10:30.636729 systemd[1]: Starting docker.socket - Docker Socket for the API...
Jul 12 00:10:30.640367 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
Jul 12 00:10:30.641480 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
Jul 12 00:10:30.642469 systemd[1]: Reached target ssh-access.target - SSH Access Available.
Jul 12 00:10:30.648925 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Jul 12 00:10:30.650459 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
Jul 12 00:10:30.652751 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Jul 12 00:10:30.654416 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Jul 12 00:10:30.655416 systemd[1]: Reached target sockets.target - Socket Units.
Jul 12 00:10:30.656181 systemd[1]: Reached target basic.target - Basic System.
Jul 12 00:10:30.656867 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Jul 12 00:10:30.656910 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Jul 12 00:10:30.657988 systemd[1]: Starting containerd.service - containerd container runtime...
Jul 12 00:10:30.659802 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Jul 12 00:10:30.662895 lvm[1434]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jul 12 00:10:30.663076 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Jul 12 00:10:30.669075 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Jul 12 00:10:30.670302 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Jul 12 00:10:30.673111 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Jul 12 00:10:30.678203 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Jul 12 00:10:30.681416 jq[1437]: false
Jul 12 00:10:30.684759 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Jul 12 00:10:30.686987 extend-filesystems[1438]: Found loop3
Jul 12 00:10:30.686987 extend-filesystems[1438]: Found loop4
Jul 12 00:10:30.686987 extend-filesystems[1438]: Found loop5
Jul 12 00:10:30.686987 extend-filesystems[1438]: Found vda
Jul 12 00:10:30.686987 extend-filesystems[1438]: Found vda1
Jul 12 00:10:30.686987 extend-filesystems[1438]: Found vda2
Jul 12 00:10:30.692474 extend-filesystems[1438]: Found vda3
Jul 12 00:10:30.692474 extend-filesystems[1438]: Found usr
Jul 12 00:10:30.692474 extend-filesystems[1438]: Found vda4
Jul 12 00:10:30.692474 extend-filesystems[1438]: Found vda6
Jul 12 00:10:30.692474 extend-filesystems[1438]: Found vda7
Jul 12 00:10:30.692474 extend-filesystems[1438]: Found vda9
Jul 12 00:10:30.692474 extend-filesystems[1438]: Checking size of /dev/vda9
Jul 12 00:10:30.690107 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Jul 12 00:10:30.697132 systemd[1]: Starting systemd-logind.service - User Login Management...
Jul 12 00:10:30.700127 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Jul 12 00:10:30.700802 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Jul 12 00:10:30.701695 systemd[1]: Starting update-engine.service - Update Engine...
Jul 12 00:10:30.705520 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Jul 12 00:10:30.707560 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Jul 12 00:10:30.709814 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Jul 12 00:10:30.710991 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Jul 12 00:10:30.712066 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Jul 12 00:10:30.712268 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Jul 12 00:10:30.715831 jq[1455]: true
Jul 12 00:10:30.717751 systemd[1]: motdgen.service: Deactivated successfully.
Jul 12 00:10:30.718017 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Jul 12 00:10:30.725029 extend-filesystems[1438]: Resized partition /dev/vda9
Jul 12 00:10:30.729773 dbus-daemon[1436]: [system] SELinux support is enabled
Jul 12 00:10:30.730091 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Jul 12 00:10:30.735477 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Jul 12 00:10:30.735753 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Jul 12 00:10:30.737361 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Jul 12 00:10:30.737393 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Jul 12 00:10:30.737681 (ntainerd)[1467]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Jul 12 00:10:30.739793 jq[1460]: true
Jul 12 00:10:30.752884 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (1384)
Jul 12 00:10:30.757670 extend-filesystems[1470]: resize2fs 1.47.1 (20-May-2024)
Jul 12 00:10:30.763905 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
Jul 12 00:10:30.780444 tar[1458]: linux-arm64/LICENSE
Jul 12 00:10:30.780727 tar[1458]: linux-arm64/helm
Jul 12 00:10:30.782617 update_engine[1454]: I20250712 00:10:30.782457 1454 main.cc:92] Flatcar Update Engine starting
Jul 12 00:10:30.789128 update_engine[1454]: I20250712 00:10:30.789072 1454 update_check_scheduler.cc:74] Next update check in 7m42s
Jul 12 00:10:30.789123 systemd[1]: Started update-engine.service - Update Engine.
Jul 12 00:10:30.802892 kernel: EXT4-fs (vda9): resized filesystem to 1864699
Jul 12 00:10:30.804143 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Jul 12 00:10:30.835682 systemd-logind[1448]: Watching system buttons on /dev/input/event0 (Power Button)
Jul 12 00:10:30.838850 systemd-logind[1448]: New seat seat0.
Jul 12 00:10:30.841176 systemd[1]: Started systemd-logind.service - User Login Management.
Jul 12 00:10:30.842158 extend-filesystems[1470]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Jul 12 00:10:30.842158 extend-filesystems[1470]: old_desc_blocks = 1, new_desc_blocks = 1
Jul 12 00:10:30.842158 extend-filesystems[1470]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
Jul 12 00:10:30.848473 extend-filesystems[1438]: Resized filesystem in /dev/vda9
Jul 12 00:10:30.849194 bash[1489]: Updated "/home/core/.ssh/authorized_keys"
Jul 12 00:10:30.844380 systemd[1]: extend-filesystems.service: Deactivated successfully.
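The extend-filesystems entries above show an online ext4 grow: resize2fs enlarges the mounted root on /dev/vda9 from 553472 to 1864699 4k blocks without unmounting it. A minimal sketch of the size arithmetic (Python; the block counts and 4 KiB block size are taken from the log, nothing else is assumed):

```python
# Block counts reported by EXT4-fs/resize2fs in the log above.
BLOCK_SIZE = 4096   # ext4 "(4k)" block size, per the resize2fs output
OLD_BLOCKS = 553472
NEW_BLOCKS = 1864699

def gib(blocks: int, block_size: int = BLOCK_SIZE) -> float:
    """Convert an ext4 block count to GiB."""
    return blocks * block_size / 2**30

# The root filesystem grows from roughly 2.11 GiB to roughly 7.11 GiB.
print(f"before: {gib(OLD_BLOCKS):.2f} GiB, after: {gib(NEW_BLOCKS):.2f} GiB")
```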
Jul 12 00:10:30.844587 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Jul 12 00:10:30.851288 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Jul 12 00:10:30.853124 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Jul 12 00:10:30.867533 locksmithd[1490]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Jul 12 00:10:31.007892 containerd[1467]: time="2025-07-12T00:10:31.007233698Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23
Jul 12 00:10:31.038818 containerd[1467]: time="2025-07-12T00:10:31.038680653Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Jul 12 00:10:31.041637 containerd[1467]: time="2025-07-12T00:10:31.040287040Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.96-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Jul 12 00:10:31.041637 containerd[1467]: time="2025-07-12T00:10:31.041519754Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Jul 12 00:10:31.041637 containerd[1467]: time="2025-07-12T00:10:31.041558848Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Jul 12 00:10:31.042021 containerd[1467]: time="2025-07-12T00:10:31.041998038Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Jul 12 00:10:31.042198 containerd[1467]: time="2025-07-12T00:10:31.042079664Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Jul 12 00:10:31.042300 containerd[1467]: time="2025-07-12T00:10:31.042276751Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Jul 12 00:10:31.042455 containerd[1467]: time="2025-07-12T00:10:31.042358874Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Jul 12 00:10:31.043442 containerd[1467]: time="2025-07-12T00:10:31.042751805Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jul 12 00:10:31.043442 containerd[1467]: time="2025-07-12T00:10:31.042773671Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Jul 12 00:10:31.043442 containerd[1467]: time="2025-07-12T00:10:31.042787876Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Jul 12 00:10:31.043442 containerd[1467]: time="2025-07-12T00:10:31.042797774Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Jul 12 00:10:31.043442 containerd[1467]: time="2025-07-12T00:10:31.042887268Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Jul 12 00:10:31.043442 containerd[1467]: time="2025-07-12T00:10:31.043113427Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Jul 12 00:10:31.043442 containerd[1467]: time="2025-07-12T00:10:31.043243756Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jul 12 00:10:31.043442 containerd[1467]: time="2025-07-12T00:10:31.043258250Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Jul 12 00:10:31.043442 containerd[1467]: time="2025-07-12T00:10:31.043359092Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Jul 12 00:10:31.043442 containerd[1467]: time="2025-07-12T00:10:31.043403363Z" level=info msg="metadata content store policy set" policy=shared
Jul 12 00:10:31.047734 containerd[1467]: time="2025-07-12T00:10:31.047705478Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Jul 12 00:10:31.047937 containerd[1467]: time="2025-07-12T00:10:31.047903476Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Jul 12 00:10:31.048034 containerd[1467]: time="2025-07-12T00:10:31.048020221Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Jul 12 00:10:31.048161 containerd[1467]: time="2025-07-12T00:10:31.048146076Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Jul 12 00:10:31.048247 containerd[1467]: time="2025-07-12T00:10:31.048231512Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Jul 12 00:10:31.048503 containerd[1467]: time="2025-07-12T00:10:31.048482644Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Jul 12 00:10:31.048966 containerd[1467]: time="2025-07-12T00:10:31.048944735Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Jul 12 00:10:31.049218 containerd[1467]: time="2025-07-12T00:10:31.049198062Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Jul 12 00:10:31.049367 containerd[1467]: time="2025-07-12T00:10:31.049348848Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Jul 12 00:10:31.049433 containerd[1467]: time="2025-07-12T00:10:31.049419375Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Jul 12 00:10:31.050485 containerd[1467]: time="2025-07-12T00:10:31.049488536Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Jul 12 00:10:31.050485 containerd[1467]: time="2025-07-12T00:10:31.049510485Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Jul 12 00:10:31.050485 containerd[1467]: time="2025-07-12T00:10:31.049524980Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Jul 12 00:10:31.050485 containerd[1467]: time="2025-07-12T00:10:31.049551484Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Jul 12 00:10:31.050485 containerd[1467]: time="2025-07-12T00:10:31.049567718Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Jul 12 00:10:31.050485 containerd[1467]: time="2025-07-12T00:10:31.049582462Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Jul 12 00:10:31.050485 containerd[1467]: time="2025-07-12T00:10:31.049595673Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Jul 12 00:10:31.050485 containerd[1467]: time="2025-07-12T00:10:31.049607641Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Jul 12 00:10:31.050485 containerd[1467]: time="2025-07-12T00:10:31.049630915Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Jul 12 00:10:31.050485 containerd[1467]: time="2025-07-12T00:10:31.049644748Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Jul 12 00:10:31.050485 containerd[1467]: time="2025-07-12T00:10:31.049657296Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Jul 12 00:10:31.050485 containerd[1467]: time="2025-07-12T00:10:31.049669513Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Jul 12 00:10:31.050485 containerd[1467]: time="2025-07-12T00:10:31.049681357Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Jul 12 00:10:31.050485 containerd[1467]: time="2025-07-12T00:10:31.049695023Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Jul 12 00:10:31.050771 containerd[1467]: time="2025-07-12T00:10:31.049706743Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Jul 12 00:10:31.050771 containerd[1467]: time="2025-07-12T00:10:31.049719623Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Jul 12 00:10:31.050771 containerd[1467]: time="2025-07-12T00:10:31.049733331Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Jul 12 00:10:31.050771 containerd[1467]: time="2025-07-12T00:10:31.049749234Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Jul 12 00:10:31.050771 containerd[1467]: time="2025-07-12T00:10:31.049762942Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Jul 12 00:10:31.050771 containerd[1467]: time="2025-07-12T00:10:31.049775407Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Jul 12 00:10:31.050771 containerd[1467]: time="2025-07-12T00:10:31.049788038Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Jul 12 00:10:31.050771 containerd[1467]: time="2025-07-12T00:10:31.049805183Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Jul 12 00:10:31.050771 containerd[1467]: time="2025-07-12T00:10:31.049827671Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Jul 12 00:10:31.050771 containerd[1467]: time="2025-07-12T00:10:31.049841379Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Jul 12 00:10:31.050771 containerd[1467]: time="2025-07-12T00:10:31.049852850Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Jul 12 00:10:31.051466 containerd[1467]: time="2025-07-12T00:10:31.051429876Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Jul 12 00:10:31.051582 containerd[1467]: time="2025-07-12T00:10:31.051562813Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Jul 12 00:10:31.051635 containerd[1467]: time="2025-07-12T00:10:31.051623235Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Jul 12 00:10:31.051693 containerd[1467]: time="2025-07-12T00:10:31.051679061Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Jul 12 00:10:31.051741 containerd[1467]: time="2025-07-12T00:10:31.051728840Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Jul 12 00:10:31.051808 containerd[1467]: time="2025-07-12T00:10:31.051795225Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Jul 12 00:10:31.051858 containerd[1467]: time="2025-07-12T00:10:31.051847158Z" level=info msg="NRI interface is disabled by configuration."
Jul 12 00:10:31.052054 containerd[1467]: time="2025-07-12T00:10:31.051898014Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Jul 12 00:10:31.052436 containerd[1467]: time="2025-07-12T00:10:31.052382551Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Jul 12 00:10:31.052639 containerd[1467]: time="2025-07-12T00:10:31.052617490Z" level=info msg="Connect containerd service"
Jul 12 00:10:31.052732 containerd[1467]: time="2025-07-12T00:10:31.052718787Z" level=info msg="using legacy CRI server"
Jul 12 00:10:31.052789 containerd[1467]: time="2025-07-12T00:10:31.052776518Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Jul 12 00:10:31.053241 containerd[1467]: time="2025-07-12T00:10:31.053219683Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Jul 12 00:10:31.054329 containerd[1467]: time="2025-07-12T00:10:31.054297552Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jul 12 00:10:31.054531 containerd[1467]: time="2025-07-12T00:10:31.054502383Z" level=info msg="Start subscribing containerd event"
Jul 12 00:10:31.055571 containerd[1467]: time="2025-07-12T00:10:31.055551387Z" level=info msg="Start recovering state"
Jul 12 00:10:31.055736 containerd[1467]: time="2025-07-12T00:10:31.055719153Z" level=info msg="Start event monitor"
Jul 12 00:10:31.055794 containerd[1467]: time="2025-07-12T00:10:31.055782143Z" level=info msg="Start snapshots syncer"
Jul 12 00:10:31.055843 containerd[1467]: time="2025-07-12T00:10:31.055831425Z" level=info msg="Start cni network conf syncer for default"
Jul 12 00:10:31.055888 containerd[1467]: time="2025-07-12T00:10:31.055876938Z" level=info msg="Start streaming server"
Jul 12 00:10:31.056503 containerd[1467]: time="2025-07-12T00:10:31.055513618Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Jul 12 00:10:31.057039 containerd[1467]: time="2025-07-12T00:10:31.057019205Z" level=info msg=serving... address=/run/containerd/containerd.sock
Jul 12 00:10:31.057307 systemd[1]: Started containerd.service - containerd container runtime.
Jul 12 00:10:31.059064 containerd[1467]: time="2025-07-12T00:10:31.058937808Z" level=info msg="containerd successfully booted in 0.054883s"
Jul 12 00:10:31.201987 tar[1458]: linux-arm64/README.md
Jul 12 00:10:31.212618 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Jul 12 00:10:31.226736 sshd_keygen[1457]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Jul 12 00:10:31.249013 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Jul 12 00:10:31.268280 systemd[1]: Starting issuegen.service - Generate /run/issue...
Jul 12 00:10:31.274563 systemd[1]: issuegen.service: Deactivated successfully.
Jul 12 00:10:31.274833 systemd[1]: Finished issuegen.service - Generate /run/issue.
Jul 12 00:10:31.279191 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Jul 12 00:10:31.290595 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Jul 12 00:10:31.293929 systemd[1]: Started getty@tty1.service - Getty on tty1.
Jul 12 00:10:31.296107 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0.
Jul 12 00:10:31.297254 systemd[1]: Reached target getty.target - Login Prompts.
Jul 12 00:10:32.429467 systemd-networkd[1399]: eth0: Gained IPv6LL
Jul 12 00:10:32.431779 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Jul 12 00:10:32.433338 systemd[1]: Reached target network-online.target - Network is Online.
Jul 12 00:10:32.448138 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent...
Jul 12 00:10:32.450327 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 12 00:10:32.452136 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Jul 12 00:10:32.467239 systemd[1]: coreos-metadata.service: Deactivated successfully.
Jul 12 00:10:32.467537 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent.
Jul 12 00:10:32.469875 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Jul 12 00:10:32.474528 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Jul 12 00:10:33.026402 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 12 00:10:33.027704 systemd[1]: Reached target multi-user.target - Multi-User System.
Jul 12 00:10:33.029885 (kubelet)[1550]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jul 12 00:10:33.030135 systemd[1]: Startup finished in 522ms (kernel) + 7.172s (initrd) + 4.201s (userspace) = 11.896s.
Jul 12 00:10:33.449748 kubelet[1550]: E0712 00:10:33.449608 1550 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jul 12 00:10:33.451953 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jul 12 00:10:33.452105 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jul 12 00:10:33.454010 systemd[1]: kubelet.service: Consumed 799ms CPU time, 259.9M memory peak.
Jul 12 00:10:34.666271 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Jul 12 00:10:34.667396 systemd[1]: Started sshd@0-10.0.0.137:22-10.0.0.1:54582.service - OpenSSH per-connection server daemon (10.0.0.1:54582).
Jul 12 00:10:34.730638 sshd[1564]: Accepted publickey for core from 10.0.0.1 port 54582 ssh2: RSA SHA256:n0+tZ6jsx4rUqm31x+f+3vpltD7bP0WQQLUre7Cw/6U
Jul 12 00:10:34.732451 sshd-session[1564]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 12 00:10:34.742278 systemd-logind[1448]: New session 1 of user core.
Jul 12 00:10:34.743261 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Jul 12 00:10:34.758129 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Jul 12 00:10:34.769026 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
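The kubelet failure above is a configuration gap, not a crash: /var/lib/kubelet/config.yaml is absent (on kubeadm-managed nodes that file is normally written during node bootstrap), so the unit exits with status=1 and systemd records the failure. A minimal, hypothetical parser sketch for pulling the source process and message out of journal lines shaped like the ones in this log (`Mon DD HH:MM:SS.ssssss source[pid]: message`; the helper name and regex are illustrative assumptions, not part of any tool shown here):

```python
import re

# Short-precise journald prefix as it appears in this log, e.g.
#   "Jul 12 00:10:33.451953 systemd[1]: kubelet.service: Main process exited, ..."
# The source field also admits names like "(ntainerd)" and "sshd-session".
LINE_RE = re.compile(
    r"^(?P<ts>\w{3} +\d+ [\d:.]+) (?P<src>[\w@().-]+)\[(?P<pid>\d+)\]: (?P<msg>.*)$"
)

def parse(line: str) -> dict:
    """Split one log line into timestamp, source process, pid, and message."""
    m = LINE_RE.match(line)
    return m.groupdict() if m else {}

rec = parse("Jul 12 00:10:33.451953 systemd[1]: kubelet.service: "
            "Main process exited, code=exited, status=1/FAILURE")
print(rec["src"], "->", rec["msg"])
```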
Jul 12 00:10:34.771217 systemd[1]: Starting user@500.service - User Manager for UID 500...
Jul 12 00:10:34.777291 (systemd)[1568]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Jul 12 00:10:34.779290 systemd-logind[1448]: New session c1 of user core.
Jul 12 00:10:34.899748 systemd[1568]: Queued start job for default target default.target.
Jul 12 00:10:34.912866 systemd[1568]: Created slice app.slice - User Application Slice.
Jul 12 00:10:34.912916 systemd[1568]: Reached target paths.target - Paths.
Jul 12 00:10:34.912963 systemd[1568]: Reached target timers.target - Timers.
Jul 12 00:10:34.914258 systemd[1568]: Starting dbus.socket - D-Bus User Message Bus Socket...
Jul 12 00:10:34.923739 systemd[1568]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Jul 12 00:10:34.923812 systemd[1568]: Reached target sockets.target - Sockets.
Jul 12 00:10:34.923852 systemd[1568]: Reached target basic.target - Basic System.
Jul 12 00:10:34.923884 systemd[1568]: Reached target default.target - Main User Target.
Jul 12 00:10:34.923926 systemd[1568]: Startup finished in 139ms.
Jul 12 00:10:34.924093 systemd[1]: Started user@500.service - User Manager for UID 500.
Jul 12 00:10:34.925317 systemd[1]: Started session-1.scope - Session 1 of User core.
Jul 12 00:10:34.987050 systemd[1]: Started sshd@1-10.0.0.137:22-10.0.0.1:54594.service - OpenSSH per-connection server daemon (10.0.0.1:54594).
Jul 12 00:10:35.038850 sshd[1579]: Accepted publickey for core from 10.0.0.1 port 54594 ssh2: RSA SHA256:n0+tZ6jsx4rUqm31x+f+3vpltD7bP0WQQLUre7Cw/6U
Jul 12 00:10:35.040261 sshd-session[1579]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 12 00:10:35.044256 systemd-logind[1448]: New session 2 of user core.
Jul 12 00:10:35.056059 systemd[1]: Started session-2.scope - Session 2 of User core.
Jul 12 00:10:35.109473 sshd[1581]: Connection closed by 10.0.0.1 port 54594
Jul 12 00:10:35.110019 sshd-session[1579]: pam_unix(sshd:session): session closed for user core
Jul 12 00:10:35.120763 systemd[1]: Started sshd@2-10.0.0.137:22-10.0.0.1:54606.service - OpenSSH per-connection server daemon (10.0.0.1:54606).
Jul 12 00:10:35.121206 systemd[1]: sshd@1-10.0.0.137:22-10.0.0.1:54594.service: Deactivated successfully.
Jul 12 00:10:35.122766 systemd[1]: session-2.scope: Deactivated successfully.
Jul 12 00:10:35.124120 systemd-logind[1448]: Session 2 logged out. Waiting for processes to exit.
Jul 12 00:10:35.125349 systemd-logind[1448]: Removed session 2.
Jul 12 00:10:35.168407 sshd[1584]: Accepted publickey for core from 10.0.0.1 port 54606 ssh2: RSA SHA256:n0+tZ6jsx4rUqm31x+f+3vpltD7bP0WQQLUre7Cw/6U
Jul 12 00:10:35.169853 sshd-session[1584]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 12 00:10:35.174367 systemd-logind[1448]: New session 3 of user core.
Jul 12 00:10:35.189108 systemd[1]: Started session-3.scope - Session 3 of User core.
Jul 12 00:10:35.239045 sshd[1589]: Connection closed by 10.0.0.1 port 54606
Jul 12 00:10:35.239391 sshd-session[1584]: pam_unix(sshd:session): session closed for user core
Jul 12 00:10:35.248780 systemd[1]: sshd@2-10.0.0.137:22-10.0.0.1:54606.service: Deactivated successfully.
Jul 12 00:10:35.250242 systemd[1]: session-3.scope: Deactivated successfully.
Jul 12 00:10:35.252372 systemd-logind[1448]: Session 3 logged out. Waiting for processes to exit.
Jul 12 00:10:35.252770 systemd[1]: Started sshd@3-10.0.0.137:22-10.0.0.1:54620.service - OpenSSH per-connection server daemon (10.0.0.1:54620).
Jul 12 00:10:35.253948 systemd-logind[1448]: Removed session 3.
Jul 12 00:10:35.296866 sshd[1594]: Accepted publickey for core from 10.0.0.1 port 54620 ssh2: RSA SHA256:n0+tZ6jsx4rUqm31x+f+3vpltD7bP0WQQLUre7Cw/6U
Jul 12 00:10:35.298129 sshd-session[1594]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 12 00:10:35.301969 systemd-logind[1448]: New session 4 of user core.
Jul 12 00:10:35.311064 systemd[1]: Started session-4.scope - Session 4 of User core.
Jul 12 00:10:35.363566 sshd[1597]: Connection closed by 10.0.0.1 port 54620
Jul 12 00:10:35.363901 sshd-session[1594]: pam_unix(sshd:session): session closed for user core
Jul 12 00:10:35.379514 systemd[1]: sshd@3-10.0.0.137:22-10.0.0.1:54620.service: Deactivated successfully.
Jul 12 00:10:35.381297 systemd[1]: session-4.scope: Deactivated successfully.
Jul 12 00:10:35.381970 systemd-logind[1448]: Session 4 logged out. Waiting for processes to exit.
Jul 12 00:10:35.398260 systemd[1]: Started sshd@4-10.0.0.137:22-10.0.0.1:54630.service - OpenSSH per-connection server daemon (10.0.0.1:54630).
Jul 12 00:10:35.399656 systemd-logind[1448]: Removed session 4.
Jul 12 00:10:35.440164 sshd[1602]: Accepted publickey for core from 10.0.0.1 port 54630 ssh2: RSA SHA256:n0+tZ6jsx4rUqm31x+f+3vpltD7bP0WQQLUre7Cw/6U
Jul 12 00:10:35.441243 sshd-session[1602]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 12 00:10:35.445721 systemd-logind[1448]: New session 5 of user core.
Jul 12 00:10:35.457050 systemd[1]: Started session-5.scope - Session 5 of User core.
Jul 12 00:10:35.529010 sudo[1606]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Jul 12 00:10:35.529320 sudo[1606]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jul 12 00:10:35.547785 sudo[1606]: pam_unix(sudo:session): session closed for user root
Jul 12 00:10:35.549286 sshd[1605]: Connection closed by 10.0.0.1 port 54630
Jul 12 00:10:35.549869 sshd-session[1602]: pam_unix(sshd:session): session closed for user core
Jul 12 00:10:35.560442 systemd[1]: sshd@4-10.0.0.137:22-10.0.0.1:54630.service: Deactivated successfully.
Jul 12 00:10:35.561888 systemd[1]: session-5.scope: Deactivated successfully.
Jul 12 00:10:35.562607 systemd-logind[1448]: Session 5 logged out. Waiting for processes to exit.
Jul 12 00:10:35.574208 systemd[1]: Started sshd@5-10.0.0.137:22-10.0.0.1:54644.service - OpenSSH per-connection server daemon (10.0.0.1:54644).
Jul 12 00:10:35.574804 systemd-logind[1448]: Removed session 5.
Jul 12 00:10:35.615059 sshd[1611]: Accepted publickey for core from 10.0.0.1 port 54644 ssh2: RSA SHA256:n0+tZ6jsx4rUqm31x+f+3vpltD7bP0WQQLUre7Cw/6U
Jul 12 00:10:35.616281 sshd-session[1611]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 12 00:10:35.624430 systemd-logind[1448]: New session 6 of user core.
Jul 12 00:10:35.632108 systemd[1]: Started session-6.scope - Session 6 of User core.
Jul 12 00:10:35.684254 sudo[1616]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Jul 12 00:10:35.684952 sudo[1616]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jul 12 00:10:35.688167 sudo[1616]: pam_unix(sudo:session): session closed for user root
Jul 12 00:10:35.692844 sudo[1615]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
Jul 12 00:10:35.693151 sudo[1615]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jul 12 00:10:35.712187 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Jul 12 00:10:35.734520 augenrules[1638]: No rules
Jul 12 00:10:35.735651 systemd[1]: audit-rules.service: Deactivated successfully.
Jul 12 00:10:35.735855 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Jul 12 00:10:35.736787 sudo[1615]: pam_unix(sudo:session): session closed for user root
Jul 12 00:10:35.737892 sshd[1614]: Connection closed by 10.0.0.1 port 54644
Jul 12 00:10:35.738443 sshd-session[1611]: pam_unix(sshd:session): session closed for user core
Jul 12 00:10:35.752049 systemd[1]: sshd@5-10.0.0.137:22-10.0.0.1:54644.service: Deactivated successfully.
Jul 12 00:10:35.753440 systemd[1]: session-6.scope: Deactivated successfully.
Jul 12 00:10:35.755587 systemd-logind[1448]: Session 6 logged out. Waiting for processes to exit.
Jul 12 00:10:35.765225 systemd[1]: Started sshd@6-10.0.0.137:22-10.0.0.1:54652.service - OpenSSH per-connection server daemon (10.0.0.1:54652).
Jul 12 00:10:35.766507 systemd-logind[1448]: Removed session 6.
Jul 12 00:10:35.807858 sshd[1646]: Accepted publickey for core from 10.0.0.1 port 54652 ssh2: RSA SHA256:n0+tZ6jsx4rUqm31x+f+3vpltD7bP0WQQLUre7Cw/6U
Jul 12 00:10:35.809118 sshd-session[1646]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 12 00:10:35.813357 systemd-logind[1448]: New session 7 of user core.
Jul 12 00:10:35.832073 systemd[1]: Started session-7.scope - Session 7 of User core.
Jul 12 00:10:35.887866 sudo[1650]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Jul 12 00:10:35.888164 sudo[1650]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jul 12 00:10:36.260160 systemd[1]: Starting docker.service - Docker Application Container Engine...
Jul 12 00:10:36.260248 (dockerd)[1672]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Jul 12 00:10:36.521011 dockerd[1672]: time="2025-07-12T00:10:36.520853093Z" level=info msg="Starting up"
Jul 12 00:10:36.713253 dockerd[1672]: time="2025-07-12T00:10:36.713194153Z" level=info msg="Loading containers: start."
Jul 12 00:10:36.850979 kernel: Initializing XFRM netlink socket
Jul 12 00:10:36.916821 systemd-networkd[1399]: docker0: Link UP
Jul 12 00:10:36.955211 dockerd[1672]: time="2025-07-12T00:10:36.955150912Z" level=info msg="Loading containers: done."
Jul 12 00:10:36.969811 dockerd[1672]: time="2025-07-12T00:10:36.969757784Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Jul 12 00:10:36.969996 dockerd[1672]: time="2025-07-12T00:10:36.969855769Z" level=info msg="Docker daemon" commit=41ca978a0a5400cc24b274137efa9f25517fcc0b containerd-snapshotter=false storage-driver=overlay2 version=27.3.1
Jul 12 00:10:36.970068 dockerd[1672]: time="2025-07-12T00:10:36.970048357Z" level=info msg="Daemon has completed initialization"
Jul 12 00:10:36.996816 dockerd[1672]: time="2025-07-12T00:10:36.996737233Z" level=info msg="API listen on /run/docker.sock"
Jul 12 00:10:36.996934 systemd[1]: Started docker.service - Docker Application Container Engine.
Jul 12 00:10:37.550095 containerd[1467]: time="2025-07-12T00:10:37.550056477Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.6\""
Jul 12 00:10:38.163352 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1986013998.mount: Deactivated successfully.
Jul 12 00:10:38.980473 containerd[1467]: time="2025-07-12T00:10:38.980262481Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 12 00:10:38.981394 containerd[1467]: time="2025-07-12T00:10:38.981133888Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.6: active requests=0, bytes read=26328196"
Jul 12 00:10:38.982224 containerd[1467]: time="2025-07-12T00:10:38.982184994Z" level=info msg="ImageCreate event name:\"sha256:4ee56e04a4dd8fbc5a022e324327ae1f9b19bdaab8a79644d85d29b70d28e87a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 12 00:10:38.985811 containerd[1467]: time="2025-07-12T00:10:38.985776023Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:0f5764551d7de4ef70489ff8a70f32df7dea00701f5545af089b60bc5ede4f6f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 12 00:10:38.987014 containerd[1467]: time="2025-07-12T00:10:38.986877742Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.6\" with image id \"sha256:4ee56e04a4dd8fbc5a022e324327ae1f9b19bdaab8a79644d85d29b70d28e87a\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.6\", repo digest \"registry.k8s.io/kube-apiserver@sha256:0f5764551d7de4ef70489ff8a70f32df7dea00701f5545af089b60bc5ede4f6f\", size \"26324994\" in 1.436780973s"
Jul 12 00:10:38.987014 containerd[1467]: time="2025-07-12T00:10:38.986926774Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.6\" returns image reference \"sha256:4ee56e04a4dd8fbc5a022e324327ae1f9b19bdaab8a79644d85d29b70d28e87a\""
Jul 12 00:10:38.988089 containerd[1467]: time="2025-07-12T00:10:38.988025613Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.6\""
Jul 12 00:10:39.995339 containerd[1467]: time="2025-07-12T00:10:39.995260636Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 12 00:10:39.996305 containerd[1467]: time="2025-07-12T00:10:39.996016706Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.6: active requests=0, bytes read=22529230"
Jul 12 00:10:39.999926 containerd[1467]: time="2025-07-12T00:10:39.999870660Z" level=info msg="ImageCreate event name:\"sha256:3451c4b5bd601398c65e0579f1b720df4e0edde78f7f38e142f2b0be5e9bd038\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 12 00:10:40.002407 containerd[1467]: time="2025-07-12T00:10:40.002340162Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:3425f29c94a77d74cb89f38413e6274277dcf5e2bc7ab6ae953578a91e9e8356\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 12 00:10:40.003530 containerd[1467]: time="2025-07-12T00:10:40.003490778Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.6\" with image id \"sha256:3451c4b5bd601398c65e0579f1b720df4e0edde78f7f38e142f2b0be5e9bd038\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.6\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:3425f29c94a77d74cb89f38413e6274277dcf5e2bc7ab6ae953578a91e9e8356\", size \"24065018\" in 1.015433113s"
Jul 12 00:10:40.003530 containerd[1467]: time="2025-07-12T00:10:40.003527888Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.6\" returns image reference \"sha256:3451c4b5bd601398c65e0579f1b720df4e0edde78f7f38e142f2b0be5e9bd038\""
Jul 12 00:10:40.004656 containerd[1467]: time="2025-07-12T00:10:40.004613905Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.6\""
Jul 12 00:10:41.123593 containerd[1467]: time="2025-07-12T00:10:41.123521577Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 12 00:10:41.124084 containerd[1467]: time="2025-07-12T00:10:41.123994453Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.6: active requests=0, bytes read=17484143"
Jul 12 00:10:41.124942 containerd[1467]: time="2025-07-12T00:10:41.124908310Z" level=info msg="ImageCreate event name:\"sha256:3d72026a3748f31411df93e4aaa9c67944b7e0cc311c11eba2aae5e615213d5f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 12 00:10:41.127924 containerd[1467]: time="2025-07-12T00:10:41.127873703Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:130f633cbd1d70e2f4655350153cb3fc469f4d5a6310b4f0b49d93fb2ba2132b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 12 00:10:41.129202 containerd[1467]: time="2025-07-12T00:10:41.129160879Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.6\" with image id \"sha256:3d72026a3748f31411df93e4aaa9c67944b7e0cc311c11eba2aae5e615213d5f\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.6\", repo digest \"registry.k8s.io/kube-scheduler@sha256:130f633cbd1d70e2f4655350153cb3fc469f4d5a6310b4f0b49d93fb2ba2132b\", size \"19019949\" in 1.124510931s"
Jul 12 00:10:41.129202 containerd[1467]: time="2025-07-12T00:10:41.129199515Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.6\" returns image reference \"sha256:3d72026a3748f31411df93e4aaa9c67944b7e0cc311c11eba2aae5e615213d5f\""
Jul 12 00:10:41.129746 containerd[1467]: time="2025-07-12T00:10:41.129691123Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.6\""
Jul 12 00:10:42.105732 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1798147134.mount: Deactivated successfully.
Jul 12 00:10:42.334743 containerd[1467]: time="2025-07-12T00:10:42.334680043Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 12 00:10:42.335403 containerd[1467]: time="2025-07-12T00:10:42.335356904Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.6: active requests=0, bytes read=27378408"
Jul 12 00:10:42.335971 containerd[1467]: time="2025-07-12T00:10:42.335946582Z" level=info msg="ImageCreate event name:\"sha256:e29293ef7b817bb7b03ce7484edafe6ca0a7087e54074e7d7dcd3bd3c762eee9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 12 00:10:42.338668 containerd[1467]: time="2025-07-12T00:10:42.338622009Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:b13d9da413b983d130bf090b83fce12e1ccc704e95f366da743c18e964d9d7e9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 12 00:10:42.339077 containerd[1467]: time="2025-07-12T00:10:42.339035384Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.6\" with image id \"sha256:e29293ef7b817bb7b03ce7484edafe6ca0a7087e54074e7d7dcd3bd3c762eee9\", repo tag \"registry.k8s.io/kube-proxy:v1.32.6\", repo digest \"registry.k8s.io/kube-proxy@sha256:b13d9da413b983d130bf090b83fce12e1ccc704e95f366da743c18e964d9d7e9\", size \"27377425\" in 1.209308102s"
Jul 12 00:10:42.339125 containerd[1467]: time="2025-07-12T00:10:42.339074782Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.6\" returns image reference \"sha256:e29293ef7b817bb7b03ce7484edafe6ca0a7087e54074e7d7dcd3bd3c762eee9\""
Jul 12 00:10:42.339563 containerd[1467]: time="2025-07-12T00:10:42.339496102Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\""
Jul 12 00:10:43.163209 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3780189192.mount: Deactivated successfully.
Jul 12 00:10:43.659015 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Jul 12 00:10:43.668127 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 12 00:10:43.781418 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 12 00:10:43.786191 (kubelet)[1999]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jul 12 00:10:43.827264 kubelet[1999]: E0712 00:10:43.827214 1999 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jul 12 00:10:43.832146 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jul 12 00:10:43.832310 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jul 12 00:10:43.834044 systemd[1]: kubelet.service: Consumed 147ms CPU time, 107.5M memory peak.
Jul 12 00:10:44.094716 containerd[1467]: time="2025-07-12T00:10:44.094182713Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 12 00:10:44.096638 containerd[1467]: time="2025-07-12T00:10:44.096583291Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=16951624"
Jul 12 00:10:44.098112 containerd[1467]: time="2025-07-12T00:10:44.098057213Z" level=info msg="ImageCreate event name:\"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 12 00:10:44.127866 containerd[1467]: time="2025-07-12T00:10:44.127720044Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 12 00:10:44.129760 containerd[1467]: time="2025-07-12T00:10:44.129653289Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"16948420\" in 1.790122208s"
Jul 12 00:10:44.129760 containerd[1467]: time="2025-07-12T00:10:44.129702554Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\""
Jul 12 00:10:44.130436 containerd[1467]: time="2025-07-12T00:10:44.130408041Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Jul 12 00:10:44.570962 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3479552181.mount: Deactivated successfully.
Jul 12 00:10:44.575897 containerd[1467]: time="2025-07-12T00:10:44.575843595Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 12 00:10:44.577627 containerd[1467]: time="2025-07-12T00:10:44.577563762Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268705"
Jul 12 00:10:44.578526 containerd[1467]: time="2025-07-12T00:10:44.578498225Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 12 00:10:44.581355 containerd[1467]: time="2025-07-12T00:10:44.581317354Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 12 00:10:44.582771 containerd[1467]: time="2025-07-12T00:10:44.582690855Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 452.24667ms"
Jul 12 00:10:44.582771 containerd[1467]: time="2025-07-12T00:10:44.582720035Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\""
Jul 12 00:10:44.583364 containerd[1467]: time="2025-07-12T00:10:44.583178271Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\""
Jul 12 00:10:45.048259 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3384985991.mount: Deactivated successfully.
Jul 12 00:10:46.222246 containerd[1467]: time="2025-07-12T00:10:46.222191374Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 12 00:10:46.222591 containerd[1467]: time="2025-07-12T00:10:46.222558115Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=67812471"
Jul 12 00:10:46.224322 containerd[1467]: time="2025-07-12T00:10:46.223674372Z" level=info msg="ImageCreate event name:\"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 12 00:10:46.227650 containerd[1467]: time="2025-07-12T00:10:46.227612018Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 12 00:10:46.228550 containerd[1467]: time="2025-07-12T00:10:46.228449914Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"67941650\" in 1.645236319s"
Jul 12 00:10:46.228550 containerd[1467]: time="2025-07-12T00:10:46.228486850Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\""
Jul 12 00:10:52.354219 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 12 00:10:52.354360 systemd[1]: kubelet.service: Consumed 147ms CPU time, 107.5M memory peak.
Jul 12 00:10:52.364091 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 12 00:10:52.384359 systemd[1]: Reload requested from client PID 2097 ('systemctl') (unit session-7.scope)...
Jul 12 00:10:52.384377 systemd[1]: Reloading...
Jul 12 00:10:52.459947 zram_generator::config[2144]: No configuration found.
Jul 12 00:10:52.579825 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 12 00:10:52.656040 systemd[1]: Reloading finished in 271 ms.
Jul 12 00:10:52.701473 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 12 00:10:52.706566 (kubelet)[2177]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Jul 12 00:10:52.709053 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 12 00:10:52.709365 systemd[1]: kubelet.service: Deactivated successfully.
Jul 12 00:10:52.709608 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 12 00:10:52.709672 systemd[1]: kubelet.service: Consumed 93ms CPU time, 96.2M memory peak.
Jul 12 00:10:52.714362 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 12 00:10:52.825449 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 12 00:10:52.829557 (kubelet)[2189]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Jul 12 00:10:52.874021 kubelet[2189]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jul 12 00:10:52.874021 kubelet[2189]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Jul 12 00:10:52.874021 kubelet[2189]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jul 12 00:10:52.874379 kubelet[2189]: I0712 00:10:52.874077 2189 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jul 12 00:10:53.767908 kubelet[2189]: I0712 00:10:53.767123 2189 server.go:520] "Kubelet version" kubeletVersion="v1.32.4"
Jul 12 00:10:53.767908 kubelet[2189]: I0712 00:10:53.767157 2189 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jul 12 00:10:53.767908 kubelet[2189]: I0712 00:10:53.767435 2189 server.go:954] "Client rotation is on, will bootstrap in background"
Jul 12 00:10:53.794518 kubelet[2189]: E0712 00:10:53.794476 2189 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.137:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.137:6443: connect: connection refused" logger="UnhandledError"
Jul 12 00:10:53.797966 kubelet[2189]: I0712 00:10:53.797933 2189 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jul 12 00:10:53.804411 kubelet[2189]: E0712 00:10:53.804359 2189 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Jul 12 00:10:53.804411 kubelet[2189]: I0712 00:10:53.804406 2189 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Jul 12 00:10:53.807441 kubelet[2189]: I0712 00:10:53.807373 2189 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Jul 12 00:10:53.807649 kubelet[2189]: I0712 00:10:53.807601 2189 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jul 12 00:10:53.807797 kubelet[2189]: I0712 00:10:53.807631 2189 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Jul 12 00:10:53.807943 kubelet[2189]: I0712 00:10:53.807873 2189 topology_manager.go:138] "Creating topology manager with none policy"
Jul 12 00:10:53.807943 kubelet[2189]: I0712 00:10:53.807895 2189 container_manager_linux.go:304] "Creating device plugin manager"
Jul 12 00:10:53.808145 kubelet[2189]: I0712 00:10:53.808130 2189 state_mem.go:36] "Initialized new in-memory state store"
Jul 12 00:10:53.814123 kubelet[2189]: I0712 00:10:53.814094 2189 kubelet.go:446] "Attempting to sync node with API server"
Jul 12 00:10:53.814171 kubelet[2189]: I0712 00:10:53.814126 2189 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests"
Jul 12 00:10:53.814171 kubelet[2189]: I0712 00:10:53.814146 2189 kubelet.go:352] "Adding apiserver pod source"
Jul 12 00:10:53.814171 kubelet[2189]: I0712 00:10:53.814158 2189 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jul 12 00:10:53.818199 kubelet[2189]: W0712 00:10:53.817964 2189 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.137:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.137:6443: connect: connection refused
Jul 12 00:10:53.818199 kubelet[2189]: E0712 00:10:53.818041 2189 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.137:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.137:6443: connect: connection refused" logger="UnhandledError"
Jul 12 00:10:53.818199 kubelet[2189]: I0712 00:10:53.818096 2189 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1"
Jul 12 00:10:53.818199 kubelet[2189]: W0712 00:10:53.818110 2189 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.137:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.137:6443: connect: connection refused
Jul 12 00:10:53.818199 kubelet[2189]: E0712 00:10:53.818158 2189 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.137:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.137:6443: connect: connection refused" logger="UnhandledError"
Jul 12 00:10:53.818698 kubelet[2189]: I0712 00:10:53.818683 2189 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Jul 12 00:10:53.818832 kubelet[2189]: W0712 00:10:53.818819 2189 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Jul 12 00:10:53.819886 kubelet[2189]: I0712 00:10:53.819865 2189 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Jul 12 00:10:53.819943 kubelet[2189]: I0712 00:10:53.819918 2189 server.go:1287] "Started kubelet"
Jul 12 00:10:53.821332 kubelet[2189]: I0712 00:10:53.820010 2189 server.go:169] "Starting to listen" address="0.0.0.0" port=10250
Jul 12 00:10:53.821332 kubelet[2189]: I0712 00:10:53.820902 2189 server.go:479] "Adding debug handlers to kubelet server"
Jul 12 00:10:53.823489 kubelet[2189]: I0712 00:10:53.823417 2189 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jul 12 00:10:53.823859 kubelet[2189]: I0712 00:10:53.823832 2189 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jul 12 00:10:53.826123 kubelet[2189]: E0712 00:10:53.825844 2189 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.137:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.137:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18515883f64ade83 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-07-12 00:10:53.819895427 +0000 UTC m=+0.987219625,LastTimestamp:2025-07-12 00:10:53.819895427 +0000 UTC m=+0.987219625,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Jul 12 00:10:53.826505 kubelet[2189]: I0712 00:10:53.826477 2189 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jul 12 00:10:53.826844 kubelet[2189]: I0712 00:10:53.826819 2189 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Jul 12 00:10:53.828155 kubelet[2189]: I0712 00:10:53.828121 2189 volume_manager.go:297] "Starting Kubelet Volume Manager"
Jul 12 00:10:53.828794 kubelet[2189]: E0712 00:10:53.828642 2189 kubelet.go:1555] "Image garbage collection failed once.
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 12 00:10:53.828794 kubelet[2189]: I0712 00:10:53.828665 2189 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jul 12 00:10:53.828794 kubelet[2189]: I0712 00:10:53.828713 2189 reconciler.go:26] "Reconciler: start to sync state" Jul 12 00:10:53.828794 kubelet[2189]: E0712 00:10:53.828764 2189 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 12 00:10:53.829389 kubelet[2189]: E0712 00:10:53.829361 2189 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.137:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.137:6443: connect: connection refused" interval="200ms" Jul 12 00:10:53.829498 kubelet[2189]: W0712 00:10:53.829448 2189 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.137:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.137:6443: connect: connection refused Jul 12 00:10:53.829704 kubelet[2189]: E0712 00:10:53.829573 2189 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.137:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.137:6443: connect: connection refused" logger="UnhandledError" Jul 12 00:10:53.829704 kubelet[2189]: I0712 00:10:53.829628 2189 factory.go:221] Registration of the systemd container factory successfully Jul 12 00:10:53.829773 kubelet[2189]: I0712 00:10:53.829715 2189 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 12 00:10:53.831345 kubelet[2189]: I0712 00:10:53.831324 2189 
factory.go:221] Registration of the containerd container factory successfully Jul 12 00:10:53.843665 kubelet[2189]: I0712 00:10:53.843636 2189 cpu_manager.go:221] "Starting CPU manager" policy="none" Jul 12 00:10:53.843665 kubelet[2189]: I0712 00:10:53.843658 2189 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jul 12 00:10:53.843665 kubelet[2189]: I0712 00:10:53.843677 2189 state_mem.go:36] "Initialized new in-memory state store" Jul 12 00:10:53.845910 kubelet[2189]: I0712 00:10:53.845793 2189 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jul 12 00:10:53.847033 kubelet[2189]: I0712 00:10:53.847001 2189 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jul 12 00:10:53.847033 kubelet[2189]: I0712 00:10:53.847035 2189 status_manager.go:227] "Starting to sync pod status with apiserver" Jul 12 00:10:53.847151 kubelet[2189]: I0712 00:10:53.847059 2189 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Jul 12 00:10:53.847151 kubelet[2189]: I0712 00:10:53.847067 2189 kubelet.go:2382] "Starting kubelet main sync loop" Jul 12 00:10:53.847151 kubelet[2189]: E0712 00:10:53.847124 2189 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 12 00:10:53.847908 kubelet[2189]: W0712 00:10:53.847624 2189 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.137:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.137:6443: connect: connection refused Jul 12 00:10:53.847908 kubelet[2189]: E0712 00:10:53.847679 2189 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.137:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.137:6443: connect: connection refused" logger="UnhandledError" Jul 12 00:10:53.908126 kubelet[2189]: I0712 00:10:53.908074 2189 policy_none.go:49] "None policy: Start" Jul 12 00:10:53.908126 kubelet[2189]: I0712 00:10:53.908124 2189 memory_manager.go:186] "Starting memorymanager" policy="None" Jul 12 00:10:53.908126 kubelet[2189]: I0712 00:10:53.908142 2189 state_mem.go:35] "Initializing new in-memory state store" Jul 12 00:10:53.914254 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jul 12 00:10:53.929488 kubelet[2189]: E0712 00:10:53.929456 2189 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 12 00:10:53.930606 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jul 12 00:10:53.934715 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Jul 12 00:10:53.947460 kubelet[2189]: I0712 00:10:53.946736 2189 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 12 00:10:53.947460 kubelet[2189]: I0712 00:10:53.946969 2189 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 12 00:10:53.947460 kubelet[2189]: I0712 00:10:53.946981 2189 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 12 00:10:53.948586 kubelet[2189]: I0712 00:10:53.948552 2189 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 12 00:10:53.949409 kubelet[2189]: E0712 00:10:53.949365 2189 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jul 12 00:10:53.949500 kubelet[2189]: E0712 00:10:53.949462 2189 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jul 12 00:10:53.957431 systemd[1]: Created slice kubepods-burstable-pod0d45604c48f1530587d9f3d2aad2dab9.slice - libcontainer container kubepods-burstable-pod0d45604c48f1530587d9f3d2aad2dab9.slice. Jul 12 00:10:53.968943 kubelet[2189]: E0712 00:10:53.968696 2189 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 12 00:10:53.971928 systemd[1]: Created slice kubepods-burstable-podd1af03769b64da1b1e8089a7035018fc.slice - libcontainer container kubepods-burstable-podd1af03769b64da1b1e8089a7035018fc.slice. 
Jul 12 00:10:53.984417 kubelet[2189]: E0712 00:10:53.984376 2189 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 12 00:10:53.987198 systemd[1]: Created slice kubepods-burstable-pod8a75e163f27396b2168da0f88f85f8a5.slice - libcontainer container kubepods-burstable-pod8a75e163f27396b2168da0f88f85f8a5.slice. Jul 12 00:10:53.988648 kubelet[2189]: E0712 00:10:53.988605 2189 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 12 00:10:54.030195 kubelet[2189]: E0712 00:10:54.030082 2189 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.137:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.137:6443: connect: connection refused" interval="400ms" Jul 12 00:10:54.050124 kubelet[2189]: I0712 00:10:54.050095 2189 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jul 12 00:10:54.050595 kubelet[2189]: E0712 00:10:54.050567 2189 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.137:6443/api/v1/nodes\": dial tcp 10.0.0.137:6443: connect: connection refused" node="localhost" Jul 12 00:10:54.130254 kubelet[2189]: I0712 00:10:54.130190 2189 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/0d45604c48f1530587d9f3d2aad2dab9-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"0d45604c48f1530587d9f3d2aad2dab9\") " pod="kube-system/kube-apiserver-localhost" Jul 12 00:10:54.130254 kubelet[2189]: I0712 00:10:54.130231 2189 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: 
\"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost" Jul 12 00:10:54.130254 kubelet[2189]: I0712 00:10:54.130249 2189 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost" Jul 12 00:10:54.130428 kubelet[2189]: I0712 00:10:54.130271 2189 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost" Jul 12 00:10:54.130428 kubelet[2189]: I0712 00:10:54.130291 2189 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost" Jul 12 00:10:54.130428 kubelet[2189]: I0712 00:10:54.130309 2189 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8a75e163f27396b2168da0f88f85f8a5-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"8a75e163f27396b2168da0f88f85f8a5\") " pod="kube-system/kube-scheduler-localhost" Jul 12 00:10:54.130428 kubelet[2189]: I0712 00:10:54.130324 2189 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"ca-certs\" (UniqueName: \"kubernetes.io/host-path/0d45604c48f1530587d9f3d2aad2dab9-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"0d45604c48f1530587d9f3d2aad2dab9\") " pod="kube-system/kube-apiserver-localhost" Jul 12 00:10:54.130428 kubelet[2189]: I0712 00:10:54.130338 2189 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/0d45604c48f1530587d9f3d2aad2dab9-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"0d45604c48f1530587d9f3d2aad2dab9\") " pod="kube-system/kube-apiserver-localhost" Jul 12 00:10:54.130526 kubelet[2189]: I0712 00:10:54.130356 2189 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost" Jul 12 00:10:54.251654 kubelet[2189]: I0712 00:10:54.251622 2189 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jul 12 00:10:54.251985 kubelet[2189]: E0712 00:10:54.251959 2189 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.137:6443/api/v1/nodes\": dial tcp 10.0.0.137:6443: connect: connection refused" node="localhost" Jul 12 00:10:54.269267 kubelet[2189]: E0712 00:10:54.269243 2189 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:10:54.269931 containerd[1467]: time="2025-07-12T00:10:54.269867502Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:0d45604c48f1530587d9f3d2aad2dab9,Namespace:kube-system,Attempt:0,}" Jul 12 00:10:54.285264 kubelet[2189]: E0712 00:10:54.285044 2189 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits 
were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:10:54.285662 containerd[1467]: time="2025-07-12T00:10:54.285458164Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:d1af03769b64da1b1e8089a7035018fc,Namespace:kube-system,Attempt:0,}" Jul 12 00:10:54.289273 kubelet[2189]: E0712 00:10:54.289195 2189 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:10:54.289589 containerd[1467]: time="2025-07-12T00:10:54.289560274Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:8a75e163f27396b2168da0f88f85f8a5,Namespace:kube-system,Attempt:0,}" Jul 12 00:10:54.431257 kubelet[2189]: E0712 00:10:54.431214 2189 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.137:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.137:6443: connect: connection refused" interval="800ms" Jul 12 00:10:54.654107 kubelet[2189]: I0712 00:10:54.653945 2189 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jul 12 00:10:54.654887 kubelet[2189]: E0712 00:10:54.654441 2189 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.137:6443/api/v1/nodes\": dial tcp 10.0.0.137:6443: connect: connection refused" node="localhost" Jul 12 00:10:54.663079 kubelet[2189]: W0712 00:10:54.662976 2189 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.137:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.137:6443: connect: connection refused Jul 12 00:10:54.663079 kubelet[2189]: E0712 00:10:54.663040 2189 reflector.go:166] "Unhandled Error" 
err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.137:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.137:6443: connect: connection refused" logger="UnhandledError" Jul 12 00:10:54.921250 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1517596528.mount: Deactivated successfully. Jul 12 00:10:54.934237 containerd[1467]: time="2025-07-12T00:10:54.933928188Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 12 00:10:54.935526 containerd[1467]: time="2025-07-12T00:10:54.935484414Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269175" Jul 12 00:10:54.937588 containerd[1467]: time="2025-07-12T00:10:54.937526875Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 12 00:10:54.941964 containerd[1467]: time="2025-07-12T00:10:54.939569215Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 12 00:10:54.941964 containerd[1467]: time="2025-07-12T00:10:54.940584516Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jul 12 00:10:54.942065 containerd[1467]: time="2025-07-12T00:10:54.941972907Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 12 00:10:54.942960 containerd[1467]: time="2025-07-12T00:10:54.942907315Z" level=info 
msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 12 00:10:54.943588 containerd[1467]: time="2025-07-12T00:10:54.943556537Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 673.590633ms" Jul 12 00:10:54.946139 containerd[1467]: time="2025-07-12T00:10:54.946100739Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jul 12 00:10:54.950120 containerd[1467]: time="2025-07-12T00:10:54.949872148Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 660.243963ms" Jul 12 00:10:54.950643 containerd[1467]: time="2025-07-12T00:10:54.950617166Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 665.087565ms" Jul 12 00:10:55.092363 containerd[1467]: time="2025-07-12T00:10:55.090168921Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 12 00:10:55.092363 containerd[1467]: time="2025-07-12T00:10:55.090245751Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 12 00:10:55.092363 containerd[1467]: time="2025-07-12T00:10:55.090258729Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 12 00:10:55.092363 containerd[1467]: time="2025-07-12T00:10:55.092087026Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 12 00:10:55.092553 containerd[1467]: time="2025-07-12T00:10:55.092217613Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 12 00:10:55.093225 containerd[1467]: time="2025-07-12T00:10:55.093178068Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 12 00:10:55.093363 containerd[1467]: time="2025-07-12T00:10:55.093323116Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 12 00:10:55.093523 containerd[1467]: time="2025-07-12T00:10:55.093487191Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 12 00:10:55.094958 containerd[1467]: time="2025-07-12T00:10:55.092012159Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 12 00:10:55.094958 containerd[1467]: time="2025-07-12T00:10:55.094935224Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 12 00:10:55.094958 containerd[1467]: time="2025-07-12T00:10:55.094948563Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 12 00:10:55.095074 containerd[1467]: time="2025-07-12T00:10:55.095030720Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 12 00:10:55.116083 systemd[1]: Started cri-containerd-156cca5f7ecbf80b5cce47d39eb5adc851857deb828df9ea057c6e7a368bfa5b.scope - libcontainer container 156cca5f7ecbf80b5cce47d39eb5adc851857deb828df9ea057c6e7a368bfa5b. Jul 12 00:10:55.117319 systemd[1]: Started cri-containerd-599663c6172fcd83e742f62d40b3964ce6d1c9b3a7cedc4910092c5ac894daf3.scope - libcontainer container 599663c6172fcd83e742f62d40b3964ce6d1c9b3a7cedc4910092c5ac894daf3. Jul 12 00:10:55.119297 systemd[1]: Started cri-containerd-b08e8b1a4e3aa39d574324ff84b8d33d2ae6dfb2b29ece0db3f3bed1ca58540a.scope - libcontainer container b08e8b1a4e3aa39d574324ff84b8d33d2ae6dfb2b29ece0db3f3bed1ca58540a. 
Jul 12 00:10:55.154905 containerd[1467]: time="2025-07-12T00:10:55.154247371Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:0d45604c48f1530587d9f3d2aad2dab9,Namespace:kube-system,Attempt:0,} returns sandbox id \"599663c6172fcd83e742f62d40b3964ce6d1c9b3a7cedc4910092c5ac894daf3\"" Jul 12 00:10:55.156045 containerd[1467]: time="2025-07-12T00:10:55.156016504Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:8a75e163f27396b2168da0f88f85f8a5,Namespace:kube-system,Attempt:0,} returns sandbox id \"156cca5f7ecbf80b5cce47d39eb5adc851857deb828df9ea057c6e7a368bfa5b\"" Jul 12 00:10:55.156401 kubelet[2189]: E0712 00:10:55.156381 2189 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:10:55.156830 containerd[1467]: time="2025-07-12T00:10:55.156777713Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:d1af03769b64da1b1e8089a7035018fc,Namespace:kube-system,Attempt:0,} returns sandbox id \"b08e8b1a4e3aa39d574324ff84b8d33d2ae6dfb2b29ece0db3f3bed1ca58540a\"" Jul 12 00:10:55.158236 kubelet[2189]: E0712 00:10:55.156469 2189 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:10:55.158236 kubelet[2189]: E0712 00:10:55.157652 2189 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:10:55.158716 kubelet[2189]: W0712 00:10:55.158691 2189 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.137:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.137:6443: connect: connection refused Jul 12 
00:10:55.158830 kubelet[2189]: E0712 00:10:55.158812 2189 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.137:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.137:6443: connect: connection refused" logger="UnhandledError" Jul 12 00:10:55.159338 containerd[1467]: time="2025-07-12T00:10:55.159302327Z" level=info msg="CreateContainer within sandbox \"599663c6172fcd83e742f62d40b3964ce6d1c9b3a7cedc4910092c5ac894daf3\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jul 12 00:10:55.159407 containerd[1467]: time="2025-07-12T00:10:55.159325601Z" level=info msg="CreateContainer within sandbox \"156cca5f7ecbf80b5cce47d39eb5adc851857deb828df9ea057c6e7a368bfa5b\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jul 12 00:10:55.161373 containerd[1467]: time="2025-07-12T00:10:55.161332554Z" level=info msg="CreateContainer within sandbox \"b08e8b1a4e3aa39d574324ff84b8d33d2ae6dfb2b29ece0db3f3bed1ca58540a\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jul 12 00:10:55.181335 containerd[1467]: time="2025-07-12T00:10:55.181212492Z" level=info msg="CreateContainer within sandbox \"b08e8b1a4e3aa39d574324ff84b8d33d2ae6dfb2b29ece0db3f3bed1ca58540a\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"731711c5c659389a16bd1f67b0ff16091215ffb263f51f9ac5e730b72604cd28\"" Jul 12 00:10:55.182702 containerd[1467]: time="2025-07-12T00:10:55.182667535Z" level=info msg="StartContainer for \"731711c5c659389a16bd1f67b0ff16091215ffb263f51f9ac5e730b72604cd28\"" Jul 12 00:10:55.182799 containerd[1467]: time="2025-07-12T00:10:55.182678952Z" level=info msg="CreateContainer within sandbox \"599663c6172fcd83e742f62d40b3964ce6d1c9b3a7cedc4910092c5ac894daf3\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id 
\"82effd6f3424d50ba7278a304b62638ba0c6d49ddd71d4b380007cef4c77a253\"" Jul 12 00:10:55.183231 containerd[1467]: time="2025-07-12T00:10:55.183170816Z" level=info msg="StartContainer for \"82effd6f3424d50ba7278a304b62638ba0c6d49ddd71d4b380007cef4c77a253\"" Jul 12 00:10:55.184904 containerd[1467]: time="2025-07-12T00:10:55.184307443Z" level=info msg="CreateContainer within sandbox \"156cca5f7ecbf80b5cce47d39eb5adc851857deb828df9ea057c6e7a368bfa5b\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"5c341ae761e75ad47d5107455cb5f66f19ab7def4ef023b49b2ba87f7a429388\"" Jul 12 00:10:55.185391 containerd[1467]: time="2025-07-12T00:10:55.185344648Z" level=info msg="StartContainer for \"5c341ae761e75ad47d5107455cb5f66f19ab7def4ef023b49b2ba87f7a429388\"" Jul 12 00:10:55.222105 systemd[1]: Started cri-containerd-82effd6f3424d50ba7278a304b62638ba0c6d49ddd71d4b380007cef4c77a253.scope - libcontainer container 82effd6f3424d50ba7278a304b62638ba0c6d49ddd71d4b380007cef4c77a253. Jul 12 00:10:55.226995 systemd[1]: Started cri-containerd-5c341ae761e75ad47d5107455cb5f66f19ab7def4ef023b49b2ba87f7a429388.scope - libcontainer container 5c341ae761e75ad47d5107455cb5f66f19ab7def4ef023b49b2ba87f7a429388. Jul 12 00:10:55.228855 systemd[1]: Started cri-containerd-731711c5c659389a16bd1f67b0ff16091215ffb263f51f9ac5e730b72604cd28.scope - libcontainer container 731711c5c659389a16bd1f67b0ff16091215ffb263f51f9ac5e730b72604cd28. 
Jul 12 00:10:55.232681 kubelet[2189]: E0712 00:10:55.232624 2189 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.137:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.137:6443: connect: connection refused" interval="1.6s"
Jul 12 00:10:55.267093 containerd[1467]: time="2025-07-12T00:10:55.267048009Z" level=info msg="StartContainer for \"82effd6f3424d50ba7278a304b62638ba0c6d49ddd71d4b380007cef4c77a253\" returns successfully"
Jul 12 00:10:55.273244 containerd[1467]: time="2025-07-12T00:10:55.273092422Z" level=info msg="StartContainer for \"5c341ae761e75ad47d5107455cb5f66f19ab7def4ef023b49b2ba87f7a429388\" returns successfully"
Jul 12 00:10:55.282692 containerd[1467]: time="2025-07-12T00:10:55.282551683Z" level=info msg="StartContainer for \"731711c5c659389a16bd1f67b0ff16091215ffb263f51f9ac5e730b72604cd28\" returns successfully"
Jul 12 00:10:55.361165 kubelet[2189]: W0712 00:10:55.361060 2189 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.137:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.137:6443: connect: connection refused
Jul 12 00:10:55.361165 kubelet[2189]: E0712 00:10:55.361127 2189 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.137:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.137:6443: connect: connection refused" logger="UnhandledError"
Jul 12 00:10:55.441235 kubelet[2189]: W0712 00:10:55.441061 2189 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.137:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.137:6443: connect: connection refused
Jul 12 00:10:55.441235 kubelet[2189]: E0712 00:10:55.441127 2189 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.137:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.137:6443: connect: connection refused" logger="UnhandledError"
Jul 12 00:10:55.456834 kubelet[2189]: I0712 00:10:55.456401 2189 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Jul 12 00:10:55.456834 kubelet[2189]: E0712 00:10:55.456730 2189 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.137:6443/api/v1/nodes\": dial tcp 10.0.0.137:6443: connect: connection refused" node="localhost"
Jul 12 00:10:55.854168 kubelet[2189]: E0712 00:10:55.854065 2189 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Jul 12 00:10:55.854748 kubelet[2189]: E0712 00:10:55.854678 2189 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 12 00:10:55.856598 kubelet[2189]: E0712 00:10:55.856569 2189 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Jul 12 00:10:55.856714 kubelet[2189]: E0712 00:10:55.856694 2189 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 12 00:10:55.859327 kubelet[2189]: E0712 00:10:55.859300 2189 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Jul 12 00:10:55.859435 kubelet[2189]: E0712 00:10:55.859418 2189 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 12 00:10:56.861167 kubelet[2189]: E0712 00:10:56.860918 2189 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Jul 12 00:10:56.861167 kubelet[2189]: E0712 00:10:56.861038 2189 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 12 00:10:56.864816 kubelet[2189]: E0712 00:10:56.863039 2189 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Jul 12 00:10:56.864816 kubelet[2189]: E0712 00:10:56.863157 2189 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 12 00:10:56.864816 kubelet[2189]: E0712 00:10:56.863734 2189 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Jul 12 00:10:56.864816 kubelet[2189]: E0712 00:10:56.863838 2189 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 12 00:10:57.004091 kubelet[2189]: E0712 00:10:57.004041 2189 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost"
Jul 12 00:10:57.058812 kubelet[2189]: I0712 00:10:57.058580 2189 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Jul 12 00:10:57.075647 kubelet[2189]: I0712 00:10:57.075599 2189 kubelet_node_status.go:78] "Successfully registered node" node="localhost"
Jul 12 00:10:57.075647 kubelet[2189]: E0712 00:10:57.075639 2189 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found"
Jul 12 00:10:57.093647 kubelet[2189]: E0712 00:10:57.093608 2189 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Jul 12 00:10:57.194066 kubelet[2189]: E0712 00:10:57.194022 2189 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Jul 12 00:10:57.294724 kubelet[2189]: E0712 00:10:57.294694 2189 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Jul 12 00:10:57.395436 kubelet[2189]: E0712 00:10:57.395381 2189 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Jul 12 00:10:57.529942 kubelet[2189]: I0712 00:10:57.529456 2189 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Jul 12 00:10:57.546173 kubelet[2189]: E0712 00:10:57.546024 2189 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost"
Jul 12 00:10:57.546173 kubelet[2189]: I0712 00:10:57.546055 2189 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost"
Jul 12 00:10:57.548164 kubelet[2189]: E0712 00:10:57.548139 2189 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost"
Jul 12 00:10:57.548164 kubelet[2189]: I0712 00:10:57.548163 2189 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
Jul 12 00:10:57.549981 kubelet[2189]: E0712 00:10:57.549957 2189 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost"
Jul 12 00:10:57.816743 kubelet[2189]: I0712 00:10:57.816564 2189 apiserver.go:52] "Watching apiserver"
Jul 12 00:10:57.829192 kubelet[2189]: I0712 00:10:57.829141 2189 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
Jul 12 00:10:59.297452 systemd[1]: Reload requested from client PID 2467 ('systemctl') (unit session-7.scope)...
Jul 12 00:10:59.297469 systemd[1]: Reloading...
Jul 12 00:10:59.385916 zram_generator::config[2514]: No configuration found.
Jul 12 00:10:59.468213 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 12 00:10:59.552448 systemd[1]: Reloading finished in 254 ms.
Jul 12 00:10:59.570128 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 12 00:10:59.592637 systemd[1]: kubelet.service: Deactivated successfully.
Jul 12 00:10:59.592857 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 12 00:10:59.592924 systemd[1]: kubelet.service: Consumed 1.396s CPU time, 130.6M memory peak.
Jul 12 00:10:59.607459 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 12 00:10:59.713328 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 12 00:10:59.717788 (kubelet)[2553]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Jul 12 00:10:59.760914 kubelet[2553]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jul 12 00:10:59.760914 kubelet[2553]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Jul 12 00:10:59.760914 kubelet[2553]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jul 12 00:10:59.760914 kubelet[2553]: I0712 00:10:59.759501 2553 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jul 12 00:10:59.767214 kubelet[2553]: I0712 00:10:59.767184 2553 server.go:520] "Kubelet version" kubeletVersion="v1.32.4"
Jul 12 00:10:59.767316 kubelet[2553]: I0712 00:10:59.767307 2553 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jul 12 00:10:59.767613 kubelet[2553]: I0712 00:10:59.767594 2553 server.go:954] "Client rotation is on, will bootstrap in background"
Jul 12 00:10:59.768901 kubelet[2553]: I0712 00:10:59.768855 2553 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Jul 12 00:10:59.771218 kubelet[2553]: I0712 00:10:59.771181 2553 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jul 12 00:10:59.773815 kubelet[2553]: E0712 00:10:59.773791 2553 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Jul 12 00:10:59.773815 kubelet[2553]: I0712 00:10:59.773815 2553 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Jul 12 00:10:59.777755 kubelet[2553]: I0712 00:10:59.777731 2553 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Jul 12 00:10:59.777950 kubelet[2553]: I0712 00:10:59.777927 2553 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jul 12 00:10:59.778105 kubelet[2553]: I0712 00:10:59.777952 2553 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Jul 12 00:10:59.778105 kubelet[2553]: I0712 00:10:59.778103 2553 topology_manager.go:138] "Creating topology manager with none policy"
Jul 12 00:10:59.778214 kubelet[2553]: I0712 00:10:59.778112 2553 container_manager_linux.go:304] "Creating device plugin manager"
Jul 12 00:10:59.778214 kubelet[2553]: I0712 00:10:59.778148 2553 state_mem.go:36] "Initialized new in-memory state store"
Jul 12 00:10:59.778266 kubelet[2553]: I0712 00:10:59.778260 2553 kubelet.go:446] "Attempting to sync node with API server"
Jul 12 00:10:59.778291 kubelet[2553]: I0712 00:10:59.778272 2553 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests"
Jul 12 00:10:59.778291 kubelet[2553]: I0712 00:10:59.778288 2553 kubelet.go:352] "Adding apiserver pod source"
Jul 12 00:10:59.778334 kubelet[2553]: I0712 00:10:59.778296 2553 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jul 12 00:10:59.780917 kubelet[2553]: I0712 00:10:59.779231 2553 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1"
Jul 12 00:10:59.780917 kubelet[2553]: I0712 00:10:59.779635 2553 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Jul 12 00:10:59.780917 kubelet[2553]: I0712 00:10:59.780004 2553 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Jul 12 00:10:59.780917 kubelet[2553]: I0712 00:10:59.780028 2553 server.go:1287] "Started kubelet"
Jul 12 00:10:59.781846 kubelet[2553]: I0712 00:10:59.781803 2553 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jul 12 00:10:59.782508 kubelet[2553]: I0712 00:10:59.782491 2553 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jul 12 00:10:59.783014 kubelet[2553]: I0712 00:10:59.782495 2553 server.go:169] "Starting to listen" address="0.0.0.0" port=10250
Jul 12 00:10:59.783291 kubelet[2553]: E0712 00:10:59.783271 2553 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Jul 12 00:10:59.784173 kubelet[2553]: I0712 00:10:59.784155 2553 server.go:479] "Adding debug handlers to kubelet server"
Jul 12 00:10:59.784732 kubelet[2553]: I0712 00:10:59.784717 2553 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jul 12 00:10:59.784784 kubelet[2553]: I0712 00:10:59.784733 2553 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Jul 12 00:10:59.784905 kubelet[2553]: I0712 00:10:59.784867 2553 volume_manager.go:297] "Starting Kubelet Volume Manager"
Jul 12 00:10:59.788447 kubelet[2553]: I0712 00:10:59.788422 2553 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Jul 12 00:10:59.790490 kubelet[2553]: E0712 00:10:59.790458 2553 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Jul 12 00:10:59.791420 kubelet[2553]: I0712 00:10:59.791396 2553 factory.go:221] Registration of the systemd container factory successfully
Jul 12 00:10:59.791548 kubelet[2553]: I0712 00:10:59.791526 2553 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Jul 12 00:10:59.792787 kubelet[2553]: I0712 00:10:59.792767 2553 factory.go:221] Registration of the containerd container factory successfully
Jul 12 00:10:59.792997 kubelet[2553]: I0712 00:10:59.792979 2553 reconciler.go:26] "Reconciler: start to sync state"
Jul 12 00:10:59.805238 kubelet[2553]: I0712 00:10:59.805129 2553 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Jul 12 00:10:59.807323 kubelet[2553]: I0712 00:10:59.807302 2553 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Jul 12 00:10:59.807429 kubelet[2553]: I0712 00:10:59.807417 2553 status_manager.go:227] "Starting to sync pod status with apiserver"
Jul 12 00:10:59.807495 kubelet[2553]: I0712 00:10:59.807486 2553 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Jul 12 00:10:59.807584 kubelet[2553]: I0712 00:10:59.807574 2553 kubelet.go:2382] "Starting kubelet main sync loop"
Jul 12 00:10:59.807683 kubelet[2553]: E0712 00:10:59.807663 2553 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Jul 12 00:10:59.838995 kubelet[2553]: I0712 00:10:59.838961 2553 cpu_manager.go:221] "Starting CPU manager" policy="none"
Jul 12 00:10:59.838995 kubelet[2553]: I0712 00:10:59.838984 2553 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Jul 12 00:10:59.838995 kubelet[2553]: I0712 00:10:59.839002 2553 state_mem.go:36] "Initialized new in-memory state store"
Jul 12 00:10:59.839216 kubelet[2553]: I0712 00:10:59.839193 2553 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Jul 12 00:10:59.839244 kubelet[2553]: I0712 00:10:59.839209 2553 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Jul 12 00:10:59.839244 kubelet[2553]: I0712 00:10:59.839234 2553 policy_none.go:49] "None policy: Start"
Jul 12 00:10:59.839244 kubelet[2553]: I0712 00:10:59.839244 2553 memory_manager.go:186] "Starting memorymanager" policy="None"
Jul 12 00:10:59.839312 kubelet[2553]: I0712 00:10:59.839253 2553 state_mem.go:35] "Initializing new in-memory state store"
Jul 12 00:10:59.839554 kubelet[2553]: I0712 00:10:59.839528 2553 state_mem.go:75] "Updated machine memory state"
Jul 12 00:10:59.844074 kubelet[2553]: I0712 00:10:59.844040 2553 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Jul 12 00:10:59.844218 kubelet[2553]: I0712 00:10:59.844200 2553 eviction_manager.go:189] "Eviction manager: starting control loop"
Jul 12 00:10:59.844270 kubelet[2553]: I0712 00:10:59.844217 2553 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Jul 12 00:10:59.844594 kubelet[2553]: I0712 00:10:59.844439 2553 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Jul 12 00:10:59.845350 kubelet[2553]: E0712 00:10:59.845329 2553 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Jul 12 00:10:59.908713 kubelet[2553]: I0712 00:10:59.908678 2553 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
Jul 12 00:10:59.909003 kubelet[2553]: I0712 00:10:59.908835 2553 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Jul 12 00:10:59.909003 kubelet[2553]: I0712 00:10:59.908688 2553 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost"
Jul 12 00:10:59.948263 kubelet[2553]: I0712 00:10:59.948237 2553 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Jul 12 00:10:59.954992 kubelet[2553]: I0712 00:10:59.954404 2553 kubelet_node_status.go:124] "Node was previously registered" node="localhost"
Jul 12 00:10:59.954992 kubelet[2553]: I0712 00:10:59.954478 2553 kubelet_node_status.go:78] "Successfully registered node" node="localhost"
Jul 12 00:10:59.994336 kubelet[2553]: I0712 00:10:59.994285 2553 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/0d45604c48f1530587d9f3d2aad2dab9-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"0d45604c48f1530587d9f3d2aad2dab9\") " pod="kube-system/kube-apiserver-localhost"
Jul 12 00:10:59.994336 kubelet[2553]: I0712 00:10:59.994330 2553 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost"
Jul 12 00:10:59.994336 kubelet[2553]: I0712 00:10:59.994351 2553 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost"
Jul 12 00:10:59.994512 kubelet[2553]: I0712 00:10:59.994391 2553 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8a75e163f27396b2168da0f88f85f8a5-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"8a75e163f27396b2168da0f88f85f8a5\") " pod="kube-system/kube-scheduler-localhost"
Jul 12 00:10:59.994512 kubelet[2553]: I0712 00:10:59.994429 2553 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/0d45604c48f1530587d9f3d2aad2dab9-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"0d45604c48f1530587d9f3d2aad2dab9\") " pod="kube-system/kube-apiserver-localhost"
Jul 12 00:10:59.994512 kubelet[2553]: I0712 00:10:59.994448 2553 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/0d45604c48f1530587d9f3d2aad2dab9-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"0d45604c48f1530587d9f3d2aad2dab9\") " pod="kube-system/kube-apiserver-localhost"
Jul 12 00:10:59.994512 kubelet[2553]: I0712 00:10:59.994465 2553 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost"
Jul 12 00:10:59.994512 kubelet[2553]: I0712 00:10:59.994487 2553 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost"
Jul 12 00:10:59.994631 kubelet[2553]: I0712 00:10:59.994503 2553 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost"
Jul 12 00:11:00.214336 kubelet[2553]: E0712 00:11:00.214292 2553 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 12 00:11:00.214448 kubelet[2553]: E0712 00:11:00.214305 2553 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 12 00:11:00.214448 kubelet[2553]: E0712 00:11:00.214371 2553 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 12 00:11:00.356449 sudo[2587]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin
Jul 12 00:11:00.356720 sudo[2587]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0)
Jul 12 00:11:00.778663 kubelet[2553]: I0712 00:11:00.778623 2553 apiserver.go:52] "Watching apiserver"
Jul 12 00:11:00.788661 kubelet[2553]: I0712 00:11:00.788620 2553 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
Jul 12 00:11:00.790272 sudo[2587]: pam_unix(sudo:session): session closed for user root
Jul 12 00:11:00.827982 kubelet[2553]: I0712 00:11:00.827845 2553 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Jul 12 00:11:00.828834 kubelet[2553]: E0712 00:11:00.828040 2553 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 12 00:11:00.828834 kubelet[2553]: E0712 00:11:00.828329 2553 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 12 00:11:00.833814 kubelet[2553]: E0712 00:11:00.833764 2553 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost"
Jul 12 00:11:00.834154 kubelet[2553]: E0712 00:11:00.833930 2553 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 12 00:11:00.853762 kubelet[2553]: I0712 00:11:00.853657 2553 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.853641852 podStartE2EDuration="1.853641852s" podCreationTimestamp="2025-07-12 00:10:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-12 00:11:00.845587134 +0000 UTC m=+1.123728814" watchObservedRunningTime="2025-07-12 00:11:00.853641852 +0000 UTC m=+1.131783452"
Jul 12 00:11:00.861529 kubelet[2553]: I0712 00:11:00.861480 2553 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.861465441 podStartE2EDuration="1.861465441s" podCreationTimestamp="2025-07-12 00:10:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-12 00:11:00.853778272 +0000 UTC m=+1.131919873" watchObservedRunningTime="2025-07-12 00:11:00.861465441 +0000 UTC m=+1.139607041"
Jul 12 00:11:00.869740 kubelet[2553]: I0712 00:11:00.869693 2553 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.8696792759999998 podStartE2EDuration="1.869679276s" podCreationTimestamp="2025-07-12 00:10:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-12 00:11:00.861903723 +0000 UTC m=+1.140045323" watchObservedRunningTime="2025-07-12 00:11:00.869679276 +0000 UTC m=+1.147820876"
Jul 12 00:11:01.829046 kubelet[2553]: E0712 00:11:01.829008 2553 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 12 00:11:01.829421 kubelet[2553]: E0712 00:11:01.829078 2553 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 12 00:11:01.829421 kubelet[2553]: E0712 00:11:01.829147 2553 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 12 00:11:02.572397 sudo[1650]: pam_unix(sudo:session): session closed for user root
Jul 12 00:11:02.574063 sshd[1649]: Connection closed by 10.0.0.1 port 54652
Jul 12 00:11:02.574652 sshd-session[1646]: pam_unix(sshd:session): session closed for user core
Jul 12 00:11:02.578021 systemd[1]: sshd@6-10.0.0.137:22-10.0.0.1:54652.service: Deactivated successfully.
Jul 12 00:11:02.579772 systemd[1]: session-7.scope: Deactivated successfully.
Jul 12 00:11:02.580013 systemd[1]: session-7.scope: Consumed 8.508s CPU time, 258M memory peak.
Jul 12 00:11:02.581486 systemd-logind[1448]: Session 7 logged out. Waiting for processes to exit.
Jul 12 00:11:02.582418 systemd-logind[1448]: Removed session 7.
Jul 12 00:11:02.830411 kubelet[2553]: E0712 00:11:02.830308 2553 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 12 00:11:04.149372 kubelet[2553]: I0712 00:11:04.149315 2553 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Jul 12 00:11:04.149985 containerd[1467]: time="2025-07-12T00:11:04.149863879Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Jul 12 00:11:04.150282 kubelet[2553]: I0712 00:11:04.150102 2553 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Jul 12 00:11:04.964555 kubelet[2553]: E0712 00:11:04.964520 2553 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 12 00:11:04.964891 systemd[1]: Created slice kubepods-besteffort-podc6ad9497_e09e_4d13_aca8_12f08290c8a2.slice - libcontainer container kubepods-besteffort-podc6ad9497_e09e_4d13_aca8_12f08290c8a2.slice.
Jul 12 00:11:04.980638 systemd[1]: Created slice kubepods-burstable-podf40d3a62_ffe1_48b1_90a2_9b9253209bef.slice - libcontainer container kubepods-burstable-podf40d3a62_ffe1_48b1_90a2_9b9253209bef.slice.
Jul 12 00:11:05.026927 kubelet[2553]: I0712 00:11:05.026668 2553 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/f40d3a62-ffe1-48b1-90a2-9b9253209bef-cni-path\") pod \"cilium-nkltq\" (UID: \"f40d3a62-ffe1-48b1-90a2-9b9253209bef\") " pod="kube-system/cilium-nkltq"
Jul 12 00:11:05.026927 kubelet[2553]: I0712 00:11:05.026709 2553 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f40d3a62-ffe1-48b1-90a2-9b9253209bef-cilium-config-path\") pod \"cilium-nkltq\" (UID: \"f40d3a62-ffe1-48b1-90a2-9b9253209bef\") " pod="kube-system/cilium-nkltq"
Jul 12 00:11:05.026927 kubelet[2553]: I0712 00:11:05.026728 2553 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qf6xz\" (UniqueName: \"kubernetes.io/projected/f40d3a62-ffe1-48b1-90a2-9b9253209bef-kube-api-access-qf6xz\") pod \"cilium-nkltq\" (UID: \"f40d3a62-ffe1-48b1-90a2-9b9253209bef\") " pod="kube-system/cilium-nkltq"
Jul 12 00:11:05.026927 kubelet[2553]: I0712 00:11:05.026745 2553 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/f40d3a62-ffe1-48b1-90a2-9b9253209bef-cilium-cgroup\") pod \"cilium-nkltq\" (UID: \"f40d3a62-ffe1-48b1-90a2-9b9253209bef\") " pod="kube-system/cilium-nkltq"
Jul 12 00:11:05.026927 kubelet[2553]: I0712 00:11:05.026761 2553 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/f40d3a62-ffe1-48b1-90a2-9b9253209bef-host-proc-sys-net\") pod \"cilium-nkltq\" (UID: \"f40d3a62-ffe1-48b1-90a2-9b9253209bef\") " pod="kube-system/cilium-nkltq"
Jul 12 00:11:05.026927 kubelet[2553]: I0712 00:11:05.026784 2553 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f40d3a62-ffe1-48b1-90a2-9b9253209bef-lib-modules\") pod \"cilium-nkltq\" (UID: \"f40d3a62-ffe1-48b1-90a2-9b9253209bef\") " pod="kube-system/cilium-nkltq"
Jul 12 00:11:05.027178 kubelet[2553]: I0712 00:11:05.026800 2553 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/f40d3a62-ffe1-48b1-90a2-9b9253209bef-host-proc-sys-kernel\") pod \"cilium-nkltq\" (UID: \"f40d3a62-ffe1-48b1-90a2-9b9253209bef\") " pod="kube-system/cilium-nkltq"
Jul 12 00:11:05.027178 kubelet[2553]: I0712 00:11:05.026821 2553 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/f40d3a62-ffe1-48b1-90a2-9b9253209bef-hubble-tls\") pod \"cilium-nkltq\" (UID: \"f40d3a62-ffe1-48b1-90a2-9b9253209bef\") " pod="kube-system/cilium-nkltq"
Jul 12 00:11:05.027178 kubelet[2553]: I0712 00:11:05.026841 2553 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/c6ad9497-e09e-4d13-aca8-12f08290c8a2-kube-proxy\") pod \"kube-proxy-mmvlm\" (UID: \"c6ad9497-e09e-4d13-aca8-12f08290c8a2\") " pod="kube-system/kube-proxy-mmvlm"
Jul 12 00:11:05.027178 kubelet[2553]: I0712 00:11:05.026896 2553 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lk7vj\" (UniqueName: \"kubernetes.io/projected/c6ad9497-e09e-4d13-aca8-12f08290c8a2-kube-api-access-lk7vj\") pod \"kube-proxy-mmvlm\" (UID: \"c6ad9497-e09e-4d13-aca8-12f08290c8a2\") " pod="kube-system/kube-proxy-mmvlm"
Jul 12 00:11:05.027178 kubelet[2553]: I0712 00:11:05.026913 2553 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/f40d3a62-ffe1-48b1-90a2-9b9253209bef-clustermesh-secrets\") pod \"cilium-nkltq\" (UID: \"f40d3a62-ffe1-48b1-90a2-9b9253209bef\") " pod="kube-system/cilium-nkltq"
Jul 12 00:11:05.027279 kubelet[2553]: I0712 00:11:05.026930 2553 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/f40d3a62-ffe1-48b1-90a2-9b9253209bef-cilium-run\") pod \"cilium-nkltq\" (UID: \"f40d3a62-ffe1-48b1-90a2-9b9253209bef\") " pod="kube-system/cilium-nkltq"
Jul 12 00:11:05.027279 kubelet[2553]: I0712 00:11:05.026947 2553 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c6ad9497-e09e-4d13-aca8-12f08290c8a2-xtables-lock\") pod \"kube-proxy-mmvlm\" (UID: \"c6ad9497-e09e-4d13-aca8-12f08290c8a2\") " pod="kube-system/kube-proxy-mmvlm"
Jul 12 00:11:05.027279 kubelet[2553]: I0712 00:11:05.026974 2553 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f40d3a62-ffe1-48b1-90a2-9b9253209bef-xtables-lock\") pod \"cilium-nkltq\" (UID: \"f40d3a62-ffe1-48b1-90a2-9b9253209bef\") " pod="kube-system/cilium-nkltq"
Jul 12 00:11:05.027279 kubelet[2553]: I0712 00:11:05.026993 2553 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/f40d3a62-ffe1-48b1-90a2-9b9253209bef-hostproc\") pod \"cilium-nkltq\" (UID: \"f40d3a62-ffe1-48b1-90a2-9b9253209bef\") " pod="kube-system/cilium-nkltq"
Jul 12 00:11:05.027279 kubelet[2553]: I0712 00:11:05.027014 2553 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/f40d3a62-ffe1-48b1-90a2-9b9253209bef-bpf-maps\") pod \"cilium-nkltq\" (UID: \"f40d3a62-ffe1-48b1-90a2-9b9253209bef\") " pod="kube-system/cilium-nkltq"
Jul 12 00:11:05.027279 kubelet[2553]: I0712 00:11:05.027034 2553 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c6ad9497-e09e-4d13-aca8-12f08290c8a2-lib-modules\") pod \"kube-proxy-mmvlm\" (UID: \"c6ad9497-e09e-4d13-aca8-12f08290c8a2\") " pod="kube-system/kube-proxy-mmvlm"
Jul 12 00:11:05.027393 kubelet[2553]: I0712 00:11:05.027062 2553 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/f40d3a62-ffe1-48b1-90a2-9b9253209bef-etc-cni-netd\") pod \"cilium-nkltq\" (UID: \"f40d3a62-ffe1-48b1-90a2-9b9253209bef\") " pod="kube-system/cilium-nkltq"
Jul 12 00:11:05.212014 systemd[1]: Created slice kubepods-besteffort-pod19e30619_8e6b_4e1d_a358_6ba654a268c3.slice - libcontainer container kubepods-besteffort-pod19e30619_8e6b_4e1d_a358_6ba654a268c3.slice.
Jul 12 00:11:05.228968 kubelet[2553]: I0712 00:11:05.228833 2553 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/19e30619-8e6b-4e1d-a358-6ba654a268c3-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-m7gx8\" (UID: \"19e30619-8e6b-4e1d-a358-6ba654a268c3\") " pod="kube-system/cilium-operator-6c4d7847fc-m7gx8"
Jul 12 00:11:05.228968 kubelet[2553]: I0712 00:11:05.228908 2553 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wqhjd\" (UniqueName: \"kubernetes.io/projected/19e30619-8e6b-4e1d-a358-6ba654a268c3-kube-api-access-wqhjd\") pod \"cilium-operator-6c4d7847fc-m7gx8\" (UID: \"19e30619-8e6b-4e1d-a358-6ba654a268c3\") " pod="kube-system/cilium-operator-6c4d7847fc-m7gx8"
Jul 12 00:11:05.277501 kubelet[2553]: E0712 00:11:05.277458 2553 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 12 00:11:05.278149 containerd[1467]: time="2025-07-12T00:11:05.278097086Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-mmvlm,Uid:c6ad9497-e09e-4d13-aca8-12f08290c8a2,Namespace:kube-system,Attempt:0,}"
Jul 12 00:11:05.283638 kubelet[2553]: E0712 00:11:05.283594 2553 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 12 00:11:05.284350 containerd[1467]: time="2025-07-12T00:11:05.284309982Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-nkltq,Uid:f40d3a62-ffe1-48b1-90a2-9b9253209bef,Namespace:kube-system,Attempt:0,}"
Jul 12 00:11:05.351996 containerd[1467]: time="2025-07-12T00:11:05.351901329Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..."
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 12 00:11:05.351996 containerd[1467]: time="2025-07-12T00:11:05.351981335Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 12 00:11:05.351996 containerd[1467]: time="2025-07-12T00:11:05.351998104Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 12 00:11:05.352609 containerd[1467]: time="2025-07-12T00:11:05.352317328Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 12 00:11:05.352848 containerd[1467]: time="2025-07-12T00:11:05.352733808Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 12 00:11:05.353368 containerd[1467]: time="2025-07-12T00:11:05.353315343Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 12 00:11:05.353368 containerd[1467]: time="2025-07-12T00:11:05.353347481Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 12 00:11:05.355214 containerd[1467]: time="2025-07-12T00:11:05.353552119Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 12 00:11:05.370117 systemd[1]: Started cri-containerd-0cf11b489a1fcf3fd7563b6fe896e3abc2972cffc98084ca2922ca92e0e6bd9f.scope - libcontainer container 0cf11b489a1fcf3fd7563b6fe896e3abc2972cffc98084ca2922ca92e0e6bd9f. Jul 12 00:11:05.372467 systemd[1]: Started cri-containerd-6e90128a78b9ae6b81e46631cd1d0f5fb763ed297c04eff53d607f7628187989.scope - libcontainer container 6e90128a78b9ae6b81e46631cd1d0f5fb763ed297c04eff53d607f7628187989. 
Jul 12 00:11:05.395401 containerd[1467]: time="2025-07-12T00:11:05.395349138Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-mmvlm,Uid:c6ad9497-e09e-4d13-aca8-12f08290c8a2,Namespace:kube-system,Attempt:0,} returns sandbox id \"0cf11b489a1fcf3fd7563b6fe896e3abc2972cffc98084ca2922ca92e0e6bd9f\"" Jul 12 00:11:05.396441 kubelet[2553]: E0712 00:11:05.396372 2553 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:11:05.398936 containerd[1467]: time="2025-07-12T00:11:05.398898821Z" level=info msg="CreateContainer within sandbox \"0cf11b489a1fcf3fd7563b6fe896e3abc2972cffc98084ca2922ca92e0e6bd9f\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jul 12 00:11:05.400182 containerd[1467]: time="2025-07-12T00:11:05.400102995Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-nkltq,Uid:f40d3a62-ffe1-48b1-90a2-9b9253209bef,Namespace:kube-system,Attempt:0,} returns sandbox id \"6e90128a78b9ae6b81e46631cd1d0f5fb763ed297c04eff53d607f7628187989\"" Jul 12 00:11:05.400697 kubelet[2553]: E0712 00:11:05.400661 2553 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:11:05.403404 containerd[1467]: time="2025-07-12T00:11:05.403363952Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Jul 12 00:11:05.414668 containerd[1467]: time="2025-07-12T00:11:05.414621111Z" level=info msg="CreateContainer within sandbox \"0cf11b489a1fcf3fd7563b6fe896e3abc2972cffc98084ca2922ca92e0e6bd9f\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"5d3f585259095069b8f9311413ced855d6e20e40f60b990389b611a66ed9c050\"" Jul 12 00:11:05.415167 containerd[1467]: time="2025-07-12T00:11:05.415135047Z" 
level=info msg="StartContainer for \"5d3f585259095069b8f9311413ced855d6e20e40f60b990389b611a66ed9c050\"" Jul 12 00:11:05.446066 systemd[1]: Started cri-containerd-5d3f585259095069b8f9311413ced855d6e20e40f60b990389b611a66ed9c050.scope - libcontainer container 5d3f585259095069b8f9311413ced855d6e20e40f60b990389b611a66ed9c050. Jul 12 00:11:05.471306 containerd[1467]: time="2025-07-12T00:11:05.471258753Z" level=info msg="StartContainer for \"5d3f585259095069b8f9311413ced855d6e20e40f60b990389b611a66ed9c050\" returns successfully" Jul 12 00:11:05.520095 kubelet[2553]: E0712 00:11:05.519978 2553 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:11:05.520845 containerd[1467]: time="2025-07-12T00:11:05.520795948Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-m7gx8,Uid:19e30619-8e6b-4e1d-a358-6ba654a268c3,Namespace:kube-system,Attempt:0,}" Jul 12 00:11:05.552973 containerd[1467]: time="2025-07-12T00:11:05.552856963Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 12 00:11:05.552973 containerd[1467]: time="2025-07-12T00:11:05.552930405Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 12 00:11:05.552973 containerd[1467]: time="2025-07-12T00:11:05.552942452Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 12 00:11:05.553355 containerd[1467]: time="2025-07-12T00:11:05.553305661Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 12 00:11:05.573027 systemd[1]: Started cri-containerd-c508a2859e30a7977f5c1a8f9d181e62c630bb496aa1bdc46da1c59046238940.scope - libcontainer container c508a2859e30a7977f5c1a8f9d181e62c630bb496aa1bdc46da1c59046238940. Jul 12 00:11:05.602535 containerd[1467]: time="2025-07-12T00:11:05.602490533Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-m7gx8,Uid:19e30619-8e6b-4e1d-a358-6ba654a268c3,Namespace:kube-system,Attempt:0,} returns sandbox id \"c508a2859e30a7977f5c1a8f9d181e62c630bb496aa1bdc46da1c59046238940\"" Jul 12 00:11:05.603915 kubelet[2553]: E0712 00:11:05.603177 2553 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:11:05.839317 kubelet[2553]: E0712 00:11:05.839197 2553 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:11:05.850027 kubelet[2553]: I0712 00:11:05.849976 2553 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-mmvlm" podStartSLOduration=1.8499598210000001 podStartE2EDuration="1.849959821s" podCreationTimestamp="2025-07-12 00:11:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-12 00:11:05.84971608 +0000 UTC m=+6.127857680" watchObservedRunningTime="2025-07-12 00:11:05.849959821 +0000 UTC m=+6.128101381" Jul 12 00:11:09.516026 kubelet[2553]: E0712 00:11:09.515991 2553 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:11:09.846680 kubelet[2553]: E0712 00:11:09.846484 2553 dns.go:153] "Nameserver limits exceeded" 
err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:11:10.856389 kubelet[2553]: E0712 00:11:10.855815 2553 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:11:11.311730 kubelet[2553]: E0712 00:11:11.310551 2553 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:11:14.971606 kubelet[2553]: E0712 00:11:14.971558 2553 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:11:15.883274 kubelet[2553]: E0712 00:11:15.883235 2553 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:11:16.145258 update_engine[1454]: I20250712 00:11:16.144954 1454 update_attempter.cc:509] Updating boot flags... Jul 12 00:11:16.256932 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (2935) Jul 12 00:11:16.316908 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (2936) Jul 12 00:11:16.384167 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3231808165.mount: Deactivated successfully. 
Jul 12 00:11:17.533699 containerd[1467]: time="2025-07-12T00:11:17.533648430Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:11:17.534635 containerd[1467]: time="2025-07-12T00:11:17.534115572Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157646710" Jul 12 00:11:17.534923 containerd[1467]: time="2025-07-12T00:11:17.534893328Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:11:17.536583 containerd[1467]: time="2025-07-12T00:11:17.536555074Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 12.133145739s" Jul 12 00:11:17.536827 containerd[1467]: time="2025-07-12T00:11:17.536721164Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" Jul 12 00:11:17.540422 containerd[1467]: time="2025-07-12T00:11:17.539698549Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Jul 12 00:11:17.541914 containerd[1467]: time="2025-07-12T00:11:17.541856966Z" level=info msg="CreateContainer within sandbox \"6e90128a78b9ae6b81e46631cd1d0f5fb763ed297c04eff53d607f7628187989\" for container 
&ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jul 12 00:11:17.580897 containerd[1467]: time="2025-07-12T00:11:17.580770597Z" level=info msg="CreateContainer within sandbox \"6e90128a78b9ae6b81e46631cd1d0f5fb763ed297c04eff53d607f7628187989\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"eea4867f3d0283eddc38bb38359c6ab395c8a215013eb9c45a3ac1974ced3297\"" Jul 12 00:11:17.584025 containerd[1467]: time="2025-07-12T00:11:17.583209659Z" level=info msg="StartContainer for \"eea4867f3d0283eddc38bb38359c6ab395c8a215013eb9c45a3ac1974ced3297\"" Jul 12 00:11:17.612053 systemd[1]: Started cri-containerd-eea4867f3d0283eddc38bb38359c6ab395c8a215013eb9c45a3ac1974ced3297.scope - libcontainer container eea4867f3d0283eddc38bb38359c6ab395c8a215013eb9c45a3ac1974ced3297. Jul 12 00:11:17.633141 containerd[1467]: time="2025-07-12T00:11:17.633095387Z" level=info msg="StartContainer for \"eea4867f3d0283eddc38bb38359c6ab395c8a215013eb9c45a3ac1974ced3297\" returns successfully" Jul 12 00:11:17.690344 systemd[1]: cri-containerd-eea4867f3d0283eddc38bb38359c6ab395c8a215013eb9c45a3ac1974ced3297.scope: Deactivated successfully. 
Jul 12 00:11:17.866478 containerd[1467]: time="2025-07-12T00:11:17.855987637Z" level=info msg="shim disconnected" id=eea4867f3d0283eddc38bb38359c6ab395c8a215013eb9c45a3ac1974ced3297 namespace=k8s.io Jul 12 00:11:17.866478 containerd[1467]: time="2025-07-12T00:11:17.866415408Z" level=warning msg="cleaning up after shim disconnected" id=eea4867f3d0283eddc38bb38359c6ab395c8a215013eb9c45a3ac1974ced3297 namespace=k8s.io Jul 12 00:11:17.866478 containerd[1467]: time="2025-07-12T00:11:17.866430292Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 12 00:11:17.882493 kubelet[2553]: E0712 00:11:17.882442 2553 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:11:17.885214 containerd[1467]: time="2025-07-12T00:11:17.884991736Z" level=info msg="CreateContainer within sandbox \"6e90128a78b9ae6b81e46631cd1d0f5fb763ed297c04eff53d607f7628187989\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jul 12 00:11:17.898732 containerd[1467]: time="2025-07-12T00:11:17.898642086Z" level=info msg="CreateContainer within sandbox \"6e90128a78b9ae6b81e46631cd1d0f5fb763ed297c04eff53d607f7628187989\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"71c06c6fcda89d659dc4ade1586cbc776b8b72f2056d801ab8b0911bef2d8471\"" Jul 12 00:11:17.899344 containerd[1467]: time="2025-07-12T00:11:17.899308529Z" level=info msg="StartContainer for \"71c06c6fcda89d659dc4ade1586cbc776b8b72f2056d801ab8b0911bef2d8471\"" Jul 12 00:11:17.919072 systemd[1]: Started cri-containerd-71c06c6fcda89d659dc4ade1586cbc776b8b72f2056d801ab8b0911bef2d8471.scope - libcontainer container 71c06c6fcda89d659dc4ade1586cbc776b8b72f2056d801ab8b0911bef2d8471. 
Jul 12 00:11:17.941409 containerd[1467]: time="2025-07-12T00:11:17.941350872Z" level=info msg="StartContainer for \"71c06c6fcda89d659dc4ade1586cbc776b8b72f2056d801ab8b0911bef2d8471\" returns successfully" Jul 12 00:11:17.970729 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jul 12 00:11:17.970980 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jul 12 00:11:17.971138 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Jul 12 00:11:17.981242 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jul 12 00:11:17.981493 systemd[1]: cri-containerd-71c06c6fcda89d659dc4ade1586cbc776b8b72f2056d801ab8b0911bef2d8471.scope: Deactivated successfully. Jul 12 00:11:17.994936 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jul 12 00:11:18.014510 containerd[1467]: time="2025-07-12T00:11:18.014327465Z" level=info msg="shim disconnected" id=71c06c6fcda89d659dc4ade1586cbc776b8b72f2056d801ab8b0911bef2d8471 namespace=k8s.io Jul 12 00:11:18.014510 containerd[1467]: time="2025-07-12T00:11:18.014379520Z" level=warning msg="cleaning up after shim disconnected" id=71c06c6fcda89d659dc4ade1586cbc776b8b72f2056d801ab8b0911bef2d8471 namespace=k8s.io Jul 12 00:11:18.014510 containerd[1467]: time="2025-07-12T00:11:18.014388323Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 12 00:11:18.577998 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-eea4867f3d0283eddc38bb38359c6ab395c8a215013eb9c45a3ac1974ced3297-rootfs.mount: Deactivated successfully. Jul 12 00:11:18.592592 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2650539030.mount: Deactivated successfully. 
Jul 12 00:11:18.886961 kubelet[2553]: E0712 00:11:18.886498 2553 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:11:18.893506 containerd[1467]: time="2025-07-12T00:11:18.892723069Z" level=info msg="CreateContainer within sandbox \"6e90128a78b9ae6b81e46631cd1d0f5fb763ed297c04eff53d607f7628187989\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jul 12 00:11:18.933632 containerd[1467]: time="2025-07-12T00:11:18.933580898Z" level=info msg="CreateContainer within sandbox \"6e90128a78b9ae6b81e46631cd1d0f5fb763ed297c04eff53d607f7628187989\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"e5bd6a329381734a96f2795c9a04ffcbcfa8091127b62b299c9771b35aa09ebb\"" Jul 12 00:11:18.934220 containerd[1467]: time="2025-07-12T00:11:18.934182593Z" level=info msg="StartContainer for \"e5bd6a329381734a96f2795c9a04ffcbcfa8091127b62b299c9771b35aa09ebb\"" Jul 12 00:11:18.948437 containerd[1467]: time="2025-07-12T00:11:18.948377462Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17135306" Jul 12 00:11:18.948437 containerd[1467]: time="2025-07-12T00:11:18.948386985Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:11:18.949358 containerd[1467]: time="2025-07-12T00:11:18.949327497Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:11:18.951118 containerd[1467]: time="2025-07-12T00:11:18.951070002Z" level=info msg="Pulled image 
\"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 1.411335441s" Jul 12 00:11:18.951163 containerd[1467]: time="2025-07-12T00:11:18.951109693Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" Jul 12 00:11:18.959235 containerd[1467]: time="2025-07-12T00:11:18.958463903Z" level=info msg="CreateContainer within sandbox \"c508a2859e30a7977f5c1a8f9d181e62c630bb496aa1bdc46da1c59046238940\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Jul 12 00:11:18.968960 containerd[1467]: time="2025-07-12T00:11:18.968920850Z" level=info msg="CreateContainer within sandbox \"c508a2859e30a7977f5c1a8f9d181e62c630bb496aa1bdc46da1c59046238940\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"37dcffde59a49a81f48cd216838af272a7eb0c5008a86b993c7440c26623c62d\"" Jul 12 00:11:18.970522 containerd[1467]: time="2025-07-12T00:11:18.969662225Z" level=info msg="StartContainer for \"37dcffde59a49a81f48cd216838af272a7eb0c5008a86b993c7440c26623c62d\"" Jul 12 00:11:18.983136 systemd[1]: Started cri-containerd-e5bd6a329381734a96f2795c9a04ffcbcfa8091127b62b299c9771b35aa09ebb.scope - libcontainer container e5bd6a329381734a96f2795c9a04ffcbcfa8091127b62b299c9771b35aa09ebb. Jul 12 00:11:19.004039 systemd[1]: Started cri-containerd-37dcffde59a49a81f48cd216838af272a7eb0c5008a86b993c7440c26623c62d.scope - libcontainer container 37dcffde59a49a81f48cd216838af272a7eb0c5008a86b993c7440c26623c62d. 
Jul 12 00:11:19.036231 containerd[1467]: time="2025-07-12T00:11:19.036185045Z" level=info msg="StartContainer for \"e5bd6a329381734a96f2795c9a04ffcbcfa8091127b62b299c9771b35aa09ebb\" returns successfully" Jul 12 00:11:19.036777 containerd[1467]: time="2025-07-12T00:11:19.036746600Z" level=info msg="StartContainer for \"37dcffde59a49a81f48cd216838af272a7eb0c5008a86b993c7440c26623c62d\" returns successfully" Jul 12 00:11:19.047189 systemd[1]: cri-containerd-e5bd6a329381734a96f2795c9a04ffcbcfa8091127b62b299c9771b35aa09ebb.scope: Deactivated successfully. Jul 12 00:11:19.112461 containerd[1467]: time="2025-07-12T00:11:19.112383710Z" level=info msg="shim disconnected" id=e5bd6a329381734a96f2795c9a04ffcbcfa8091127b62b299c9771b35aa09ebb namespace=k8s.io Jul 12 00:11:19.112461 containerd[1467]: time="2025-07-12T00:11:19.112451448Z" level=warning msg="cleaning up after shim disconnected" id=e5bd6a329381734a96f2795c9a04ffcbcfa8091127b62b299c9771b35aa09ebb namespace=k8s.io Jul 12 00:11:19.112461 containerd[1467]: time="2025-07-12T00:11:19.112461651Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 12 00:11:19.890373 kubelet[2553]: E0712 00:11:19.890339 2553 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:11:19.893666 containerd[1467]: time="2025-07-12T00:11:19.892799923Z" level=info msg="CreateContainer within sandbox \"6e90128a78b9ae6b81e46631cd1d0f5fb763ed297c04eff53d607f7628187989\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jul 12 00:11:19.894156 kubelet[2553]: E0712 00:11:19.892968 2553 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:11:19.918172 containerd[1467]: time="2025-07-12T00:11:19.918124150Z" level=info msg="CreateContainer within sandbox 
\"6e90128a78b9ae6b81e46631cd1d0f5fb763ed297c04eff53d607f7628187989\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"587c849f72f5c89f29e4541be8974dbe8e87d2a52598313cb545c16f7356502d\"" Jul 12 00:11:19.918913 containerd[1467]: time="2025-07-12T00:11:19.918591319Z" level=info msg="StartContainer for \"587c849f72f5c89f29e4541be8974dbe8e87d2a52598313cb545c16f7356502d\"" Jul 12 00:11:19.956070 systemd[1]: Started cri-containerd-587c849f72f5c89f29e4541be8974dbe8e87d2a52598313cb545c16f7356502d.scope - libcontainer container 587c849f72f5c89f29e4541be8974dbe8e87d2a52598313cb545c16f7356502d. Jul 12 00:11:19.978145 systemd[1]: cri-containerd-587c849f72f5c89f29e4541be8974dbe8e87d2a52598313cb545c16f7356502d.scope: Deactivated successfully. Jul 12 00:11:19.982851 containerd[1467]: time="2025-07-12T00:11:19.982732377Z" level=info msg="StartContainer for \"587c849f72f5c89f29e4541be8974dbe8e87d2a52598313cb545c16f7356502d\" returns successfully" Jul 12 00:11:20.001734 containerd[1467]: time="2025-07-12T00:11:20.001669672Z" level=info msg="shim disconnected" id=587c849f72f5c89f29e4541be8974dbe8e87d2a52598313cb545c16f7356502d namespace=k8s.io Jul 12 00:11:20.001734 containerd[1467]: time="2025-07-12T00:11:20.001722406Z" level=warning msg="cleaning up after shim disconnected" id=587c849f72f5c89f29e4541be8974dbe8e87d2a52598313cb545c16f7356502d namespace=k8s.io Jul 12 00:11:20.001734 containerd[1467]: time="2025-07-12T00:11:20.001730688Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 12 00:11:20.578714 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-587c849f72f5c89f29e4541be8974dbe8e87d2a52598313cb545c16f7356502d-rootfs.mount: Deactivated successfully. 
Jul 12 00:11:20.896982 kubelet[2553]: E0712 00:11:20.896721 2553 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:11:20.897644 kubelet[2553]: E0712 00:11:20.897349 2553 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:11:20.900769 containerd[1467]: time="2025-07-12T00:11:20.900495567Z" level=info msg="CreateContainer within sandbox \"6e90128a78b9ae6b81e46631cd1d0f5fb763ed297c04eff53d607f7628187989\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jul 12 00:11:20.915819 kubelet[2553]: I0712 00:11:20.915592 2553 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-m7gx8" podStartSLOduration=2.568342681 podStartE2EDuration="15.915574935s" podCreationTimestamp="2025-07-12 00:11:05 +0000 UTC" firstStartedPulling="2025-07-12 00:11:05.60453635 +0000 UTC m=+5.882677910" lastFinishedPulling="2025-07-12 00:11:18.951768564 +0000 UTC m=+19.229910164" observedRunningTime="2025-07-12 00:11:19.923774389 +0000 UTC m=+20.201915989" watchObservedRunningTime="2025-07-12 00:11:20.915574935 +0000 UTC m=+21.193716615" Jul 12 00:11:20.916914 containerd[1467]: time="2025-07-12T00:11:20.916354380Z" level=info msg="CreateContainer within sandbox \"6e90128a78b9ae6b81e46631cd1d0f5fb763ed297c04eff53d607f7628187989\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"f318f1085974a0fd32342894c9e917b924f4081a914ed1563320de632402e93c\"" Jul 12 00:11:20.916914 containerd[1467]: time="2025-07-12T00:11:20.916847310Z" level=info msg="StartContainer for \"f318f1085974a0fd32342894c9e917b924f4081a914ed1563320de632402e93c\"" Jul 12 00:11:20.950065 systemd[1]: Started 
cri-containerd-f318f1085974a0fd32342894c9e917b924f4081a914ed1563320de632402e93c.scope - libcontainer container f318f1085974a0fd32342894c9e917b924f4081a914ed1563320de632402e93c.
Jul 12 00:11:20.976231 containerd[1467]: time="2025-07-12T00:11:20.976184965Z" level=info msg="StartContainer for \"f318f1085974a0fd32342894c9e917b924f4081a914ed1563320de632402e93c\" returns successfully"
Jul 12 00:11:21.159059 kubelet[2553]: I0712 00:11:21.159014 2553 kubelet_node_status.go:501] "Fast updating node status as it just became ready"
Jul 12 00:11:21.197725 systemd[1]: Created slice kubepods-burstable-podf9e05f32_3674_4167_a6d5_1d2035735573.slice - libcontainer container kubepods-burstable-podf9e05f32_3674_4167_a6d5_1d2035735573.slice.
Jul 12 00:11:21.206697 systemd[1]: Created slice kubepods-burstable-podfb320027_77c1_461e_a9a7_060a4872c311.slice - libcontainer container kubepods-burstable-podfb320027_77c1_461e_a9a7_060a4872c311.slice.
Jul 12 00:11:21.245749 kubelet[2553]: I0712 00:11:21.245658 2553 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fb320027-77c1-461e-a9a7-060a4872c311-config-volume\") pod \"coredns-668d6bf9bc-84j52\" (UID: \"fb320027-77c1-461e-a9a7-060a4872c311\") " pod="kube-system/coredns-668d6bf9bc-84j52"
Jul 12 00:11:21.245749 kubelet[2553]: I0712 00:11:21.245697 2553 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lngt4\" (UniqueName: \"kubernetes.io/projected/fb320027-77c1-461e-a9a7-060a4872c311-kube-api-access-lngt4\") pod \"coredns-668d6bf9bc-84j52\" (UID: \"fb320027-77c1-461e-a9a7-060a4872c311\") " pod="kube-system/coredns-668d6bf9bc-84j52"
Jul 12 00:11:21.245749 kubelet[2553]: I0712 00:11:21.245729 2553 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rwsz9\" (UniqueName: \"kubernetes.io/projected/f9e05f32-3674-4167-a6d5-1d2035735573-kube-api-access-rwsz9\") pod \"coredns-668d6bf9bc-9wqh2\" (UID: \"f9e05f32-3674-4167-a6d5-1d2035735573\") " pod="kube-system/coredns-668d6bf9bc-9wqh2"
Jul 12 00:11:21.246036 kubelet[2553]: I0712 00:11:21.245789 2553 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f9e05f32-3674-4167-a6d5-1d2035735573-config-volume\") pod \"coredns-668d6bf9bc-9wqh2\" (UID: \"f9e05f32-3674-4167-a6d5-1d2035735573\") " pod="kube-system/coredns-668d6bf9bc-9wqh2"
Jul 12 00:11:21.501662 kubelet[2553]: E0712 00:11:21.501475 2553 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 12 00:11:21.503395 containerd[1467]: time="2025-07-12T00:11:21.502713883Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-9wqh2,Uid:f9e05f32-3674-4167-a6d5-1d2035735573,Namespace:kube-system,Attempt:0,}"
Jul 12 00:11:21.508908 kubelet[2553]: E0712 00:11:21.508869 2553 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 12 00:11:21.509863 containerd[1467]: time="2025-07-12T00:11:21.509412886Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-84j52,Uid:fb320027-77c1-461e-a9a7-060a4872c311,Namespace:kube-system,Attempt:0,}"
Jul 12 00:11:21.902850 kubelet[2553]: E0712 00:11:21.902729 2553 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 12 00:11:22.905221 kubelet[2553]: E0712 00:11:22.905168 2553 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 12 00:11:23.179034 systemd-networkd[1399]: cilium_host: Link UP
Jul 12 00:11:23.179153 systemd-networkd[1399]: cilium_net: Link UP
Jul 12 00:11:23.179155 systemd-networkd[1399]: cilium_net: Gained carrier
Jul 12 00:11:23.179322 systemd-networkd[1399]: cilium_host: Gained carrier
Jul 12 00:11:23.239983 systemd-networkd[1399]: cilium_net: Gained IPv6LL
Jul 12 00:11:23.271971 systemd-networkd[1399]: cilium_vxlan: Link UP
Jul 12 00:11:23.271977 systemd-networkd[1399]: cilium_vxlan: Gained carrier
Jul 12 00:11:23.585970 kernel: NET: Registered PF_ALG protocol family
Jul 12 00:11:23.907375 kubelet[2553]: E0712 00:11:23.906960 2553 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 12 00:11:24.206061 systemd-networkd[1399]: cilium_host: Gained IPv6LL
Jul 12 00:11:24.227266 systemd-networkd[1399]: lxc_health: Link UP
Jul 12 00:11:24.232049 systemd-networkd[1399]: lxc_health: Gained carrier
Jul 12 00:11:24.356919 kernel: eth0: renamed from tmp819d0
Jul 12 00:11:24.362041 kernel: eth0: renamed from tmp08e97
Jul 12 00:11:24.369365 systemd-networkd[1399]: lxc789b9af96665: Link UP
Jul 12 00:11:24.371664 systemd-networkd[1399]: lxc08d8d2044a72: Link UP
Jul 12 00:11:24.373147 systemd-networkd[1399]: lxc08d8d2044a72: Gained carrier
Jul 12 00:11:24.373326 systemd-networkd[1399]: lxc789b9af96665: Gained carrier
Jul 12 00:11:24.654007 systemd-networkd[1399]: cilium_vxlan: Gained IPv6LL
Jul 12 00:11:25.301266 kubelet[2553]: E0712 00:11:25.301219 2553 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 12 00:11:25.319576 kubelet[2553]: I0712 00:11:25.319469 2553 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-nkltq" podStartSLOduration=9.18048705 podStartE2EDuration="21.31886687s" podCreationTimestamp="2025-07-12 00:11:04 +0000 UTC" firstStartedPulling="2025-07-12 00:11:05.401153479 +0000 UTC m=+5.679295079" lastFinishedPulling="2025-07-12 00:11:17.539533299 +0000 UTC m=+17.817674899" observedRunningTime="2025-07-12 00:11:21.918644403 +0000 UTC m=+22.196786003" watchObservedRunningTime="2025-07-12 00:11:25.31886687 +0000 UTC m=+25.597008430"
Jul 12 00:11:25.613044 systemd-networkd[1399]: lxc_health: Gained IPv6LL
Jul 12 00:11:25.912915 kubelet[2553]: E0712 00:11:25.910350 2553 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 12 00:11:26.125078 systemd-networkd[1399]: lxc789b9af96665: Gained IPv6LL
Jul 12 00:11:26.125655 systemd-networkd[1399]: lxc08d8d2044a72: Gained IPv6LL
Jul 12 00:11:26.237454 systemd[1]: Started sshd@7-10.0.0.137:22-10.0.0.1:55224.service - OpenSSH per-connection server daemon (10.0.0.1:55224).
Jul 12 00:11:26.300245 sshd[3790]: Accepted publickey for core from 10.0.0.1 port 55224 ssh2: RSA SHA256:n0+tZ6jsx4rUqm31x+f+3vpltD7bP0WQQLUre7Cw/6U
Jul 12 00:11:26.301639 sshd-session[3790]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 12 00:11:26.307803 systemd-logind[1448]: New session 8 of user core.
Jul 12 00:11:26.313091 systemd[1]: Started session-8.scope - Session 8 of User core.
Jul 12 00:11:26.459909 sshd[3792]: Connection closed by 10.0.0.1 port 55224
Jul 12 00:11:26.459683 sshd-session[3790]: pam_unix(sshd:session): session closed for user core
Jul 12 00:11:26.463653 systemd[1]: sshd@7-10.0.0.137:22-10.0.0.1:55224.service: Deactivated successfully.
Jul 12 00:11:26.466144 systemd[1]: session-8.scope: Deactivated successfully.
Jul 12 00:11:26.467197 systemd-logind[1448]: Session 8 logged out. Waiting for processes to exit.
Jul 12 00:11:26.468407 systemd-logind[1448]: Removed session 8.
Jul 12 00:11:28.153252 containerd[1467]: time="2025-07-12T00:11:28.153105412Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 12 00:11:28.153252 containerd[1467]: time="2025-07-12T00:11:28.153171905Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 12 00:11:28.153252 containerd[1467]: time="2025-07-12T00:11:28.153184067Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 12 00:11:28.154430 containerd[1467]: time="2025-07-12T00:11:28.154359926Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 12 00:11:28.154726 containerd[1467]: time="2025-07-12T00:11:28.154525836Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 12 00:11:28.154726 containerd[1467]: time="2025-07-12T00:11:28.154577166Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 12 00:11:28.154726 containerd[1467]: time="2025-07-12T00:11:28.154587848Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 12 00:11:28.154726 containerd[1467]: time="2025-07-12T00:11:28.154687426Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 12 00:11:28.170380 systemd[1]: run-containerd-runc-k8s.io-819d0d7501f2b922f57f56973385c18c843634a954236d13d98ea82cb3a72cd3-runc.5Q9lfU.mount: Deactivated successfully.
Jul 12 00:11:28.187121 systemd[1]: Started cri-containerd-08e9723fb05b3b06dde1ca1832a53827a0f99b0aea1e2e50d47922db9d15202f.scope - libcontainer container 08e9723fb05b3b06dde1ca1832a53827a0f99b0aea1e2e50d47922db9d15202f.
Jul 12 00:11:28.188476 systemd[1]: Started cri-containerd-819d0d7501f2b922f57f56973385c18c843634a954236d13d98ea82cb3a72cd3.scope - libcontainer container 819d0d7501f2b922f57f56973385c18c843634a954236d13d98ea82cb3a72cd3.
Jul 12 00:11:28.202650 systemd-resolved[1325]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Jul 12 00:11:28.203925 systemd-resolved[1325]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Jul 12 00:11:28.223764 containerd[1467]: time="2025-07-12T00:11:28.223627727Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-9wqh2,Uid:f9e05f32-3674-4167-a6d5-1d2035735573,Namespace:kube-system,Attempt:0,} returns sandbox id \"08e9723fb05b3b06dde1ca1832a53827a0f99b0aea1e2e50d47922db9d15202f\""
Jul 12 00:11:28.225701 containerd[1467]: time="2025-07-12T00:11:28.225661945Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-84j52,Uid:fb320027-77c1-461e-a9a7-060a4872c311,Namespace:kube-system,Attempt:0,} returns sandbox id \"819d0d7501f2b922f57f56973385c18c843634a954236d13d98ea82cb3a72cd3\""
Jul 12 00:11:28.226507 kubelet[2553]: E0712 00:11:28.226484 2553 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 12 00:11:28.229155 containerd[1467]: time="2025-07-12T00:11:28.229114387Z" level=info msg="CreateContainer within sandbox \"819d0d7501f2b922f57f56973385c18c843634a954236d13d98ea82cb3a72cd3\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Jul 12 00:11:28.229247 kubelet[2553]: E0712 00:11:28.229207 2553 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 12 00:11:28.231570 containerd[1467]: time="2025-07-12T00:11:28.231237862Z" level=info msg="CreateContainer within sandbox \"08e9723fb05b3b06dde1ca1832a53827a0f99b0aea1e2e50d47922db9d15202f\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Jul 12 00:11:28.251000 containerd[1467]: time="2025-07-12T00:11:28.250930084Z" level=info msg="CreateContainer within sandbox \"819d0d7501f2b922f57f56973385c18c843634a954236d13d98ea82cb3a72cd3\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"6b131170fcb4a669629bc0235bb7e1234842e46ee6941c5e73f9ade9fe0c8b8b\""
Jul 12 00:11:28.251555 containerd[1467]: time="2025-07-12T00:11:28.251478106Z" level=info msg="StartContainer for \"6b131170fcb4a669629bc0235bb7e1234842e46ee6941c5e73f9ade9fe0c8b8b\""
Jul 12 00:11:28.253163 containerd[1467]: time="2025-07-12T00:11:28.253056679Z" level=info msg="CreateContainer within sandbox \"08e9723fb05b3b06dde1ca1832a53827a0f99b0aea1e2e50d47922db9d15202f\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"37226ecfcccaf3d5018955404ce0e566c9942d9330bc50b6e53db67ffb0bab3d\""
Jul 12 00:11:28.253900 containerd[1467]: time="2025-07-12T00:11:28.253861149Z" level=info msg="StartContainer for \"37226ecfcccaf3d5018955404ce0e566c9942d9330bc50b6e53db67ffb0bab3d\""
Jul 12 00:11:28.279106 systemd[1]: Started cri-containerd-6b131170fcb4a669629bc0235bb7e1234842e46ee6941c5e73f9ade9fe0c8b8b.scope - libcontainer container 6b131170fcb4a669629bc0235bb7e1234842e46ee6941c5e73f9ade9fe0c8b8b.
Jul 12 00:11:28.282955 systemd[1]: Started cri-containerd-37226ecfcccaf3d5018955404ce0e566c9942d9330bc50b6e53db67ffb0bab3d.scope - libcontainer container 37226ecfcccaf3d5018955404ce0e566c9942d9330bc50b6e53db67ffb0bab3d.
Jul 12 00:11:28.305860 containerd[1467]: time="2025-07-12T00:11:28.305813610Z" level=info msg="StartContainer for \"6b131170fcb4a669629bc0235bb7e1234842e46ee6941c5e73f9ade9fe0c8b8b\" returns successfully"
Jul 12 00:11:28.337783 containerd[1467]: time="2025-07-12T00:11:28.337732065Z" level=info msg="StartContainer for \"37226ecfcccaf3d5018955404ce0e566c9942d9330bc50b6e53db67ffb0bab3d\" returns successfully"
Jul 12 00:11:28.918419 kubelet[2553]: E0712 00:11:28.918350 2553 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 12 00:11:28.920430 kubelet[2553]: E0712 00:11:28.920405 2553 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 12 00:11:28.931948 kubelet[2553]: I0712 00:11:28.931842 2553 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-84j52" podStartSLOduration=23.931818942 podStartE2EDuration="23.931818942s" podCreationTimestamp="2025-07-12 00:11:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-12 00:11:28.930933177 +0000 UTC m=+29.209074777" watchObservedRunningTime="2025-07-12 00:11:28.931818942 +0000 UTC m=+29.209960502"
Jul 12 00:11:29.922502 kubelet[2553]: E0712 00:11:29.922466 2553 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 12 00:11:30.923918 kubelet[2553]: E0712 00:11:30.923871 2553 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 12 00:11:31.474223 systemd[1]: Started sshd@8-10.0.0.137:22-10.0.0.1:55234.service - OpenSSH per-connection server daemon (10.0.0.1:55234).
Jul 12 00:11:31.524936 sshd[3981]: Accepted publickey for core from 10.0.0.1 port 55234 ssh2: RSA SHA256:n0+tZ6jsx4rUqm31x+f+3vpltD7bP0WQQLUre7Cw/6U
Jul 12 00:11:31.526430 sshd-session[3981]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 12 00:11:31.531014 systemd-logind[1448]: New session 9 of user core.
Jul 12 00:11:31.542087 systemd[1]: Started session-9.scope - Session 9 of User core.
Jul 12 00:11:31.664246 sshd[3983]: Connection closed by 10.0.0.1 port 55234
Jul 12 00:11:31.665307 sshd-session[3981]: pam_unix(sshd:session): session closed for user core
Jul 12 00:11:31.669778 systemd-logind[1448]: Session 9 logged out. Waiting for processes to exit.
Jul 12 00:11:31.670476 systemd[1]: sshd@8-10.0.0.137:22-10.0.0.1:55234.service: Deactivated successfully.
Jul 12 00:11:31.674165 systemd[1]: session-9.scope: Deactivated successfully.
Jul 12 00:11:31.675754 systemd-logind[1448]: Removed session 9.
Jul 12 00:11:36.688171 systemd[1]: Started sshd@9-10.0.0.137:22-10.0.0.1:50452.service - OpenSSH per-connection server daemon (10.0.0.1:50452).
Jul 12 00:11:36.733394 sshd[4001]: Accepted publickey for core from 10.0.0.1 port 50452 ssh2: RSA SHA256:n0+tZ6jsx4rUqm31x+f+3vpltD7bP0WQQLUre7Cw/6U
Jul 12 00:11:36.734554 sshd-session[4001]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 12 00:11:36.738617 systemd-logind[1448]: New session 10 of user core.
Jul 12 00:11:36.745023 systemd[1]: Started session-10.scope - Session 10 of User core.
Jul 12 00:11:36.861477 sshd[4003]: Connection closed by 10.0.0.1 port 50452
Jul 12 00:11:36.861808 sshd-session[4001]: pam_unix(sshd:session): session closed for user core
Jul 12 00:11:36.865140 systemd[1]: sshd@9-10.0.0.137:22-10.0.0.1:50452.service: Deactivated successfully.
Jul 12 00:11:36.866535 systemd[1]: session-10.scope: Deactivated successfully.
Jul 12 00:11:36.868606 systemd-logind[1448]: Session 10 logged out. Waiting for processes to exit.
Jul 12 00:11:36.870128 systemd-logind[1448]: Removed session 10.
Jul 12 00:11:38.922130 kubelet[2553]: E0712 00:11:38.922024 2553 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 12 00:11:38.930931 kubelet[2553]: I0712 00:11:38.930866 2553 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-9wqh2" podStartSLOduration=33.930804959 podStartE2EDuration="33.930804959s" podCreationTimestamp="2025-07-12 00:11:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-12 00:11:28.966792366 +0000 UTC m=+29.244934006" watchObservedRunningTime="2025-07-12 00:11:38.930804959 +0000 UTC m=+39.208946559"
Jul 12 00:11:38.941691 kubelet[2553]: E0712 00:11:38.941418 2553 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 12 00:11:41.876442 systemd[1]: Started sshd@10-10.0.0.137:22-10.0.0.1:50458.service - OpenSSH per-connection server daemon (10.0.0.1:50458).
Jul 12 00:11:41.951275 sshd[4021]: Accepted publickey for core from 10.0.0.1 port 50458 ssh2: RSA SHA256:n0+tZ6jsx4rUqm31x+f+3vpltD7bP0WQQLUre7Cw/6U
Jul 12 00:11:41.952486 sshd-session[4021]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 12 00:11:41.956943 systemd-logind[1448]: New session 11 of user core.
Jul 12 00:11:41.969043 systemd[1]: Started session-11.scope - Session 11 of User core.
Jul 12 00:11:42.093682 sshd[4023]: Connection closed by 10.0.0.1 port 50458
Jul 12 00:11:42.093756 sshd-session[4021]: pam_unix(sshd:session): session closed for user core
Jul 12 00:11:42.102379 systemd[1]: sshd@10-10.0.0.137:22-10.0.0.1:50458.service: Deactivated successfully.
Jul 12 00:11:42.103993 systemd[1]: session-11.scope: Deactivated successfully.
Jul 12 00:11:42.104709 systemd-logind[1448]: Session 11 logged out. Waiting for processes to exit.
Jul 12 00:11:42.114282 systemd[1]: Started sshd@11-10.0.0.137:22-10.0.0.1:50470.service - OpenSSH per-connection server daemon (10.0.0.1:50470).
Jul 12 00:11:42.115465 systemd-logind[1448]: Removed session 11.
Jul 12 00:11:42.160288 sshd[4036]: Accepted publickey for core from 10.0.0.1 port 50470 ssh2: RSA SHA256:n0+tZ6jsx4rUqm31x+f+3vpltD7bP0WQQLUre7Cw/6U
Jul 12 00:11:42.161509 sshd-session[4036]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 12 00:11:42.165956 systemd-logind[1448]: New session 12 of user core.
Jul 12 00:11:42.173045 systemd[1]: Started session-12.scope - Session 12 of User core.
Jul 12 00:11:42.346095 sshd[4039]: Connection closed by 10.0.0.1 port 50470
Jul 12 00:11:42.346788 sshd-session[4036]: pam_unix(sshd:session): session closed for user core
Jul 12 00:11:42.361292 systemd[1]: sshd@11-10.0.0.137:22-10.0.0.1:50470.service: Deactivated successfully.
Jul 12 00:11:42.364090 systemd[1]: session-12.scope: Deactivated successfully.
Jul 12 00:11:42.365917 systemd-logind[1448]: Session 12 logged out. Waiting for processes to exit.
Jul 12 00:11:42.381332 systemd[1]: Started sshd@12-10.0.0.137:22-10.0.0.1:50482.service - OpenSSH per-connection server daemon (10.0.0.1:50482).
Jul 12 00:11:42.382372 systemd-logind[1448]: Removed session 12.
Jul 12 00:11:42.425338 sshd[4050]: Accepted publickey for core from 10.0.0.1 port 50482 ssh2: RSA SHA256:n0+tZ6jsx4rUqm31x+f+3vpltD7bP0WQQLUre7Cw/6U
Jul 12 00:11:42.426555 sshd-session[4050]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 12 00:11:42.430939 systemd-logind[1448]: New session 13 of user core.
Jul 12 00:11:42.443036 systemd[1]: Started session-13.scope - Session 13 of User core.
Jul 12 00:11:42.558103 sshd[4053]: Connection closed by 10.0.0.1 port 50482
Jul 12 00:11:42.558021 sshd-session[4050]: pam_unix(sshd:session): session closed for user core
Jul 12 00:11:42.561141 systemd[1]: sshd@12-10.0.0.137:22-10.0.0.1:50482.service: Deactivated successfully.
Jul 12 00:11:42.563483 systemd[1]: session-13.scope: Deactivated successfully.
Jul 12 00:11:42.564246 systemd-logind[1448]: Session 13 logged out. Waiting for processes to exit.
Jul 12 00:11:42.565040 systemd-logind[1448]: Removed session 13.
Jul 12 00:11:47.573841 systemd[1]: Started sshd@13-10.0.0.137:22-10.0.0.1:56800.service - OpenSSH per-connection server daemon (10.0.0.1:56800).
Jul 12 00:11:47.624894 sshd[4066]: Accepted publickey for core from 10.0.0.1 port 56800 ssh2: RSA SHA256:n0+tZ6jsx4rUqm31x+f+3vpltD7bP0WQQLUre7Cw/6U
Jul 12 00:11:47.626319 sshd-session[4066]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 12 00:11:47.630561 systemd-logind[1448]: New session 14 of user core.
Jul 12 00:11:47.637044 systemd[1]: Started session-14.scope - Session 14 of User core.
Jul 12 00:11:47.759757 sshd[4068]: Connection closed by 10.0.0.1 port 56800
Jul 12 00:11:47.760122 sshd-session[4066]: pam_unix(sshd:session): session closed for user core
Jul 12 00:11:47.764821 systemd[1]: sshd@13-10.0.0.137:22-10.0.0.1:56800.service: Deactivated successfully.
Jul 12 00:11:47.766891 systemd[1]: session-14.scope: Deactivated successfully.
Jul 12 00:11:47.767753 systemd-logind[1448]: Session 14 logged out. Waiting for processes to exit.
Jul 12 00:11:47.768792 systemd-logind[1448]: Removed session 14.
Jul 12 00:11:52.781214 systemd[1]: Started sshd@14-10.0.0.137:22-10.0.0.1:33658.service - OpenSSH per-connection server daemon (10.0.0.1:33658).
Jul 12 00:11:52.854348 sshd[4081]: Accepted publickey for core from 10.0.0.1 port 33658 ssh2: RSA SHA256:n0+tZ6jsx4rUqm31x+f+3vpltD7bP0WQQLUre7Cw/6U
Jul 12 00:11:52.855491 sshd-session[4081]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 12 00:11:52.863073 systemd-logind[1448]: New session 15 of user core.
Jul 12 00:11:52.875098 systemd[1]: Started session-15.scope - Session 15 of User core.
Jul 12 00:11:53.014409 sshd[4083]: Connection closed by 10.0.0.1 port 33658
Jul 12 00:11:53.017479 sshd-session[4081]: pam_unix(sshd:session): session closed for user core
Jul 12 00:11:53.031500 systemd[1]: sshd@14-10.0.0.137:22-10.0.0.1:33658.service: Deactivated successfully.
Jul 12 00:11:53.034648 systemd[1]: session-15.scope: Deactivated successfully.
Jul 12 00:11:53.037062 systemd-logind[1448]: Session 15 logged out. Waiting for processes to exit.
Jul 12 00:11:53.049256 systemd[1]: Started sshd@15-10.0.0.137:22-10.0.0.1:33670.service - OpenSSH per-connection server daemon (10.0.0.1:33670).
Jul 12 00:11:53.054077 systemd-logind[1448]: Removed session 15.
Jul 12 00:11:53.092516 sshd[4095]: Accepted publickey for core from 10.0.0.1 port 33670 ssh2: RSA SHA256:n0+tZ6jsx4rUqm31x+f+3vpltD7bP0WQQLUre7Cw/6U
Jul 12 00:11:53.093804 sshd-session[4095]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 12 00:11:53.098960 systemd-logind[1448]: New session 16 of user core.
Jul 12 00:11:53.106048 systemd[1]: Started session-16.scope - Session 16 of User core.
Jul 12 00:11:53.336226 sshd[4098]: Connection closed by 10.0.0.1 port 33670
Jul 12 00:11:53.337850 sshd-session[4095]: pam_unix(sshd:session): session closed for user core
Jul 12 00:11:53.349421 systemd[1]: sshd@15-10.0.0.137:22-10.0.0.1:33670.service: Deactivated successfully.
Jul 12 00:11:53.351831 systemd[1]: session-16.scope: Deactivated successfully.
Jul 12 00:11:53.353247 systemd-logind[1448]: Session 16 logged out. Waiting for processes to exit.
Jul 12 00:11:53.359199 systemd[1]: Started sshd@16-10.0.0.137:22-10.0.0.1:33686.service - OpenSSH per-connection server daemon (10.0.0.1:33686).
Jul 12 00:11:53.360812 systemd-logind[1448]: Removed session 16.
Jul 12 00:11:53.408840 sshd[4108]: Accepted publickey for core from 10.0.0.1 port 33686 ssh2: RSA SHA256:n0+tZ6jsx4rUqm31x+f+3vpltD7bP0WQQLUre7Cw/6U
Jul 12 00:11:53.409716 sshd-session[4108]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 12 00:11:53.414065 systemd-logind[1448]: New session 17 of user core.
Jul 12 00:11:53.426068 systemd[1]: Started session-17.scope - Session 17 of User core.
Jul 12 00:11:54.149932 sshd[4111]: Connection closed by 10.0.0.1 port 33686
Jul 12 00:11:54.148286 sshd-session[4108]: pam_unix(sshd:session): session closed for user core
Jul 12 00:11:54.161097 systemd[1]: sshd@16-10.0.0.137:22-10.0.0.1:33686.service: Deactivated successfully.
Jul 12 00:11:54.163336 systemd[1]: session-17.scope: Deactivated successfully.
Jul 12 00:11:54.165805 systemd-logind[1448]: Session 17 logged out. Waiting for processes to exit.
Jul 12 00:11:54.174243 systemd[1]: Started sshd@17-10.0.0.137:22-10.0.0.1:33696.service - OpenSSH per-connection server daemon (10.0.0.1:33696).
Jul 12 00:11:54.176174 systemd-logind[1448]: Removed session 17.
Jul 12 00:11:54.222209 sshd[4131]: Accepted publickey for core from 10.0.0.1 port 33696 ssh2: RSA SHA256:n0+tZ6jsx4rUqm31x+f+3vpltD7bP0WQQLUre7Cw/6U
Jul 12 00:11:54.223630 sshd-session[4131]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 12 00:11:54.228828 systemd-logind[1448]: New session 18 of user core.
Jul 12 00:11:54.237093 systemd[1]: Started session-18.scope - Session 18 of User core.
Jul 12 00:11:54.460355 sshd[4134]: Connection closed by 10.0.0.1 port 33696
Jul 12 00:11:54.461363 sshd-session[4131]: pam_unix(sshd:session): session closed for user core
Jul 12 00:11:54.478727 systemd[1]: Started sshd@18-10.0.0.137:22-10.0.0.1:33706.service - OpenSSH per-connection server daemon (10.0.0.1:33706).
Jul 12 00:11:54.479247 systemd[1]: sshd@17-10.0.0.137:22-10.0.0.1:33696.service: Deactivated successfully.
Jul 12 00:11:54.481588 systemd[1]: session-18.scope: Deactivated successfully.
Jul 12 00:11:54.483243 systemd-logind[1448]: Session 18 logged out. Waiting for processes to exit.
Jul 12 00:11:54.485246 systemd-logind[1448]: Removed session 18.
Jul 12 00:11:54.529380 sshd[4143]: Accepted publickey for core from 10.0.0.1 port 33706 ssh2: RSA SHA256:n0+tZ6jsx4rUqm31x+f+3vpltD7bP0WQQLUre7Cw/6U
Jul 12 00:11:54.530963 sshd-session[4143]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 12 00:11:54.535290 systemd-logind[1448]: New session 19 of user core.
Jul 12 00:11:54.541069 systemd[1]: Started session-19.scope - Session 19 of User core.
Jul 12 00:11:54.652823 sshd[4148]: Connection closed by 10.0.0.1 port 33706
Jul 12 00:11:54.653597 sshd-session[4143]: pam_unix(sshd:session): session closed for user core
Jul 12 00:11:54.657123 systemd[1]: sshd@18-10.0.0.137:22-10.0.0.1:33706.service: Deactivated successfully.
Jul 12 00:11:54.659941 systemd[1]: session-19.scope: Deactivated successfully.
Jul 12 00:11:54.661656 systemd-logind[1448]: Session 19 logged out. Waiting for processes to exit.
Jul 12 00:11:54.663495 systemd-logind[1448]: Removed session 19.
Jul 12 00:11:59.671651 systemd[1]: Started sshd@19-10.0.0.137:22-10.0.0.1:33712.service - OpenSSH per-connection server daemon (10.0.0.1:33712).
Jul 12 00:11:59.726466 sshd[4164]: Accepted publickey for core from 10.0.0.1 port 33712 ssh2: RSA SHA256:n0+tZ6jsx4rUqm31x+f+3vpltD7bP0WQQLUre7Cw/6U
Jul 12 00:11:59.728405 sshd-session[4164]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 12 00:11:59.735514 systemd-logind[1448]: New session 20 of user core.
Jul 12 00:11:59.742075 systemd[1]: Started session-20.scope - Session 20 of User core.
Jul 12 00:11:59.877928 sshd[4166]: Connection closed by 10.0.0.1 port 33712
Jul 12 00:11:59.877446 sshd-session[4164]: pam_unix(sshd:session): session closed for user core
Jul 12 00:11:59.880988 systemd[1]: sshd@19-10.0.0.137:22-10.0.0.1:33712.service: Deactivated successfully.
Jul 12 00:11:59.885172 systemd[1]: session-20.scope: Deactivated successfully.
Jul 12 00:11:59.886316 systemd-logind[1448]: Session 20 logged out. Waiting for processes to exit.
Jul 12 00:11:59.887645 systemd-logind[1448]: Removed session 20.
Jul 12 00:12:04.895801 systemd[1]: Started sshd@20-10.0.0.137:22-10.0.0.1:43390.service - OpenSSH per-connection server daemon (10.0.0.1:43390).
Jul 12 00:12:04.933957 sshd[4181]: Accepted publickey for core from 10.0.0.1 port 43390 ssh2: RSA SHA256:n0+tZ6jsx4rUqm31x+f+3vpltD7bP0WQQLUre7Cw/6U
Jul 12 00:12:04.935253 sshd-session[4181]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 12 00:12:04.940706 systemd-logind[1448]: New session 21 of user core.
Jul 12 00:12:04.949106 systemd[1]: Started session-21.scope - Session 21 of User core.
Jul 12 00:12:05.063171 sshd[4183]: Connection closed by 10.0.0.1 port 43390
Jul 12 00:12:05.064089 sshd-session[4181]: pam_unix(sshd:session): session closed for user core
Jul 12 00:12:05.066837 systemd[1]: sshd@20-10.0.0.137:22-10.0.0.1:43390.service: Deactivated successfully.
Jul 12 00:12:05.069362 systemd[1]: session-21.scope: Deactivated successfully.
Jul 12 00:12:05.071173 systemd-logind[1448]: Session 21 logged out. Waiting for processes to exit.
Jul 12 00:12:05.073381 systemd-logind[1448]: Removed session 21.
Jul 12 00:12:08.809173 kubelet[2553]: E0712 00:12:08.809090 2553 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 12 00:12:10.074250 systemd[1]: Started sshd@21-10.0.0.137:22-10.0.0.1:43400.service - OpenSSH per-connection server daemon (10.0.0.1:43400).
Jul 12 00:12:10.122008 sshd[4198]: Accepted publickey for core from 10.0.0.1 port 43400 ssh2: RSA SHA256:n0+tZ6jsx4rUqm31x+f+3vpltD7bP0WQQLUre7Cw/6U
Jul 12 00:12:10.123384 sshd-session[4198]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 12 00:12:10.128948 systemd-logind[1448]: New session 22 of user core.
Jul 12 00:12:10.137173 systemd[1]: Started session-22.scope - Session 22 of User core.
Jul 12 00:12:10.263244 sshd[4200]: Connection closed by 10.0.0.1 port 43400
Jul 12 00:12:10.263976 sshd-session[4198]: pam_unix(sshd:session): session closed for user core
Jul 12 00:12:10.275712 systemd[1]: sshd@21-10.0.0.137:22-10.0.0.1:43400.service: Deactivated successfully.
Jul 12 00:12:10.277521 systemd[1]: session-22.scope: Deactivated successfully.
Jul 12 00:12:10.278398 systemd-logind[1448]: Session 22 logged out. Waiting for processes to exit.
Jul 12 00:12:10.280410 systemd[1]: Started sshd@22-10.0.0.137:22-10.0.0.1:43404.service - OpenSSH per-connection server daemon (10.0.0.1:43404).
Jul 12 00:12:10.281286 systemd-logind[1448]: Removed session 22.
Jul 12 00:12:10.329925 sshd[4212]: Accepted publickey for core from 10.0.0.1 port 43404 ssh2: RSA SHA256:n0+tZ6jsx4rUqm31x+f+3vpltD7bP0WQQLUre7Cw/6U
Jul 12 00:12:10.331211 sshd-session[4212]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 12 00:12:10.335618 systemd-logind[1448]: New session 23 of user core.
Jul 12 00:12:10.344078 systemd[1]: Started session-23.scope - Session 23 of User core.
Jul 12 00:12:12.411195 containerd[1467]: time="2025-07-12T00:12:12.411142996Z" level=info msg="StopContainer for \"37dcffde59a49a81f48cd216838af272a7eb0c5008a86b993c7440c26623c62d\" with timeout 30 (s)"
Jul 12 00:12:12.412083 containerd[1467]: time="2025-07-12T00:12:12.412060858Z" level=info msg="Stop container \"37dcffde59a49a81f48cd216838af272a7eb0c5008a86b993c7440c26623c62d\" with signal terminated"
Jul 12 00:12:12.423118 systemd[1]: cri-containerd-37dcffde59a49a81f48cd216838af272a7eb0c5008a86b993c7440c26623c62d.scope: Deactivated successfully.
Jul 12 00:12:12.441379 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-37dcffde59a49a81f48cd216838af272a7eb0c5008a86b993c7440c26623c62d-rootfs.mount: Deactivated successfully.
Jul 12 00:12:12.448928 containerd[1467]: time="2025-07-12T00:12:12.448851215Z" level=info msg="shim disconnected" id=37dcffde59a49a81f48cd216838af272a7eb0c5008a86b993c7440c26623c62d namespace=k8s.io
Jul 12 00:12:12.448928 containerd[1467]: time="2025-07-12T00:12:12.448922570Z" level=warning msg="cleaning up after shim disconnected" id=37dcffde59a49a81f48cd216838af272a7eb0c5008a86b993c7440c26623c62d namespace=k8s.io
Jul 12 00:12:12.448928 containerd[1467]: time="2025-07-12T00:12:12.448932890Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 12 00:12:12.450299 containerd[1467]: time="2025-07-12T00:12:12.450262806Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jul 12 00:12:12.457760 containerd[1467]: time="2025-07-12T00:12:12.457730934Z" level=info msg="StopContainer for \"f318f1085974a0fd32342894c9e917b924f4081a914ed1563320de632402e93c\" with timeout 2 (s)"
Jul 12 00:12:12.458011 containerd[1467]: time="2025-07-12T00:12:12.457991758Z" level=info msg="Stop container \"f318f1085974a0fd32342894c9e917b924f4081a914ed1563320de632402e93c\" with signal terminated"
Jul 12 00:12:12.464763 systemd-networkd[1399]: lxc_health: Link DOWN
Jul 12 00:12:12.464768 systemd-networkd[1399]: lxc_health: Lost carrier
Jul 12 00:12:12.477400 systemd[1]: cri-containerd-f318f1085974a0fd32342894c9e917b924f4081a914ed1563320de632402e93c.scope: Deactivated successfully.
Jul 12 00:12:12.477723 systemd[1]: cri-containerd-f318f1085974a0fd32342894c9e917b924f4081a914ed1563320de632402e93c.scope: Consumed 6.802s CPU time, 124.8M memory peak, 144K read from disk, 12.9M written to disk.
Jul 12 00:12:12.497533 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f318f1085974a0fd32342894c9e917b924f4081a914ed1563320de632402e93c-rootfs.mount: Deactivated successfully.
Jul 12 00:12:12.514838 containerd[1467]: time="2025-07-12T00:12:12.514774093Z" level=info msg="shim disconnected" id=f318f1085974a0fd32342894c9e917b924f4081a914ed1563320de632402e93c namespace=k8s.io
Jul 12 00:12:12.514838 containerd[1467]: time="2025-07-12T00:12:12.514832129Z" level=warning msg="cleaning up after shim disconnected" id=f318f1085974a0fd32342894c9e917b924f4081a914ed1563320de632402e93c namespace=k8s.io
Jul 12 00:12:12.514838 containerd[1467]: time="2025-07-12T00:12:12.514846488Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 12 00:12:12.519629 containerd[1467]: time="2025-07-12T00:12:12.519578709Z" level=info msg="StopContainer for \"37dcffde59a49a81f48cd216838af272a7eb0c5008a86b993c7440c26623c62d\" returns successfully"
Jul 12 00:12:12.520381 containerd[1467]: time="2025-07-12T00:12:12.520220789Z" level=info msg="StopPodSandbox for \"c508a2859e30a7977f5c1a8f9d181e62c630bb496aa1bdc46da1c59046238940\""
Jul 12 00:12:12.525266 containerd[1467]: time="2025-07-12T00:12:12.525205114Z" level=info msg="Container to stop \"37dcffde59a49a81f48cd216838af272a7eb0c5008a86b993c7440c26623c62d\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jul 12 00:12:12.527150 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-c508a2859e30a7977f5c1a8f9d181e62c630bb496aa1bdc46da1c59046238940-shm.mount: Deactivated successfully.
Jul 12 00:12:12.531107 systemd[1]: cri-containerd-c508a2859e30a7977f5c1a8f9d181e62c630bb496aa1bdc46da1c59046238940.scope: Deactivated successfully.
Jul 12 00:12:12.534612 containerd[1467]: time="2025-07-12T00:12:12.534573163Z" level=info msg="StopContainer for \"f318f1085974a0fd32342894c9e917b924f4081a914ed1563320de632402e93c\" returns successfully"
Jul 12 00:12:12.536636 containerd[1467]: time="2025-07-12T00:12:12.536607314Z" level=info msg="StopPodSandbox for \"6e90128a78b9ae6b81e46631cd1d0f5fb763ed297c04eff53d607f7628187989\""
Jul 12 00:12:12.536703 containerd[1467]: time="2025-07-12T00:12:12.536650391Z" level=info msg="Container to stop \"eea4867f3d0283eddc38bb38359c6ab395c8a215013eb9c45a3ac1974ced3297\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jul 12 00:12:12.536703 containerd[1467]: time="2025-07-12T00:12:12.536663031Z" level=info msg="Container to stop \"71c06c6fcda89d659dc4ade1586cbc776b8b72f2056d801ab8b0911bef2d8471\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jul 12 00:12:12.536703 containerd[1467]: time="2025-07-12T00:12:12.536672310Z" level=info msg="Container to stop \"e5bd6a329381734a96f2795c9a04ffcbcfa8091127b62b299c9771b35aa09ebb\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jul 12 00:12:12.536703 containerd[1467]: time="2025-07-12T00:12:12.536680789Z" level=info msg="Container to stop \"587c849f72f5c89f29e4541be8974dbe8e87d2a52598313cb545c16f7356502d\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jul 12 00:12:12.536703 containerd[1467]: time="2025-07-12T00:12:12.536690589Z" level=info msg="Container to stop \"f318f1085974a0fd32342894c9e917b924f4081a914ed1563320de632402e93c\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jul 12 00:12:12.539140 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-6e90128a78b9ae6b81e46631cd1d0f5fb763ed297c04eff53d607f7628187989-shm.mount: Deactivated successfully.
Jul 12 00:12:12.546304 systemd[1]: cri-containerd-6e90128a78b9ae6b81e46631cd1d0f5fb763ed297c04eff53d607f7628187989.scope: Deactivated successfully.
Jul 12 00:12:12.564087 containerd[1467]: time="2025-07-12T00:12:12.563946028Z" level=info msg="shim disconnected" id=c508a2859e30a7977f5c1a8f9d181e62c630bb496aa1bdc46da1c59046238940 namespace=k8s.io Jul 12 00:12:12.564087 containerd[1467]: time="2025-07-12T00:12:12.564005984Z" level=warning msg="cleaning up after shim disconnected" id=c508a2859e30a7977f5c1a8f9d181e62c630bb496aa1bdc46da1c59046238940 namespace=k8s.io Jul 12 00:12:12.564087 containerd[1467]: time="2025-07-12T00:12:12.564018703Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 12 00:12:12.570280 containerd[1467]: time="2025-07-12T00:12:12.570052202Z" level=info msg="shim disconnected" id=6e90128a78b9ae6b81e46631cd1d0f5fb763ed297c04eff53d607f7628187989 namespace=k8s.io Jul 12 00:12:12.570280 containerd[1467]: time="2025-07-12T00:12:12.570109559Z" level=warning msg="cleaning up after shim disconnected" id=6e90128a78b9ae6b81e46631cd1d0f5fb763ed297c04eff53d607f7628187989 namespace=k8s.io Jul 12 00:12:12.570280 containerd[1467]: time="2025-07-12T00:12:12.570126278Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 12 00:12:12.579276 containerd[1467]: time="2025-07-12T00:12:12.579220504Z" level=info msg="TearDown network for sandbox \"c508a2859e30a7977f5c1a8f9d181e62c630bb496aa1bdc46da1c59046238940\" successfully" Jul 12 00:12:12.579276 containerd[1467]: time="2025-07-12T00:12:12.579262301Z" level=info msg="StopPodSandbox for \"c508a2859e30a7977f5c1a8f9d181e62c630bb496aa1bdc46da1c59046238940\" returns successfully" Jul 12 00:12:12.582953 containerd[1467]: time="2025-07-12T00:12:12.582806717Z" level=warning msg="cleanup warnings time=\"2025-07-12T00:12:12Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Jul 12 00:12:12.584061 containerd[1467]: time="2025-07-12T00:12:12.584016321Z" level=info msg="TearDown network for sandbox 
\"6e90128a78b9ae6b81e46631cd1d0f5fb763ed297c04eff53d607f7628187989\" successfully" Jul 12 00:12:12.584061 containerd[1467]: time="2025-07-12T00:12:12.584049079Z" level=info msg="StopPodSandbox for \"6e90128a78b9ae6b81e46631cd1d0f5fb763ed297c04eff53d607f7628187989\" returns successfully" Jul 12 00:12:12.670470 kubelet[2553]: I0712 00:12:12.670345 2553 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f40d3a62-ffe1-48b1-90a2-9b9253209bef-xtables-lock\") pod \"f40d3a62-ffe1-48b1-90a2-9b9253209bef\" (UID: \"f40d3a62-ffe1-48b1-90a2-9b9253209bef\") " Jul 12 00:12:12.671554 kubelet[2553]: I0712 00:12:12.670956 2553 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/f40d3a62-ffe1-48b1-90a2-9b9253209bef-etc-cni-netd\") pod \"f40d3a62-ffe1-48b1-90a2-9b9253209bef\" (UID: \"f40d3a62-ffe1-48b1-90a2-9b9253209bef\") " Jul 12 00:12:12.671554 kubelet[2553]: I0712 00:12:12.670994 2553 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qf6xz\" (UniqueName: \"kubernetes.io/projected/f40d3a62-ffe1-48b1-90a2-9b9253209bef-kube-api-access-qf6xz\") pod \"f40d3a62-ffe1-48b1-90a2-9b9253209bef\" (UID: \"f40d3a62-ffe1-48b1-90a2-9b9253209bef\") " Jul 12 00:12:12.671554 kubelet[2553]: I0712 00:12:12.671016 2553 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/f40d3a62-ffe1-48b1-90a2-9b9253209bef-hubble-tls\") pod \"f40d3a62-ffe1-48b1-90a2-9b9253209bef\" (UID: \"f40d3a62-ffe1-48b1-90a2-9b9253209bef\") " Jul 12 00:12:12.671554 kubelet[2553]: I0712 00:12:12.671034 2553 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f40d3a62-ffe1-48b1-90a2-9b9253209bef-cilium-config-path\") pod \"f40d3a62-ffe1-48b1-90a2-9b9253209bef\" 
(UID: \"f40d3a62-ffe1-48b1-90a2-9b9253209bef\") " Jul 12 00:12:12.671554 kubelet[2553]: I0712 00:12:12.671051 2553 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/f40d3a62-ffe1-48b1-90a2-9b9253209bef-host-proc-sys-kernel\") pod \"f40d3a62-ffe1-48b1-90a2-9b9253209bef\" (UID: \"f40d3a62-ffe1-48b1-90a2-9b9253209bef\") " Jul 12 00:12:12.671554 kubelet[2553]: I0712 00:12:12.671066 2553 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/f40d3a62-ffe1-48b1-90a2-9b9253209bef-cilium-run\") pod \"f40d3a62-ffe1-48b1-90a2-9b9253209bef\" (UID: \"f40d3a62-ffe1-48b1-90a2-9b9253209bef\") " Jul 12 00:12:12.671772 kubelet[2553]: I0712 00:12:12.671080 2553 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f40d3a62-ffe1-48b1-90a2-9b9253209bef-lib-modules\") pod \"f40d3a62-ffe1-48b1-90a2-9b9253209bef\" (UID: \"f40d3a62-ffe1-48b1-90a2-9b9253209bef\") " Jul 12 00:12:12.671772 kubelet[2553]: I0712 00:12:12.671122 2553 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/f40d3a62-ffe1-48b1-90a2-9b9253209bef-bpf-maps\") pod \"f40d3a62-ffe1-48b1-90a2-9b9253209bef\" (UID: \"f40d3a62-ffe1-48b1-90a2-9b9253209bef\") " Jul 12 00:12:12.671772 kubelet[2553]: I0712 00:12:12.671143 2553 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/f40d3a62-ffe1-48b1-90a2-9b9253209bef-cilium-cgroup\") pod \"f40d3a62-ffe1-48b1-90a2-9b9253209bef\" (UID: \"f40d3a62-ffe1-48b1-90a2-9b9253209bef\") " Jul 12 00:12:12.671772 kubelet[2553]: I0712 00:12:12.671160 2553 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: 
\"kubernetes.io/host-path/f40d3a62-ffe1-48b1-90a2-9b9253209bef-host-proc-sys-net\") pod \"f40d3a62-ffe1-48b1-90a2-9b9253209bef\" (UID: \"f40d3a62-ffe1-48b1-90a2-9b9253209bef\") " Jul 12 00:12:12.671772 kubelet[2553]: I0712 00:12:12.671180 2553 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/19e30619-8e6b-4e1d-a358-6ba654a268c3-cilium-config-path\") pod \"19e30619-8e6b-4e1d-a358-6ba654a268c3\" (UID: \"19e30619-8e6b-4e1d-a358-6ba654a268c3\") " Jul 12 00:12:12.671772 kubelet[2553]: I0712 00:12:12.671198 2553 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wqhjd\" (UniqueName: \"kubernetes.io/projected/19e30619-8e6b-4e1d-a358-6ba654a268c3-kube-api-access-wqhjd\") pod \"19e30619-8e6b-4e1d-a358-6ba654a268c3\" (UID: \"19e30619-8e6b-4e1d-a358-6ba654a268c3\") " Jul 12 00:12:12.671924 kubelet[2553]: I0712 00:12:12.671215 2553 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/f40d3a62-ffe1-48b1-90a2-9b9253209bef-hostproc\") pod \"f40d3a62-ffe1-48b1-90a2-9b9253209bef\" (UID: \"f40d3a62-ffe1-48b1-90a2-9b9253209bef\") " Jul 12 00:12:12.671924 kubelet[2553]: I0712 00:12:12.671231 2553 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/f40d3a62-ffe1-48b1-90a2-9b9253209bef-cni-path\") pod \"f40d3a62-ffe1-48b1-90a2-9b9253209bef\" (UID: \"f40d3a62-ffe1-48b1-90a2-9b9253209bef\") " Jul 12 00:12:12.671924 kubelet[2553]: I0712 00:12:12.671248 2553 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/f40d3a62-ffe1-48b1-90a2-9b9253209bef-clustermesh-secrets\") pod \"f40d3a62-ffe1-48b1-90a2-9b9253209bef\" (UID: \"f40d3a62-ffe1-48b1-90a2-9b9253209bef\") " Jul 12 00:12:12.672348 kubelet[2553]: I0712 00:12:12.672303 
2553 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f40d3a62-ffe1-48b1-90a2-9b9253209bef-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "f40d3a62-ffe1-48b1-90a2-9b9253209bef" (UID: "f40d3a62-ffe1-48b1-90a2-9b9253209bef"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 12 00:12:12.672387 kubelet[2553]: I0712 00:12:12.672303 2553 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f40d3a62-ffe1-48b1-90a2-9b9253209bef-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "f40d3a62-ffe1-48b1-90a2-9b9253209bef" (UID: "f40d3a62-ffe1-48b1-90a2-9b9253209bef"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 12 00:12:12.674830 kubelet[2553]: I0712 00:12:12.674435 2553 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f40d3a62-ffe1-48b1-90a2-9b9253209bef-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "f40d3a62-ffe1-48b1-90a2-9b9253209bef" (UID: "f40d3a62-ffe1-48b1-90a2-9b9253209bef"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jul 12 00:12:12.674830 kubelet[2553]: I0712 00:12:12.674507 2553 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f40d3a62-ffe1-48b1-90a2-9b9253209bef-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "f40d3a62-ffe1-48b1-90a2-9b9253209bef" (UID: "f40d3a62-ffe1-48b1-90a2-9b9253209bef"). InnerVolumeSpecName "cilium-cgroup". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 12 00:12:12.674830 kubelet[2553]: I0712 00:12:12.674540 2553 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f40d3a62-ffe1-48b1-90a2-9b9253209bef-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "f40d3a62-ffe1-48b1-90a2-9b9253209bef" (UID: "f40d3a62-ffe1-48b1-90a2-9b9253209bef"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 12 00:12:12.674830 kubelet[2553]: I0712 00:12:12.674560 2553 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f40d3a62-ffe1-48b1-90a2-9b9253209bef-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "f40d3a62-ffe1-48b1-90a2-9b9253209bef" (UID: "f40d3a62-ffe1-48b1-90a2-9b9253209bef"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 12 00:12:12.674830 kubelet[2553]: I0712 00:12:12.674581 2553 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f40d3a62-ffe1-48b1-90a2-9b9253209bef-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "f40d3a62-ffe1-48b1-90a2-9b9253209bef" (UID: "f40d3a62-ffe1-48b1-90a2-9b9253209bef"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 12 00:12:12.675061 kubelet[2553]: I0712 00:12:12.674594 2553 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f40d3a62-ffe1-48b1-90a2-9b9253209bef-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "f40d3a62-ffe1-48b1-90a2-9b9253209bef" (UID: "f40d3a62-ffe1-48b1-90a2-9b9253209bef"). InnerVolumeSpecName "bpf-maps". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 12 00:12:12.675061 kubelet[2553]: I0712 00:12:12.674777 2553 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f40d3a62-ffe1-48b1-90a2-9b9253209bef-hostproc" (OuterVolumeSpecName: "hostproc") pod "f40d3a62-ffe1-48b1-90a2-9b9253209bef" (UID: "f40d3a62-ffe1-48b1-90a2-9b9253209bef"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 12 00:12:12.675061 kubelet[2553]: I0712 00:12:12.674814 2553 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f40d3a62-ffe1-48b1-90a2-9b9253209bef-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "f40d3a62-ffe1-48b1-90a2-9b9253209bef" (UID: "f40d3a62-ffe1-48b1-90a2-9b9253209bef"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 12 00:12:12.675061 kubelet[2553]: I0712 00:12:12.674831 2553 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f40d3a62-ffe1-48b1-90a2-9b9253209bef-cni-path" (OuterVolumeSpecName: "cni-path") pod "f40d3a62-ffe1-48b1-90a2-9b9253209bef" (UID: "f40d3a62-ffe1-48b1-90a2-9b9253209bef"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 12 00:12:12.675151 kubelet[2553]: I0712 00:12:12.675073 2553 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f40d3a62-ffe1-48b1-90a2-9b9253209bef-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "f40d3a62-ffe1-48b1-90a2-9b9253209bef" (UID: "f40d3a62-ffe1-48b1-90a2-9b9253209bef"). InnerVolumeSpecName "hubble-tls". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jul 12 00:12:12.675175 kubelet[2553]: I0712 00:12:12.675154 2553 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f40d3a62-ffe1-48b1-90a2-9b9253209bef-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "f40d3a62-ffe1-48b1-90a2-9b9253209bef" (UID: "f40d3a62-ffe1-48b1-90a2-9b9253209bef"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jul 12 00:12:12.675906 kubelet[2553]: I0712 00:12:12.675848 2553 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f40d3a62-ffe1-48b1-90a2-9b9253209bef-kube-api-access-qf6xz" (OuterVolumeSpecName: "kube-api-access-qf6xz") pod "f40d3a62-ffe1-48b1-90a2-9b9253209bef" (UID: "f40d3a62-ffe1-48b1-90a2-9b9253209bef"). InnerVolumeSpecName "kube-api-access-qf6xz". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jul 12 00:12:12.677280 kubelet[2553]: I0712 00:12:12.677241 2553 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/19e30619-8e6b-4e1d-a358-6ba654a268c3-kube-api-access-wqhjd" (OuterVolumeSpecName: "kube-api-access-wqhjd") pod "19e30619-8e6b-4e1d-a358-6ba654a268c3" (UID: "19e30619-8e6b-4e1d-a358-6ba654a268c3"). InnerVolumeSpecName "kube-api-access-wqhjd". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jul 12 00:12:12.678291 kubelet[2553]: I0712 00:12:12.678258 2553 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/19e30619-8e6b-4e1d-a358-6ba654a268c3-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "19e30619-8e6b-4e1d-a358-6ba654a268c3" (UID: "19e30619-8e6b-4e1d-a358-6ba654a268c3"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jul 12 00:12:12.771689 kubelet[2553]: I0712 00:12:12.771642 2553 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/f40d3a62-ffe1-48b1-90a2-9b9253209bef-cni-path\") on node \"localhost\" DevicePath \"\"" Jul 12 00:12:12.771689 kubelet[2553]: I0712 00:12:12.771676 2553 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/f40d3a62-ffe1-48b1-90a2-9b9253209bef-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Jul 12 00:12:12.771689 kubelet[2553]: I0712 00:12:12.771686 2553 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/f40d3a62-ffe1-48b1-90a2-9b9253209bef-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Jul 12 00:12:12.771689 kubelet[2553]: I0712 00:12:12.771697 2553 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-qf6xz\" (UniqueName: \"kubernetes.io/projected/f40d3a62-ffe1-48b1-90a2-9b9253209bef-kube-api-access-qf6xz\") on node \"localhost\" DevicePath \"\"" Jul 12 00:12:12.771920 kubelet[2553]: I0712 00:12:12.771710 2553 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/f40d3a62-ffe1-48b1-90a2-9b9253209bef-hubble-tls\") on node \"localhost\" DevicePath \"\"" Jul 12 00:12:12.771920 kubelet[2553]: I0712 00:12:12.771720 2553 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f40d3a62-ffe1-48b1-90a2-9b9253209bef-xtables-lock\") on node \"localhost\" DevicePath \"\"" Jul 12 00:12:12.771920 kubelet[2553]: I0712 00:12:12.771731 2553 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f40d3a62-ffe1-48b1-90a2-9b9253209bef-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Jul 12 00:12:12.771920 kubelet[2553]: I0712 
00:12:12.771739 2553 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/f40d3a62-ffe1-48b1-90a2-9b9253209bef-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Jul 12 00:12:12.771920 kubelet[2553]: I0712 00:12:12.771748 2553 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/f40d3a62-ffe1-48b1-90a2-9b9253209bef-cilium-run\") on node \"localhost\" DevicePath \"\"" Jul 12 00:12:12.771920 kubelet[2553]: I0712 00:12:12.771756 2553 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f40d3a62-ffe1-48b1-90a2-9b9253209bef-lib-modules\") on node \"localhost\" DevicePath \"\"" Jul 12 00:12:12.771920 kubelet[2553]: I0712 00:12:12.771764 2553 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/f40d3a62-ffe1-48b1-90a2-9b9253209bef-bpf-maps\") on node \"localhost\" DevicePath \"\"" Jul 12 00:12:12.771920 kubelet[2553]: I0712 00:12:12.771772 2553 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/f40d3a62-ffe1-48b1-90a2-9b9253209bef-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Jul 12 00:12:12.772124 kubelet[2553]: I0712 00:12:12.771780 2553 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/f40d3a62-ffe1-48b1-90a2-9b9253209bef-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Jul 12 00:12:12.772124 kubelet[2553]: I0712 00:12:12.771788 2553 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-wqhjd\" (UniqueName: \"kubernetes.io/projected/19e30619-8e6b-4e1d-a358-6ba654a268c3-kube-api-access-wqhjd\") on node \"localhost\" DevicePath \"\"" Jul 12 00:12:12.772124 kubelet[2553]: I0712 00:12:12.771796 2553 reconciler_common.go:299] "Volume detached for volume \"hostproc\" 
(UniqueName: \"kubernetes.io/host-path/f40d3a62-ffe1-48b1-90a2-9b9253209bef-hostproc\") on node \"localhost\" DevicePath \"\"" Jul 12 00:12:12.772124 kubelet[2553]: I0712 00:12:12.771805 2553 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/19e30619-8e6b-4e1d-a358-6ba654a268c3-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Jul 12 00:12:13.009972 kubelet[2553]: I0712 00:12:13.009351 2553 scope.go:117] "RemoveContainer" containerID="37dcffde59a49a81f48cd216838af272a7eb0c5008a86b993c7440c26623c62d" Jul 12 00:12:13.011498 containerd[1467]: time="2025-07-12T00:12:13.010660453Z" level=info msg="RemoveContainer for \"37dcffde59a49a81f48cd216838af272a7eb0c5008a86b993c7440c26623c62d\"" Jul 12 00:12:13.012215 systemd[1]: Removed slice kubepods-besteffort-pod19e30619_8e6b_4e1d_a358_6ba654a268c3.slice - libcontainer container kubepods-besteffort-pod19e30619_8e6b_4e1d_a358_6ba654a268c3.slice. Jul 12 00:12:13.021570 systemd[1]: Removed slice kubepods-burstable-podf40d3a62_ffe1_48b1_90a2_9b9253209bef.slice - libcontainer container kubepods-burstable-podf40d3a62_ffe1_48b1_90a2_9b9253209bef.slice. Jul 12 00:12:13.021661 systemd[1]: kubepods-burstable-podf40d3a62_ffe1_48b1_90a2_9b9253209bef.slice: Consumed 6.961s CPU time, 125.2M memory peak, 160K read from disk, 12.9M written to disk. 
Jul 12 00:12:13.027371 containerd[1467]: time="2025-07-12T00:12:13.027323702Z" level=info msg="RemoveContainer for \"37dcffde59a49a81f48cd216838af272a7eb0c5008a86b993c7440c26623c62d\" returns successfully" Jul 12 00:12:13.028140 kubelet[2553]: I0712 00:12:13.027866 2553 scope.go:117] "RemoveContainer" containerID="37dcffde59a49a81f48cd216838af272a7eb0c5008a86b993c7440c26623c62d" Jul 12 00:12:13.028444 containerd[1467]: time="2025-07-12T00:12:13.028388239Z" level=error msg="ContainerStatus for \"37dcffde59a49a81f48cd216838af272a7eb0c5008a86b993c7440c26623c62d\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"37dcffde59a49a81f48cd216838af272a7eb0c5008a86b993c7440c26623c62d\": not found" Jul 12 00:12:13.028616 kubelet[2553]: E0712 00:12:13.028581 2553 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"37dcffde59a49a81f48cd216838af272a7eb0c5008a86b993c7440c26623c62d\": not found" containerID="37dcffde59a49a81f48cd216838af272a7eb0c5008a86b993c7440c26623c62d" Jul 12 00:12:13.034300 kubelet[2553]: I0712 00:12:13.034182 2553 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"37dcffde59a49a81f48cd216838af272a7eb0c5008a86b993c7440c26623c62d"} err="failed to get container status \"37dcffde59a49a81f48cd216838af272a7eb0c5008a86b993c7440c26623c62d\": rpc error: code = NotFound desc = an error occurred when try to find container \"37dcffde59a49a81f48cd216838af272a7eb0c5008a86b993c7440c26623c62d\": not found" Jul 12 00:12:13.035136 kubelet[2553]: I0712 00:12:13.034991 2553 scope.go:117] "RemoveContainer" containerID="f318f1085974a0fd32342894c9e917b924f4081a914ed1563320de632402e93c" Jul 12 00:12:13.039887 containerd[1467]: time="2025-07-12T00:12:13.039824359Z" level=info msg="RemoveContainer for \"f318f1085974a0fd32342894c9e917b924f4081a914ed1563320de632402e93c\"" Jul 12 00:12:13.045370 
containerd[1467]: time="2025-07-12T00:12:13.045227238Z" level=info msg="RemoveContainer for \"f318f1085974a0fd32342894c9e917b924f4081a914ed1563320de632402e93c\" returns successfully" Jul 12 00:12:13.045686 kubelet[2553]: I0712 00:12:13.045586 2553 scope.go:117] "RemoveContainer" containerID="587c849f72f5c89f29e4541be8974dbe8e87d2a52598313cb545c16f7356502d" Jul 12 00:12:13.046962 containerd[1467]: time="2025-07-12T00:12:13.046660352Z" level=info msg="RemoveContainer for \"587c849f72f5c89f29e4541be8974dbe8e87d2a52598313cb545c16f7356502d\"" Jul 12 00:12:13.048969 containerd[1467]: time="2025-07-12T00:12:13.048939217Z" level=info msg="RemoveContainer for \"587c849f72f5c89f29e4541be8974dbe8e87d2a52598313cb545c16f7356502d\" returns successfully" Jul 12 00:12:13.049212 kubelet[2553]: I0712 00:12:13.049192 2553 scope.go:117] "RemoveContainer" containerID="e5bd6a329381734a96f2795c9a04ffcbcfa8091127b62b299c9771b35aa09ebb" Jul 12 00:12:13.050472 containerd[1467]: time="2025-07-12T00:12:13.050226500Z" level=info msg="RemoveContainer for \"e5bd6a329381734a96f2795c9a04ffcbcfa8091127b62b299c9771b35aa09ebb\"" Jul 12 00:12:13.052371 containerd[1467]: time="2025-07-12T00:12:13.052284218Z" level=info msg="RemoveContainer for \"e5bd6a329381734a96f2795c9a04ffcbcfa8091127b62b299c9771b35aa09ebb\" returns successfully" Jul 12 00:12:13.052534 kubelet[2553]: I0712 00:12:13.052510 2553 scope.go:117] "RemoveContainer" containerID="71c06c6fcda89d659dc4ade1586cbc776b8b72f2056d801ab8b0911bef2d8471" Jul 12 00:12:13.053564 containerd[1467]: time="2025-07-12T00:12:13.053539263Z" level=info msg="RemoveContainer for \"71c06c6fcda89d659dc4ade1586cbc776b8b72f2056d801ab8b0911bef2d8471\"" Jul 12 00:12:13.056829 containerd[1467]: time="2025-07-12T00:12:13.056796390Z" level=info msg="RemoveContainer for \"71c06c6fcda89d659dc4ade1586cbc776b8b72f2056d801ab8b0911bef2d8471\" returns successfully" Jul 12 00:12:13.057057 kubelet[2553]: I0712 00:12:13.057039 2553 scope.go:117] "RemoveContainer" 
containerID="eea4867f3d0283eddc38bb38359c6ab395c8a215013eb9c45a3ac1974ced3297" Jul 12 00:12:13.058041 containerd[1467]: time="2025-07-12T00:12:13.058012797Z" level=info msg="RemoveContainer for \"eea4867f3d0283eddc38bb38359c6ab395c8a215013eb9c45a3ac1974ced3297\"" Jul 12 00:12:13.062189 containerd[1467]: time="2025-07-12T00:12:13.062154951Z" level=info msg="RemoveContainer for \"eea4867f3d0283eddc38bb38359c6ab395c8a215013eb9c45a3ac1974ced3297\" returns successfully" Jul 12 00:12:13.062376 kubelet[2553]: I0712 00:12:13.062345 2553 scope.go:117] "RemoveContainer" containerID="f318f1085974a0fd32342894c9e917b924f4081a914ed1563320de632402e93c" Jul 12 00:12:13.062641 containerd[1467]: time="2025-07-12T00:12:13.062565207Z" level=error msg="ContainerStatus for \"f318f1085974a0fd32342894c9e917b924f4081a914ed1563320de632402e93c\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"f318f1085974a0fd32342894c9e917b924f4081a914ed1563320de632402e93c\": not found" Jul 12 00:12:13.062704 kubelet[2553]: E0712 00:12:13.062685 2553 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"f318f1085974a0fd32342894c9e917b924f4081a914ed1563320de632402e93c\": not found" containerID="f318f1085974a0fd32342894c9e917b924f4081a914ed1563320de632402e93c" Jul 12 00:12:13.062738 kubelet[2553]: I0712 00:12:13.062708 2553 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"f318f1085974a0fd32342894c9e917b924f4081a914ed1563320de632402e93c"} err="failed to get container status \"f318f1085974a0fd32342894c9e917b924f4081a914ed1563320de632402e93c\": rpc error: code = NotFound desc = an error occurred when try to find container \"f318f1085974a0fd32342894c9e917b924f4081a914ed1563320de632402e93c\": not found" Jul 12 00:12:13.062738 kubelet[2553]: I0712 00:12:13.062729 2553 scope.go:117] "RemoveContainer" 
containerID="587c849f72f5c89f29e4541be8974dbe8e87d2a52598313cb545c16f7356502d" Jul 12 00:12:13.062958 containerd[1467]: time="2025-07-12T00:12:13.062924385Z" level=error msg="ContainerStatus for \"587c849f72f5c89f29e4541be8974dbe8e87d2a52598313cb545c16f7356502d\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"587c849f72f5c89f29e4541be8974dbe8e87d2a52598313cb545c16f7356502d\": not found" Jul 12 00:12:13.063084 kubelet[2553]: E0712 00:12:13.063064 2553 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"587c849f72f5c89f29e4541be8974dbe8e87d2a52598313cb545c16f7356502d\": not found" containerID="587c849f72f5c89f29e4541be8974dbe8e87d2a52598313cb545c16f7356502d" Jul 12 00:12:13.063121 kubelet[2553]: I0712 00:12:13.063095 2553 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"587c849f72f5c89f29e4541be8974dbe8e87d2a52598313cb545c16f7356502d"} err="failed to get container status \"587c849f72f5c89f29e4541be8974dbe8e87d2a52598313cb545c16f7356502d\": rpc error: code = NotFound desc = an error occurred when try to find container \"587c849f72f5c89f29e4541be8974dbe8e87d2a52598313cb545c16f7356502d\": not found" Jul 12 00:12:13.063121 kubelet[2553]: I0712 00:12:13.063112 2553 scope.go:117] "RemoveContainer" containerID="e5bd6a329381734a96f2795c9a04ffcbcfa8091127b62b299c9771b35aa09ebb" Jul 12 00:12:13.063342 containerd[1467]: time="2025-07-12T00:12:13.063299723Z" level=error msg="ContainerStatus for \"e5bd6a329381734a96f2795c9a04ffcbcfa8091127b62b299c9771b35aa09ebb\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"e5bd6a329381734a96f2795c9a04ffcbcfa8091127b62b299c9771b35aa09ebb\": not found" Jul 12 00:12:13.063506 kubelet[2553]: E0712 00:12:13.063473 2553 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc 
= an error occurred when try to find container \"e5bd6a329381734a96f2795c9a04ffcbcfa8091127b62b299c9771b35aa09ebb\": not found" containerID="e5bd6a329381734a96f2795c9a04ffcbcfa8091127b62b299c9771b35aa09ebb" Jul 12 00:12:13.063552 kubelet[2553]: I0712 00:12:13.063508 2553 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"e5bd6a329381734a96f2795c9a04ffcbcfa8091127b62b299c9771b35aa09ebb"} err="failed to get container status \"e5bd6a329381734a96f2795c9a04ffcbcfa8091127b62b299c9771b35aa09ebb\": rpc error: code = NotFound desc = an error occurred when try to find container \"e5bd6a329381734a96f2795c9a04ffcbcfa8091127b62b299c9771b35aa09ebb\": not found" Jul 12 00:12:13.063552 kubelet[2553]: I0712 00:12:13.063523 2553 scope.go:117] "RemoveContainer" containerID="71c06c6fcda89d659dc4ade1586cbc776b8b72f2056d801ab8b0911bef2d8471" Jul 12 00:12:13.063859 containerd[1467]: time="2025-07-12T00:12:13.063783294Z" level=error msg="ContainerStatus for \"71c06c6fcda89d659dc4ade1586cbc776b8b72f2056d801ab8b0911bef2d8471\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"71c06c6fcda89d659dc4ade1586cbc776b8b72f2056d801ab8b0911bef2d8471\": not found" Jul 12 00:12:13.063948 kubelet[2553]: E0712 00:12:13.063913 2553 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"71c06c6fcda89d659dc4ade1586cbc776b8b72f2056d801ab8b0911bef2d8471\": not found" containerID="71c06c6fcda89d659dc4ade1586cbc776b8b72f2056d801ab8b0911bef2d8471" Jul 12 00:12:13.063948 kubelet[2553]: I0712 00:12:13.063944 2553 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"71c06c6fcda89d659dc4ade1586cbc776b8b72f2056d801ab8b0911bef2d8471"} err="failed to get container status \"71c06c6fcda89d659dc4ade1586cbc776b8b72f2056d801ab8b0911bef2d8471\": rpc error: code = NotFound desc = an error occurred 
when try to find container \"71c06c6fcda89d659dc4ade1586cbc776b8b72f2056d801ab8b0911bef2d8471\": not found" Jul 12 00:12:13.064029 kubelet[2553]: I0712 00:12:13.063958 2553 scope.go:117] "RemoveContainer" containerID="eea4867f3d0283eddc38bb38359c6ab395c8a215013eb9c45a3ac1974ced3297" Jul 12 00:12:13.064159 containerd[1467]: time="2025-07-12T00:12:13.064128754Z" level=error msg="ContainerStatus for \"eea4867f3d0283eddc38bb38359c6ab395c8a215013eb9c45a3ac1974ced3297\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"eea4867f3d0283eddc38bb38359c6ab395c8a215013eb9c45a3ac1974ced3297\": not found" Jul 12 00:12:13.064322 kubelet[2553]: E0712 00:12:13.064256 2553 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"eea4867f3d0283eddc38bb38359c6ab395c8a215013eb9c45a3ac1974ced3297\": not found" containerID="eea4867f3d0283eddc38bb38359c6ab395c8a215013eb9c45a3ac1974ced3297" Jul 12 00:12:13.064322 kubelet[2553]: I0712 00:12:13.064279 2553 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"eea4867f3d0283eddc38bb38359c6ab395c8a215013eb9c45a3ac1974ced3297"} err="failed to get container status \"eea4867f3d0283eddc38bb38359c6ab395c8a215013eb9c45a3ac1974ced3297\": rpc error: code = NotFound desc = an error occurred when try to find container \"eea4867f3d0283eddc38bb38359c6ab395c8a215013eb9c45a3ac1974ced3297\": not found" Jul 12 00:12:13.433547 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c508a2859e30a7977f5c1a8f9d181e62c630bb496aa1bdc46da1c59046238940-rootfs.mount: Deactivated successfully. Jul 12 00:12:13.433662 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6e90128a78b9ae6b81e46631cd1d0f5fb763ed297c04eff53d607f7628187989-rootfs.mount: Deactivated successfully. 
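The repeated `NotFound` errors above are benign: the kubelet retries `RemoveContainer`/`ContainerStatus` for containers the runtime has already deleted, and treats `NotFound` as "already removed". A minimal sketch of that idempotent-delete pattern (the `Runtime` class here is hypothetical, for illustration only, not the actual kubelet/CRI code):

```python
class NotFoundError(Exception):
    """Raised by the runtime when a container ID no longer exists."""

class Runtime:
    """Toy stand-in for a container runtime holding live container IDs."""
    def __init__(self):
        self.containers = {"e5bd6a32": object()}

    def remove(self, cid):
        if cid not in self.containers:
            raise NotFoundError(cid)
        del self.containers[cid]

def remove_container(runtime, cid):
    """Idempotent delete: NotFound means the work is already done."""
    try:
        runtime.remove(cid)
    except NotFoundError:
        pass  # already gone; log and move on, as the kubelet does
    return cid not in runtime.containers

rt = Runtime()
assert remove_container(rt, "e5bd6a32")  # first call actually deletes
assert remove_container(rt, "e5bd6a32")  # second call hits NotFound, still succeeds
```

This is why the log pairs each `rpc error: code = NotFound` with a successful continuation rather than a retry loop.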
Jul 12 00:12:13.433717 systemd[1]: var-lib-kubelet-pods-19e30619\x2d8e6b\x2d4e1d\x2da358\x2d6ba654a268c3-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dwqhjd.mount: Deactivated successfully.
Jul 12 00:12:13.433773 systemd[1]: var-lib-kubelet-pods-f40d3a62\x2dffe1\x2d48b1\x2d90a2\x2d9b9253209bef-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dqf6xz.mount: Deactivated successfully.
Jul 12 00:12:13.433836 systemd[1]: var-lib-kubelet-pods-f40d3a62\x2dffe1\x2d48b1\x2d90a2\x2d9b9253209bef-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
Jul 12 00:12:13.433908 systemd[1]: var-lib-kubelet-pods-f40d3a62\x2dffe1\x2d48b1\x2d90a2\x2d9b9253209bef-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
Jul 12 00:12:13.810791 kubelet[2553]: I0712 00:12:13.810682 2553 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="19e30619-8e6b-4e1d-a358-6ba654a268c3" path="/var/lib/kubelet/pods/19e30619-8e6b-4e1d-a358-6ba654a268c3/volumes"
Jul 12 00:12:13.811130 kubelet[2553]: I0712 00:12:13.811119 2553 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f40d3a62-ffe1-48b1-90a2-9b9253209bef" path="/var/lib/kubelet/pods/f40d3a62-ffe1-48b1-90a2-9b9253209bef/volumes"
Jul 12 00:12:14.363018 sshd[4215]: Connection closed by 10.0.0.1 port 43404
Jul 12 00:12:14.364398 sshd-session[4212]: pam_unix(sshd:session): session closed for user core
Jul 12 00:12:14.379490 systemd[1]: sshd@22-10.0.0.137:22-10.0.0.1:43404.service: Deactivated successfully.
Jul 12 00:12:14.382352 systemd[1]: session-23.scope: Deactivated successfully.
Jul 12 00:12:14.382790 systemd[1]: session-23.scope: Consumed 1.389s CPU time, 27.8M memory peak.
Jul 12 00:12:14.383518 systemd-logind[1448]: Session 23 logged out. Waiting for processes to exit.
Jul 12 00:12:14.396659 systemd[1]: Started sshd@23-10.0.0.137:22-10.0.0.1:60442.service - OpenSSH per-connection server daemon (10.0.0.1:60442).
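The `\x2d` and `\x7e` runs in the mount-unit names above are systemd unit-name escapes: `-` encodes `/`, `\x2d` a literal `-`, and `\x7e` a `~`. `systemd-escape --unescape` decodes them on a live host; a pure-Python equivalent for reading such logs offline (illustrative helper, not part of systemd) looks like this:

```python
import re

def systemd_unescape(unit: str) -> str:
    """Decode a systemd unit name back to a path: '-' -> '/', '\\xNN' -> chr(0xNN)."""
    # Replace '-' first: the characters inside an escape like '\x2d'
    # ('\\', 'x', '2', 'd') contain no hyphen, so the order is safe.
    path = unit.replace("-", "/")
    return re.sub(r"\\x([0-9a-fA-F]{2})",
                  lambda m: chr(int(m.group(1), 16)), path)

name = r"var-lib-kubelet-pods-19e30619\x2d8e6b\x2d4e1d\x2da358\x2d6ba654a268c3"
# Yields var/lib/kubelet/pods/19e30619-8e6b-4e1d-a358-6ba654a268c3
# (unit names for absolute paths drop the leading slash).
print(systemd_unescape(name))
```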
Jul 12 00:12:14.398378 systemd-logind[1448]: Removed session 23.
Jul 12 00:12:14.448111 sshd[4375]: Accepted publickey for core from 10.0.0.1 port 60442 ssh2: RSA SHA256:n0+tZ6jsx4rUqm31x+f+3vpltD7bP0WQQLUre7Cw/6U
Jul 12 00:12:14.450006 sshd-session[4375]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 12 00:12:14.456334 systemd-logind[1448]: New session 24 of user core.
Jul 12 00:12:14.468080 systemd[1]: Started session-24.scope - Session 24 of User core.
Jul 12 00:12:14.863674 kubelet[2553]: E0712 00:12:14.863594 2553 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Jul 12 00:12:15.603611 sshd[4378]: Connection closed by 10.0.0.1 port 60442
Jul 12 00:12:15.604674 sshd-session[4375]: pam_unix(sshd:session): session closed for user core
Jul 12 00:12:15.614406 systemd[1]: sshd@23-10.0.0.137:22-10.0.0.1:60442.service: Deactivated successfully.
Jul 12 00:12:15.619010 systemd[1]: session-24.scope: Deactivated successfully.
Jul 12 00:12:15.619393 systemd[1]: session-24.scope: Consumed 1.050s CPU time, 26.3M memory peak.
Jul 12 00:12:15.623740 systemd-logind[1448]: Session 24 logged out. Waiting for processes to exit.
Jul 12 00:12:15.629782 kubelet[2553]: I0712 00:12:15.629734 2553 memory_manager.go:355] "RemoveStaleState removing state" podUID="f40d3a62-ffe1-48b1-90a2-9b9253209bef" containerName="cilium-agent"
Jul 12 00:12:15.629782 kubelet[2553]: I0712 00:12:15.629775 2553 memory_manager.go:355] "RemoveStaleState removing state" podUID="19e30619-8e6b-4e1d-a358-6ba654a268c3" containerName="cilium-operator"
Jul 12 00:12:15.636415 systemd[1]: Started sshd@24-10.0.0.137:22-10.0.0.1:60452.service - OpenSSH per-connection server daemon (10.0.0.1:60452).
Jul 12 00:12:15.641543 systemd-logind[1448]: Removed session 24.
Jul 12 00:12:15.654543 systemd[1]: Created slice kubepods-burstable-pod1fafce52_c4f1_414d_b6b6_cd8505c596ed.slice - libcontainer container kubepods-burstable-pod1fafce52_c4f1_414d_b6b6_cd8505c596ed.slice. Jul 12 00:12:15.690253 kubelet[2553]: I0712 00:12:15.690204 2553 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1fafce52-c4f1-414d-b6b6-cd8505c596ed-lib-modules\") pod \"cilium-mjxbx\" (UID: \"1fafce52-c4f1-414d-b6b6-cd8505c596ed\") " pod="kube-system/cilium-mjxbx" Jul 12 00:12:15.690253 kubelet[2553]: I0712 00:12:15.690248 2553 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/1fafce52-c4f1-414d-b6b6-cd8505c596ed-host-proc-sys-net\") pod \"cilium-mjxbx\" (UID: \"1fafce52-c4f1-414d-b6b6-cd8505c596ed\") " pod="kube-system/cilium-mjxbx" Jul 12 00:12:15.690696 kubelet[2553]: I0712 00:12:15.690271 2553 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/1fafce52-c4f1-414d-b6b6-cd8505c596ed-host-proc-sys-kernel\") pod \"cilium-mjxbx\" (UID: \"1fafce52-c4f1-414d-b6b6-cd8505c596ed\") " pod="kube-system/cilium-mjxbx" Jul 12 00:12:15.690696 kubelet[2553]: I0712 00:12:15.690288 2553 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mqqpw\" (UniqueName: \"kubernetes.io/projected/1fafce52-c4f1-414d-b6b6-cd8505c596ed-kube-api-access-mqqpw\") pod \"cilium-mjxbx\" (UID: \"1fafce52-c4f1-414d-b6b6-cd8505c596ed\") " pod="kube-system/cilium-mjxbx" Jul 12 00:12:15.690696 kubelet[2553]: I0712 00:12:15.690316 2553 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: 
\"kubernetes.io/secret/1fafce52-c4f1-414d-b6b6-cd8505c596ed-cilium-ipsec-secrets\") pod \"cilium-mjxbx\" (UID: \"1fafce52-c4f1-414d-b6b6-cd8505c596ed\") " pod="kube-system/cilium-mjxbx" Jul 12 00:12:15.690696 kubelet[2553]: I0712 00:12:15.690333 2553 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1fafce52-c4f1-414d-b6b6-cd8505c596ed-xtables-lock\") pod \"cilium-mjxbx\" (UID: \"1fafce52-c4f1-414d-b6b6-cd8505c596ed\") " pod="kube-system/cilium-mjxbx" Jul 12 00:12:15.690696 kubelet[2553]: I0712 00:12:15.690348 2553 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/1fafce52-c4f1-414d-b6b6-cd8505c596ed-cilium-run\") pod \"cilium-mjxbx\" (UID: \"1fafce52-c4f1-414d-b6b6-cd8505c596ed\") " pod="kube-system/cilium-mjxbx" Jul 12 00:12:15.690696 kubelet[2553]: I0712 00:12:15.690364 2553 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/1fafce52-c4f1-414d-b6b6-cd8505c596ed-bpf-maps\") pod \"cilium-mjxbx\" (UID: \"1fafce52-c4f1-414d-b6b6-cd8505c596ed\") " pod="kube-system/cilium-mjxbx" Jul 12 00:12:15.690897 kubelet[2553]: I0712 00:12:15.690439 2553 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/1fafce52-c4f1-414d-b6b6-cd8505c596ed-cilium-cgroup\") pod \"cilium-mjxbx\" (UID: \"1fafce52-c4f1-414d-b6b6-cd8505c596ed\") " pod="kube-system/cilium-mjxbx" Jul 12 00:12:15.690897 kubelet[2553]: I0712 00:12:15.690486 2553 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/1fafce52-c4f1-414d-b6b6-cd8505c596ed-cni-path\") pod \"cilium-mjxbx\" (UID: \"1fafce52-c4f1-414d-b6b6-cd8505c596ed\") " 
pod="kube-system/cilium-mjxbx" Jul 12 00:12:15.690897 kubelet[2553]: I0712 00:12:15.690536 2553 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/1fafce52-c4f1-414d-b6b6-cd8505c596ed-hubble-tls\") pod \"cilium-mjxbx\" (UID: \"1fafce52-c4f1-414d-b6b6-cd8505c596ed\") " pod="kube-system/cilium-mjxbx" Jul 12 00:12:15.690897 kubelet[2553]: I0712 00:12:15.690566 2553 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/1fafce52-c4f1-414d-b6b6-cd8505c596ed-hostproc\") pod \"cilium-mjxbx\" (UID: \"1fafce52-c4f1-414d-b6b6-cd8505c596ed\") " pod="kube-system/cilium-mjxbx" Jul 12 00:12:15.690897 kubelet[2553]: I0712 00:12:15.690586 2553 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/1fafce52-c4f1-414d-b6b6-cd8505c596ed-clustermesh-secrets\") pod \"cilium-mjxbx\" (UID: \"1fafce52-c4f1-414d-b6b6-cd8505c596ed\") " pod="kube-system/cilium-mjxbx" Jul 12 00:12:15.690897 kubelet[2553]: I0712 00:12:15.690608 2553 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/1fafce52-c4f1-414d-b6b6-cd8505c596ed-cilium-config-path\") pod \"cilium-mjxbx\" (UID: \"1fafce52-c4f1-414d-b6b6-cd8505c596ed\") " pod="kube-system/cilium-mjxbx" Jul 12 00:12:15.691081 kubelet[2553]: I0712 00:12:15.690622 2553 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/1fafce52-c4f1-414d-b6b6-cd8505c596ed-etc-cni-netd\") pod \"cilium-mjxbx\" (UID: \"1fafce52-c4f1-414d-b6b6-cd8505c596ed\") " pod="kube-system/cilium-mjxbx" Jul 12 00:12:15.698257 sshd[4389]: Accepted publickey for core from 10.0.0.1 port 60452 ssh2: RSA 
SHA256:n0+tZ6jsx4rUqm31x+f+3vpltD7bP0WQQLUre7Cw/6U Jul 12 00:12:15.699074 sshd-session[4389]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 12 00:12:15.703015 systemd-logind[1448]: New session 25 of user core. Jul 12 00:12:15.711094 systemd[1]: Started session-25.scope - Session 25 of User core. Jul 12 00:12:15.762636 sshd[4392]: Connection closed by 10.0.0.1 port 60452 Jul 12 00:12:15.763356 sshd-session[4389]: pam_unix(sshd:session): session closed for user core Jul 12 00:12:15.779431 systemd[1]: sshd@24-10.0.0.137:22-10.0.0.1:60452.service: Deactivated successfully. Jul 12 00:12:15.781232 systemd[1]: session-25.scope: Deactivated successfully. Jul 12 00:12:15.784525 systemd-logind[1448]: Session 25 logged out. Waiting for processes to exit. Jul 12 00:12:15.792364 systemd[1]: Started sshd@25-10.0.0.137:22-10.0.0.1:60462.service - OpenSSH per-connection server daemon (10.0.0.1:60462). Jul 12 00:12:15.804407 systemd-logind[1448]: Removed session 25. Jul 12 00:12:15.810678 kubelet[2553]: E0712 00:12:15.810644 2553 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:12:15.851520 sshd[4398]: Accepted publickey for core from 10.0.0.1 port 60462 ssh2: RSA SHA256:n0+tZ6jsx4rUqm31x+f+3vpltD7bP0WQQLUre7Cw/6U Jul 12 00:12:15.853207 sshd-session[4398]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 12 00:12:15.857611 systemd-logind[1448]: New session 26 of user core. Jul 12 00:12:15.866075 systemd[1]: Started session-26.scope - Session 26 of User core. 
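The recurring `Nameserver limits exceeded` warnings in this stretch of the log come from the glibc resolver's three-nameserver cap: the kubelet clamps a pod's resolv.conf to the first three entries, omits the rest, and logs the applied line. A toy version of that clamping (the limit of 3 matches the three-address applied line in the log; the helper itself is illustrative, not kubelet code):

```python
MAXNS = 3  # glibc's resolver honors at most three nameserver entries

def clamp_nameservers(resolv_conf: str):
    """Return (applied, omitted) nameserver lists from resolv.conf text."""
    servers = [line.split()[1]
               for line in resolv_conf.splitlines()
               if line.startswith("nameserver") and len(line.split()) > 1]
    return servers[:MAXNS], servers[MAXNS:]

conf = ("nameserver 1.1.1.1\nnameserver 1.0.0.1\n"
        "nameserver 8.8.8.8\nnameserver 8.8.4.4\n")
applied, omitted = clamp_nameservers(conf)
print("applied:", " ".join(applied))  # applied: 1.1.1.1 1.0.0.1 8.8.8.8
print("omitted:", omitted)            # omitted: ['8.8.4.4']
```

The "applied nameserver line" in the warning (`1.1.1.1 1.0.0.1 8.8.8.8`) is exactly the surviving first three.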
Jul 12 00:12:15.964537 kubelet[2553]: E0712 00:12:15.964266 2553 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:12:15.965930 containerd[1467]: time="2025-07-12T00:12:15.965336161Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-mjxbx,Uid:1fafce52-c4f1-414d-b6b6-cd8505c596ed,Namespace:kube-system,Attempt:0,}" Jul 12 00:12:15.995926 containerd[1467]: time="2025-07-12T00:12:15.995779044Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 12 00:12:15.995926 containerd[1467]: time="2025-07-12T00:12:15.995871559Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 12 00:12:15.995926 containerd[1467]: time="2025-07-12T00:12:15.995900678Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 12 00:12:15.996382 containerd[1467]: time="2025-07-12T00:12:15.996266938Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 12 00:12:16.023422 systemd[1]: Started cri-containerd-13e89cb52f9c43e5fa39930ba21321e1ea6eee8c6299ea2928ae046204e46ec8.scope - libcontainer container 13e89cb52f9c43e5fa39930ba21321e1ea6eee8c6299ea2928ae046204e46ec8. 
Jul 12 00:12:16.057645 containerd[1467]: time="2025-07-12T00:12:16.057591351Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-mjxbx,Uid:1fafce52-c4f1-414d-b6b6-cd8505c596ed,Namespace:kube-system,Attempt:0,} returns sandbox id \"13e89cb52f9c43e5fa39930ba21321e1ea6eee8c6299ea2928ae046204e46ec8\"" Jul 12 00:12:16.058576 kubelet[2553]: E0712 00:12:16.058552 2553 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:12:16.061974 containerd[1467]: time="2025-07-12T00:12:16.061367126Z" level=info msg="CreateContainer within sandbox \"13e89cb52f9c43e5fa39930ba21321e1ea6eee8c6299ea2928ae046204e46ec8\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jul 12 00:12:16.086594 containerd[1467]: time="2025-07-12T00:12:16.086518131Z" level=info msg="CreateContainer within sandbox \"13e89cb52f9c43e5fa39930ba21321e1ea6eee8c6299ea2928ae046204e46ec8\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"d9c98984b72062956d2b8756cf2dbc0084ca7d11acef955989eb640629230f88\"" Jul 12 00:12:16.087366 containerd[1467]: time="2025-07-12T00:12:16.087304732Z" level=info msg="StartContainer for \"d9c98984b72062956d2b8756cf2dbc0084ca7d11acef955989eb640629230f88\"" Jul 12 00:12:16.116223 systemd[1]: Started cri-containerd-d9c98984b72062956d2b8756cf2dbc0084ca7d11acef955989eb640629230f88.scope - libcontainer container d9c98984b72062956d2b8756cf2dbc0084ca7d11acef955989eb640629230f88. Jul 12 00:12:16.140945 containerd[1467]: time="2025-07-12T00:12:16.140683431Z" level=info msg="StartContainer for \"d9c98984b72062956d2b8756cf2dbc0084ca7d11acef955989eb640629230f88\" returns successfully" Jul 12 00:12:16.152494 systemd[1]: cri-containerd-d9c98984b72062956d2b8756cf2dbc0084ca7d11acef955989eb640629230f88.scope: Deactivated successfully. 
Jul 12 00:12:16.183697 containerd[1467]: time="2025-07-12T00:12:16.183556245Z" level=info msg="shim disconnected" id=d9c98984b72062956d2b8756cf2dbc0084ca7d11acef955989eb640629230f88 namespace=k8s.io Jul 12 00:12:16.183697 containerd[1467]: time="2025-07-12T00:12:16.183609923Z" level=warning msg="cleaning up after shim disconnected" id=d9c98984b72062956d2b8756cf2dbc0084ca7d11acef955989eb640629230f88 namespace=k8s.io Jul 12 00:12:16.183697 containerd[1467]: time="2025-07-12T00:12:16.183618522Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 12 00:12:17.028449 kubelet[2553]: E0712 00:12:17.026830 2553 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:12:17.045899 containerd[1467]: time="2025-07-12T00:12:17.045239300Z" level=info msg="CreateContainer within sandbox \"13e89cb52f9c43e5fa39930ba21321e1ea6eee8c6299ea2928ae046204e46ec8\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jul 12 00:12:17.060757 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2157404132.mount: Deactivated successfully. Jul 12 00:12:17.065300 containerd[1467]: time="2025-07-12T00:12:17.065259342Z" level=info msg="CreateContainer within sandbox \"13e89cb52f9c43e5fa39930ba21321e1ea6eee8c6299ea2928ae046204e46ec8\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"d10cb22dab3d0c8c6537535331dea21797125a44f6228222ebcc898fe9b51c85\"" Jul 12 00:12:17.066616 containerd[1467]: time="2025-07-12T00:12:17.066589361Z" level=info msg="StartContainer for \"d10cb22dab3d0c8c6537535331dea21797125a44f6228222ebcc898fe9b51c85\"" Jul 12 00:12:17.094094 systemd[1]: Started cri-containerd-d10cb22dab3d0c8c6537535331dea21797125a44f6228222ebcc898fe9b51c85.scope - libcontainer container d10cb22dab3d0c8c6537535331dea21797125a44f6228222ebcc898fe9b51c85. 
Jul 12 00:12:17.124554 containerd[1467]: time="2025-07-12T00:12:17.124513784Z" level=info msg="StartContainer for \"d10cb22dab3d0c8c6537535331dea21797125a44f6228222ebcc898fe9b51c85\" returns successfully" Jul 12 00:12:17.136674 systemd[1]: cri-containerd-d10cb22dab3d0c8c6537535331dea21797125a44f6228222ebcc898fe9b51c85.scope: Deactivated successfully. Jul 12 00:12:17.167117 containerd[1467]: time="2025-07-12T00:12:17.167060512Z" level=info msg="shim disconnected" id=d10cb22dab3d0c8c6537535331dea21797125a44f6228222ebcc898fe9b51c85 namespace=k8s.io Jul 12 00:12:17.167117 containerd[1467]: time="2025-07-12T00:12:17.167114550Z" level=warning msg="cleaning up after shim disconnected" id=d10cb22dab3d0c8c6537535331dea21797125a44f6228222ebcc898fe9b51c85 namespace=k8s.io Jul 12 00:12:17.167117 containerd[1467]: time="2025-07-12T00:12:17.167123189Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 12 00:12:18.030406 kubelet[2553]: E0712 00:12:18.030376 2553 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:12:18.032895 containerd[1467]: time="2025-07-12T00:12:18.032796256Z" level=info msg="CreateContainer within sandbox \"13e89cb52f9c43e5fa39930ba21321e1ea6eee8c6299ea2928ae046204e46ec8\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jul 12 00:12:18.045345 containerd[1467]: time="2025-07-12T00:12:18.045288522Z" level=info msg="CreateContainer within sandbox \"13e89cb52f9c43e5fa39930ba21321e1ea6eee8c6299ea2928ae046204e46ec8\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"646f2b70f33addaf50a79e0009ef9cd7ed08de132e4a2819f6b9d8c01d2cbfca\"" Jul 12 00:12:18.046105 containerd[1467]: time="2025-07-12T00:12:18.046079209Z" level=info msg="StartContainer for \"646f2b70f33addaf50a79e0009ef9cd7ed08de132e4a2819f6b9d8c01d2cbfca\"" Jul 12 00:12:18.046472 systemd[1]: 
var-lib-containerd-tmpmounts-containerd\x2dmount369385617.mount: Deactivated successfully. Jul 12 00:12:18.075440 systemd[1]: Started cri-containerd-646f2b70f33addaf50a79e0009ef9cd7ed08de132e4a2819f6b9d8c01d2cbfca.scope - libcontainer container 646f2b70f33addaf50a79e0009ef9cd7ed08de132e4a2819f6b9d8c01d2cbfca. Jul 12 00:12:18.114683 containerd[1467]: time="2025-07-12T00:12:18.114558842Z" level=info msg="StartContainer for \"646f2b70f33addaf50a79e0009ef9cd7ed08de132e4a2819f6b9d8c01d2cbfca\" returns successfully" Jul 12 00:12:18.123622 systemd[1]: cri-containerd-646f2b70f33addaf50a79e0009ef9cd7ed08de132e4a2819f6b9d8c01d2cbfca.scope: Deactivated successfully. Jul 12 00:12:18.147453 containerd[1467]: time="2025-07-12T00:12:18.147393639Z" level=info msg="shim disconnected" id=646f2b70f33addaf50a79e0009ef9cd7ed08de132e4a2819f6b9d8c01d2cbfca namespace=k8s.io Jul 12 00:12:18.147453 containerd[1467]: time="2025-07-12T00:12:18.147451196Z" level=warning msg="cleaning up after shim disconnected" id=646f2b70f33addaf50a79e0009ef9cd7ed08de132e4a2819f6b9d8c01d2cbfca namespace=k8s.io Jul 12 00:12:18.147453 containerd[1467]: time="2025-07-12T00:12:18.147461396Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 12 00:12:18.795680 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-646f2b70f33addaf50a79e0009ef9cd7ed08de132e4a2819f6b9d8c01d2cbfca-rootfs.mount: Deactivated successfully. 
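The log above repeats one pattern per Cilium init container (`mount-cgroup`, `apply-sysctl-overwrites`, `mount-bpf-fs`, `clean-cilium-state`): CreateContainer, StartContainer, scope deactivation, shim disconnect, rootfs unmount, then the next one begins. That is the standard Kubernetes init-container contract: each runs alone, to completion, before the next, and only then does the main container start. A minimal sketch of the ordering (hypothetical `start_container` callback, not the CRI API):

```python
INIT_CONTAINERS = ["mount-cgroup", "apply-sysctl-overwrites",
                   "mount-bpf-fs", "clean-cilium-state"]

def run_pod(start_container, main="cilium-agent"):
    """Run init containers one at a time; any non-zero exit aborts the pod start."""
    for name in INIT_CONTAINERS:
        if start_container(name) != 0:
            raise RuntimeError(f"init container {name} failed")
    start_container(main)  # main container starts only after all inits succeed

order = []
run_pod(lambda name: (order.append(name), 0)[1])
print(order[-1])  # cilium-agent
```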
Jul 12 00:12:19.034574 kubelet[2553]: E0712 00:12:19.034391 2553 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:12:19.035972 containerd[1467]: time="2025-07-12T00:12:19.035939311Z" level=info msg="CreateContainer within sandbox \"13e89cb52f9c43e5fa39930ba21321e1ea6eee8c6299ea2928ae046204e46ec8\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jul 12 00:12:19.063090 containerd[1467]: time="2025-07-12T00:12:19.062902600Z" level=info msg="CreateContainer within sandbox \"13e89cb52f9c43e5fa39930ba21321e1ea6eee8c6299ea2928ae046204e46ec8\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"86e82157967c2849b9336271b39869791cab481dc32503b4761ec2b7423007dc\"" Jul 12 00:12:19.063775 containerd[1467]: time="2025-07-12T00:12:19.063601373Z" level=info msg="StartContainer for \"86e82157967c2849b9336271b39869791cab481dc32503b4761ec2b7423007dc\"" Jul 12 00:12:19.124092 systemd[1]: Started cri-containerd-86e82157967c2849b9336271b39869791cab481dc32503b4761ec2b7423007dc.scope - libcontainer container 86e82157967c2849b9336271b39869791cab481dc32503b4761ec2b7423007dc. Jul 12 00:12:19.145936 systemd[1]: cri-containerd-86e82157967c2849b9336271b39869791cab481dc32503b4761ec2b7423007dc.scope: Deactivated successfully. 
Jul 12 00:12:19.167605 containerd[1467]: time="2025-07-12T00:12:19.167553566Z" level=info msg="StartContainer for \"86e82157967c2849b9336271b39869791cab481dc32503b4761ec2b7423007dc\" returns successfully" Jul 12 00:12:19.188939 containerd[1467]: time="2025-07-12T00:12:19.188873560Z" level=info msg="shim disconnected" id=86e82157967c2849b9336271b39869791cab481dc32503b4761ec2b7423007dc namespace=k8s.io Jul 12 00:12:19.188939 containerd[1467]: time="2025-07-12T00:12:19.188933997Z" level=warning msg="cleaning up after shim disconnected" id=86e82157967c2849b9336271b39869791cab481dc32503b4761ec2b7423007dc namespace=k8s.io Jul 12 00:12:19.188939 containerd[1467]: time="2025-07-12T00:12:19.188947357Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 12 00:12:19.795986 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-86e82157967c2849b9336271b39869791cab481dc32503b4761ec2b7423007dc-rootfs.mount: Deactivated successfully. Jul 12 00:12:19.864659 kubelet[2553]: E0712 00:12:19.864601 2553 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jul 12 00:12:20.039902 kubelet[2553]: E0712 00:12:20.039801 2553 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:12:20.046459 containerd[1467]: time="2025-07-12T00:12:20.046279416Z" level=info msg="CreateContainer within sandbox \"13e89cb52f9c43e5fa39930ba21321e1ea6eee8c6299ea2928ae046204e46ec8\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jul 12 00:12:20.078473 containerd[1467]: time="2025-07-12T00:12:20.078282920Z" level=info msg="CreateContainer within sandbox \"13e89cb52f9c43e5fa39930ba21321e1ea6eee8c6299ea2928ae046204e46ec8\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id 
\"eccce011b52c37872211741a57e2e1c18d7e9b9a3f50aa26e21bf31896865706\"" Jul 12 00:12:20.081533 containerd[1467]: time="2025-07-12T00:12:20.081498282Z" level=info msg="StartContainer for \"eccce011b52c37872211741a57e2e1c18d7e9b9a3f50aa26e21bf31896865706\"" Jul 12 00:12:20.116318 systemd[1]: Started cri-containerd-eccce011b52c37872211741a57e2e1c18d7e9b9a3f50aa26e21bf31896865706.scope - libcontainer container eccce011b52c37872211741a57e2e1c18d7e9b9a3f50aa26e21bf31896865706. Jul 12 00:12:20.149644 containerd[1467]: time="2025-07-12T00:12:20.149598059Z" level=info msg="StartContainer for \"eccce011b52c37872211741a57e2e1c18d7e9b9a3f50aa26e21bf31896865706\" returns successfully" Jul 12 00:12:20.446914 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce)) Jul 12 00:12:21.043719 kubelet[2553]: E0712 00:12:21.043694 2553 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:12:21.059514 kubelet[2553]: I0712 00:12:21.059446 2553 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-mjxbx" podStartSLOduration=6.059428506 podStartE2EDuration="6.059428506s" podCreationTimestamp="2025-07-12 00:12:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-12 00:12:21.05873701 +0000 UTC m=+81.336878610" watchObservedRunningTime="2025-07-12 00:12:21.059428506 +0000 UTC m=+81.337570106" Jul 12 00:12:21.147252 kubelet[2553]: I0712 00:12:21.146185 2553 setters.go:602] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-07-12T00:12:21Z","lastTransitionTime":"2025-07-12T00:12:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not 
initialized"} Jul 12 00:12:21.808775 kubelet[2553]: E0712 00:12:21.808396 2553 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:12:22.046198 kubelet[2553]: E0712 00:12:22.046167 2553 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:12:23.377428 systemd-networkd[1399]: lxc_health: Link UP Jul 12 00:12:23.392808 systemd-networkd[1399]: lxc_health: Gained carrier Jul 12 00:12:23.813452 kubelet[2553]: E0712 00:12:23.813047 2553 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:12:23.968301 kubelet[2553]: E0712 00:12:23.967099 2553 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:12:24.050829 kubelet[2553]: E0712 00:12:24.050797 2553 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:12:24.621019 systemd-networkd[1399]: lxc_health: Gained IPv6LL Jul 12 00:12:25.052414 kubelet[2553]: E0712 00:12:25.052335 2553 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:12:26.540121 systemd[1]: run-containerd-runc-k8s.io-eccce011b52c37872211741a57e2e1c18d7e9b9a3f50aa26e21bf31896865706-runc.G91TJq.mount: Deactivated successfully. 
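The `podStartSLOduration=6.059428506` in the startup-latency entry above is just `observedRunningTime` minus `podCreationTimestamp` (the pull timestamps are the zero value here because no image pull was needed). A sketch of that arithmetic, with the log's timestamps truncated to microseconds so Python's `%f` can parse them:

```python
from datetime import datetime

def startup_duration(created: str, running: str) -> float:
    """Pod startup duration in seconds: observed running time minus creation time."""
    def parse(ts):
        # Creation timestamps in the log carry no fractional seconds,
        # so fall back to a format without %f.
        for fmt in ("%Y-%m-%d %H:%M:%S.%f %z", "%Y-%m-%d %H:%M:%S %z"):
            try:
                return datetime.strptime(ts, fmt)
            except ValueError:
                continue
        raise ValueError(ts)
    return (parse(running) - parse(created)).total_seconds()

d = startup_duration("2025-07-12 00:12:15 +0000",
                     "2025-07-12 00:12:21.058737 +0000")
print(round(d, 6))  # 6.058737 -- the log's 6.059428506 keeps nanosecond precision
```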
Jul 12 00:12:28.709289 sshd[4405]: Connection closed by 10.0.0.1 port 60462
Jul 12 00:12:28.709980 sshd-session[4398]: pam_unix(sshd:session): session closed for user core
Jul 12 00:12:28.713115 systemd-logind[1448]: Session 26 logged out. Waiting for processes to exit.
Jul 12 00:12:28.713345 systemd[1]: sshd@25-10.0.0.137:22-10.0.0.1:60462.service: Deactivated successfully.
Jul 12 00:12:28.715049 systemd[1]: session-26.scope: Deactivated successfully.
Jul 12 00:12:28.716584 systemd-logind[1448]: Removed session 26.