Jul 14 21:24:21.934386 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Jul 14 21:24:21.934407 kernel: Linux version 6.6.97-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.43 p3) 2.43.1) #1 SMP PREEMPT Mon Jul 14 19:52:13 -00 2025
Jul 14 21:24:21.934417 kernel: KASLR enabled
Jul 14 21:24:21.934422 kernel: efi: EFI v2.7 by EDK II
Jul 14 21:24:21.934428 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdbbae018 ACPI 2.0=0xd9b43018 RNG=0xd9b43a18 MEMRESERVE=0xd9b40218
Jul 14 21:24:21.934433 kernel: random: crng init done
Jul 14 21:24:21.934440 kernel: secureboot: Secure boot disabled
Jul 14 21:24:21.934446 kernel: ACPI: Early table checksum verification disabled
Jul 14 21:24:21.934451 kernel: ACPI: RSDP 0x00000000D9B43018 000024 (v02 BOCHS )
Jul 14 21:24:21.934459 kernel: ACPI: XSDT 0x00000000D9B43F18 000064 (v01 BOCHS BXPC 00000001 01000013)
Jul 14 21:24:21.934465 kernel: ACPI: FACP 0x00000000D9B43B18 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Jul 14 21:24:21.934471 kernel: ACPI: DSDT 0x00000000D9B41018 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jul 14 21:24:21.934477 kernel: ACPI: APIC 0x00000000D9B43C98 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Jul 14 21:24:21.934483 kernel: ACPI: PPTT 0x00000000D9B43098 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jul 14 21:24:21.934490 kernel: ACPI: GTDT 0x00000000D9B43818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jul 14 21:24:21.934497 kernel: ACPI: MCFG 0x00000000D9B43A98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 14 21:24:21.934504 kernel: ACPI: SPCR 0x00000000D9B43918 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jul 14 21:24:21.934510 kernel: ACPI: DBG2 0x00000000D9B43998 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Jul 14 21:24:21.934516 kernel: ACPI: IORT 0x00000000D9B43198 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jul 14 21:24:21.934522 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
Jul 14 21:24:21.934529 kernel: NUMA: Failed to initialise from firmware
Jul 14 21:24:21.934535 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
Jul 14 21:24:21.934541 kernel: NUMA: NODE_DATA [mem 0xdc958800-0xdc95dfff]
Jul 14 21:24:21.934547 kernel: Zone ranges:
Jul 14 21:24:21.934553 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff]
Jul 14 21:24:21.934560 kernel: DMA32 empty
Jul 14 21:24:21.934566 kernel: Normal empty
Jul 14 21:24:21.934572 kernel: Movable zone start for each node
Jul 14 21:24:21.934578 kernel: Early memory node ranges
Jul 14 21:24:21.934584 kernel: node 0: [mem 0x0000000040000000-0x00000000d967ffff]
Jul 14 21:24:21.934590 kernel: node 0: [mem 0x00000000d9680000-0x00000000d968ffff]
Jul 14 21:24:21.934597 kernel: node 0: [mem 0x00000000d9690000-0x00000000d976ffff]
Jul 14 21:24:21.934603 kernel: node 0: [mem 0x00000000d9770000-0x00000000d9b3ffff]
Jul 14 21:24:21.934609 kernel: node 0: [mem 0x00000000d9b40000-0x00000000dce1ffff]
Jul 14 21:24:21.934615 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff]
Jul 14 21:24:21.934621 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff]
Jul 14 21:24:21.934627 kernel: node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff]
Jul 14 21:24:21.934635 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff]
Jul 14 21:24:21.934641 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
Jul 14 21:24:21.934647 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
Jul 14 21:24:21.934656 kernel: psci: probing for conduit method from ACPI.
Jul 14 21:24:21.934662 kernel: psci: PSCIv1.1 detected in firmware.
Jul 14 21:24:21.934669 kernel: psci: Using standard PSCI v0.2 function IDs
Jul 14 21:24:21.934677 kernel: psci: Trusted OS migration not required
Jul 14 21:24:21.934683 kernel: psci: SMC Calling Convention v1.1
Jul 14 21:24:21.934690 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Jul 14 21:24:21.934696 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976
Jul 14 21:24:21.934703 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096
Jul 14 21:24:21.934709 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
Jul 14 21:24:21.934716 kernel: Detected PIPT I-cache on CPU0
Jul 14 21:24:21.934722 kernel: CPU features: detected: GIC system register CPU interface
Jul 14 21:24:21.934729 kernel: CPU features: detected: Hardware dirty bit management
Jul 14 21:24:21.934735 kernel: CPU features: detected: Spectre-v4
Jul 14 21:24:21.934743 kernel: CPU features: detected: Spectre-BHB
Jul 14 21:24:21.934749 kernel: CPU features: kernel page table isolation forced ON by KASLR
Jul 14 21:24:21.934756 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Jul 14 21:24:21.934769 kernel: CPU features: detected: ARM erratum 1418040
Jul 14 21:24:21.934777 kernel: CPU features: detected: SSBS not fully self-synchronizing
Jul 14 21:24:21.934783 kernel: alternatives: applying boot alternatives
Jul 14 21:24:21.934791 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=910c1eabca8d0b0719454fc348b97a88b5106b4a5abdaa492c9bb12d343d8a85
Jul 14 21:24:21.934798 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jul 14 21:24:21.934804 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jul 14 21:24:21.934811 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jul 14 21:24:21.934817 kernel: Fallback order for Node 0: 0
Jul 14 21:24:21.934826 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024
Jul 14 21:24:21.934832 kernel: Policy zone: DMA
Jul 14 21:24:21.934838 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jul 14 21:24:21.934845 kernel: software IO TLB: area num 4.
Jul 14 21:24:21.934852 kernel: software IO TLB: mapped [mem 0x00000000d2e00000-0x00000000d6e00000] (64MB)
Jul 14 21:24:21.934859 kernel: Memory: 2387476K/2572288K available (10368K kernel code, 2186K rwdata, 8104K rodata, 38336K init, 897K bss, 184812K reserved, 0K cma-reserved)
Jul 14 21:24:21.934865 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Jul 14 21:24:21.934872 kernel: rcu: Preemptible hierarchical RCU implementation.
Jul 14 21:24:21.934879 kernel: rcu: RCU event tracing is enabled.
Jul 14 21:24:21.934886 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Jul 14 21:24:21.934892 kernel: Trampoline variant of Tasks RCU enabled.
Jul 14 21:24:21.934899 kernel: Tracing variant of Tasks RCU enabled.
Jul 14 21:24:21.934907 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jul 14 21:24:21.934913 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Jul 14 21:24:21.934920 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Jul 14 21:24:21.934926 kernel: GICv3: 256 SPIs implemented
Jul 14 21:24:21.934932 kernel: GICv3: 0 Extended SPIs implemented
Jul 14 21:24:21.934939 kernel: Root IRQ handler: gic_handle_irq
Jul 14 21:24:21.934945 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Jul 14 21:24:21.934952 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Jul 14 21:24:21.934958 kernel: ITS [mem 0x08080000-0x0809ffff]
Jul 14 21:24:21.934965 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400c0000 (indirect, esz 8, psz 64K, shr 1)
Jul 14 21:24:21.934972 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400d0000 (flat, esz 8, psz 64K, shr 1)
Jul 14 21:24:21.934979 kernel: GICv3: using LPI property table @0x00000000400f0000
Jul 14 21:24:21.934986 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040100000
Jul 14 21:24:21.934993 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jul 14 21:24:21.934999 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 14 21:24:21.935006 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Jul 14 21:24:21.935012 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Jul 14 21:24:21.935019 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Jul 14 21:24:21.935026 kernel: arm-pv: using stolen time PV
Jul 14 21:24:21.935032 kernel: Console: colour dummy device 80x25
Jul 14 21:24:21.935039 kernel: ACPI: Core revision 20230628
Jul 14 21:24:21.935046 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Jul 14 21:24:21.935054 kernel: pid_max: default: 32768 minimum: 301
Jul 14 21:24:21.935060 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jul 14 21:24:21.935067 kernel: landlock: Up and running.
Jul 14 21:24:21.935074 kernel: SELinux: Initializing.
Jul 14 21:24:21.935080 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jul 14 21:24:21.935087 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jul 14 21:24:21.935103 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jul 14 21:24:21.935111 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jul 14 21:24:21.935117 kernel: rcu: Hierarchical SRCU implementation.
Jul 14 21:24:21.935126 kernel: rcu: Max phase no-delay instances is 400.
Jul 14 21:24:21.935132 kernel: Platform MSI: ITS@0x8080000 domain created
Jul 14 21:24:21.935139 kernel: PCI/MSI: ITS@0x8080000 domain created
Jul 14 21:24:21.935146 kernel: Remapping and enabling EFI services.
Jul 14 21:24:21.935152 kernel: smp: Bringing up secondary CPUs ...
Jul 14 21:24:21.935159 kernel: Detected PIPT I-cache on CPU1
Jul 14 21:24:21.935166 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Jul 14 21:24:21.935172 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040110000
Jul 14 21:24:21.935179 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 14 21:24:21.935187 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Jul 14 21:24:21.935194 kernel: Detected PIPT I-cache on CPU2
Jul 14 21:24:21.935205 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
Jul 14 21:24:21.935214 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040120000
Jul 14 21:24:21.935221 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 14 21:24:21.935233 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
Jul 14 21:24:21.935241 kernel: Detected PIPT I-cache on CPU3
Jul 14 21:24:21.935247 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
Jul 14 21:24:21.935255 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040130000
Jul 14 21:24:21.935263 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 14 21:24:21.935270 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
Jul 14 21:24:21.935277 kernel: smp: Brought up 1 node, 4 CPUs
Jul 14 21:24:21.935284 kernel: SMP: Total of 4 processors activated.
Jul 14 21:24:21.935291 kernel: CPU features: detected: 32-bit EL0 Support
Jul 14 21:24:21.935298 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Jul 14 21:24:21.935305 kernel: CPU features: detected: Common not Private translations
Jul 14 21:24:21.935312 kernel: CPU features: detected: CRC32 instructions
Jul 14 21:24:21.935321 kernel: CPU features: detected: Enhanced Virtualization Traps
Jul 14 21:24:21.935328 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Jul 14 21:24:21.935335 kernel: CPU features: detected: LSE atomic instructions
Jul 14 21:24:21.935342 kernel: CPU features: detected: Privileged Access Never
Jul 14 21:24:21.935349 kernel: CPU features: detected: RAS Extension Support
Jul 14 21:24:21.935356 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Jul 14 21:24:21.935363 kernel: CPU: All CPU(s) started at EL1
Jul 14 21:24:21.935370 kernel: alternatives: applying system-wide alternatives
Jul 14 21:24:21.935377 kernel: devtmpfs: initialized
Jul 14 21:24:21.935384 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jul 14 21:24:21.935393 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Jul 14 21:24:21.935400 kernel: pinctrl core: initialized pinctrl subsystem
Jul 14 21:24:21.935406 kernel: SMBIOS 3.0.0 present.
Jul 14 21:24:21.935414 kernel: DMI: QEMU KVM Virtual Machine, BIOS unknown 02/02/2022
Jul 14 21:24:21.935421 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jul 14 21:24:21.935428 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Jul 14 21:24:21.935435 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Jul 14 21:24:21.935442 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Jul 14 21:24:21.935450 kernel: audit: initializing netlink subsys (disabled)
Jul 14 21:24:21.935458 kernel: audit: type=2000 audit(0.018:1): state=initialized audit_enabled=0 res=1
Jul 14 21:24:21.935465 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jul 14 21:24:21.935472 kernel: cpuidle: using governor menu
Jul 14 21:24:21.935479 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Jul 14 21:24:21.935485 kernel: ASID allocator initialised with 32768 entries
Jul 14 21:24:21.935493 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jul 14 21:24:21.935500 kernel: Serial: AMBA PL011 UART driver
Jul 14 21:24:21.935507 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Jul 14 21:24:21.935516 kernel: Modules: 0 pages in range for non-PLT usage
Jul 14 21:24:21.935523 kernel: Modules: 509264 pages in range for PLT usage
Jul 14 21:24:21.935530 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jul 14 21:24:21.935537 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Jul 14 21:24:21.935544 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Jul 14 21:24:21.935551 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Jul 14 21:24:21.935558 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jul 14 21:24:21.935565 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Jul 14 21:24:21.935572 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Jul 14 21:24:21.935580 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Jul 14 21:24:21.935587 kernel: ACPI: Added _OSI(Module Device)
Jul 14 21:24:21.935594 kernel: ACPI: Added _OSI(Processor Device)
Jul 14 21:24:21.935601 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jul 14 21:24:21.935608 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jul 14 21:24:21.935615 kernel: ACPI: Interpreter enabled
Jul 14 21:24:21.935622 kernel: ACPI: Using GIC for interrupt routing
Jul 14 21:24:21.935629 kernel: ACPI: MCFG table detected, 1 entries
Jul 14 21:24:21.935636 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Jul 14 21:24:21.935643 kernel: printk: console [ttyAMA0] enabled
Jul 14 21:24:21.935652 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jul 14 21:24:21.935798 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jul 14 21:24:21.935873 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Jul 14 21:24:21.935938 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Jul 14 21:24:21.936004 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Jul 14 21:24:21.936068 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Jul 14 21:24:21.936078 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Jul 14 21:24:21.936088 kernel: PCI host bridge to bus 0000:00
Jul 14 21:24:21.936192 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Jul 14 21:24:21.936254 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Jul 14 21:24:21.936329 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Jul 14 21:24:21.936385 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jul 14 21:24:21.936464 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000
Jul 14 21:24:21.936541 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00
Jul 14 21:24:21.936607 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f]
Jul 14 21:24:21.936671 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff]
Jul 14 21:24:21.936735 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref]
Jul 14 21:24:21.936810 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref]
Jul 14 21:24:21.936877 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff]
Jul 14 21:24:21.936941 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f]
Jul 14 21:24:21.937008 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Jul 14 21:24:21.937064 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Jul 14 21:24:21.937137 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Jul 14 21:24:21.937147 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Jul 14 21:24:21.937154 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Jul 14 21:24:21.937162 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Jul 14 21:24:21.937169 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Jul 14 21:24:21.937176 kernel: iommu: Default domain type: Translated
Jul 14 21:24:21.937185 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Jul 14 21:24:21.937192 kernel: efivars: Registered efivars operations
Jul 14 21:24:21.937199 kernel: vgaarb: loaded
Jul 14 21:24:21.937206 kernel: clocksource: Switched to clocksource arch_sys_counter
Jul 14 21:24:21.937213 kernel: VFS: Disk quotas dquot_6.6.0
Jul 14 21:24:21.937220 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jul 14 21:24:21.937227 kernel: pnp: PnP ACPI init
Jul 14 21:24:21.937302 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Jul 14 21:24:21.937314 kernel: pnp: PnP ACPI: found 1 devices
Jul 14 21:24:21.937321 kernel: NET: Registered PF_INET protocol family
Jul 14 21:24:21.937329 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jul 14 21:24:21.937336 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jul 14 21:24:21.937343 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jul 14 21:24:21.937350 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jul 14 21:24:21.937357 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jul 14 21:24:21.937364 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jul 14 21:24:21.937372 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jul 14 21:24:21.937380 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jul 14 21:24:21.937387 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jul 14 21:24:21.937394 kernel: PCI: CLS 0 bytes, default 64
Jul 14 21:24:21.937401 kernel: kvm [1]: HYP mode not available
Jul 14 21:24:21.937408 kernel: Initialise system trusted keyrings
Jul 14 21:24:21.937415 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jul 14 21:24:21.937422 kernel: Key type asymmetric registered
Jul 14 21:24:21.937429 kernel: Asymmetric key parser 'x509' registered
Jul 14 21:24:21.937436 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Jul 14 21:24:21.937445 kernel: io scheduler mq-deadline registered
Jul 14 21:24:21.937452 kernel: io scheduler kyber registered
Jul 14 21:24:21.937459 kernel: io scheduler bfq registered
Jul 14 21:24:21.937466 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Jul 14 21:24:21.937473 kernel: ACPI: button: Power Button [PWRB]
Jul 14 21:24:21.937480 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Jul 14 21:24:21.937561 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
Jul 14 21:24:21.937570 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jul 14 21:24:21.937578 kernel: thunder_xcv, ver 1.0
Jul 14 21:24:21.937585 kernel: thunder_bgx, ver 1.0
Jul 14 21:24:21.937594 kernel: nicpf, ver 1.0
Jul 14 21:24:21.937601 kernel: nicvf, ver 1.0
Jul 14 21:24:21.937683 kernel: rtc-efi rtc-efi.0: registered as rtc0
Jul 14 21:24:21.937747 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-07-14T21:24:21 UTC (1752528261)
Jul 14 21:24:21.937757 kernel: hid: raw HID events driver (C) Jiri Kosina
Jul 14 21:24:21.937772 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available
Jul 14 21:24:21.937780 kernel: watchdog: Delayed init of the lockup detector failed: -19
Jul 14 21:24:21.937789 kernel: watchdog: Hard watchdog permanently disabled
Jul 14 21:24:21.937796 kernel: NET: Registered PF_INET6 protocol family
Jul 14 21:24:21.937804 kernel: Segment Routing with IPv6
Jul 14 21:24:21.937810 kernel: In-situ OAM (IOAM) with IPv6
Jul 14 21:24:21.937817 kernel: NET: Registered PF_PACKET protocol family
Jul 14 21:24:21.937825 kernel: Key type dns_resolver registered
Jul 14 21:24:21.937831 kernel: registered taskstats version 1
Jul 14 21:24:21.937838 kernel: Loading compiled-in X.509 certificates
Jul 14 21:24:21.937846 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.97-flatcar: 182b97dc7c851019e3a7f255ecd17f2c51e36a1f'
Jul 14 21:24:21.937853 kernel: Key type .fscrypt registered
Jul 14 21:24:21.937861 kernel: Key type fscrypt-provisioning registered
Jul 14 21:24:21.937868 kernel: ima: No TPM chip found, activating TPM-bypass!
Jul 14 21:24:21.937875 kernel: ima: Allocated hash algorithm: sha1
Jul 14 21:24:21.937882 kernel: ima: No architecture policies found
Jul 14 21:24:21.937889 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Jul 14 21:24:21.937896 kernel: clk: Disabling unused clocks
Jul 14 21:24:21.937904 kernel: Freeing unused kernel memory: 38336K
Jul 14 21:24:21.937911 kernel: Run /init as init process
Jul 14 21:24:21.937919 kernel: with arguments:
Jul 14 21:24:21.937927 kernel: /init
Jul 14 21:24:21.937934 kernel: with environment:
Jul 14 21:24:21.937941 kernel: HOME=/
Jul 14 21:24:21.937948 kernel: TERM=linux
Jul 14 21:24:21.937955 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jul 14 21:24:21.937963 systemd[1]: Successfully made /usr/ read-only.
Jul 14 21:24:21.937973 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Jul 14 21:24:21.937982 systemd[1]: Detected virtualization kvm.
Jul 14 21:24:21.937989 systemd[1]: Detected architecture arm64.
Jul 14 21:24:21.937997 systemd[1]: Running in initrd.
Jul 14 21:24:21.938004 systemd[1]: No hostname configured, using default hostname.
Jul 14 21:24:21.938012 systemd[1]: Hostname set to .
Jul 14 21:24:21.938020 systemd[1]: Initializing machine ID from VM UUID.
Jul 14 21:24:21.938027 systemd[1]: Queued start job for default target initrd.target.
Jul 14 21:24:21.938035 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 14 21:24:21.938044 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 14 21:24:21.938053 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jul 14 21:24:21.938060 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jul 14 21:24:21.938068 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jul 14 21:24:21.938077 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jul 14 21:24:21.938085 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jul 14 21:24:21.938107 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jul 14 21:24:21.938118 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 14 21:24:21.938125 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jul 14 21:24:21.938133 systemd[1]: Reached target paths.target - Path Units.
Jul 14 21:24:21.938140 systemd[1]: Reached target slices.target - Slice Units.
Jul 14 21:24:21.938148 systemd[1]: Reached target swap.target - Swaps.
Jul 14 21:24:21.938155 systemd[1]: Reached target timers.target - Timer Units.
Jul 14 21:24:21.938163 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jul 14 21:24:21.938170 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jul 14 21:24:21.938178 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jul 14 21:24:21.938187 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Jul 14 21:24:21.938195 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jul 14 21:24:21.938203 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jul 14 21:24:21.938211 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 14 21:24:21.938218 systemd[1]: Reached target sockets.target - Socket Units.
Jul 14 21:24:21.938226 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jul 14 21:24:21.938234 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jul 14 21:24:21.938241 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jul 14 21:24:21.938250 systemd[1]: Starting systemd-fsck-usr.service...
Jul 14 21:24:21.938258 systemd[1]: Starting systemd-journald.service - Journal Service...
Jul 14 21:24:21.938266 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jul 14 21:24:21.938273 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 14 21:24:21.938282 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jul 14 21:24:21.938289 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 14 21:24:21.938299 systemd[1]: Finished systemd-fsck-usr.service.
Jul 14 21:24:21.938326 systemd-journald[238]: Collecting audit messages is disabled.
Jul 14 21:24:21.938345 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jul 14 21:24:21.938354 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jul 14 21:24:21.938361 kernel: Bridge firewalling registered
Jul 14 21:24:21.938369 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 14 21:24:21.938377 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jul 14 21:24:21.938385 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jul 14 21:24:21.938392 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jul 14 21:24:21.938401 systemd-journald[238]: Journal started
Jul 14 21:24:21.938421 systemd-journald[238]: Runtime Journal (/run/log/journal/1e778c19452a466391ff140f8267561b) is 5.9M, max 47.3M, 41.4M free.
Jul 14 21:24:21.918527 systemd-modules-load[239]: Inserted module 'overlay'
Jul 14 21:24:21.940279 systemd[1]: Started systemd-journald.service - Journal Service.
Jul 14 21:24:21.931481 systemd-modules-load[239]: Inserted module 'br_netfilter'
Jul 14 21:24:21.941198 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jul 14 21:24:21.943912 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jul 14 21:24:21.946081 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jul 14 21:24:21.949580 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jul 14 21:24:21.953121 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 14 21:24:21.959886 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jul 14 21:24:21.961947 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 14 21:24:21.974263 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jul 14 21:24:21.976250 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jul 14 21:24:21.984953 dracut-cmdline[278]: dracut-dracut-053
Jul 14 21:24:21.987476 dracut-cmdline[278]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=910c1eabca8d0b0719454fc348b97a88b5106b4a5abdaa492c9bb12d343d8a85
Jul 14 21:24:22.009256 systemd-resolved[280]: Positive Trust Anchors:
Jul 14 21:24:22.009271 systemd-resolved[280]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jul 14 21:24:22.009302 systemd-resolved[280]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jul 14 21:24:22.013872 systemd-resolved[280]: Defaulting to hostname 'linux'.
Jul 14 21:24:22.014810 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jul 14 21:24:22.016605 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jul 14 21:24:22.058124 kernel: SCSI subsystem initialized
Jul 14 21:24:22.063113 kernel: Loading iSCSI transport class v2.0-870.
Jul 14 21:24:22.070116 kernel: iscsi: registered transport (tcp)
Jul 14 21:24:22.083135 kernel: iscsi: registered transport (qla4xxx)
Jul 14 21:24:22.083179 kernel: QLogic iSCSI HBA Driver
Jul 14 21:24:22.123711 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jul 14 21:24:22.132257 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jul 14 21:24:22.150365 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jul 14 21:24:22.150416 kernel: device-mapper: uevent: version 1.0.3
Jul 14 21:24:22.152114 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Jul 14 21:24:22.196117 kernel: raid6: neonx8 gen() 15687 MB/s
Jul 14 21:24:22.213107 kernel: raid6: neonx4 gen() 15744 MB/s
Jul 14 21:24:22.230107 kernel: raid6: neonx2 gen() 13190 MB/s
Jul 14 21:24:22.247108 kernel: raid6: neonx1 gen() 10469 MB/s
Jul 14 21:24:22.264104 kernel: raid6: int64x8 gen() 6779 MB/s
Jul 14 21:24:22.281110 kernel: raid6: int64x4 gen() 7309 MB/s
Jul 14 21:24:22.298111 kernel: raid6: int64x2 gen() 6074 MB/s
Jul 14 21:24:22.315119 kernel: raid6: int64x1 gen() 5025 MB/s
Jul 14 21:24:22.315148 kernel: raid6: using algorithm neonx4 gen() 15744 MB/s
Jul 14 21:24:22.332123 kernel: raid6: .... xor() 12314 MB/s, rmw enabled
Jul 14 21:24:22.332147 kernel: raid6: using neon recovery algorithm
Jul 14 21:24:22.337111 kernel: xor: measuring software checksum speed
Jul 14 21:24:22.337127 kernel: 8regs : 21664 MB/sec
Jul 14 21:24:22.337136 kernel: 32regs : 20143 MB/sec
Jul 14 21:24:22.338388 kernel: arm64_neon : 28003 MB/sec
Jul 14 21:24:22.338412 kernel: xor: using function: arm64_neon (28003 MB/sec)
Jul 14 21:24:22.389122 kernel: Btrfs loaded, zoned=no, fsverity=no
Jul 14 21:24:22.399861 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jul 14 21:24:22.411245 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 14 21:24:22.424888 systemd-udevd[464]: Using default interface naming scheme 'v255'.
Jul 14 21:24:22.428749 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 14 21:24:22.435240 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jul 14 21:24:22.445961 dracut-pre-trigger[470]: rd.md=0: removing MD RAID activation
Jul 14 21:24:22.471038 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jul 14 21:24:22.483237 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jul 14 21:24:22.523547 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 14 21:24:22.531296 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jul 14 21:24:22.541561 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jul 14 21:24:22.542818 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jul 14 21:24:22.544681 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 14 21:24:22.547355 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jul 14 21:24:22.557248 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jul 14 21:24:22.566405 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jul 14 21:24:22.579124 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues
Jul 14 21:24:22.579299 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Jul 14 21:24:22.583332 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jul 14 21:24:22.583383 kernel: GPT:9289727 != 19775487
Jul 14 21:24:22.583394 kernel: GPT:Alternate GPT header not at the end of the disk.
Jul 14 21:24:22.583404 kernel: GPT:9289727 != 19775487
Jul 14 21:24:22.584120 kernel: GPT: Use GNU Parted to correct GPT errors.
Jul 14 21:24:22.585209 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jul 14 21:24:22.586071 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jul 14 21:24:22.586201 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 14 21:24:22.588754 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jul 14 21:24:22.589613 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jul 14 21:24:22.589817 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jul 14 21:24:22.592727 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jul 14 21:24:22.601788 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 14 21:24:22.606126 kernel: BTRFS: device fsid 0e96f54b-331a-4033-a6e7-997513e11389 devid 1 transid 37 /dev/vda3 scanned by (udev-worker) (516)
Jul 14 21:24:22.608118 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (514)
Jul 14 21:24:22.613681 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 14 21:24:22.621580 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Jul 14 21:24:22.629257 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Jul 14 21:24:22.644114 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Jul 14 21:24:22.645202 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Jul 14 21:24:22.653217 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jul 14 21:24:22.666277 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jul 14 21:24:22.668246 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jul 14 21:24:22.672826 disk-uuid[552]: Primary Header is updated.
Jul 14 21:24:22.672826 disk-uuid[552]: Secondary Entries is updated.
Jul 14 21:24:22.672826 disk-uuid[552]: Secondary Header is updated.
Jul 14 21:24:22.680122 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jul 14 21:24:22.687795 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 14 21:24:23.689126 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jul 14 21:24:23.689984 disk-uuid[553]: The operation has completed successfully.
Jul 14 21:24:23.718773 systemd[1]: disk-uuid.service: Deactivated successfully.
Jul 14 21:24:23.718868 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jul 14 21:24:23.750329 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jul 14 21:24:23.754137 sh[573]: Success
Jul 14 21:24:23.769115 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Jul 14 21:24:23.798335 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jul 14 21:24:23.808432 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jul 14 21:24:23.810794 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jul 14 21:24:23.822873 kernel: BTRFS info (device dm-0): first mount of filesystem 0e96f54b-331a-4033-a6e7-997513e11389
Jul 14 21:24:23.822908 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Jul 14 21:24:23.822918 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Jul 14 21:24:23.822935 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jul 14 21:24:23.823433 kernel: BTRFS info (device dm-0): using free space tree
Jul 14 21:24:23.827409 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jul 14 21:24:23.828715 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jul 14 21:24:23.835246 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jul 14 21:24:23.836778 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jul 14 21:24:23.851308 kernel: BTRFS info (device vda6): first mount of filesystem 3c01b6ed-570d-4a64-bd1f-bb21004eb8d9
Jul 14 21:24:23.851348 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Jul 14 21:24:23.851359 kernel: BTRFS info (device vda6): using free space tree
Jul 14 21:24:23.854118 kernel: BTRFS info (device vda6): auto enabling async discard
Jul 14 21:24:23.857148 kernel: BTRFS info (device vda6): last unmount of filesystem 3c01b6ed-570d-4a64-bd1f-bb21004eb8d9
Jul 14 21:24:23.860657 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jul 14 21:24:23.869331 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jul 14 21:24:23.932394 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jul 14 21:24:23.948339 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jul 14 21:24:23.978207 ignition[658]: Ignition 2.20.0
Jul 14 21:24:23.978217 ignition[658]: Stage: fetch-offline
Jul 14 21:24:23.978252 ignition[658]: no configs at "/usr/lib/ignition/base.d"
Jul 14 21:24:23.979966 systemd-networkd[761]: lo: Link UP
Jul 14 21:24:23.978260 ignition[658]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 14 21:24:23.979969 systemd-networkd[761]: lo: Gained carrier
Jul 14 21:24:23.978542 ignition[658]: parsed url from cmdline: ""
Jul 14 21:24:23.981165 systemd-networkd[761]: Enumeration completed
Jul 14 21:24:23.978546 ignition[658]: no config URL provided
Jul 14 21:24:23.981693 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jul 14 21:24:23.978550 ignition[658]: reading system config file "/usr/lib/ignition/user.ign"
Jul 14 21:24:23.981801 systemd-networkd[761]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 14 21:24:23.978557 ignition[658]: no config at "/usr/lib/ignition/user.ign"
Jul 14 21:24:23.981804 systemd-networkd[761]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jul 14 21:24:23.978580 ignition[658]: op(1): [started] loading QEMU firmware config module
Jul 14 21:24:23.982647 systemd-networkd[761]: eth0: Link UP
Jul 14 21:24:23.978585 ignition[658]: op(1): executing: "modprobe" "qemu_fw_cfg"
Jul 14 21:24:23.982650 systemd-networkd[761]: eth0: Gained carrier
Jul 14 21:24:23.992170 ignition[658]: op(1): [finished] loading QEMU firmware config module
Jul 14 21:24:23.982657 systemd-networkd[761]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 14 21:24:23.983235 systemd[1]: Reached target network.target - Network.
Jul 14 21:24:24.007159 systemd-networkd[761]: eth0: DHCPv4 address 10.0.0.115/16, gateway 10.0.0.1 acquired from 10.0.0.1
Jul 14 21:24:24.038981 ignition[658]: parsing config with SHA512: 7bb8cafc65d4cfa62e67abbc4cc01507a7a590cefe376f07d81afe284ef8edb32056f25ebfe614898d304053f69d1f39e8d8f7d605dbebb7edbc78b34ae1604e
Jul 14 21:24:24.047688 unknown[658]: fetched base config from "system"
Jul 14 21:24:24.047697 unknown[658]: fetched user config from "qemu"
Jul 14 21:24:24.048319 ignition[658]: fetch-offline: fetch-offline passed
Jul 14 21:24:24.050409 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jul 14 21:24:24.048399 ignition[658]: Ignition finished successfully
Jul 14 21:24:24.051902 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Jul 14 21:24:24.061235 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jul 14 21:24:24.073940 ignition[769]: Ignition 2.20.0
Jul 14 21:24:24.073951 ignition[769]: Stage: kargs
Jul 14 21:24:24.074124 ignition[769]: no configs at "/usr/lib/ignition/base.d"
Jul 14 21:24:24.074134 ignition[769]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 14 21:24:24.074998 ignition[769]: kargs: kargs passed
Jul 14 21:24:24.075038 ignition[769]: Ignition finished successfully
Jul 14 21:24:24.077089 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jul 14 21:24:24.091261 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jul 14 21:24:24.101139 ignition[777]: Ignition 2.20.0
Jul 14 21:24:24.101150 ignition[777]: Stage: disks
Jul 14 21:24:24.101311 ignition[777]: no configs at "/usr/lib/ignition/base.d"
Jul 14 21:24:24.101320 ignition[777]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 14 21:24:24.103499 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jul 14 21:24:24.102242 ignition[777]: disks: disks passed
Jul 14 21:24:24.105229 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jul 14 21:24:24.102288 ignition[777]: Ignition finished successfully
Jul 14 21:24:24.106747 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jul 14 21:24:24.108441 systemd[1]: Reached target local-fs.target - Local File Systems.
Jul 14 21:24:24.109793 systemd[1]: Reached target sysinit.target - System Initialization.
Jul 14 21:24:24.111430 systemd[1]: Reached target basic.target - Basic System.
Jul 14 21:24:24.127234 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jul 14 21:24:24.138780 systemd-fsck[788]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Jul 14 21:24:24.142827 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jul 14 21:24:24.152239 systemd[1]: Mounting sysroot.mount - /sysroot...
Jul 14 21:24:24.194116 kernel: EXT4-fs (vda9): mounted filesystem 1f284919-a74a-44e6-9216-e7c52f513833 r/w with ordered data mode. Quota mode: none.
Jul 14 21:24:24.194834 systemd[1]: Mounted sysroot.mount - /sysroot.
Jul 14 21:24:24.196109 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jul 14 21:24:24.208165 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jul 14 21:24:24.209771 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jul 14 21:24:24.211228 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Jul 14 21:24:24.211269 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jul 14 21:24:24.216510 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (796)
Jul 14 21:24:24.211291 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jul 14 21:24:24.219397 kernel: BTRFS info (device vda6): first mount of filesystem 3c01b6ed-570d-4a64-bd1f-bb21004eb8d9
Jul 14 21:24:24.219415 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Jul 14 21:24:24.219424 kernel: BTRFS info (device vda6): using free space tree
Jul 14 21:24:24.217480 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jul 14 21:24:24.221281 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jul 14 21:24:24.225131 kernel: BTRFS info (device vda6): auto enabling async discard
Jul 14 21:24:24.225674 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jul 14 21:24:24.265079 initrd-setup-root[820]: cut: /sysroot/etc/passwd: No such file or directory
Jul 14 21:24:24.269038 initrd-setup-root[827]: cut: /sysroot/etc/group: No such file or directory
Jul 14 21:24:24.272822 initrd-setup-root[834]: cut: /sysroot/etc/shadow: No such file or directory
Jul 14 21:24:24.276725 initrd-setup-root[841]: cut: /sysroot/etc/gshadow: No such file or directory
Jul 14 21:24:24.343447 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jul 14 21:24:24.352222 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jul 14 21:24:24.354555 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jul 14 21:24:24.359118 kernel: BTRFS info (device vda6): last unmount of filesystem 3c01b6ed-570d-4a64-bd1f-bb21004eb8d9
Jul 14 21:24:24.375250 ignition[909]: INFO : Ignition 2.20.0
Jul 14 21:24:24.377175 ignition[909]: INFO : Stage: mount
Jul 14 21:24:24.377175 ignition[909]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 14 21:24:24.377175 ignition[909]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 14 21:24:24.377175 ignition[909]: INFO : mount: mount passed
Jul 14 21:24:24.376136 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jul 14 21:24:24.381262 ignition[909]: INFO : Ignition finished successfully
Jul 14 21:24:24.378530 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jul 14 21:24:24.391216 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jul 14 21:24:24.952390 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jul 14 21:24:24.961291 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jul 14 21:24:24.967265 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (923)
Jul 14 21:24:24.967291 kernel: BTRFS info (device vda6): first mount of filesystem 3c01b6ed-570d-4a64-bd1f-bb21004eb8d9
Jul 14 21:24:24.967302 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Jul 14 21:24:24.968504 kernel: BTRFS info (device vda6): using free space tree
Jul 14 21:24:24.971122 kernel: BTRFS info (device vda6): auto enabling async discard
Jul 14 21:24:24.971501 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jul 14 21:24:24.986611 ignition[940]: INFO : Ignition 2.20.0
Jul 14 21:24:24.986611 ignition[940]: INFO : Stage: files
Jul 14 21:24:24.987904 ignition[940]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 14 21:24:24.987904 ignition[940]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 14 21:24:24.987904 ignition[940]: DEBUG : files: compiled without relabeling support, skipping
Jul 14 21:24:24.990719 ignition[940]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jul 14 21:24:24.990719 ignition[940]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jul 14 21:24:24.993104 ignition[940]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jul 14 21:24:24.993104 ignition[940]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jul 14 21:24:24.993104 ignition[940]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jul 14 21:24:24.992634 unknown[940]: wrote ssh authorized keys file for user: core
Jul 14 21:24:24.996913 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Jul 14 21:24:24.996913 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1
Jul 14 21:24:25.134444 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Jul 14 21:24:25.415235 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Jul 14 21:24:25.415235 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Jul 14 21:24:25.418049 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1
Jul 14 21:24:25.817996 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Jul 14 21:24:25.911268 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Jul 14 21:24:25.912787 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Jul 14 21:24:25.912787 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Jul 14 21:24:25.912787 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Jul 14 21:24:25.912787 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jul 14 21:24:25.912787 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jul 14 21:24:25.912787 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jul 14 21:24:25.912787 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jul 14 21:24:25.912787 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jul 14 21:24:25.912787 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jul 14 21:24:25.912787 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jul 14 21:24:25.912787 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw"
Jul 14 21:24:25.912787 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw"
Jul 14 21:24:25.912787 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw"
Jul 14 21:24:25.912787 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-arm64.raw: attempt #1
Jul 14 21:24:25.962276 systemd-networkd[761]: eth0: Gained IPv6LL
Jul 14 21:24:26.270236 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Jul 14 21:24:26.663442 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw"
Jul 14 21:24:26.663442 ignition[940]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Jul 14 21:24:26.666211 ignition[940]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jul 14 21:24:26.666211 ignition[940]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jul 14 21:24:26.666211 ignition[940]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Jul 14 21:24:26.666211 ignition[940]: INFO : files: op(e): [started] processing unit "coreos-metadata.service"
Jul 14 21:24:26.666211 ignition[940]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Jul 14 21:24:26.666211 ignition[940]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Jul 14 21:24:26.666211 ignition[940]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service"
Jul 14 21:24:26.666211 ignition[940]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service"
Jul 14 21:24:26.679797 ignition[940]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service"
Jul 14 21:24:26.683175 ignition[940]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Jul 14 21:24:26.684403 ignition[940]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service"
Jul 14 21:24:26.684403 ignition[940]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service"
Jul 14 21:24:26.684403 ignition[940]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service"
Jul 14 21:24:26.684403 ignition[940]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json"
Jul 14 21:24:26.684403 ignition[940]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jul 14 21:24:26.684403 ignition[940]: INFO : files: files passed
Jul 14 21:24:26.684403 ignition[940]: INFO : Ignition finished successfully
Jul 14 21:24:26.685815 systemd[1]: Finished ignition-files.service - Ignition (files).
Jul 14 21:24:26.697310 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jul 14 21:24:26.700253 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jul 14 21:24:26.702812 systemd[1]: ignition-quench.service: Deactivated successfully.
Jul 14 21:24:26.702900 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jul 14 21:24:26.706620 initrd-setup-root-after-ignition[969]: grep: /sysroot/oem/oem-release: No such file or directory
Jul 14 21:24:26.710200 initrd-setup-root-after-ignition[971]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jul 14 21:24:26.710200 initrd-setup-root-after-ignition[971]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jul 14 21:24:26.712715 initrd-setup-root-after-ignition[975]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jul 14 21:24:26.714180 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jul 14 21:24:26.715637 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jul 14 21:24:26.724263 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jul 14 21:24:26.744445 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jul 14 21:24:26.744607 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jul 14 21:24:26.747257 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jul 14 21:24:26.748044 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jul 14 21:24:26.748892 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jul 14 21:24:26.749729 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jul 14 21:24:26.768173 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jul 14 21:24:26.784276 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jul 14 21:24:26.792294 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jul 14 21:24:26.793540 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 14 21:24:26.795282 systemd[1]: Stopped target timers.target - Timer Units.
Jul 14 21:24:26.796767 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jul 14 21:24:26.796899 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jul 14 21:24:26.798991 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jul 14 21:24:26.800776 systemd[1]: Stopped target basic.target - Basic System.
Jul 14 21:24:26.802171 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jul 14 21:24:26.803673 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jul 14 21:24:26.805416 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jul 14 21:24:26.807161 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jul 14 21:24:26.808794 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jul 14 21:24:26.810458 systemd[1]: Stopped target sysinit.target - System Initialization.
Jul 14 21:24:26.812182 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jul 14 21:24:26.813780 systemd[1]: Stopped target swap.target - Swaps.
Jul 14 21:24:26.815032 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jul 14 21:24:26.815178 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jul 14 21:24:26.817087 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jul 14 21:24:26.818788 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 14 21:24:26.820420 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jul 14 21:24:26.821845 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 14 21:24:26.823970 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jul 14 21:24:26.824115 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jul 14 21:24:26.826404 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jul 14 21:24:26.826523 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jul 14 21:24:26.828356 systemd[1]: Stopped target paths.target - Path Units.
Jul 14 21:24:26.829843 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jul 14 21:24:26.829949 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 14 21:24:26.831564 systemd[1]: Stopped target slices.target - Slice Units.
Jul 14 21:24:26.833070 systemd[1]: Stopped target sockets.target - Socket Units.
Jul 14 21:24:26.834953 systemd[1]: iscsid.socket: Deactivated successfully.
Jul 14 21:24:26.835037 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jul 14 21:24:26.836342 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jul 14 21:24:26.836424 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jul 14 21:24:26.837799 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jul 14 21:24:26.837912 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jul 14 21:24:26.839422 systemd[1]: ignition-files.service: Deactivated successfully.
Jul 14 21:24:26.839523 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jul 14 21:24:26.848271 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jul 14 21:24:26.849178 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jul 14 21:24:26.849314 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 14 21:24:26.851799 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jul 14 21:24:26.853128 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jul 14 21:24:26.853253 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 14 21:24:26.854927 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jul 14 21:24:26.860258 ignition[995]: INFO : Ignition 2.20.0
Jul 14 21:24:26.860258 ignition[995]: INFO : Stage: umount
Jul 14 21:24:26.860258 ignition[995]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 14 21:24:26.860258 ignition[995]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 14 21:24:26.855029 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jul 14 21:24:26.867400 ignition[995]: INFO : umount: umount passed
Jul 14 21:24:26.867400 ignition[995]: INFO : Ignition finished successfully
Jul 14 21:24:26.861376 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jul 14 21:24:26.861476 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jul 14 21:24:26.863326 systemd[1]: ignition-mount.service: Deactivated successfully.
Jul 14 21:24:26.863402 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jul 14 21:24:26.867494 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jul 14 21:24:26.867906 systemd[1]: Stopped target network.target - Network.
Jul 14 21:24:26.869072 systemd[1]: ignition-disks.service: Deactivated successfully.
Jul 14 21:24:26.869226 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jul 14 21:24:26.871000 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jul 14 21:24:26.871049 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jul 14 21:24:26.872934 systemd[1]: ignition-setup.service: Deactivated successfully.
Jul 14 21:24:26.872976 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Jul 14 21:24:26.875025 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Jul 14 21:24:26.875069 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Jul 14 21:24:26.876937 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Jul 14 21:24:26.878642 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Jul 14 21:24:26.883776 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jul 14 21:24:26.883886 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Jul 14 21:24:26.888209 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Jul 14 21:24:26.888451 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jul 14 21:24:26.888487 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Jul 14 21:24:26.891789 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Jul 14 21:24:26.897172 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jul 14 21:24:26.897301 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Jul 14 21:24:26.901998 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Jul 14 21:24:26.902171 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jul 14 21:24:26.902199 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Jul 14 21:24:26.915221 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Jul 14 21:24:26.916134 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jul 14 21:24:26.916205 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jul 14 21:24:26.918315 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jul 14 21:24:26.918362 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jul 14 21:24:26.921380 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jul 14 21:24:26.921443 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Jul 14 21:24:26.923384 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 14 21:24:26.929085 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Jul 14 21:24:26.934937 systemd[1]: network-cleanup.service: Deactivated successfully.
Jul 14 21:24:26.935060 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Jul 14 21:24:26.939909 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jul 14 21:24:26.940904 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 14 21:24:26.942448 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jul 14 21:24:26.942532 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Jul 14 21:24:26.944724 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jul 14 21:24:26.944807 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Jul 14 21:24:26.945976 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jul 14 21:24:26.946009 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 14 21:24:26.947783 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jul 14 21:24:26.947837 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Jul 14 21:24:26.950550 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jul 14 21:24:26.950601 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Jul 14 21:24:26.953408 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jul 14 21:24:26.953459 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 14 21:24:26.956376 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jul 14 21:24:26.956429 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Jul 14 21:24:26.968263 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Jul 14 21:24:26.969295 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jul 14 21:24:26.969363 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 14 21:24:26.972562 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jul 14 21:24:26.972607 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jul 14 21:24:26.976075 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jul 14 21:24:26.976177 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Jul 14 21:24:26.978302 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Jul 14 21:24:26.980623 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Jul 14 21:24:26.989538 systemd[1]: Switching root.
Jul 14 21:24:27.027833 systemd-journald[238]: Journal stopped
Jul 14 21:24:27.748404 systemd-journald[238]: Received SIGTERM from PID 1 (systemd).
Jul 14 21:24:27.748460 kernel: SELinux: policy capability network_peer_controls=1
Jul 14 21:24:27.748471 kernel: SELinux: policy capability open_perms=1
Jul 14 21:24:27.748481 kernel: SELinux: policy capability extended_socket_class=1
Jul 14 21:24:27.748490 kernel: SELinux: policy capability always_check_network=0
Jul 14 21:24:27.748499 kernel: SELinux: policy capability cgroup_seclabel=1
Jul 14 21:24:27.748513 kernel: SELinux: policy capability nnp_nosuid_transition=1
Jul 14 21:24:27.748522 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Jul 14 21:24:27.748531 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Jul 14 21:24:27.748541 kernel: audit: type=1403 audit(1752528267.188:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Jul 14 21:24:27.748551 systemd[1]: Successfully loaded SELinux policy in 31.318ms.
Jul 14 21:24:27.748570 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 9.518ms.
Jul 14 21:24:27.748581 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Jul 14 21:24:27.748592 systemd[1]: Detected virtualization kvm.
Jul 14 21:24:27.748602 systemd[1]: Detected architecture arm64.
Jul 14 21:24:27.748614 systemd[1]: Detected first boot.
Jul 14 21:24:27.748624 systemd[1]: Initializing machine ID from VM UUID.
Jul 14 21:24:27.748634 zram_generator::config[1042]: No configuration found.
Jul 14 21:24:27.748647 kernel: NET: Registered PF_VSOCK protocol family
Jul 14 21:24:27.748657 systemd[1]: Populated /etc with preset unit settings.
Jul 14 21:24:27.748668 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Jul 14 21:24:27.748678 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Jul 14 21:24:27.748688 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Jul 14 21:24:27.748700 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Jul 14 21:24:27.748711 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Jul 14 21:24:27.748725 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Jul 14 21:24:27.748739 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Jul 14 21:24:27.748762 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Jul 14 21:24:27.748777 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Jul 14 21:24:27.748788 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Jul 14 21:24:27.748798 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Jul 14 21:24:27.748808 systemd[1]: Created slice user.slice - User and Session Slice.
Jul 14 21:24:27.748821 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 14 21:24:27.748832 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 14 21:24:27.748842 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Jul 14 21:24:27.748852 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Jul 14 21:24:27.748862 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Jul 14 21:24:27.748873 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jul 14 21:24:27.748883 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0...
Jul 14 21:24:27.748894 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 14 21:24:27.748905 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Jul 14 21:24:27.748920 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Jul 14 21:24:27.748930 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Jul 14 21:24:27.748940 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Jul 14 21:24:27.748951 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 14 21:24:27.748961 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jul 14 21:24:27.748971 systemd[1]: Reached target slices.target - Slice Units.
Jul 14 21:24:27.748981 systemd[1]: Reached target swap.target - Swaps.
Jul 14 21:24:27.748991 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Jul 14 21:24:27.749003 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Jul 14 21:24:27.749013 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Jul 14 21:24:27.749023 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jul 14 21:24:27.749034 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jul 14 21:24:27.749044 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 14 21:24:27.749054 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Jul 14 21:24:27.749064 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Jul 14 21:24:27.749074 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Jul 14 21:24:27.749084 systemd[1]: Mounting media.mount - External Media Directory...
Jul 14 21:24:27.749103 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Jul 14 21:24:27.749115 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Jul 14 21:24:27.749127 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Jul 14 21:24:27.749137 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jul 14 21:24:27.749148 systemd[1]: Reached target machines.target - Containers.
Jul 14 21:24:27.749158 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Jul 14 21:24:27.749169 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 14 21:24:27.749179 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jul 14 21:24:27.749191 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Jul 14 21:24:27.749203 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jul 14 21:24:27.749219 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jul 14 21:24:27.749230 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jul 14 21:24:27.749240 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Jul 14 21:24:27.749252 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jul 14 21:24:27.749263 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Jul 14 21:24:27.749273 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Jul 14 21:24:27.749285 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Jul 14 21:24:27.749295 kernel: fuse: init (API version 7.39)
Jul 14 21:24:27.749305 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Jul 14 21:24:27.749315 systemd[1]: Stopped systemd-fsck-usr.service.
Jul 14 21:24:27.749325 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jul 14 21:24:27.749335 kernel: loop: module loaded
Jul 14 21:24:27.749345 systemd[1]: Starting systemd-journald.service - Journal Service...
Jul 14 21:24:27.749356 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jul 14 21:24:27.749367 kernel: ACPI: bus type drm_connector registered
Jul 14 21:24:27.749378 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jul 14 21:24:27.749388 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Jul 14 21:24:27.749398 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Jul 14 21:24:27.749409 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jul 14 21:24:27.749419 systemd[1]: verity-setup.service: Deactivated successfully.
Jul 14 21:24:27.749430 systemd[1]: Stopped verity-setup.service.
Jul 14 21:24:27.749440 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Jul 14 21:24:27.749471 systemd-journald[1110]: Collecting audit messages is disabled.
Jul 14 21:24:27.749493 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Jul 14 21:24:27.749503 systemd[1]: Mounted media.mount - External Media Directory.
Jul 14 21:24:27.749514 systemd-journald[1110]: Journal started
Jul 14 21:24:27.749537 systemd-journald[1110]: Runtime Journal (/run/log/journal/1e778c19452a466391ff140f8267561b) is 5.9M, max 47.3M, 41.4M free.
Jul 14 21:24:27.578806 systemd[1]: Queued start job for default target multi-user.target.
Jul 14 21:24:27.588891 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Jul 14 21:24:27.589278 systemd[1]: systemd-journald.service: Deactivated successfully.
Jul 14 21:24:27.752544 systemd[1]: Started systemd-journald.service - Journal Service.
Jul 14 21:24:27.753172 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Jul 14 21:24:27.754358 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Jul 14 21:24:27.755557 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Jul 14 21:24:27.756766 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Jul 14 21:24:27.758205 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 14 21:24:27.759601 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jul 14 21:24:27.759769 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Jul 14 21:24:27.761204 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 14 21:24:27.761363 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jul 14 21:24:27.762660 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jul 14 21:24:27.762838 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jul 14 21:24:27.764312 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 14 21:24:27.764464 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jul 14 21:24:27.765859 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jul 14 21:24:27.766022 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Jul 14 21:24:27.767466 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 14 21:24:27.767626 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jul 14 21:24:27.768949 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jul 14 21:24:27.770344 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jul 14 21:24:27.771952 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Jul 14 21:24:27.773433 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Jul 14 21:24:27.785415 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jul 14 21:24:27.794203 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Jul 14 21:24:27.796195 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Jul 14 21:24:27.797293 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Jul 14 21:24:27.797320 systemd[1]: Reached target local-fs.target - Local File Systems.
Jul 14 21:24:27.799112 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Jul 14 21:24:27.801161 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Jul 14 21:24:27.803122 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Jul 14 21:24:27.804180 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 14 21:24:27.805460 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Jul 14 21:24:27.807420 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Jul 14 21:24:27.808594 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jul 14 21:24:27.810255 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Jul 14 21:24:27.811309 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jul 14 21:24:27.814278 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jul 14 21:24:27.815447 systemd-journald[1110]: Time spent on flushing to /var/log/journal/1e778c19452a466391ff140f8267561b is 17.757ms for 867 entries.
Jul 14 21:24:27.815447 systemd-journald[1110]: System Journal (/var/log/journal/1e778c19452a466391ff140f8267561b) is 8M, max 195.6M, 187.6M free.
Jul 14 21:24:27.841187 systemd-journald[1110]: Received client request to flush runtime journal.
Jul 14 21:24:27.816396 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Jul 14 21:24:27.819405 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Jul 14 21:24:27.823602 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 14 21:24:27.831463 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Jul 14 21:24:27.832718 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Jul 14 21:24:27.842119 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Jul 14 21:24:27.844805 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Jul 14 21:24:27.846473 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Jul 14 21:24:27.854685 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jul 14 21:24:27.855239 kernel: loop0: detected capacity change from 0 to 203944
Jul 14 21:24:27.857599 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Jul 14 21:24:27.866309 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Jul 14 21:24:27.870228 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Jul 14 21:24:27.872134 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Jul 14 21:24:27.873116 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Jul 14 21:24:27.878332 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jul 14 21:24:27.883883 udevadm[1175]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Jul 14 21:24:27.888438 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Jul 14 21:24:27.897196 systemd-tmpfiles[1177]: ACLs are not supported, ignoring.
Jul 14 21:24:27.897210 systemd-tmpfiles[1177]: ACLs are not supported, ignoring.
Jul 14 21:24:27.901726 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 14 21:24:27.905120 kernel: loop1: detected capacity change from 0 to 123192
Jul 14 21:24:27.947137 kernel: loop2: detected capacity change from 0 to 113512
Jul 14 21:24:27.995155 kernel: loop3: detected capacity change from 0 to 203944
Jul 14 21:24:28.001145 kernel: loop4: detected capacity change from 0 to 123192
Jul 14 21:24:28.006209 kernel: loop5: detected capacity change from 0 to 113512
Jul 14 21:24:28.011408 (sd-merge)[1184]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Jul 14 21:24:28.011806 (sd-merge)[1184]: Merged extensions into '/usr'.
Jul 14 21:24:28.014834 systemd[1]: Reload requested from client PID 1159 ('systemd-sysext') (unit systemd-sysext.service)...
Jul 14 21:24:28.014847 systemd[1]: Reloading...
Jul 14 21:24:28.062121 zram_generator::config[1208]: No configuration found.
Jul 14 21:24:28.094816 ldconfig[1154]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Jul 14 21:24:28.163998 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 14 21:24:28.213323 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Jul 14 21:24:28.213757 systemd[1]: Reloading finished in 198 ms.
Jul 14 21:24:28.231935 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Jul 14 21:24:28.233407 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Jul 14 21:24:28.248377 systemd[1]: Starting ensure-sysext.service...
Jul 14 21:24:28.250246 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jul 14 21:24:28.259352 systemd[1]: Reload requested from client PID 1246 ('systemctl') (unit ensure-sysext.service)...
Jul 14 21:24:28.259370 systemd[1]: Reloading...
Jul 14 21:24:28.270765 systemd-tmpfiles[1247]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Jul 14 21:24:28.270975 systemd-tmpfiles[1247]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Jul 14 21:24:28.271612 systemd-tmpfiles[1247]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Jul 14 21:24:28.271831 systemd-tmpfiles[1247]: ACLs are not supported, ignoring.
Jul 14 21:24:28.271885 systemd-tmpfiles[1247]: ACLs are not supported, ignoring.
Jul 14 21:24:28.274336 systemd-tmpfiles[1247]: Detected autofs mount point /boot during canonicalization of boot.
Jul 14 21:24:28.274351 systemd-tmpfiles[1247]: Skipping /boot
Jul 14 21:24:28.282948 systemd-tmpfiles[1247]: Detected autofs mount point /boot during canonicalization of boot.
Jul 14 21:24:28.282957 systemd-tmpfiles[1247]: Skipping /boot
Jul 14 21:24:28.308200 zram_generator::config[1276]: No configuration found.
Jul 14 21:24:28.395041 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 14 21:24:28.444971 systemd[1]: Reloading finished in 185 ms.
Jul 14 21:24:28.457709 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Jul 14 21:24:28.474188 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jul 14 21:24:28.484521 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Jul 14 21:24:28.486663 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Jul 14 21:24:28.487629 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 14 21:24:28.488911 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jul 14 21:24:28.494429 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jul 14 21:24:28.496584 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jul 14 21:24:28.497583 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 14 21:24:28.497723 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jul 14 21:24:28.499255 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Jul 14 21:24:28.504373 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jul 14 21:24:28.512817 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 14 21:24:28.520085 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Jul 14 21:24:28.524593 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 14 21:24:28.524805 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jul 14 21:24:28.526648 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 14 21:24:28.526857 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jul 14 21:24:28.529226 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 14 21:24:28.529442 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jul 14 21:24:28.537473 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 14 21:24:28.547507 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jul 14 21:24:28.549573 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jul 14 21:24:28.553560 systemd-udevd[1325]: Using default interface naming scheme 'v255'.
Jul 14 21:24:28.554420 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jul 14 21:24:28.555513 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 14 21:24:28.555680 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jul 14 21:24:28.558440 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Jul 14 21:24:28.566129 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Jul 14 21:24:28.567701 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Jul 14 21:24:28.569382 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 14 21:24:28.569546 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jul 14 21:24:28.571157 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 14 21:24:28.571326 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jul 14 21:24:28.572981 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 14 21:24:28.573193 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jul 14 21:24:28.581689 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Jul 14 21:24:28.587683 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 14 21:24:28.591518 augenrules[1355]: No rules
Jul 14 21:24:28.592277 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jul 14 21:24:28.594397 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jul 14 21:24:28.598599 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jul 14 21:24:28.600353 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jul 14 21:24:28.601613 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 14 21:24:28.601660 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jul 14 21:24:28.603257 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Jul 14 21:24:28.604034 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jul 14 21:24:28.606413 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 14 21:24:28.608739 systemd[1]: Finished ensure-sysext.service.
Jul 14 21:24:28.609589 systemd[1]: audit-rules.service: Deactivated successfully.
Jul 14 21:24:28.611911 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Jul 14 21:24:28.612967 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 14 21:24:28.613170 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jul 14 21:24:28.614396 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jul 14 21:24:28.614572 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jul 14 21:24:28.619273 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Jul 14 21:24:28.626868 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 14 21:24:28.627806 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jul 14 21:24:28.629009 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 14 21:24:28.629195 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jul 14 21:24:28.632390 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Jul 14 21:24:28.651205 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (1372)
Jul 14 21:24:28.651007 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jul 14 21:24:28.654294 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jul 14 21:24:28.654374 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jul 14 21:24:28.658329 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Jul 14 21:24:28.661463 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped.
Jul 14 21:24:28.712973 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jul 14 21:24:28.722276 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Jul 14 21:24:28.748570 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Jul 14 21:24:28.749695 systemd[1]: Reached target time-set.target - System Time Set.
Jul 14 21:24:28.755179 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Jul 14 21:24:28.761240 systemd-resolved[1324]: Positive Trust Anchors:
Jul 14 21:24:28.763005 systemd-resolved[1324]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jul 14 21:24:28.763040 systemd-resolved[1324]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jul 14 21:24:28.768598 systemd-networkd[1395]: lo: Link UP
Jul 14 21:24:28.768618 systemd-networkd[1395]: lo: Gained carrier
Jul 14 21:24:28.769399 systemd-resolved[1324]: Defaulting to hostname 'linux'.
Jul 14 21:24:28.769512 systemd-networkd[1395]: Enumeration completed
Jul 14 21:24:28.769621 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jul 14 21:24:28.770586 systemd-networkd[1395]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 14 21:24:28.770599 systemd-networkd[1395]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jul 14 21:24:28.774112 systemd-networkd[1395]: eth0: Link UP
Jul 14 21:24:28.774120 systemd-networkd[1395]: eth0: Gained carrier
Jul 14 21:24:28.774133 systemd-networkd[1395]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 14 21:24:28.779499 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Jul 14 21:24:28.781605 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Jul 14 21:24:28.782707 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jul 14 21:24:28.784034 systemd[1]: Reached target network.target - Network.
Jul 14 21:24:28.784828 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jul 14 21:24:28.794220 systemd-networkd[1395]: eth0: DHCPv4 address 10.0.0.115/16, gateway 10.0.0.1 acquired from 10.0.0.1
Jul 14 21:24:28.794984 systemd-timesyncd[1396]: Network configuration changed, trying to establish connection.
Jul 14 21:24:28.795590 systemd-timesyncd[1396]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Jul 14 21:24:28.795647 systemd-timesyncd[1396]: Initial clock synchronization to Mon 2025-07-14 21:24:28.617275 UTC.
Jul 14 21:24:28.798996 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
Jul 14 21:24:28.818332 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 14 21:24:28.826255 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Jul 14 21:24:28.835337 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Jul 14 21:24:28.849737 lvm[1418]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jul 14 21:24:28.855404 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 14 21:24:28.892543 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Jul 14 21:24:28.893679 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jul 14 21:24:28.894513 systemd[1]: Reached target sysinit.target - System Initialization.
Jul 14 21:24:28.895339 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Jul 14 21:24:28.896216 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Jul 14 21:24:28.897223 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Jul 14 21:24:28.898084 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Jul 14 21:24:28.898951 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Jul 14 21:24:28.899843 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Jul 14 21:24:28.899873 systemd[1]: Reached target paths.target - Path Units.
Jul 14 21:24:28.900656 systemd[1]: Reached target timers.target - Timer Units.
Jul 14 21:24:28.903151 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Jul 14 21:24:28.905261 systemd[1]: Starting docker.socket - Docker Socket for the API...
Jul 14 21:24:28.908051 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
Jul 14 21:24:28.909150 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
Jul 14 21:24:28.910023 systemd[1]: Reached target ssh-access.target - SSH Access Available.
Jul 14 21:24:28.914936 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Jul 14 21:24:28.916058 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
Jul 14 21:24:28.917929 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Jul 14 21:24:28.919261 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Jul 14 21:24:28.920088 systemd[1]: Reached target sockets.target - Socket Units.
Jul 14 21:24:28.920785 systemd[1]: Reached target basic.target - Basic System.
Jul 14 21:24:28.921471 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Jul 14 21:24:28.921498 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Jul 14 21:24:28.922356 systemd[1]: Starting containerd.service - containerd container runtime...
Jul 14 21:24:28.924043 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Jul 14 21:24:28.924731 lvm[1425]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jul 14 21:24:28.926889 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Jul 14 21:24:28.929947 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Jul 14 21:24:28.934432 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Jul 14 21:24:28.935620 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Jul 14 21:24:28.937486 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Jul 14 21:24:28.940645 jq[1428]: false
Jul 14 21:24:28.940683 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Jul 14 21:24:28.942606 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Jul 14 21:24:28.946007 systemd[1]: Starting systemd-logind.service - User Login Management...
Jul 14 21:24:28.949761 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Jul 14 21:24:28.950260 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Jul 14 21:24:28.951179 systemd[1]: Starting update-engine.service - Update Engine...
Jul 14 21:24:28.953869 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Jul 14 21:24:28.955284 extend-filesystems[1429]: Found loop3
Jul 14 21:24:28.955284 extend-filesystems[1429]: Found loop4
Jul 14 21:24:28.955284 extend-filesystems[1429]: Found loop5
Jul 14 21:24:28.955284 extend-filesystems[1429]: Found vda
Jul 14 21:24:28.955284 extend-filesystems[1429]: Found vda1
Jul 14 21:24:28.955284 extend-filesystems[1429]: Found vda2
Jul 14 21:24:28.955284 extend-filesystems[1429]: Found vda3
Jul 14 21:24:28.955284 extend-filesystems[1429]: Found usr
Jul 14 21:24:28.955284 extend-filesystems[1429]: Found vda4
Jul 14 21:24:28.955284 extend-filesystems[1429]: Found vda6
Jul 14 21:24:28.955284 extend-filesystems[1429]: Found vda7
Jul 14 21:24:28.955284 extend-filesystems[1429]: Found vda9
Jul 14 21:24:28.955284 extend-filesystems[1429]: Checking size of /dev/vda9
Jul 14 21:24:28.973120 extend-filesystems[1429]: Resized partition /dev/vda9
Jul 14 21:24:28.960157 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Jul 14 21:24:28.962484 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Jul 14 21:24:28.962649 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Jul 14 21:24:28.974313 jq[1442]: true
Jul 14 21:24:28.964562 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Jul 14 21:24:28.964734 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Jul 14 21:24:28.974596 dbus-daemon[1427]: [system] SELinux support is enabled
Jul 14 21:24:28.975260 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Jul 14 21:24:28.977502 extend-filesystems[1452]: resize2fs 1.47.1 (20-May-2024)
Jul 14 21:24:28.981306 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
Jul 14 21:24:28.981607 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Jul 14 21:24:28.981641 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Jul 14 21:24:28.987284 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Jul 14 21:24:28.987327 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Jul 14 21:24:28.989724 systemd[1]: motdgen.service: Deactivated successfully.
Jul 14 21:24:28.990075 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Jul 14 21:24:28.995297 jq[1454]: true
Jul 14 21:24:28.999384 (ntainerd)[1460]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Jul 14 21:24:29.016111 kernel: EXT4-fs (vda9): resized filesystem to 1864699
Jul 14 21:24:29.027578 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (1381)
Jul 14 21:24:29.021718 systemd[1]: Started update-engine.service - Update Engine.
Jul 14 21:24:29.027694 update_engine[1439]: I20250714 21:24:29.016026 1439 main.cc:92] Flatcar Update Engine starting
Jul 14 21:24:29.027694 update_engine[1439]: I20250714 21:24:29.023662 1439 update_check_scheduler.cc:74] Next update check in 9m39s
Jul 14 21:24:29.034550 tar[1448]: linux-arm64/helm
Jul 14 21:24:29.033403 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Jul 14 21:24:29.034770 extend-filesystems[1452]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Jul 14 21:24:29.034770 extend-filesystems[1452]: old_desc_blocks = 1, new_desc_blocks = 1
Jul 14 21:24:29.034770 extend-filesystems[1452]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
Jul 14 21:24:29.034723 systemd[1]: extend-filesystems.service: Deactivated successfully.
Jul 14 21:24:29.041162 extend-filesystems[1429]: Resized filesystem in /dev/vda9
Jul 14 21:24:29.034911 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Jul 14 21:24:29.075899 systemd-logind[1436]: Watching system buttons on /dev/input/event0 (Power Button)
Jul 14 21:24:29.076167 systemd-logind[1436]: New seat seat0.
Jul 14 21:24:29.077337 systemd[1]: Started systemd-logind.service - User Login Management.
Jul 14 21:24:29.081935 bash[1483]: Updated "/home/core/.ssh/authorized_keys"
Jul 14 21:24:29.085625 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Jul 14 21:24:29.085890 locksmithd[1471]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Jul 14 21:24:29.087432 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Jul 14 21:24:29.190629 containerd[1460]: time="2025-07-14T21:24:29.190506030Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23
Jul 14 21:24:29.217261 containerd[1460]: time="2025-07-14T21:24:29.217121920Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Jul 14 21:24:29.218482 containerd[1460]: time="2025-07-14T21:24:29.218439066Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.97-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Jul 14 21:24:29.218482 containerd[1460]: time="2025-07-14T21:24:29.218471800Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Jul 14 21:24:29.218550 containerd[1460]: time="2025-07-14T21:24:29.218487600Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Jul 14 21:24:29.218651 containerd[1460]: time="2025-07-14T21:24:29.218632224Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Jul 14 21:24:29.218680 containerd[1460]: time="2025-07-14T21:24:29.218655455Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Jul 14 21:24:29.218725 containerd[1460]: time="2025-07-14T21:24:29.218710481Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Jul 14 21:24:29.218745 containerd[1460]: time="2025-07-14T21:24:29.218725655Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Jul 14 21:24:29.218934 containerd[1460]: time="2025-07-14T21:24:29.218916193Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jul 14 21:24:29.218956 containerd[1460]: time="2025-07-14T21:24:29.218935983Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Jul 14 21:24:29.218956 containerd[1460]: time="2025-07-14T21:24:29.218948576Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Jul 14 21:24:29.218987 containerd[1460]: time="2025-07-14T21:24:29.218956945Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Jul 14 21:24:29.219034 containerd[1460]: time="2025-07-14T21:24:29.219021201Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Jul 14 21:24:29.219241 containerd[1460]: time="2025-07-14T21:24:29.219225114Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Jul 14 21:24:29.219358 containerd[1460]: time="2025-07-14T21:24:29.219343535Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jul 14 21:24:29.219377 containerd[1460]: time="2025-07-14T21:24:29.219359257Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Jul 14 21:24:29.219439 containerd[1460]: time="2025-07-14T21:24:29.219427306Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Jul 14 21:24:29.219488 containerd[1460]: time="2025-07-14T21:24:29.219477405Z" level=info msg="metadata content store policy set" policy=shared
Jul 14 21:24:29.223006 containerd[1460]: time="2025-07-14T21:24:29.222969704Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Jul 14 21:24:29.223059 containerd[1460]: time="2025-07-14T21:24:29.223016361Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Jul 14 21:24:29.223059 containerd[1460]: time="2025-07-14T21:24:29.223032708Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Jul 14 21:24:29.223059 containerd[1460]: time="2025-07-14T21:24:29.223047765Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Jul 14 21:24:29.223129 containerd[1460]: time="2025-07-14T21:24:29.223060515Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Jul 14 21:24:29.223234 containerd[1460]: time="2025-07-14T21:24:29.223206351Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Jul 14 21:24:29.223435 containerd[1460]: time="2025-07-14T21:24:29.223415505Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Jul 14 21:24:29.223530 containerd[1460]: time="2025-07-14T21:24:29.223510149Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Jul 14 21:24:29.223551 containerd[1460]: time="2025-07-14T21:24:29.223530290Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Jul 14 21:24:29.223551 containerd[1460]: time="2025-07-14T21:24:29.223544212Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Jul 14 21:24:29.223582 containerd[1460]: time="2025-07-14T21:24:29.223557627Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Jul 14 21:24:29.223582 containerd[1460]: time="2025-07-14T21:24:29.223569672Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Jul 14 21:24:29.223582 containerd[1460]: time="2025-07-14T21:24:29.223580740Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Jul 14 21:24:29.223634 containerd[1460]: time="2025-07-14T21:24:29.223594115Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Jul 14 21:24:29.223634 containerd[1460]: time="2025-07-14T21:24:29.223607021Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Jul 14 21:24:29.223634 containerd[1460]: time="2025-07-14T21:24:29.223619732Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Jul 14 21:24:29.223634 containerd[1460]: time="2025-07-14T21:24:29.223631151Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Jul 14 21:24:29.223700 containerd[1460]: time="2025-07-14T21:24:29.223641945Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Jul 14 21:24:29.223700 containerd[1460]: time="2025-07-14T21:24:29.223669634Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Jul 14 21:24:29.223700 containerd[1460]: time="2025-07-14T21:24:29.223682931Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Jul 14 21:24:29.223700 containerd[1460]: time="2025-07-14T21:24:29.223695485Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Jul 14 21:24:29.223768 containerd[1460]: time="2025-07-14T21:24:29.223707022Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Jul 14 21:24:29.223768 containerd[1460]: time="2025-07-14T21:24:29.223719459Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Jul 14 21:24:29.223768 containerd[1460]: time="2025-07-14T21:24:29.223731739Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Jul 14 21:24:29.223768 containerd[1460]: time="2025-07-14T21:24:29.223743120Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Jul 14 21:24:29.223768 containerd[1460]: time="2025-07-14T21:24:29.223755165Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Jul 14 21:24:29.223768 containerd[1460]: time="2025-07-14T21:24:29.223767367Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Jul 14 21:24:29.223859 containerd[1460]: time="2025-07-14T21:24:29.223781642Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Jul 14 21:24:29.223859 containerd[1460]: time="2025-07-14T21:24:29.223793101Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Jul 14 21:24:29.223859 containerd[1460]: time="2025-07-14T21:24:29.223804051Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Jul 14 21:24:29.223859 containerd[1460]: time="2025-07-14T21:24:29.223816019Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Jul 14 21:24:29.223859 containerd[1460]: time="2025-07-14T21:24:29.223830606Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Jul 14 21:24:29.223859 containerd[1460]: time="2025-07-14T21:24:29.223855284Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Jul 14 21:24:29.223951 containerd[1460]: time="2025-07-14T21:24:29.223869637Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Jul 14 21:24:29.223951 containerd[1460]: time="2025-07-14T21:24:29.223880470Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Jul 14 21:24:29.224056 containerd[1460]: time="2025-07-14T21:24:29.224036788Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Jul 14 21:24:29.224081 containerd[1460]: time="2025-07-14T21:24:29.224058298Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Jul 14 21:24:29.224081 containerd[1460]: time="2025-07-14T21:24:29.224075975Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Jul 14 21:24:29.224136 containerd[1460]: time="2025-07-14T21:24:29.224087395Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Jul 14 21:24:29.224136 containerd[1460]: time="2025-07-14T21:24:29.224116844Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Jul 14 21:24:29.224136 containerd[1460]: time="2025-07-14T21:24:29.224128576Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Jul 14 21:24:29.224191 containerd[1460]: time="2025-07-14T21:24:29.224137610Z" level=info msg="NRI interface is disabled by configuration."
Jul 14 21:24:29.224191 containerd[1460]: time="2025-07-14T21:24:29.224148404Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Jul 14 21:24:29.224527 containerd[1460]: time="2025-07-14T21:24:29.224475941Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Jul 14 21:24:29.224631 containerd[1460]: time="2025-07-14T21:24:29.224531866Z" level=info msg="Connect containerd service"
Jul 14 21:24:29.224631 containerd[1460]: time="2025-07-14T21:24:29.224564679Z" level=info msg="using legacy CRI server"
Jul 14 21:24:29.224631 containerd[1460]: time="2025-07-14T21:24:29.224570975Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Jul 14 21:24:29.224802 containerd[1460]: time="2025-07-14T21:24:29.224787677Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Jul 14 21:24:29.225504 containerd[1460]: time="2025-07-14T21:24:29.225476305Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jul 14 21:24:29.225781 containerd[1460]: time="2025-07-14T21:24:29.225730551Z" level=info msg="Start subscribing containerd event"
Jul 14 21:24:29.226127 containerd[1460]: time="2025-07-14T21:24:29.226031376Z" level=info msg="Start recovering state"
Jul 14 21:24:29.226436 containerd[1460]: time="2025-07-14T21:24:29.226412218Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Jul 14 21:24:29.228289 containerd[1460]: time="2025-07-14T21:24:29.226532556Z" level=info msg="Start event monitor"
Jul 14 21:24:29.228379 containerd[1460]: time="2025-07-14T21:24:29.228347439Z" level=info msg=serving... address=/run/containerd/containerd.sock
Jul 14 21:24:29.228626 containerd[1460]: time="2025-07-14T21:24:29.228606535Z" level=info msg="Start snapshots syncer"
Jul 14 21:24:29.228691 containerd[1460]: time="2025-07-14T21:24:29.228675445Z" level=info msg="Start cni network conf syncer for default"
Jul 14 21:24:29.228736 containerd[1460]: time="2025-07-14T21:24:29.228724683Z" level=info msg="Start streaming server"
Jul 14 21:24:29.228939 containerd[1460]: time="2025-07-14T21:24:29.228920579Z" level=info msg="containerd successfully booted in 0.039368s"
Jul 14 21:24:29.229001 systemd[1]: Started containerd.service - containerd container runtime.
Jul 14 21:24:29.386118 tar[1448]: linux-arm64/LICENSE
Jul 14 21:24:29.386310 tar[1448]: linux-arm64/README.md
Jul 14 21:24:29.399205 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Jul 14 21:24:30.250223 systemd-networkd[1395]: eth0: Gained IPv6LL
Jul 14 21:24:30.255610 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Jul 14 21:24:30.256963 systemd[1]: Reached target network-online.target - Network is Online.
Jul 14 21:24:30.262313 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent...
Jul 14 21:24:30.264399 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 14 21:24:30.266124 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Jul 14 21:24:30.283793 systemd[1]: coreos-metadata.service: Deactivated successfully.
Jul 14 21:24:30.284006 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent.
Jul 14 21:24:30.285560 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Jul 14 21:24:30.291562 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Jul 14 21:24:30.501153 sshd_keygen[1449]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Jul 14 21:24:30.521020 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Jul 14 21:24:30.530353 systemd[1]: Starting issuegen.service - Generate /run/issue...
Jul 14 21:24:30.535163 systemd[1]: issuegen.service: Deactivated successfully.
Jul 14 21:24:30.535375 systemd[1]: Finished issuegen.service - Generate /run/issue.
Jul 14 21:24:30.539174 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Jul 14 21:24:30.550972 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Jul 14 21:24:30.556197 systemd[1]: Started getty@tty1.service - Getty on tty1.
Jul 14 21:24:30.558397 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0.
Jul 14 21:24:30.559833 systemd[1]: Reached target getty.target - Login Prompts.
Jul 14 21:24:30.830947 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 14 21:24:30.832484 systemd[1]: Reached target multi-user.target - Multi-User System.
Jul 14 21:24:30.836496 (kubelet)[1540]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jul 14 21:24:30.837190 systemd[1]: Startup finished in 548ms (kernel) + 5.474s (initrd) + 3.682s (userspace) = 9.706s.
Jul 14 21:24:31.248407 kubelet[1540]: E0714 21:24:31.248306 1540 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jul 14 21:24:31.250864 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jul 14 21:24:31.251002 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jul 14 21:24:31.251310 systemd[1]: kubelet.service: Consumed 827ms CPU time, 260.4M memory peak.
Jul 14 21:24:34.479464 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Jul 14 21:24:34.480559 systemd[1]: Started sshd@0-10.0.0.115:22-10.0.0.1:42340.service - OpenSSH per-connection server daemon (10.0.0.1:42340).
Jul 14 21:24:34.533797 sshd[1554]: Accepted publickey for core from 10.0.0.1 port 42340 ssh2: RSA SHA256:qod1KuvK/X/ss8Lj9PYiIkvlhHqhxXhx4UqduPTMsgY
Jul 14 21:24:34.535245 sshd-session[1554]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 14 21:24:34.544442 systemd-logind[1436]: New session 1 of user core.
Jul 14 21:24:34.545379 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Jul 14 21:24:34.554318 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Jul 14 21:24:34.564132 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Jul 14 21:24:34.566073 systemd[1]: Starting user@500.service - User Manager for UID 500...
Jul 14 21:24:34.571323 (systemd)[1558]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Jul 14 21:24:34.573239 systemd-logind[1436]: New session c1 of user core.
Jul 14 21:24:34.674169 systemd[1558]: Queued start job for default target default.target.
Jul 14 21:24:34.688958 systemd[1558]: Created slice app.slice - User Application Slice. Jul 14 21:24:34.688987 systemd[1558]: Reached target paths.target - Paths. Jul 14 21:24:34.689022 systemd[1558]: Reached target timers.target - Timers. Jul 14 21:24:34.690216 systemd[1558]: Starting dbus.socket - D-Bus User Message Bus Socket... Jul 14 21:24:34.699083 systemd[1558]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jul 14 21:24:34.699163 systemd[1558]: Reached target sockets.target - Sockets. Jul 14 21:24:34.699208 systemd[1558]: Reached target basic.target - Basic System. Jul 14 21:24:34.699237 systemd[1558]: Reached target default.target - Main User Target. Jul 14 21:24:34.699261 systemd[1558]: Startup finished in 121ms. Jul 14 21:24:34.699430 systemd[1]: Started user@500.service - User Manager for UID 500. Jul 14 21:24:34.700679 systemd[1]: Started session-1.scope - Session 1 of User core. Jul 14 21:24:34.761387 systemd[1]: Started sshd@1-10.0.0.115:22-10.0.0.1:42354.service - OpenSSH per-connection server daemon (10.0.0.1:42354). Jul 14 21:24:34.810179 sshd[1569]: Accepted publickey for core from 10.0.0.1 port 42354 ssh2: RSA SHA256:qod1KuvK/X/ss8Lj9PYiIkvlhHqhxXhx4UqduPTMsgY Jul 14 21:24:34.811490 sshd-session[1569]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 14 21:24:34.815673 systemd-logind[1436]: New session 2 of user core. Jul 14 21:24:34.828284 systemd[1]: Started session-2.scope - Session 2 of User core. Jul 14 21:24:34.878922 sshd[1571]: Connection closed by 10.0.0.1 port 42354 Jul 14 21:24:34.879326 sshd-session[1569]: pam_unix(sshd:session): session closed for user core Jul 14 21:24:34.894301 systemd[1]: sshd@1-10.0.0.115:22-10.0.0.1:42354.service: Deactivated successfully. Jul 14 21:24:34.896525 systemd[1]: session-2.scope: Deactivated successfully. Jul 14 21:24:34.897320 systemd-logind[1436]: Session 2 logged out. Waiting for processes to exit. 
Jul 14 21:24:34.907419 systemd[1]: Started sshd@2-10.0.0.115:22-10.0.0.1:42360.service - OpenSSH per-connection server daemon (10.0.0.1:42360). Jul 14 21:24:34.908514 systemd-logind[1436]: Removed session 2. Jul 14 21:24:34.944242 sshd[1576]: Accepted publickey for core from 10.0.0.1 port 42360 ssh2: RSA SHA256:qod1KuvK/X/ss8Lj9PYiIkvlhHqhxXhx4UqduPTMsgY Jul 14 21:24:34.945586 sshd-session[1576]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 14 21:24:34.950168 systemd-logind[1436]: New session 3 of user core. Jul 14 21:24:34.959289 systemd[1]: Started session-3.scope - Session 3 of User core. Jul 14 21:24:35.007806 sshd[1579]: Connection closed by 10.0.0.1 port 42360 Jul 14 21:24:35.006804 sshd-session[1576]: pam_unix(sshd:session): session closed for user core Jul 14 21:24:35.028243 systemd[1]: sshd@2-10.0.0.115:22-10.0.0.1:42360.service: Deactivated successfully. Jul 14 21:24:35.029785 systemd[1]: session-3.scope: Deactivated successfully. Jul 14 21:24:35.032693 systemd-logind[1436]: Session 3 logged out. Waiting for processes to exit. Jul 14 21:24:35.034231 systemd[1]: Started sshd@3-10.0.0.115:22-10.0.0.1:42362.service - OpenSSH per-connection server daemon (10.0.0.1:42362). Jul 14 21:24:35.035693 systemd-logind[1436]: Removed session 3. Jul 14 21:24:35.078249 sshd[1584]: Accepted publickey for core from 10.0.0.1 port 42362 ssh2: RSA SHA256:qod1KuvK/X/ss8Lj9PYiIkvlhHqhxXhx4UqduPTMsgY Jul 14 21:24:35.079536 sshd-session[1584]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 14 21:24:35.083835 systemd-logind[1436]: New session 4 of user core. Jul 14 21:24:35.104316 systemd[1]: Started session-4.scope - Session 4 of User core. 
Jul 14 21:24:35.157059 sshd[1587]: Connection closed by 10.0.0.1 port 42362 Jul 14 21:24:35.157378 sshd-session[1584]: pam_unix(sshd:session): session closed for user core Jul 14 21:24:35.169067 systemd[1]: sshd@3-10.0.0.115:22-10.0.0.1:42362.service: Deactivated successfully. Jul 14 21:24:35.170579 systemd[1]: session-4.scope: Deactivated successfully. Jul 14 21:24:35.172284 systemd-logind[1436]: Session 4 logged out. Waiting for processes to exit. Jul 14 21:24:35.173553 systemd[1]: Started sshd@4-10.0.0.115:22-10.0.0.1:42378.service - OpenSSH per-connection server daemon (10.0.0.1:42378). Jul 14 21:24:35.174228 systemd-logind[1436]: Removed session 4. Jul 14 21:24:35.213259 sshd[1592]: Accepted publickey for core from 10.0.0.1 port 42378 ssh2: RSA SHA256:qod1KuvK/X/ss8Lj9PYiIkvlhHqhxXhx4UqduPTMsgY Jul 14 21:24:35.214397 sshd-session[1592]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 14 21:24:35.218164 systemd-logind[1436]: New session 5 of user core. Jul 14 21:24:35.225297 systemd[1]: Started session-5.scope - Session 5 of User core. Jul 14 21:24:35.288078 sudo[1596]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jul 14 21:24:35.288379 sudo[1596]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 14 21:24:35.301141 sudo[1596]: pam_unix(sudo:session): session closed for user root Jul 14 21:24:35.305256 sshd[1595]: Connection closed by 10.0.0.1 port 42378 Jul 14 21:24:35.304977 sshd-session[1592]: pam_unix(sshd:session): session closed for user core Jul 14 21:24:35.324479 systemd[1]: sshd@4-10.0.0.115:22-10.0.0.1:42378.service: Deactivated successfully. Jul 14 21:24:35.327808 systemd[1]: session-5.scope: Deactivated successfully. Jul 14 21:24:35.328629 systemd-logind[1436]: Session 5 logged out. Waiting for processes to exit. Jul 14 21:24:35.338395 systemd[1]: Started sshd@5-10.0.0.115:22-10.0.0.1:42382.service - OpenSSH per-connection server daemon (10.0.0.1:42382). 
Jul 14 21:24:35.339257 systemd-logind[1436]: Removed session 5. Jul 14 21:24:35.376691 sshd[1601]: Accepted publickey for core from 10.0.0.1 port 42382 ssh2: RSA SHA256:qod1KuvK/X/ss8Lj9PYiIkvlhHqhxXhx4UqduPTMsgY Jul 14 21:24:35.377998 sshd-session[1601]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 14 21:24:35.381895 systemd-logind[1436]: New session 6 of user core. Jul 14 21:24:35.389249 systemd[1]: Started session-6.scope - Session 6 of User core. Jul 14 21:24:35.439420 sudo[1606]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jul 14 21:24:35.439693 sudo[1606]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 14 21:24:35.447386 sudo[1606]: pam_unix(sudo:session): session closed for user root Jul 14 21:24:35.454276 sudo[1605]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Jul 14 21:24:35.454547 sudo[1605]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 14 21:24:35.473572 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jul 14 21:24:35.495399 augenrules[1628]: No rules Jul 14 21:24:35.496650 systemd[1]: audit-rules.service: Deactivated successfully. Jul 14 21:24:35.496918 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jul 14 21:24:35.498319 sudo[1605]: pam_unix(sudo:session): session closed for user root Jul 14 21:24:35.499551 sshd[1604]: Connection closed by 10.0.0.1 port 42382 Jul 14 21:24:35.499987 sshd-session[1601]: pam_unix(sshd:session): session closed for user core Jul 14 21:24:35.513032 systemd[1]: sshd@5-10.0.0.115:22-10.0.0.1:42382.service: Deactivated successfully. Jul 14 21:24:35.514664 systemd[1]: session-6.scope: Deactivated successfully. Jul 14 21:24:35.516752 systemd-logind[1436]: Session 6 logged out. Waiting for processes to exit. 
Jul 14 21:24:35.524405 systemd[1]: Started sshd@6-10.0.0.115:22-10.0.0.1:42394.service - OpenSSH per-connection server daemon (10.0.0.1:42394). Jul 14 21:24:35.525498 systemd-logind[1436]: Removed session 6. Jul 14 21:24:35.561534 sshd[1636]: Accepted publickey for core from 10.0.0.1 port 42394 ssh2: RSA SHA256:qod1KuvK/X/ss8Lj9PYiIkvlhHqhxXhx4UqduPTMsgY Jul 14 21:24:35.563527 sshd-session[1636]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 14 21:24:35.568178 systemd-logind[1436]: New session 7 of user core. Jul 14 21:24:35.585300 systemd[1]: Started session-7.scope - Session 7 of User core. Jul 14 21:24:35.636272 sudo[1641]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jul 14 21:24:35.636558 sudo[1641]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 14 21:24:35.990354 systemd[1]: Starting docker.service - Docker Application Container Engine... Jul 14 21:24:35.990404 (dockerd)[1661]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jul 14 21:24:36.238639 dockerd[1661]: time="2025-07-14T21:24:36.238571545Z" level=info msg="Starting up" Jul 14 21:24:36.409593 dockerd[1661]: time="2025-07-14T21:24:36.409324569Z" level=info msg="Loading containers: start." Jul 14 21:24:36.559121 kernel: Initializing XFRM netlink socket Jul 14 21:24:36.619744 systemd-networkd[1395]: docker0: Link UP Jul 14 21:24:36.657282 dockerd[1661]: time="2025-07-14T21:24:36.657241870Z" level=info msg="Loading containers: done." 
Jul 14 21:24:36.671151 dockerd[1661]: time="2025-07-14T21:24:36.670548421Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jul 14 21:24:36.671151 dockerd[1661]: time="2025-07-14T21:24:36.670636325Z" level=info msg="Docker daemon" commit=41ca978a0a5400cc24b274137efa9f25517fcc0b containerd-snapshotter=false storage-driver=overlay2 version=27.3.1 Jul 14 21:24:36.671151 dockerd[1661]: time="2025-07-14T21:24:36.670816296Z" level=info msg="Daemon has completed initialization" Jul 14 21:24:36.698393 dockerd[1661]: time="2025-07-14T21:24:36.698334489Z" level=info msg="API listen on /run/docker.sock" Jul 14 21:24:36.698683 systemd[1]: Started docker.service - Docker Application Container Engine. Jul 14 21:24:37.293978 containerd[1460]: time="2025-07-14T21:24:37.293918223Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.10\"" Jul 14 21:24:37.945948 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2521099970.mount: Deactivated successfully. 
Jul 14 21:24:39.320860 containerd[1460]: time="2025-07-14T21:24:39.320774447Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 21:24:39.321692 containerd[1460]: time="2025-07-14T21:24:39.321398686Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.10: active requests=0, bytes read=25651795" Jul 14 21:24:39.322528 containerd[1460]: time="2025-07-14T21:24:39.322465159Z" level=info msg="ImageCreate event name:\"sha256:8907c2d36348551c1038e24ef688f6830681069380376707e55518007a20a86c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 21:24:39.325449 containerd[1460]: time="2025-07-14T21:24:39.325392778Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:083d7d64af31cd090f870eb49fb815e6bb42c175fc602ee9dae2f28f082bd4dc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 21:24:39.326595 containerd[1460]: time="2025-07-14T21:24:39.326560175Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.10\" with image id \"sha256:8907c2d36348551c1038e24ef688f6830681069380376707e55518007a20a86c\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.10\", repo digest \"registry.k8s.io/kube-apiserver@sha256:083d7d64af31cd090f870eb49fb815e6bb42c175fc602ee9dae2f28f082bd4dc\", size \"25648593\" in 2.032599425s" Jul 14 21:24:39.326656 containerd[1460]: time="2025-07-14T21:24:39.326600657Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.10\" returns image reference \"sha256:8907c2d36348551c1038e24ef688f6830681069380376707e55518007a20a86c\"" Jul 14 21:24:39.329494 containerd[1460]: time="2025-07-14T21:24:39.329468389Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.10\""
Jul 14 21:24:40.780720 containerd[1460]: time="2025-07-14T21:24:40.780667935Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 21:24:40.782410 containerd[1460]: time="2025-07-14T21:24:40.782295345Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.10: active requests=0, bytes read=22459679" Jul 14 21:24:40.783269 containerd[1460]: time="2025-07-14T21:24:40.783236295Z" level=info msg="ImageCreate event name:\"sha256:0f640d6889416d515a0ac4de1c26f4d80134c47641ff464abc831560a951175f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 21:24:40.786050 containerd[1460]: time="2025-07-14T21:24:40.786013498Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:3c67387d023c6114879f1e817669fd641797d30f117230682faf3930ecaaf0fe\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 21:24:40.789122 containerd[1460]: time="2025-07-14T21:24:40.788138106Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.10\" with image id \"sha256:0f640d6889416d515a0ac4de1c26f4d80134c47641ff464abc831560a951175f\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.10\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:3c67387d023c6114879f1e817669fd641797d30f117230682faf3930ecaaf0fe\", size \"23995467\" in 1.458638495s" Jul 14 21:24:40.789122 containerd[1460]: time="2025-07-14T21:24:40.788185024Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.10\" returns image reference \"sha256:0f640d6889416d515a0ac4de1c26f4d80134c47641ff464abc831560a951175f\"" Jul 14 21:24:40.789646 containerd[1460]: time="2025-07-14T21:24:40.789605541Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.10\"" Jul 14 21:24:41.327871 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jul 14 21:24:41.337252 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 14 21:24:41.436269 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 14 21:24:41.438535 (kubelet)[1925]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 14 21:24:41.473187 kubelet[1925]: E0714 21:24:41.473133 1925 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 14 21:24:41.476039 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 14 21:24:41.476208 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 14 21:24:41.476546 systemd[1]: kubelet.service: Consumed 130ms CPU time, 108.3M memory peak. Jul 14 21:24:42.061678 containerd[1460]: time="2025-07-14T21:24:42.061515009Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 21:24:42.062548 containerd[1460]: time="2025-07-14T21:24:42.062312226Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.10: active requests=0, bytes read=17125068" Jul 14 21:24:42.063185 containerd[1460]: time="2025-07-14T21:24:42.063128249Z" level=info msg="ImageCreate event name:\"sha256:23d79b83d912e2633bcb4f9f7b8b46024893e11d492a4249d8f1f8c9a26b7b2c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 21:24:42.065990 containerd[1460]: time="2025-07-14T21:24:42.065954747Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:284dc2a5cf6afc9b76e39ad4b79c680c23d289488517643b28784a06d0141272\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 14 21:24:42.067198 containerd[1460]: time="2025-07-14T21:24:42.067154139Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.10\" with image id \"sha256:23d79b83d912e2633bcb4f9f7b8b46024893e11d492a4249d8f1f8c9a26b7b2c\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.10\", repo digest \"registry.k8s.io/kube-scheduler@sha256:284dc2a5cf6afc9b76e39ad4b79c680c23d289488517643b28784a06d0141272\", size \"18660874\" in 1.277507678s" Jul 14 21:24:42.067198 containerd[1460]: time="2025-07-14T21:24:42.067187966Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.10\" returns image reference \"sha256:23d79b83d912e2633bcb4f9f7b8b46024893e11d492a4249d8f1f8c9a26b7b2c\"" Jul 14 21:24:42.067737 containerd[1460]: time="2025-07-14T21:24:42.067717956Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.10\"" Jul 14 21:24:43.118545 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount638828617.mount: Deactivated successfully. Jul 14 21:24:43.325240 containerd[1460]: time="2025-07-14T21:24:43.324908308Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 21:24:43.326244 containerd[1460]: time="2025-07-14T21:24:43.326192002Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.10: active requests=0, bytes read=26915959" Jul 14 21:24:43.327033 containerd[1460]: time="2025-07-14T21:24:43.326977853Z" level=info msg="ImageCreate event name:\"sha256:dde5ff0da443b455e81aefc7bf6a216fdd659d1cbe13b8e8ac8129c3ecd27f89\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 21:24:43.329633 containerd[1460]: time="2025-07-14T21:24:43.329478272Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:bcbb293812bdf587b28ea98369a8c347ca84884160046296761acdf12b27029d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 14 21:24:43.330696 containerd[1460]: time="2025-07-14T21:24:43.330670602Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.10\" with image id \"sha256:dde5ff0da443b455e81aefc7bf6a216fdd659d1cbe13b8e8ac8129c3ecd27f89\", repo tag \"registry.k8s.io/kube-proxy:v1.31.10\", repo digest \"registry.k8s.io/kube-proxy@sha256:bcbb293812bdf587b28ea98369a8c347ca84884160046296761acdf12b27029d\", size \"26914976\" in 1.262864508s" Jul 14 21:24:43.330893 containerd[1460]: time="2025-07-14T21:24:43.330790030Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.10\" returns image reference \"sha256:dde5ff0da443b455e81aefc7bf6a216fdd659d1cbe13b8e8ac8129c3ecd27f89\"" Jul 14 21:24:43.331405 containerd[1460]: time="2025-07-14T21:24:43.331378162Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Jul 14 21:24:43.933516 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3285688666.mount: Deactivated successfully. Jul 14 21:24:44.736324 containerd[1460]: time="2025-07-14T21:24:44.736225233Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 21:24:44.737198 containerd[1460]: time="2025-07-14T21:24:44.736824426Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=16951624" Jul 14 21:24:44.740504 containerd[1460]: time="2025-07-14T21:24:44.740473982Z" level=info msg="ImageCreate event name:\"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 21:24:44.744222 containerd[1460]: time="2025-07-14T21:24:44.744185789Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 14 21:24:44.745503 containerd[1460]: time="2025-07-14T21:24:44.745472788Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"16948420\" in 1.413961282s" Jul 14 21:24:44.745572 containerd[1460]: time="2025-07-14T21:24:44.745507564Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\"" Jul 14 21:24:44.746192 containerd[1460]: time="2025-07-14T21:24:44.745937746Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jul 14 21:24:45.184259 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2859446569.mount: Deactivated successfully. Jul 14 21:24:45.189043 containerd[1460]: time="2025-07-14T21:24:45.188767821Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 21:24:45.189662 containerd[1460]: time="2025-07-14T21:24:45.189415433Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268705" Jul 14 21:24:45.190501 containerd[1460]: time="2025-07-14T21:24:45.190451061Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 21:24:45.193264 containerd[1460]: time="2025-07-14T21:24:45.193225903Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 21:24:45.194121 containerd[1460]: time="2025-07-14T21:24:45.194078015Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 448.111473ms"
Jul 14 21:24:45.194194 containerd[1460]: time="2025-07-14T21:24:45.194125530Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Jul 14 21:24:45.194642 containerd[1460]: time="2025-07-14T21:24:45.194617872Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" Jul 14 21:24:45.713240 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount65156702.mount: Deactivated successfully. Jul 14 21:24:47.964575 containerd[1460]: time="2025-07-14T21:24:47.964513233Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 21:24:47.965118 containerd[1460]: time="2025-07-14T21:24:47.965058214Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=66406467" Jul 14 21:24:47.965964 containerd[1460]: time="2025-07-14T21:24:47.965924266Z" level=info msg="ImageCreate event name:\"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 21:24:47.969001 containerd[1460]: time="2025-07-14T21:24:47.968965608Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 21:24:47.970428 containerd[1460]: time="2025-07-14T21:24:47.970369056Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"66535646\" in 2.775716511s" Jul 14 21:24:47.970428 containerd[1460]: time="2025-07-14T21:24:47.970405383Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\""
Jul 14 21:24:51.577835 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jul 14 21:24:51.586292 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 14 21:24:51.722983 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 14 21:24:51.727641 (kubelet)[2086]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 14 21:24:51.767467 kubelet[2086]: E0714 21:24:51.767412 2086 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 14 21:24:51.770043 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 14 21:24:51.770206 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 14 21:24:51.770628 systemd[1]: kubelet.service: Consumed 131ms CPU time, 106.7M memory peak. Jul 14 21:24:53.516198 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 14 21:24:53.516446 systemd[1]: kubelet.service: Consumed 131ms CPU time, 106.7M memory peak. Jul 14 21:24:53.532377 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 14 21:24:53.554252 systemd[1]: Reload requested from client PID 2102 ('systemctl') (unit session-7.scope)... Jul 14 21:24:53.554272 systemd[1]: Reloading... Jul 14 21:24:53.650485 zram_generator::config[2146]: No configuration found. Jul 14 21:24:53.777809 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 14 21:24:53.853869 systemd[1]: Reloading finished in 299 ms. Jul 14 21:24:53.894976 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 14 21:24:53.896442 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jul 14 21:24:53.898894 systemd[1]: kubelet.service: Deactivated successfully. Jul 14 21:24:53.899085 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 14 21:24:53.899132 systemd[1]: kubelet.service: Consumed 91ms CPU time, 95.1M memory peak. Jul 14 21:24:53.900529 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 14 21:24:54.004123 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 14 21:24:54.008010 (kubelet)[2193]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 14 21:24:54.045749 kubelet[2193]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 14 21:24:54.045749 kubelet[2193]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jul 14 21:24:54.045749 kubelet[2193]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jul 14 21:24:54.045749 kubelet[2193]: I0714 21:24:54.045727 2193 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 14 21:24:54.695072 kubelet[2193]: I0714 21:24:54.695035 2193 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Jul 14 21:24:54.695072 kubelet[2193]: I0714 21:24:54.695066 2193 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 14 21:24:54.695317 kubelet[2193]: I0714 21:24:54.695304 2193 server.go:934] "Client rotation is on, will bootstrap in background" Jul 14 21:24:54.737996 kubelet[2193]: E0714 21:24:54.737954 2193 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.115:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.115:6443: connect: connection refused" logger="UnhandledError" Jul 14 21:24:54.740906 kubelet[2193]: I0714 21:24:54.740875 2193 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 14 21:24:54.749172 kubelet[2193]: E0714 21:24:54.749126 2193 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jul 14 21:24:54.749172 kubelet[2193]: I0714 21:24:54.749163 2193 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jul 14 21:24:54.754323 kubelet[2193]: I0714 21:24:54.754294 2193 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Jul 14 21:24:54.755060 kubelet[2193]: I0714 21:24:54.755033 2193 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Jul 14 21:24:54.755218 kubelet[2193]: I0714 21:24:54.755187 2193 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 14 21:24:54.755380 kubelet[2193]: I0714 21:24:54.755218 2193 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Jul 14 21:24:54.755493 kubelet[2193]: I0714 21:24:54.755440 2193 topology_manager.go:138] "Creating topology manager with none policy" Jul 14 21:24:54.755493 kubelet[2193]: I0714 21:24:54.755449 2193 container_manager_linux.go:300] "Creating device plugin manager" Jul 14 21:24:54.755694 kubelet[2193]: I0714 21:24:54.755670 2193 state_mem.go:36] "Initialized new in-memory state store" Jul 14 21:24:54.757508 kubelet[2193]: I0714 21:24:54.757484 2193 kubelet.go:408] "Attempting to sync node with API server" Jul 14 21:24:54.757544 kubelet[2193]: I0714 21:24:54.757513 2193 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 14 21:24:54.757544 kubelet[2193]: I0714 21:24:54.757533 2193 kubelet.go:314] "Adding apiserver pod source" Jul 14 21:24:54.757627 kubelet[2193]: I0714 21:24:54.757610 2193 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 14 21:24:54.758516 kubelet[2193]: W0714 21:24:54.758381 2193 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.115:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.115:6443: connect: connection refused Jul 14 21:24:54.758516 kubelet[2193]: E0714 21:24:54.758502 2193 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.115:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.115:6443: connect: connection refused" logger="UnhandledError" Jul 14 21:24:54.759386 kubelet[2193]: W0714 21:24:54.758881 2193 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.115:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.115:6443: connect: connection refused
Jul 14 21:24:54.759386 kubelet[2193]: E0714 21:24:54.758936 2193 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.115:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.115:6443: connect: connection refused" logger="UnhandledError" Jul 14 21:24:54.760582 kubelet[2193]: I0714 21:24:54.760548 2193 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Jul 14 21:24:54.761382 kubelet[2193]: I0714 21:24:54.761367 2193 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jul 14 21:24:54.761491 kubelet[2193]: W0714 21:24:54.761477 2193 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jul 14 21:24:54.762695 kubelet[2193]: I0714 21:24:54.762558 2193 server.go:1274] "Started kubelet" Jul 14 21:24:54.763156 kubelet[2193]: I0714 21:24:54.763125 2193 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jul 14 21:24:54.764284 kubelet[2193]: I0714 21:24:54.764201 2193 server.go:449] "Adding debug handlers to kubelet server" Jul 14 21:24:54.767963 kubelet[2193]: I0714 21:24:54.765359 2193 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 14 21:24:54.767963 kubelet[2193]: I0714 21:24:54.765603 2193 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 14 21:24:54.767963 kubelet[2193]: I0714 21:24:54.766481 2193 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 14 21:24:54.767963 kubelet[2193]: I0714 21:24:54.766568 2193 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Jul 14 21:24:54.767963 kubelet[2193]: I0714 21:24:54.767046 2193 volume_manager.go:289] "Starting Kubelet Volume Manager" Jul 14 21:24:54.767963 kubelet[2193]: I0714 21:24:54.767171 2193 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Jul 14 21:24:54.767963 kubelet[2193]: I0714 21:24:54.767207 2193 reconciler.go:26] "Reconciler: start to sync state" Jul 14 21:24:54.767963 kubelet[2193]: E0714 21:24:54.767707 2193 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 14 21:24:54.767963 kubelet[2193]: E0714 21:24:54.767872 2193 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.115:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.115:6443: connect: connection refused" interval="200ms" Jul 14 21:24:54.767963 kubelet[2193]: W0714 21:24:54.767877 2193 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.115:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.115:6443: connect: connection refused Jul 14 21:24:54.767963 kubelet[2193]: E0714 21:24:54.767912 2193 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.115:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.115:6443: connect: connection refused" logger="UnhandledError" Jul 14 21:24:54.769540 kubelet[2193]: E0714 21:24:54.767913 2193 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.115:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.115:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18523b32e4244999 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] []
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-07-14 21:24:54.762531225 +0000 UTC m=+0.751249899,LastTimestamp:2025-07-14 21:24:54.762531225 +0000 UTC m=+0.751249899,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jul 14 21:24:54.770047 kubelet[2193]: I0714 21:24:54.769629 2193 factory.go:221] Registration of the systemd container factory successfully Jul 14 21:24:54.770047 kubelet[2193]: I0714 21:24:54.769724 2193 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 14 21:24:54.771010 kubelet[2193]: I0714 21:24:54.770974 2193 factory.go:221] Registration of the containerd container factory successfully Jul 14 21:24:54.772181 kubelet[2193]: E0714 21:24:54.772011 2193 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 14 21:24:54.781654 kubelet[2193]: I0714 21:24:54.781459 2193 cpu_manager.go:214] "Starting CPU manager" policy="none" Jul 14 21:24:54.781654 kubelet[2193]: I0714 21:24:54.781472 2193 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jul 14 21:24:54.781654 kubelet[2193]: I0714 21:24:54.781486 2193 state_mem.go:36] "Initialized new in-memory state store" Jul 14 21:24:54.783256 kubelet[2193]: I0714 21:24:54.783220 2193 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jul 14 21:24:54.784213 kubelet[2193]: I0714 21:24:54.784189 2193 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jul 14 21:24:54.784284 kubelet[2193]: I0714 21:24:54.784224 2193 status_manager.go:217] "Starting to sync pod status with apiserver" Jul 14 21:24:54.784284 kubelet[2193]: I0714 21:24:54.784244 2193 kubelet.go:2321] "Starting kubelet main sync loop" Jul 14 21:24:54.784322 kubelet[2193]: E0714 21:24:54.784284 2193 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 14 21:24:54.784750 kubelet[2193]: W0714 21:24:54.784653 2193 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.115:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.115:6443: connect: connection refused Jul 14 21:24:54.784750 kubelet[2193]: E0714 21:24:54.784698 2193 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.115:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.115:6443: connect: connection refused" logger="UnhandledError" Jul 14 21:24:54.851497 kubelet[2193]: I0714 21:24:54.851402 2193 policy_none.go:49] "None policy: Start" Jul 14 21:24:54.852228 kubelet[2193]: I0714 21:24:54.852202 2193 memory_manager.go:170] "Starting memorymanager" policy="None" Jul 14 21:24:54.852228 kubelet[2193]: I0714 21:24:54.852231 2193 state_mem.go:35] "Initializing new in-memory state store" Jul 14 21:24:54.861946 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jul 14 21:24:54.868153 kubelet[2193]: E0714 21:24:54.868122 2193 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 14 21:24:54.876616 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. 
Jul 14 21:24:54.879228 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jul 14 21:24:54.885209 kubelet[2193]: E0714 21:24:54.885183 2193 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jul 14 21:24:54.891317 kubelet[2193]: I0714 21:24:54.891292 2193 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 14 21:24:54.891504 kubelet[2193]: I0714 21:24:54.891481 2193 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 14 21:24:54.891534 kubelet[2193]: I0714 21:24:54.891505 2193 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 14 21:24:54.891784 kubelet[2193]: I0714 21:24:54.891763 2193 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 14 21:24:54.893039 kubelet[2193]: E0714 21:24:54.893002 2193 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jul 14 21:24:54.968986 kubelet[2193]: E0714 21:24:54.968891 2193 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.115:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.115:6443: connect: connection refused" interval="400ms" Jul 14 21:24:54.992851 kubelet[2193]: I0714 21:24:54.992791 2193 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jul 14 21:24:54.993257 kubelet[2193]: E0714 21:24:54.993221 2193 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.115:6443/api/v1/nodes\": dial tcp 10.0.0.115:6443: connect: connection refused" node="localhost" Jul 14 21:24:55.092982 systemd[1]: Created slice kubepods-burstable-podb35b56493416c25588cb530e37ffc065.slice - libcontainer container 
kubepods-burstable-podb35b56493416c25588cb530e37ffc065.slice. Jul 14 21:24:55.107118 systemd[1]: Created slice kubepods-burstable-poda48f26facbd4e151d32ff1fe926753f4.slice - libcontainer container kubepods-burstable-poda48f26facbd4e151d32ff1fe926753f4.slice. Jul 14 21:24:55.111079 systemd[1]: Created slice kubepods-burstable-pod3f04709fe51ae4ab5abd58e8da771b74.slice - libcontainer container kubepods-burstable-pod3f04709fe51ae4ab5abd58e8da771b74.slice. Jul 14 21:24:55.170419 kubelet[2193]: I0714 21:24:55.170382 2193 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 14 21:24:55.170419 kubelet[2193]: I0714 21:24:55.170419 2193 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b35b56493416c25588cb530e37ffc065-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"b35b56493416c25588cb530e37ffc065\") " pod="kube-system/kube-scheduler-localhost" Jul 14 21:24:55.170708 kubelet[2193]: I0714 21:24:55.170436 2193 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a48f26facbd4e151d32ff1fe926753f4-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"a48f26facbd4e151d32ff1fe926753f4\") " pod="kube-system/kube-apiserver-localhost" Jul 14 21:24:55.170708 kubelet[2193]: I0714 21:24:55.170452 2193 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " 
pod="kube-system/kube-controller-manager-localhost" Jul 14 21:24:55.170708 kubelet[2193]: I0714 21:24:55.170466 2193 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 14 21:24:55.170708 kubelet[2193]: I0714 21:24:55.170480 2193 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 14 21:24:55.170708 kubelet[2193]: I0714 21:24:55.170493 2193 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 14 21:24:55.170818 kubelet[2193]: I0714 21:24:55.170506 2193 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a48f26facbd4e151d32ff1fe926753f4-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"a48f26facbd4e151d32ff1fe926753f4\") " pod="kube-system/kube-apiserver-localhost" Jul 14 21:24:55.170818 kubelet[2193]: I0714 21:24:55.170554 2193 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a48f26facbd4e151d32ff1fe926753f4-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: 
\"a48f26facbd4e151d32ff1fe926753f4\") " pod="kube-system/kube-apiserver-localhost" Jul 14 21:24:55.194461 kubelet[2193]: I0714 21:24:55.194437 2193 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jul 14 21:24:55.194724 kubelet[2193]: E0714 21:24:55.194702 2193 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.115:6443/api/v1/nodes\": dial tcp 10.0.0.115:6443: connect: connection refused" node="localhost" Jul 14 21:24:55.370120 kubelet[2193]: E0714 21:24:55.370052 2193 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.115:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.115:6443: connect: connection refused" interval="800ms" Jul 14 21:24:55.406356 containerd[1460]: time="2025-07-14T21:24:55.406289021Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:b35b56493416c25588cb530e37ffc065,Namespace:kube-system,Attempt:0,}" Jul 14 21:24:55.410572 containerd[1460]: time="2025-07-14T21:24:55.410525207Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:a48f26facbd4e151d32ff1fe926753f4,Namespace:kube-system,Attempt:0,}" Jul 14 21:24:55.413456 containerd[1460]: time="2025-07-14T21:24:55.413388105Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:3f04709fe51ae4ab5abd58e8da771b74,Namespace:kube-system,Attempt:0,}" Jul 14 21:24:55.596539 kubelet[2193]: I0714 21:24:55.596500 2193 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jul 14 21:24:55.596824 kubelet[2193]: E0714 21:24:55.596798 2193 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.115:6443/api/v1/nodes\": dial tcp 10.0.0.115:6443: connect: connection refused" node="localhost" Jul 14 21:24:55.747134 kubelet[2193]: W0714 21:24:55.746901 2193 reflector.go:561] 
k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.115:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.115:6443: connect: connection refused Jul 14 21:24:55.747134 kubelet[2193]: E0714 21:24:55.746998 2193 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.115:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.115:6443: connect: connection refused" logger="UnhandledError" Jul 14 21:24:55.821088 kubelet[2193]: W0714 21:24:55.820998 2193 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.115:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.115:6443: connect: connection refused Jul 14 21:24:55.821088 kubelet[2193]: E0714 21:24:55.821048 2193 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.115:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.115:6443: connect: connection refused" logger="UnhandledError" Jul 14 21:24:55.953542 kubelet[2193]: W0714 21:24:55.953436 2193 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.115:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.115:6443: connect: connection refused Jul 14 21:24:55.953542 kubelet[2193]: E0714 21:24:55.953508 2193 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.115:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.115:6443: connect: connection 
refused" logger="UnhandledError" Jul 14 21:24:56.004756 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3900414554.mount: Deactivated successfully. Jul 14 21:24:56.008644 containerd[1460]: time="2025-07-14T21:24:56.008565110Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 14 21:24:56.011139 containerd[1460]: time="2025-07-14T21:24:56.010805633Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 14 21:24:56.011945 containerd[1460]: time="2025-07-14T21:24:56.011895093Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jul 14 21:24:56.013137 containerd[1460]: time="2025-07-14T21:24:56.013090248Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269175" Jul 14 21:24:56.014218 containerd[1460]: time="2025-07-14T21:24:56.014182866Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 14 21:24:56.015851 containerd[1460]: time="2025-07-14T21:24:56.015823592Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 14 21:24:56.016490 containerd[1460]: time="2025-07-14T21:24:56.016445336Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jul 14 21:24:56.017481 containerd[1460]: time="2025-07-14T21:24:56.017430139Z" level=info msg="ImageCreate event 
name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 14 21:24:56.020017 containerd[1460]: time="2025-07-14T21:24:56.019928026Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 609.321513ms" Jul 14 21:24:56.020654 containerd[1460]: time="2025-07-14T21:24:56.020605615Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 607.152874ms" Jul 14 21:24:56.022746 containerd[1460]: time="2025-07-14T21:24:56.022609121Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 616.2314ms" Jul 14 21:24:56.065921 kubelet[2193]: W0714 21:24:56.063547 2193 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.115:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.115:6443: connect: connection refused Jul 14 21:24:56.065921 kubelet[2193]: E0714 21:24:56.063616 2193 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get 
\"https://10.0.0.115:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.115:6443: connect: connection refused" logger="UnhandledError" Jul 14 21:24:56.170459 containerd[1460]: time="2025-07-14T21:24:56.170288364Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 14 21:24:56.170873 containerd[1460]: time="2025-07-14T21:24:56.170597417Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 14 21:24:56.170873 containerd[1460]: time="2025-07-14T21:24:56.170668574Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 14 21:24:56.170873 containerd[1460]: time="2025-07-14T21:24:56.170687642Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 14 21:24:56.170873 containerd[1460]: time="2025-07-14T21:24:56.170766514Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 14 21:24:56.170991 kubelet[2193]: E0714 21:24:56.170562 2193 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.115:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.115:6443: connect: connection refused" interval="1.6s" Jul 14 21:24:56.171341 containerd[1460]: time="2025-07-14T21:24:56.170874249Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 14 21:24:56.171341 containerd[1460]: time="2025-07-14T21:24:56.170906310Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 14 21:24:56.171341 containerd[1460]: time="2025-07-14T21:24:56.171005130Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 14 21:24:56.173534 containerd[1460]: time="2025-07-14T21:24:56.173436217Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 14 21:24:56.173632 containerd[1460]: time="2025-07-14T21:24:56.173602916Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 14 21:24:56.173632 containerd[1460]: time="2025-07-14T21:24:56.173620505Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 14 21:24:56.173946 containerd[1460]: time="2025-07-14T21:24:56.173898257Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 14 21:24:56.199302 systemd[1]: Started cri-containerd-5961753b737a2afad641f24208638c629f4af0517446874528f2e1fac5634007.scope - libcontainer container 5961753b737a2afad641f24208638c629f4af0517446874528f2e1fac5634007. Jul 14 21:24:56.200646 systemd[1]: Started cri-containerd-5a44810f70e01c83964c8fc8b355bf23dba11aa256b71b3d6e6378261750d0bc.scope - libcontainer container 5a44810f70e01c83964c8fc8b355bf23dba11aa256b71b3d6e6378261750d0bc. Jul 14 21:24:56.201836 systemd[1]: Started cri-containerd-e8dd0fc509cc7b0fb5ce549c9cd89414f098b8580fd1d17f4d638b3ea246a4cf.scope - libcontainer container e8dd0fc509cc7b0fb5ce549c9cd89414f098b8580fd1d17f4d638b3ea246a4cf. 
Jul 14 21:24:56.241869 containerd[1460]: time="2025-07-14T21:24:56.241821423Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:b35b56493416c25588cb530e37ffc065,Namespace:kube-system,Attempt:0,} returns sandbox id \"e8dd0fc509cc7b0fb5ce549c9cd89414f098b8580fd1d17f4d638b3ea246a4cf\"" Jul 14 21:24:56.242433 containerd[1460]: time="2025-07-14T21:24:56.242241928Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:3f04709fe51ae4ab5abd58e8da771b74,Namespace:kube-system,Attempt:0,} returns sandbox id \"5961753b737a2afad641f24208638c629f4af0517446874528f2e1fac5634007\"" Jul 14 21:24:56.248232 containerd[1460]: time="2025-07-14T21:24:56.248198519Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:a48f26facbd4e151d32ff1fe926753f4,Namespace:kube-system,Attempt:0,} returns sandbox id \"5a44810f70e01c83964c8fc8b355bf23dba11aa256b71b3d6e6378261750d0bc\"" Jul 14 21:24:56.249585 containerd[1460]: time="2025-07-14T21:24:56.249554938Z" level=info msg="CreateContainer within sandbox \"e8dd0fc509cc7b0fb5ce549c9cd89414f098b8580fd1d17f4d638b3ea246a4cf\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jul 14 21:24:56.250209 containerd[1460]: time="2025-07-14T21:24:56.249952217Z" level=info msg="CreateContainer within sandbox \"5961753b737a2afad641f24208638c629f4af0517446874528f2e1fac5634007\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jul 14 21:24:56.252550 containerd[1460]: time="2025-07-14T21:24:56.252397415Z" level=info msg="CreateContainer within sandbox \"5a44810f70e01c83964c8fc8b355bf23dba11aa256b71b3d6e6378261750d0bc\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jul 14 21:24:56.265711 containerd[1460]: time="2025-07-14T21:24:56.265607252Z" level=info msg="CreateContainer within sandbox \"5961753b737a2afad641f24208638c629f4af0517446874528f2e1fac5634007\" for 
&ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"fc12eccff3a991e6d6a8740c49d34f58b0895a96105213c1cc4422a59a1a5577\"" Jul 14 21:24:56.266746 containerd[1460]: time="2025-07-14T21:24:56.266697831Z" level=info msg="StartContainer for \"fc12eccff3a991e6d6a8740c49d34f58b0895a96105213c1cc4422a59a1a5577\"" Jul 14 21:24:56.270142 containerd[1460]: time="2025-07-14T21:24:56.270046522Z" level=info msg="CreateContainer within sandbox \"e8dd0fc509cc7b0fb5ce549c9cd89414f098b8580fd1d17f4d638b3ea246a4cf\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"4a2c59c5aeb867dc5eb6bcbfbf3f099b22b841c919bc696cd9c7754dd02a4c5e\"" Jul 14 21:24:56.270771 containerd[1460]: time="2025-07-14T21:24:56.270746258Z" level=info msg="StartContainer for \"4a2c59c5aeb867dc5eb6bcbfbf3f099b22b841c919bc696cd9c7754dd02a4c5e\"" Jul 14 21:24:56.271858 containerd[1460]: time="2025-07-14T21:24:56.271826603Z" level=info msg="CreateContainer within sandbox \"5a44810f70e01c83964c8fc8b355bf23dba11aa256b71b3d6e6378261750d0bc\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"410742d85ae8aa51c5475b7fc9ca4c10b885159ebd7699092bd5b41e14744efe\"" Jul 14 21:24:56.272302 containerd[1460]: time="2025-07-14T21:24:56.272258462Z" level=info msg="StartContainer for \"410742d85ae8aa51c5475b7fc9ca4c10b885159ebd7699092bd5b41e14744efe\"" Jul 14 21:24:56.291268 systemd[1]: Started cri-containerd-fc12eccff3a991e6d6a8740c49d34f58b0895a96105213c1cc4422a59a1a5577.scope - libcontainer container fc12eccff3a991e6d6a8740c49d34f58b0895a96105213c1cc4422a59a1a5577. Jul 14 21:24:56.300253 systemd[1]: Started cri-containerd-4a2c59c5aeb867dc5eb6bcbfbf3f099b22b841c919bc696cd9c7754dd02a4c5e.scope - libcontainer container 4a2c59c5aeb867dc5eb6bcbfbf3f099b22b841c919bc696cd9c7754dd02a4c5e. 
Jul 14 21:24:56.302890 systemd[1]: Started cri-containerd-410742d85ae8aa51c5475b7fc9ca4c10b885159ebd7699092bd5b41e14744efe.scope - libcontainer container 410742d85ae8aa51c5475b7fc9ca4c10b885159ebd7699092bd5b41e14744efe. Jul 14 21:24:56.334198 containerd[1460]: time="2025-07-14T21:24:56.334076847Z" level=info msg="StartContainer for \"fc12eccff3a991e6d6a8740c49d34f58b0895a96105213c1cc4422a59a1a5577\" returns successfully" Jul 14 21:24:56.363695 containerd[1460]: time="2025-07-14T21:24:56.361360276Z" level=info msg="StartContainer for \"410742d85ae8aa51c5475b7fc9ca4c10b885159ebd7699092bd5b41e14744efe\" returns successfully" Jul 14 21:24:56.363695 containerd[1460]: time="2025-07-14T21:24:56.361455938Z" level=info msg="StartContainer for \"4a2c59c5aeb867dc5eb6bcbfbf3f099b22b841c919bc696cd9c7754dd02a4c5e\" returns successfully" Jul 14 21:24:56.398472 kubelet[2193]: I0714 21:24:56.398418 2193 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jul 14 21:24:56.398975 kubelet[2193]: E0714 21:24:56.398922 2193 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.115:6443/api/v1/nodes\": dial tcp 10.0.0.115:6443: connect: connection refused" node="localhost" Jul 14 21:24:57.774975 kubelet[2193]: E0714 21:24:57.774938 2193 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Jul 14 21:24:57.977792 kubelet[2193]: E0714 21:24:57.977760 2193 csi_plugin.go:305] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "localhost" not found Jul 14 21:24:58.000531 kubelet[2193]: I0714 21:24:58.000495 2193 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jul 14 21:24:58.005179 kubelet[2193]: I0714 21:24:58.005155 2193 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Jul 14 21:24:58.005286 kubelet[2193]: E0714 21:24:58.005185 
2193 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Jul 14 21:24:58.758948 kubelet[2193]: I0714 21:24:58.758878 2193 apiserver.go:52] "Watching apiserver" Jul 14 21:24:58.768115 kubelet[2193]: I0714 21:24:58.768060 2193 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Jul 14 21:24:59.465398 systemd[1]: Reload requested from client PID 2475 ('systemctl') (unit session-7.scope)... Jul 14 21:24:59.465414 systemd[1]: Reloading... Jul 14 21:24:59.543124 zram_generator::config[2525]: No configuration found. Jul 14 21:24:59.618398 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 14 21:24:59.703565 systemd[1]: Reloading finished in 237 ms. Jul 14 21:24:59.725539 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jul 14 21:24:59.737088 systemd[1]: kubelet.service: Deactivated successfully. Jul 14 21:24:59.737387 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 14 21:24:59.737436 systemd[1]: kubelet.service: Consumed 1.127s CPU time, 132.6M memory peak. Jul 14 21:24:59.747327 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 14 21:24:59.848588 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 14 21:24:59.861469 (kubelet)[2561]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 14 21:24:59.900366 kubelet[2561]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jul 14 21:24:59.900366 kubelet[2561]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jul 14 21:24:59.900366 kubelet[2561]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 14 21:24:59.900725 kubelet[2561]: I0714 21:24:59.900410 2561 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 14 21:24:59.905432 kubelet[2561]: I0714 21:24:59.905387 2561 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Jul 14 21:24:59.905432 kubelet[2561]: I0714 21:24:59.905417 2561 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 14 21:24:59.905669 kubelet[2561]: I0714 21:24:59.905620 2561 server.go:934] "Client rotation is on, will bootstrap in background" Jul 14 21:24:59.906928 kubelet[2561]: I0714 21:24:59.906899 2561 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jul 14 21:24:59.908978 kubelet[2561]: I0714 21:24:59.908948 2561 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 14 21:24:59.911774 kubelet[2561]: E0714 21:24:59.911739 2561 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jul 14 21:24:59.911774 kubelet[2561]: I0714 21:24:59.911767 2561 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." 
Jul 14 21:24:59.914907 kubelet[2561]: I0714 21:24:59.914854 2561 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jul 14 21:24:59.914995 kubelet[2561]: I0714 21:24:59.914984 2561 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Jul 14 21:24:59.915150 kubelet[2561]: I0714 21:24:59.915121 2561 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 14 21:24:59.915304 kubelet[2561]: I0714 21:24:59.915152 2561 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerRe
servedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jul 14 21:24:59.915385 kubelet[2561]: I0714 21:24:59.915314 2561 topology_manager.go:138] "Creating topology manager with none policy" Jul 14 21:24:59.915385 kubelet[2561]: I0714 21:24:59.915323 2561 container_manager_linux.go:300] "Creating device plugin manager" Jul 14 21:24:59.915385 kubelet[2561]: I0714 21:24:59.915360 2561 state_mem.go:36] "Initialized new in-memory state store" Jul 14 21:24:59.915482 kubelet[2561]: I0714 21:24:59.915448 2561 kubelet.go:408] "Attempting to sync node with API server" Jul 14 21:24:59.915482 kubelet[2561]: I0714 21:24:59.915460 2561 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 14 21:24:59.915482 kubelet[2561]: I0714 21:24:59.915477 2561 kubelet.go:314] "Adding apiserver pod source" Jul 14 21:24:59.916159 kubelet[2561]: I0714 21:24:59.915489 2561 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 14 21:24:59.916364 kubelet[2561]: I0714 21:24:59.916339 2561 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Jul 14 21:24:59.916896 kubelet[2561]: I0714 21:24:59.916868 2561 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jul 14 21:24:59.917278 kubelet[2561]: I0714 21:24:59.917257 2561 server.go:1274] "Started kubelet" Jul 14 21:24:59.918765 kubelet[2561]: I0714 21:24:59.918731 2561 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jul 14 21:24:59.920017 kubelet[2561]: I0714 21:24:59.919982 2561 server.go:449] "Adding debug handlers to kubelet server" Jul 14 21:24:59.920111 kubelet[2561]: I0714 21:24:59.918720 2561 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 14 21:24:59.920436 kubelet[2561]: I0714 
21:24:59.920408 2561 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 14 21:24:59.921002 kubelet[2561]: E0714 21:24:59.920984 2561 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 14 21:24:59.921352 kubelet[2561]: I0714 21:24:59.921325 2561 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 14 21:24:59.921655 kubelet[2561]: I0714 21:24:59.921625 2561 volume_manager.go:289] "Starting Kubelet Volume Manager" Jul 14 21:24:59.921729 kubelet[2561]: I0714 21:24:59.921712 2561 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Jul 14 21:24:59.921833 kubelet[2561]: I0714 21:24:59.921817 2561 reconciler.go:26] "Reconciler: start to sync state" Jul 14 21:24:59.922419 kubelet[2561]: I0714 21:24:59.921907 2561 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 14 21:24:59.922419 kubelet[2561]: E0714 21:24:59.922029 2561 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 14 21:24:59.931103 kubelet[2561]: I0714 21:24:59.928776 2561 factory.go:221] Registration of the containerd container factory successfully Jul 14 21:24:59.931103 kubelet[2561]: I0714 21:24:59.928797 2561 factory.go:221] Registration of the systemd container factory successfully Jul 14 21:24:59.931103 kubelet[2561]: I0714 21:24:59.928885 2561 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 14 21:24:59.941087 kubelet[2561]: I0714 21:24:59.941033 2561 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv4" Jul 14 21:24:59.948003 kubelet[2561]: I0714 21:24:59.947960 2561 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jul 14 21:24:59.948003 kubelet[2561]: I0714 21:24:59.947990 2561 status_manager.go:217] "Starting to sync pod status with apiserver" Jul 14 21:24:59.948446 kubelet[2561]: I0714 21:24:59.948026 2561 kubelet.go:2321] "Starting kubelet main sync loop" Jul 14 21:24:59.948446 kubelet[2561]: E0714 21:24:59.948076 2561 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 14 21:24:59.971277 kubelet[2561]: I0714 21:24:59.971249 2561 cpu_manager.go:214] "Starting CPU manager" policy="none" Jul 14 21:24:59.971277 kubelet[2561]: I0714 21:24:59.971270 2561 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jul 14 21:24:59.971429 kubelet[2561]: I0714 21:24:59.971292 2561 state_mem.go:36] "Initialized new in-memory state store" Jul 14 21:24:59.971450 kubelet[2561]: I0714 21:24:59.971440 2561 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jul 14 21:24:59.971476 kubelet[2561]: I0714 21:24:59.971452 2561 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jul 14 21:24:59.971476 kubelet[2561]: I0714 21:24:59.971470 2561 policy_none.go:49] "None policy: Start" Jul 14 21:24:59.972399 kubelet[2561]: I0714 21:24:59.972069 2561 memory_manager.go:170] "Starting memorymanager" policy="None" Jul 14 21:24:59.972399 kubelet[2561]: I0714 21:24:59.972106 2561 state_mem.go:35] "Initializing new in-memory state store" Jul 14 21:24:59.972399 kubelet[2561]: I0714 21:24:59.972250 2561 state_mem.go:75] "Updated machine memory state" Jul 14 21:24:59.975900 kubelet[2561]: I0714 21:24:59.975810 2561 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 14 21:24:59.976139 kubelet[2561]: I0714 21:24:59.975970 2561 eviction_manager.go:189] 
"Eviction manager: starting control loop" Jul 14 21:24:59.976139 kubelet[2561]: I0714 21:24:59.975988 2561 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 14 21:24:59.976233 kubelet[2561]: I0714 21:24:59.976206 2561 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 14 21:25:00.078202 kubelet[2561]: I0714 21:25:00.078161 2561 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jul 14 21:25:00.109973 kubelet[2561]: E0714 21:25:00.109929 2561 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Jul 14 21:25:00.113137 kubelet[2561]: I0714 21:25:00.113020 2561 kubelet_node_status.go:111] "Node was previously registered" node="localhost" Jul 14 21:25:00.113137 kubelet[2561]: I0714 21:25:00.113117 2561 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Jul 14 21:25:00.123125 kubelet[2561]: I0714 21:25:00.122926 2561 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 14 21:25:00.123125 kubelet[2561]: I0714 21:25:00.122952 2561 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 14 21:25:00.123125 kubelet[2561]: I0714 21:25:00.122972 2561 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: 
\"kubernetes.io/host-path/b35b56493416c25588cb530e37ffc065-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"b35b56493416c25588cb530e37ffc065\") " pod="kube-system/kube-scheduler-localhost" Jul 14 21:25:00.123125 kubelet[2561]: I0714 21:25:00.122996 2561 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a48f26facbd4e151d32ff1fe926753f4-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"a48f26facbd4e151d32ff1fe926753f4\") " pod="kube-system/kube-apiserver-localhost" Jul 14 21:25:00.123125 kubelet[2561]: I0714 21:25:00.123015 2561 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a48f26facbd4e151d32ff1fe926753f4-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"a48f26facbd4e151d32ff1fe926753f4\") " pod="kube-system/kube-apiserver-localhost" Jul 14 21:25:00.123322 kubelet[2561]: I0714 21:25:00.123028 2561 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 14 21:25:00.123322 kubelet[2561]: I0714 21:25:00.123043 2561 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 14 21:25:00.123322 kubelet[2561]: I0714 21:25:00.123058 2561 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: 
\"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 14 21:25:00.123322 kubelet[2561]: I0714 21:25:00.123071 2561 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a48f26facbd4e151d32ff1fe926753f4-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"a48f26facbd4e151d32ff1fe926753f4\") " pod="kube-system/kube-apiserver-localhost" Jul 14 21:25:00.465040 sudo[2596]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Jul 14 21:25:00.465328 sudo[2596]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Jul 14 21:25:00.887716 sudo[2596]: pam_unix(sudo:session): session closed for user root Jul 14 21:25:00.916419 kubelet[2561]: I0714 21:25:00.916368 2561 apiserver.go:52] "Watching apiserver" Jul 14 21:25:00.922596 kubelet[2561]: I0714 21:25:00.922572 2561 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Jul 14 21:25:00.992639 kubelet[2561]: I0714 21:25:00.991722 2561 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=0.991703236 podStartE2EDuration="991.703236ms" podCreationTimestamp="2025-07-14 21:25:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-14 21:25:00.983438346 +0000 UTC m=+1.119016317" watchObservedRunningTime="2025-07-14 21:25:00.991703236 +0000 UTC m=+1.127281207" Jul 14 21:25:00.992639 kubelet[2561]: I0714 21:25:00.991861 2561 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=0.991855861 
podStartE2EDuration="991.855861ms" podCreationTimestamp="2025-07-14 21:25:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-14 21:25:00.990950705 +0000 UTC m=+1.126528636" watchObservedRunningTime="2025-07-14 21:25:00.991855861 +0000 UTC m=+1.127433792" Jul 14 21:25:01.001614 kubelet[2561]: I0714 21:25:01.001491 2561 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=3.001474974 podStartE2EDuration="3.001474974s" podCreationTimestamp="2025-07-14 21:24:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-14 21:25:01.000874744 +0000 UTC m=+1.136452715" watchObservedRunningTime="2025-07-14 21:25:01.001474974 +0000 UTC m=+1.137052945" Jul 14 21:25:02.471664 sudo[1641]: pam_unix(sudo:session): session closed for user root Jul 14 21:25:02.473071 sshd[1640]: Connection closed by 10.0.0.1 port 42394 Jul 14 21:25:02.473540 sshd-session[1636]: pam_unix(sshd:session): session closed for user core Jul 14 21:25:02.476693 systemd-logind[1436]: Session 7 logged out. Waiting for processes to exit. Jul 14 21:25:02.477431 systemd[1]: sshd@6-10.0.0.115:22-10.0.0.1:42394.service: Deactivated successfully. Jul 14 21:25:02.479287 systemd[1]: session-7.scope: Deactivated successfully. Jul 14 21:25:02.479455 systemd[1]: session-7.scope: Consumed 7.694s CPU time, 261.4M memory peak. Jul 14 21:25:02.480566 systemd-logind[1436]: Removed session 7. Jul 14 21:25:07.065412 kubelet[2561]: I0714 21:25:07.065372 2561 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jul 14 21:25:07.065802 containerd[1460]: time="2025-07-14T21:25:07.065734358Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Jul 14 21:25:07.065989 kubelet[2561]: I0714 21:25:07.065950 2561 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jul 14 21:25:08.076085 kubelet[2561]: I0714 21:25:08.075906 2561 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/1b45ef3c-20a2-4b5e-a3e8-6e09d146d810-host-proc-sys-kernel\") pod \"cilium-q8z6h\" (UID: \"1b45ef3c-20a2-4b5e-a3e8-6e09d146d810\") " pod="kube-system/cilium-q8z6h" Jul 14 21:25:08.076085 kubelet[2561]: I0714 21:25:08.075955 2561 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/3a44ad7f-3497-4b56-b2d8-1f52ea5ca515-kube-proxy\") pod \"kube-proxy-2dj84\" (UID: \"3a44ad7f-3497-4b56-b2d8-1f52ea5ca515\") " pod="kube-system/kube-proxy-2dj84" Jul 14 21:25:08.076085 kubelet[2561]: I0714 21:25:08.075977 2561 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/1b45ef3c-20a2-4b5e-a3e8-6e09d146d810-bpf-maps\") pod \"cilium-q8z6h\" (UID: \"1b45ef3c-20a2-4b5e-a3e8-6e09d146d810\") " pod="kube-system/cilium-q8z6h" Jul 14 21:25:08.076085 kubelet[2561]: I0714 21:25:08.075993 2561 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-74jff\" (UniqueName: \"kubernetes.io/projected/1b45ef3c-20a2-4b5e-a3e8-6e09d146d810-kube-api-access-74jff\") pod \"cilium-q8z6h\" (UID: \"1b45ef3c-20a2-4b5e-a3e8-6e09d146d810\") " pod="kube-system/cilium-q8z6h" Jul 14 21:25:08.076085 kubelet[2561]: I0714 21:25:08.076017 2561 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3a44ad7f-3497-4b56-b2d8-1f52ea5ca515-lib-modules\") pod \"kube-proxy-2dj84\" (UID: 
\"3a44ad7f-3497-4b56-b2d8-1f52ea5ca515\") " pod="kube-system/kube-proxy-2dj84" Jul 14 21:25:08.076085 kubelet[2561]: I0714 21:25:08.076034 2561 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/1b45ef3c-20a2-4b5e-a3e8-6e09d146d810-hubble-tls\") pod \"cilium-q8z6h\" (UID: \"1b45ef3c-20a2-4b5e-a3e8-6e09d146d810\") " pod="kube-system/cilium-q8z6h" Jul 14 21:25:08.076526 kubelet[2561]: I0714 21:25:08.076054 2561 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/1b45ef3c-20a2-4b5e-a3e8-6e09d146d810-etc-cni-netd\") pod \"cilium-q8z6h\" (UID: \"1b45ef3c-20a2-4b5e-a3e8-6e09d146d810\") " pod="kube-system/cilium-q8z6h" Jul 14 21:25:08.076526 kubelet[2561]: I0714 21:25:08.076070 2561 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/1b45ef3c-20a2-4b5e-a3e8-6e09d146d810-hostproc\") pod \"cilium-q8z6h\" (UID: \"1b45ef3c-20a2-4b5e-a3e8-6e09d146d810\") " pod="kube-system/cilium-q8z6h" Jul 14 21:25:08.076526 kubelet[2561]: I0714 21:25:08.076084 2561 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1b45ef3c-20a2-4b5e-a3e8-6e09d146d810-xtables-lock\") pod \"cilium-q8z6h\" (UID: \"1b45ef3c-20a2-4b5e-a3e8-6e09d146d810\") " pod="kube-system/cilium-q8z6h" Jul 14 21:25:08.076526 kubelet[2561]: I0714 21:25:08.076111 2561 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/1b45ef3c-20a2-4b5e-a3e8-6e09d146d810-cilium-config-path\") pod \"cilium-q8z6h\" (UID: \"1b45ef3c-20a2-4b5e-a3e8-6e09d146d810\") " pod="kube-system/cilium-q8z6h" Jul 14 21:25:08.076526 kubelet[2561]: I0714 21:25:08.076127 2561 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/1b45ef3c-20a2-4b5e-a3e8-6e09d146d810-cilium-run\") pod \"cilium-q8z6h\" (UID: \"1b45ef3c-20a2-4b5e-a3e8-6e09d146d810\") " pod="kube-system/cilium-q8z6h" Jul 14 21:25:08.076526 kubelet[2561]: I0714 21:25:08.076144 2561 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/1b45ef3c-20a2-4b5e-a3e8-6e09d146d810-host-proc-sys-net\") pod \"cilium-q8z6h\" (UID: \"1b45ef3c-20a2-4b5e-a3e8-6e09d146d810\") " pod="kube-system/cilium-q8z6h" Jul 14 21:25:08.076663 kubelet[2561]: I0714 21:25:08.076180 2561 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1b45ef3c-20a2-4b5e-a3e8-6e09d146d810-lib-modules\") pod \"cilium-q8z6h\" (UID: \"1b45ef3c-20a2-4b5e-a3e8-6e09d146d810\") " pod="kube-system/cilium-q8z6h" Jul 14 21:25:08.076663 kubelet[2561]: I0714 21:25:08.076221 2561 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/1b45ef3c-20a2-4b5e-a3e8-6e09d146d810-clustermesh-secrets\") pod \"cilium-q8z6h\" (UID: \"1b45ef3c-20a2-4b5e-a3e8-6e09d146d810\") " pod="kube-system/cilium-q8z6h" Jul 14 21:25:08.076663 kubelet[2561]: I0714 21:25:08.076241 2561 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/1b45ef3c-20a2-4b5e-a3e8-6e09d146d810-cilium-cgroup\") pod \"cilium-q8z6h\" (UID: \"1b45ef3c-20a2-4b5e-a3e8-6e09d146d810\") " pod="kube-system/cilium-q8z6h" Jul 14 21:25:08.076663 kubelet[2561]: I0714 21:25:08.076262 2561 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: 
\"kubernetes.io/host-path/1b45ef3c-20a2-4b5e-a3e8-6e09d146d810-cni-path\") pod \"cilium-q8z6h\" (UID: \"1b45ef3c-20a2-4b5e-a3e8-6e09d146d810\") " pod="kube-system/cilium-q8z6h" Jul 14 21:25:08.076663 kubelet[2561]: I0714 21:25:08.076278 2561 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3a44ad7f-3497-4b56-b2d8-1f52ea5ca515-xtables-lock\") pod \"kube-proxy-2dj84\" (UID: \"3a44ad7f-3497-4b56-b2d8-1f52ea5ca515\") " pod="kube-system/kube-proxy-2dj84" Jul 14 21:25:08.076663 kubelet[2561]: I0714 21:25:08.076303 2561 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6tbqv\" (UniqueName: \"kubernetes.io/projected/3a44ad7f-3497-4b56-b2d8-1f52ea5ca515-kube-api-access-6tbqv\") pod \"kube-proxy-2dj84\" (UID: \"3a44ad7f-3497-4b56-b2d8-1f52ea5ca515\") " pod="kube-system/kube-proxy-2dj84" Jul 14 21:25:08.083523 systemd[1]: Created slice kubepods-besteffort-pod3a44ad7f_3497_4b56_b2d8_1f52ea5ca515.slice - libcontainer container kubepods-besteffort-pod3a44ad7f_3497_4b56_b2d8_1f52ea5ca515.slice. Jul 14 21:25:08.093628 systemd[1]: Created slice kubepods-burstable-pod1b45ef3c_20a2_4b5e_a3e8_6e09d146d810.slice - libcontainer container kubepods-burstable-pod1b45ef3c_20a2_4b5e_a3e8_6e09d146d810.slice. Jul 14 21:25:08.127837 systemd[1]: Created slice kubepods-besteffort-pod3821cadb_0744_47a5_9884_f28b496ce748.slice - libcontainer container kubepods-besteffort-pod3821cadb_0744_47a5_9884_f28b496ce748.slice. 
Jul 14 21:25:08.176640 kubelet[2561]: I0714 21:25:08.176580 2561 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2dlfg\" (UniqueName: \"kubernetes.io/projected/3821cadb-0744-47a5-9884-f28b496ce748-kube-api-access-2dlfg\") pod \"cilium-operator-5d85765b45-qcn7s\" (UID: \"3821cadb-0744-47a5-9884-f28b496ce748\") " pod="kube-system/cilium-operator-5d85765b45-qcn7s" Jul 14 21:25:08.176793 kubelet[2561]: I0714 21:25:08.176732 2561 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/3821cadb-0744-47a5-9884-f28b496ce748-cilium-config-path\") pod \"cilium-operator-5d85765b45-qcn7s\" (UID: \"3821cadb-0744-47a5-9884-f28b496ce748\") " pod="kube-system/cilium-operator-5d85765b45-qcn7s" Jul 14 21:25:08.393927 containerd[1460]: time="2025-07-14T21:25:08.393342446Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-2dj84,Uid:3a44ad7f-3497-4b56-b2d8-1f52ea5ca515,Namespace:kube-system,Attempt:0,}" Jul 14 21:25:08.397623 containerd[1460]: time="2025-07-14T21:25:08.397570637Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-q8z6h,Uid:1b45ef3c-20a2-4b5e-a3e8-6e09d146d810,Namespace:kube-system,Attempt:0,}" Jul 14 21:25:08.420440 containerd[1460]: time="2025-07-14T21:25:08.420318132Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 14 21:25:08.420440 containerd[1460]: time="2025-07-14T21:25:08.420382251Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 14 21:25:08.420440 containerd[1460]: time="2025-07-14T21:25:08.420397971Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 14 21:25:08.423296 containerd[1460]: time="2025-07-14T21:25:08.423155699Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 14 21:25:08.427430 containerd[1460]: time="2025-07-14T21:25:08.427301211Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 14 21:25:08.427521 containerd[1460]: time="2025-07-14T21:25:08.427447769Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 14 21:25:08.427521 containerd[1460]: time="2025-07-14T21:25:08.427476209Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 14 21:25:08.429321 containerd[1460]: time="2025-07-14T21:25:08.427655967Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 14 21:25:08.434217 containerd[1460]: time="2025-07-14T21:25:08.434168171Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-qcn7s,Uid:3821cadb-0744-47a5-9884-f28b496ce748,Namespace:kube-system,Attempt:0,}" Jul 14 21:25:08.440380 systemd[1]: Started cri-containerd-87fae220817707305205b0e7d37719349f34468d923e9ead9eef45387f9cdf54.scope - libcontainer container 87fae220817707305205b0e7d37719349f34468d923e9ead9eef45387f9cdf54. Jul 14 21:25:08.444513 systemd[1]: Started cri-containerd-6617df5cb8e4d10ed01111a58d0188d3fddef9e4627e9b0f60244b924c83b1d2.scope - libcontainer container 6617df5cb8e4d10ed01111a58d0188d3fddef9e4627e9b0f60244b924c83b1d2. Jul 14 21:25:08.462004 containerd[1460]: time="2025-07-14T21:25:08.461679412Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 14 21:25:08.462004 containerd[1460]: time="2025-07-14T21:25:08.461739811Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 14 21:25:08.462004 containerd[1460]: time="2025-07-14T21:25:08.461753931Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 14 21:25:08.462004 containerd[1460]: time="2025-07-14T21:25:08.461829490Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 14 21:25:08.475821 containerd[1460]: time="2025-07-14T21:25:08.475774208Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-2dj84,Uid:3a44ad7f-3497-4b56-b2d8-1f52ea5ca515,Namespace:kube-system,Attempt:0,} returns sandbox id \"87fae220817707305205b0e7d37719349f34468d923e9ead9eef45387f9cdf54\"" Jul 14 21:25:08.476372 containerd[1460]: time="2025-07-14T21:25:08.476333121Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-q8z6h,Uid:1b45ef3c-20a2-4b5e-a3e8-6e09d146d810,Namespace:kube-system,Attempt:0,} returns sandbox id \"6617df5cb8e4d10ed01111a58d0188d3fddef9e4627e9b0f60244b924c83b1d2\"" Jul 14 21:25:08.479857 containerd[1460]: time="2025-07-14T21:25:08.479509164Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Jul 14 21:25:08.482760 containerd[1460]: time="2025-07-14T21:25:08.482666648Z" level=info msg="CreateContainer within sandbox \"87fae220817707305205b0e7d37719349f34468d923e9ead9eef45387f9cdf54\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jul 14 21:25:08.489312 systemd[1]: Started cri-containerd-5d06e566d7826e11cf24ecd3d8fcba6aa1c5a3d43a685d314256a09a4817befe.scope - libcontainer container 
5d06e566d7826e11cf24ecd3d8fcba6aa1c5a3d43a685d314256a09a4817befe. Jul 14 21:25:08.497330 containerd[1460]: time="2025-07-14T21:25:08.497279678Z" level=info msg="CreateContainer within sandbox \"87fae220817707305205b0e7d37719349f34468d923e9ead9eef45387f9cdf54\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"820a6cd6ea8115374a8bfa3d32290fdd0384b9164541aa1774345e3f47309c6f\"" Jul 14 21:25:08.498018 containerd[1460]: time="2025-07-14T21:25:08.497950750Z" level=info msg="StartContainer for \"820a6cd6ea8115374a8bfa3d32290fdd0384b9164541aa1774345e3f47309c6f\"" Jul 14 21:25:08.523797 containerd[1460]: time="2025-07-14T21:25:08.523757210Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-qcn7s,Uid:3821cadb-0744-47a5-9884-f28b496ce748,Namespace:kube-system,Attempt:0,} returns sandbox id \"5d06e566d7826e11cf24ecd3d8fcba6aa1c5a3d43a685d314256a09a4817befe\"" Jul 14 21:25:08.527281 systemd[1]: Started cri-containerd-820a6cd6ea8115374a8bfa3d32290fdd0384b9164541aa1774345e3f47309c6f.scope - libcontainer container 820a6cd6ea8115374a8bfa3d32290fdd0384b9164541aa1774345e3f47309c6f. 
Jul 14 21:25:08.552572 containerd[1460]: time="2025-07-14T21:25:08.552517756Z" level=info msg="StartContainer for \"820a6cd6ea8115374a8bfa3d32290fdd0384b9164541aa1774345e3f47309c6f\" returns successfully" Jul 14 21:25:08.987122 kubelet[2561]: I0714 21:25:08.987034 2561 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-2dj84" podStartSLOduration=0.987015547 podStartE2EDuration="987.015547ms" podCreationTimestamp="2025-07-14 21:25:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-14 21:25:08.986551593 +0000 UTC m=+9.122129564" watchObservedRunningTime="2025-07-14 21:25:08.987015547 +0000 UTC m=+9.122593518" Jul 14 21:25:14.137071 update_engine[1439]: I20250714 21:25:14.136525 1439 update_attempter.cc:509] Updating boot flags... Jul 14 21:25:14.199212 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (2937) Jul 14 21:25:14.235428 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (2940) Jul 14 21:25:19.334064 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount185945277.mount: Deactivated successfully. 
Jul 14 21:25:20.705316 containerd[1460]: time="2025-07-14T21:25:20.705257172Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 21:25:20.705930 containerd[1460]: time="2025-07-14T21:25:20.705879408Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157646710" Jul 14 21:25:20.707186 containerd[1460]: time="2025-07-14T21:25:20.707142160Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 21:25:20.711110 containerd[1460]: time="2025-07-14T21:25:20.709482105Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 12.229904101s" Jul 14 21:25:20.711110 containerd[1460]: time="2025-07-14T21:25:20.709524025Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" Jul 14 21:25:20.715215 containerd[1460]: time="2025-07-14T21:25:20.715187949Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Jul 14 21:25:20.728063 containerd[1460]: time="2025-07-14T21:25:20.728016627Z" level=info msg="CreateContainer within sandbox \"6617df5cb8e4d10ed01111a58d0188d3fddef9e4627e9b0f60244b924c83b1d2\" for container 
&ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jul 14 21:25:20.764901 containerd[1460]: time="2025-07-14T21:25:20.764846512Z" level=info msg="CreateContainer within sandbox \"6617df5cb8e4d10ed01111a58d0188d3fddef9e4627e9b0f60244b924c83b1d2\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"86436a74d465b692c02fa50a6780b60eceee77b70d9acae6cf348c4e7e312fab\"" Jul 14 21:25:20.765594 containerd[1460]: time="2025-07-14T21:25:20.765555187Z" level=info msg="StartContainer for \"86436a74d465b692c02fa50a6780b60eceee77b70d9acae6cf348c4e7e312fab\"" Jul 14 21:25:20.795347 systemd[1]: Started cri-containerd-86436a74d465b692c02fa50a6780b60eceee77b70d9acae6cf348c4e7e312fab.scope - libcontainer container 86436a74d465b692c02fa50a6780b60eceee77b70d9acae6cf348c4e7e312fab. Jul 14 21:25:20.863383 containerd[1460]: time="2025-07-14T21:25:20.863263524Z" level=info msg="StartContainer for \"86436a74d465b692c02fa50a6780b60eceee77b70d9acae6cf348c4e7e312fab\" returns successfully" Jul 14 21:25:20.866205 systemd[1]: cri-containerd-86436a74d465b692c02fa50a6780b60eceee77b70d9acae6cf348c4e7e312fab.scope: Deactivated successfully. 
Jul 14 21:25:20.925401 containerd[1460]: time="2025-07-14T21:25:20.925327927Z" level=info msg="shim disconnected" id=86436a74d465b692c02fa50a6780b60eceee77b70d9acae6cf348c4e7e312fab namespace=k8s.io Jul 14 21:25:20.925401 containerd[1460]: time="2025-07-14T21:25:20.925380967Z" level=warning msg="cleaning up after shim disconnected" id=86436a74d465b692c02fa50a6780b60eceee77b70d9acae6cf348c4e7e312fab namespace=k8s.io Jul 14 21:25:20.925401 containerd[1460]: time="2025-07-14T21:25:20.925400007Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 14 21:25:21.013439 containerd[1460]: time="2025-07-14T21:25:21.013324529Z" level=info msg="CreateContainer within sandbox \"6617df5cb8e4d10ed01111a58d0188d3fddef9e4627e9b0f60244b924c83b1d2\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jul 14 21:25:21.026599 containerd[1460]: time="2025-07-14T21:25:21.026519408Z" level=info msg="CreateContainer within sandbox \"6617df5cb8e4d10ed01111a58d0188d3fddef9e4627e9b0f60244b924c83b1d2\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"02c74240babedf33499f9471b443f84e56134d72f39431a7e9ca088341be4b79\"" Jul 14 21:25:21.028011 containerd[1460]: time="2025-07-14T21:25:21.026976406Z" level=info msg="StartContainer for \"02c74240babedf33499f9471b443f84e56134d72f39431a7e9ca088341be4b79\"" Jul 14 21:25:21.052245 systemd[1]: Started cri-containerd-02c74240babedf33499f9471b443f84e56134d72f39431a7e9ca088341be4b79.scope - libcontainer container 02c74240babedf33499f9471b443f84e56134d72f39431a7e9ca088341be4b79. Jul 14 21:25:21.074143 containerd[1460]: time="2025-07-14T21:25:21.074086478Z" level=info msg="StartContainer for \"02c74240babedf33499f9471b443f84e56134d72f39431a7e9ca088341be4b79\" returns successfully" Jul 14 21:25:21.088753 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jul 14 21:25:21.089038 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. 
Jul 14 21:25:21.089647 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Jul 14 21:25:21.095523 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jul 14 21:25:21.095697 systemd[1]: cri-containerd-02c74240babedf33499f9471b443f84e56134d72f39431a7e9ca088341be4b79.scope: Deactivated successfully. Jul 14 21:25:21.109713 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jul 14 21:25:21.122611 containerd[1460]: time="2025-07-14T21:25:21.122545862Z" level=info msg="shim disconnected" id=02c74240babedf33499f9471b443f84e56134d72f39431a7e9ca088341be4b79 namespace=k8s.io Jul 14 21:25:21.122611 containerd[1460]: time="2025-07-14T21:25:21.122599382Z" level=warning msg="cleaning up after shim disconnected" id=02c74240babedf33499f9471b443f84e56134d72f39431a7e9ca088341be4b79 namespace=k8s.io Jul 14 21:25:21.122611 containerd[1460]: time="2025-07-14T21:25:21.122608822Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 14 21:25:21.760844 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-86436a74d465b692c02fa50a6780b60eceee77b70d9acae6cf348c4e7e312fab-rootfs.mount: Deactivated successfully. 
Jul 14 21:25:21.865567 containerd[1460]: time="2025-07-14T21:25:21.865510487Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 21:25:21.866193 containerd[1460]: time="2025-07-14T21:25:21.866139523Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17135306" Jul 14 21:25:21.866851 containerd[1460]: time="2025-07-14T21:25:21.866811439Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 21:25:21.868356 containerd[1460]: time="2025-07-14T21:25:21.868290950Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 1.152975722s" Jul 14 21:25:21.868356 containerd[1460]: time="2025-07-14T21:25:21.868331190Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" Jul 14 21:25:21.871530 containerd[1460]: time="2025-07-14T21:25:21.871369731Z" level=info msg="CreateContainer within sandbox \"5d06e566d7826e11cf24ecd3d8fcba6aa1c5a3d43a685d314256a09a4817befe\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Jul 14 21:25:21.883233 containerd[1460]: time="2025-07-14T21:25:21.883186499Z" level=info msg="CreateContainer within sandbox 
\"5d06e566d7826e11cf24ecd3d8fcba6aa1c5a3d43a685d314256a09a4817befe\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"9eccac98485fc3e560159eb0df0d0574874f3ece6d72d7ca24c8faff76273322\"" Jul 14 21:25:21.883908 containerd[1460]: time="2025-07-14T21:25:21.883735136Z" level=info msg="StartContainer for \"9eccac98485fc3e560159eb0df0d0574874f3ece6d72d7ca24c8faff76273322\"" Jul 14 21:25:21.910312 systemd[1]: Started cri-containerd-9eccac98485fc3e560159eb0df0d0574874f3ece6d72d7ca24c8faff76273322.scope - libcontainer container 9eccac98485fc3e560159eb0df0d0574874f3ece6d72d7ca24c8faff76273322. Jul 14 21:25:21.933817 containerd[1460]: time="2025-07-14T21:25:21.932691397Z" level=info msg="StartContainer for \"9eccac98485fc3e560159eb0df0d0574874f3ece6d72d7ca24c8faff76273322\" returns successfully" Jul 14 21:25:22.013869 containerd[1460]: time="2025-07-14T21:25:22.013352668Z" level=info msg="CreateContainer within sandbox \"6617df5cb8e4d10ed01111a58d0188d3fddef9e4627e9b0f60244b924c83b1d2\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jul 14 21:25:22.017067 kubelet[2561]: I0714 21:25:22.016974 2561 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-5d85765b45-qcn7s" podStartSLOduration=0.673352053 podStartE2EDuration="14.016955327s" podCreationTimestamp="2025-07-14 21:25:08 +0000 UTC" firstStartedPulling="2025-07-14 21:25:08.525441071 +0000 UTC m=+8.661019042" lastFinishedPulling="2025-07-14 21:25:21.869044385 +0000 UTC m=+22.004622316" observedRunningTime="2025-07-14 21:25:22.016752328 +0000 UTC m=+22.152330299" watchObservedRunningTime="2025-07-14 21:25:22.016955327 +0000 UTC m=+22.152533298" Jul 14 21:25:22.045444 containerd[1460]: time="2025-07-14T21:25:22.045372601Z" level=info msg="CreateContainer within sandbox \"6617df5cb8e4d10ed01111a58d0188d3fddef9e4627e9b0f60244b924c83b1d2\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id 
\"9641e1f13aeb97c01263f336db6df7e9a1941e9bf87aa7f5159e57e1eff7b52a\"" Jul 14 21:25:22.049187 containerd[1460]: time="2025-07-14T21:25:22.046420074Z" level=info msg="StartContainer for \"9641e1f13aeb97c01263f336db6df7e9a1941e9bf87aa7f5159e57e1eff7b52a\"" Jul 14 21:25:22.088478 systemd[1]: Started cri-containerd-9641e1f13aeb97c01263f336db6df7e9a1941e9bf87aa7f5159e57e1eff7b52a.scope - libcontainer container 9641e1f13aeb97c01263f336db6df7e9a1941e9bf87aa7f5159e57e1eff7b52a. Jul 14 21:25:22.180025 containerd[1460]: time="2025-07-14T21:25:22.178700302Z" level=info msg="StartContainer for \"9641e1f13aeb97c01263f336db6df7e9a1941e9bf87aa7f5159e57e1eff7b52a\" returns successfully" Jul 14 21:25:22.203730 systemd[1]: cri-containerd-9641e1f13aeb97c01263f336db6df7e9a1941e9bf87aa7f5159e57e1eff7b52a.scope: Deactivated successfully. Jul 14 21:25:22.311572 containerd[1460]: time="2025-07-14T21:25:22.311364367Z" level=info msg="shim disconnected" id=9641e1f13aeb97c01263f336db6df7e9a1941e9bf87aa7f5159e57e1eff7b52a namespace=k8s.io Jul 14 21:25:22.311572 containerd[1460]: time="2025-07-14T21:25:22.311503206Z" level=warning msg="cleaning up after shim disconnected" id=9641e1f13aeb97c01263f336db6df7e9a1941e9bf87aa7f5159e57e1eff7b52a namespace=k8s.io Jul 14 21:25:22.311572 containerd[1460]: time="2025-07-14T21:25:22.311514606Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 14 21:25:23.017440 containerd[1460]: time="2025-07-14T21:25:23.017385926Z" level=info msg="CreateContainer within sandbox \"6617df5cb8e4d10ed01111a58d0188d3fddef9e4627e9b0f60244b924c83b1d2\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jul 14 21:25:23.031349 containerd[1460]: time="2025-07-14T21:25:23.031293768Z" level=info msg="CreateContainer within sandbox \"6617df5cb8e4d10ed01111a58d0188d3fddef9e4627e9b0f60244b924c83b1d2\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"77e5cce98dbffecfd6dae4a9074ed867d8fb700aff4cee61ce3de4df955b3513\"" Jul 14 
21:25:23.032084 containerd[1460]: time="2025-07-14T21:25:23.031816725Z" level=info msg="StartContainer for \"77e5cce98dbffecfd6dae4a9074ed867d8fb700aff4cee61ce3de4df955b3513\"" Jul 14 21:25:23.064305 systemd[1]: Started cri-containerd-77e5cce98dbffecfd6dae4a9074ed867d8fb700aff4cee61ce3de4df955b3513.scope - libcontainer container 77e5cce98dbffecfd6dae4a9074ed867d8fb700aff4cee61ce3de4df955b3513. Jul 14 21:25:23.087040 systemd[1]: cri-containerd-77e5cce98dbffecfd6dae4a9074ed867d8fb700aff4cee61ce3de4df955b3513.scope: Deactivated successfully. Jul 14 21:25:23.088598 containerd[1460]: time="2025-07-14T21:25:23.088559688Z" level=info msg="StartContainer for \"77e5cce98dbffecfd6dae4a9074ed867d8fb700aff4cee61ce3de4df955b3513\" returns successfully" Jul 14 21:25:23.110781 containerd[1460]: time="2025-07-14T21:25:23.110716484Z" level=info msg="shim disconnected" id=77e5cce98dbffecfd6dae4a9074ed867d8fb700aff4cee61ce3de4df955b3513 namespace=k8s.io Jul 14 21:25:23.110781 containerd[1460]: time="2025-07-14T21:25:23.110774203Z" level=warning msg="cleaning up after shim disconnected" id=77e5cce98dbffecfd6dae4a9074ed867d8fb700aff4cee61ce3de4df955b3513 namespace=k8s.io Jul 14 21:25:23.110781 containerd[1460]: time="2025-07-14T21:25:23.110783843Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 14 21:25:23.764661 systemd[1]: run-containerd-runc-k8s.io-77e5cce98dbffecfd6dae4a9074ed867d8fb700aff4cee61ce3de4df955b3513-runc.K4rFtT.mount: Deactivated successfully. Jul 14 21:25:23.765037 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-77e5cce98dbffecfd6dae4a9074ed867d8fb700aff4cee61ce3de4df955b3513-rootfs.mount: Deactivated successfully. 
Jul 14 21:25:24.021230 containerd[1460]: time="2025-07-14T21:25:24.021121233Z" level=info msg="CreateContainer within sandbox \"6617df5cb8e4d10ed01111a58d0188d3fddef9e4627e9b0f60244b924c83b1d2\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jul 14 21:25:24.037556 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3094044351.mount: Deactivated successfully. Jul 14 21:25:24.045726 containerd[1460]: time="2025-07-14T21:25:24.045659142Z" level=info msg="CreateContainer within sandbox \"6617df5cb8e4d10ed01111a58d0188d3fddef9e4627e9b0f60244b924c83b1d2\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"d143eb942aeec8c04c566df88492003528c208e100ce7951f721c321a96691e8\"" Jul 14 21:25:24.048314 containerd[1460]: time="2025-07-14T21:25:24.046378058Z" level=info msg="StartContainer for \"d143eb942aeec8c04c566df88492003528c208e100ce7951f721c321a96691e8\"" Jul 14 21:25:24.085914 systemd[1]: Started cri-containerd-d143eb942aeec8c04c566df88492003528c208e100ce7951f721c321a96691e8.scope - libcontainer container d143eb942aeec8c04c566df88492003528c208e100ce7951f721c321a96691e8. Jul 14 21:25:24.116339 containerd[1460]: time="2025-07-14T21:25:24.116284523Z" level=info msg="StartContainer for \"d143eb942aeec8c04c566df88492003528c208e100ce7951f721c321a96691e8\" returns successfully" Jul 14 21:25:24.302956 kubelet[2561]: I0714 21:25:24.302687 2561 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Jul 14 21:25:24.343328 systemd[1]: Created slice kubepods-burstable-pod4b739299_d92a_4e2c_a0a0_017a93f55d9e.slice - libcontainer container kubepods-burstable-pod4b739299_d92a_4e2c_a0a0_017a93f55d9e.slice. Jul 14 21:25:24.357449 systemd[1]: Created slice kubepods-burstable-pod004b5a3c_8da4_4a51_92e1_fe56232fb772.slice - libcontainer container kubepods-burstable-pod004b5a3c_8da4_4a51_92e1_fe56232fb772.slice. 
Jul 14 21:25:24.489897 kubelet[2561]: I0714 21:25:24.489848 2561 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ttvxw\" (UniqueName: \"kubernetes.io/projected/4b739299-d92a-4e2c-a0a0-017a93f55d9e-kube-api-access-ttvxw\") pod \"coredns-7c65d6cfc9-7jrqb\" (UID: \"4b739299-d92a-4e2c-a0a0-017a93f55d9e\") " pod="kube-system/coredns-7c65d6cfc9-7jrqb" Jul 14 21:25:24.489897 kubelet[2561]: I0714 21:25:24.489895 2561 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r2lb7\" (UniqueName: \"kubernetes.io/projected/004b5a3c-8da4-4a51-92e1-fe56232fb772-kube-api-access-r2lb7\") pod \"coredns-7c65d6cfc9-bgw2m\" (UID: \"004b5a3c-8da4-4a51-92e1-fe56232fb772\") " pod="kube-system/coredns-7c65d6cfc9-bgw2m" Jul 14 21:25:24.490063 kubelet[2561]: I0714 21:25:24.489914 2561 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4b739299-d92a-4e2c-a0a0-017a93f55d9e-config-volume\") pod \"coredns-7c65d6cfc9-7jrqb\" (UID: \"4b739299-d92a-4e2c-a0a0-017a93f55d9e\") " pod="kube-system/coredns-7c65d6cfc9-7jrqb" Jul 14 21:25:24.490063 kubelet[2561]: I0714 21:25:24.489932 2561 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/004b5a3c-8da4-4a51-92e1-fe56232fb772-config-volume\") pod \"coredns-7c65d6cfc9-bgw2m\" (UID: \"004b5a3c-8da4-4a51-92e1-fe56232fb772\") " pod="kube-system/coredns-7c65d6cfc9-bgw2m" Jul 14 21:25:24.650640 containerd[1460]: time="2025-07-14T21:25:24.650596016Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-7jrqb,Uid:4b739299-d92a-4e2c-a0a0-017a93f55d9e,Namespace:kube-system,Attempt:0,}" Jul 14 21:25:24.666455 containerd[1460]: time="2025-07-14T21:25:24.666413771Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-7c65d6cfc9-bgw2m,Uid:004b5a3c-8da4-4a51-92e1-fe56232fb772,Namespace:kube-system,Attempt:0,}" Jul 14 21:25:25.040785 kubelet[2561]: I0714 21:25:25.040628 2561 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-q8z6h" podStartSLOduration=4.803632683 podStartE2EDuration="17.040606372s" podCreationTimestamp="2025-07-14 21:25:08 +0000 UTC" firstStartedPulling="2025-07-14 21:25:08.478052341 +0000 UTC m=+8.613630312" lastFinishedPulling="2025-07-14 21:25:20.71502603 +0000 UTC m=+20.850604001" observedRunningTime="2025-07-14 21:25:25.03903954 +0000 UTC m=+25.174617511" watchObservedRunningTime="2025-07-14 21:25:25.040606372 +0000 UTC m=+25.176184343" Jul 14 21:25:26.262116 systemd-networkd[1395]: cilium_host: Link UP Jul 14 21:25:26.265126 systemd-networkd[1395]: cilium_net: Link UP Jul 14 21:25:26.265484 systemd-networkd[1395]: cilium_net: Gained carrier Jul 14 21:25:26.265751 systemd-networkd[1395]: cilium_host: Gained carrier Jul 14 21:25:26.353460 systemd-networkd[1395]: cilium_vxlan: Link UP Jul 14 21:25:26.353467 systemd-networkd[1395]: cilium_vxlan: Gained carrier Jul 14 21:25:26.442239 systemd-networkd[1395]: cilium_host: Gained IPv6LL Jul 14 21:25:26.659138 kernel: NET: Registered PF_ALG protocol family Jul 14 21:25:27.071313 systemd[1]: Started sshd@7-10.0.0.115:22-10.0.0.1:38470.service - OpenSSH per-connection server daemon (10.0.0.1:38470). Jul 14 21:25:27.121279 sshd[3643]: Accepted publickey for core from 10.0.0.1 port 38470 ssh2: RSA SHA256:qod1KuvK/X/ss8Lj9PYiIkvlhHqhxXhx4UqduPTMsgY Jul 14 21:25:27.122641 sshd-session[3643]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 14 21:25:27.127957 systemd-logind[1436]: New session 8 of user core. Jul 14 21:25:27.136915 systemd[1]: Started session-8.scope - Session 8 of User core. 
Jul 14 21:25:27.210216 systemd-networkd[1395]: cilium_net: Gained IPv6LL Jul 14 21:25:27.257061 systemd-networkd[1395]: lxc_health: Link UP Jul 14 21:25:27.271329 systemd-networkd[1395]: lxc_health: Gained carrier Jul 14 21:25:27.303520 sshd[3705]: Connection closed by 10.0.0.1 port 38470 Jul 14 21:25:27.306728 sshd-session[3643]: pam_unix(sshd:session): session closed for user core Jul 14 21:25:27.311560 systemd[1]: sshd@7-10.0.0.115:22-10.0.0.1:38470.service: Deactivated successfully. Jul 14 21:25:27.314001 systemd[1]: session-8.scope: Deactivated successfully. Jul 14 21:25:27.314971 systemd-logind[1436]: Session 8 logged out. Waiting for processes to exit. Jul 14 21:25:27.316123 systemd-logind[1436]: Removed session 8. Jul 14 21:25:27.424127 kernel: eth0: renamed from tmpea1a3 Jul 14 21:25:27.441215 kernel: eth0: renamed from tmpd1e06 Jul 14 21:25:27.450002 systemd-networkd[1395]: lxc08d43558501e: Link UP Jul 14 21:25:27.450769 systemd-networkd[1395]: lxc08d43558501e: Gained carrier Jul 14 21:25:27.450891 systemd-networkd[1395]: lxc03551111fb96: Link UP Jul 14 21:25:27.452394 systemd-networkd[1395]: lxc03551111fb96: Gained carrier Jul 14 21:25:27.467172 systemd-networkd[1395]: cilium_vxlan: Gained IPv6LL Jul 14 21:25:28.746279 systemd-networkd[1395]: lxc_health: Gained IPv6LL Jul 14 21:25:29.066223 systemd-networkd[1395]: lxc08d43558501e: Gained IPv6LL Jul 14 21:25:29.066504 systemd-networkd[1395]: lxc03551111fb96: Gained IPv6LL Jul 14 21:25:31.064759 containerd[1460]: time="2025-07-14T21:25:31.064507455Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 14 21:25:31.064759 containerd[1460]: time="2025-07-14T21:25:31.064562975Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 14 21:25:31.064759 containerd[1460]: time="2025-07-14T21:25:31.064575015Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 14 21:25:31.064759 containerd[1460]: time="2025-07-14T21:25:31.064666055Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 14 21:25:31.069137 containerd[1460]: time="2025-07-14T21:25:31.066669046Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 14 21:25:31.069137 containerd[1460]: time="2025-07-14T21:25:31.066714166Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 14 21:25:31.069137 containerd[1460]: time="2025-07-14T21:25:31.066724006Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 14 21:25:31.069137 containerd[1460]: time="2025-07-14T21:25:31.066786766Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 14 21:25:31.090273 systemd[1]: Started cri-containerd-d1e06522bdd4bbb7857144ae3b66c3372fdc429ecdd03263f96595a9355277bb.scope - libcontainer container d1e06522bdd4bbb7857144ae3b66c3372fdc429ecdd03263f96595a9355277bb. Jul 14 21:25:31.092024 systemd[1]: Started cri-containerd-ea1a343408379a094796baa1fcab704793006054bff0b74c7b80bead46a05647.scope - libcontainer container ea1a343408379a094796baa1fcab704793006054bff0b74c7b80bead46a05647. 
Jul 14 21:25:31.110463 systemd-resolved[1324]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 14 21:25:31.112063 systemd-resolved[1324]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 14 21:25:31.130407 containerd[1460]: time="2025-07-14T21:25:31.130332265Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-7jrqb,Uid:4b739299-d92a-4e2c-a0a0-017a93f55d9e,Namespace:kube-system,Attempt:0,} returns sandbox id \"ea1a343408379a094796baa1fcab704793006054bff0b74c7b80bead46a05647\"" Jul 14 21:25:31.133928 containerd[1460]: time="2025-07-14T21:25:31.133893130Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-bgw2m,Uid:004b5a3c-8da4-4a51-92e1-fe56232fb772,Namespace:kube-system,Attempt:0,} returns sandbox id \"d1e06522bdd4bbb7857144ae3b66c3372fdc429ecdd03263f96595a9355277bb\"" Jul 14 21:25:31.134872 containerd[1460]: time="2025-07-14T21:25:31.134804406Z" level=info msg="CreateContainer within sandbox \"ea1a343408379a094796baa1fcab704793006054bff0b74c7b80bead46a05647\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 14 21:25:31.139123 containerd[1460]: time="2025-07-14T21:25:31.138990069Z" level=info msg="CreateContainer within sandbox \"d1e06522bdd4bbb7857144ae3b66c3372fdc429ecdd03263f96595a9355277bb\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 14 21:25:31.150591 containerd[1460]: time="2025-07-14T21:25:31.150480622Z" level=info msg="CreateContainer within sandbox \"ea1a343408379a094796baa1fcab704793006054bff0b74c7b80bead46a05647\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"7cf0940a26c7b19e416730f1716840bf15d63a7a0ef5f3a53a8ca01c7209ac69\"" Jul 14 21:25:31.151440 containerd[1460]: time="2025-07-14T21:25:31.151175219Z" level=info msg="StartContainer for \"7cf0940a26c7b19e416730f1716840bf15d63a7a0ef5f3a53a8ca01c7209ac69\"" Jul 14 21:25:31.155148 
containerd[1460]: time="2025-07-14T21:25:31.155113283Z" level=info msg="CreateContainer within sandbox \"d1e06522bdd4bbb7857144ae3b66c3372fdc429ecdd03263f96595a9355277bb\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"f8a60d597204d0a7bbf0bfc8abb7204c45f79142ef9d0a30de00859d8c2ead0b\"" Jul 14 21:25:31.156033 containerd[1460]: time="2025-07-14T21:25:31.156005719Z" level=info msg="StartContainer for \"f8a60d597204d0a7bbf0bfc8abb7204c45f79142ef9d0a30de00859d8c2ead0b\"" Jul 14 21:25:31.177297 systemd[1]: Started cri-containerd-7cf0940a26c7b19e416730f1716840bf15d63a7a0ef5f3a53a8ca01c7209ac69.scope - libcontainer container 7cf0940a26c7b19e416730f1716840bf15d63a7a0ef5f3a53a8ca01c7209ac69. Jul 14 21:25:31.180071 systemd[1]: Started cri-containerd-f8a60d597204d0a7bbf0bfc8abb7204c45f79142ef9d0a30de00859d8c2ead0b.scope - libcontainer container f8a60d597204d0a7bbf0bfc8abb7204c45f79142ef9d0a30de00859d8c2ead0b. Jul 14 21:25:31.204401 containerd[1460]: time="2025-07-14T21:25:31.204359921Z" level=info msg="StartContainer for \"7cf0940a26c7b19e416730f1716840bf15d63a7a0ef5f3a53a8ca01c7209ac69\" returns successfully" Jul 14 21:25:31.215828 containerd[1460]: time="2025-07-14T21:25:31.215775194Z" level=info msg="StartContainer for \"f8a60d597204d0a7bbf0bfc8abb7204c45f79142ef9d0a30de00859d8c2ead0b\" returns successfully" Jul 14 21:25:32.048608 kubelet[2561]: I0714 21:25:32.048548 2561 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-bgw2m" podStartSLOduration=24.048531739 podStartE2EDuration="24.048531739s" podCreationTimestamp="2025-07-14 21:25:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-14 21:25:32.04821626 +0000 UTC m=+32.183794231" watchObservedRunningTime="2025-07-14 21:25:32.048531739 +0000 UTC m=+32.184109710" Jul 14 21:25:32.061795 kubelet[2561]: I0714 21:25:32.061687 2561 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-7jrqb" podStartSLOduration=24.061671727 podStartE2EDuration="24.061671727s" podCreationTimestamp="2025-07-14 21:25:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-14 21:25:32.061647327 +0000 UTC m=+32.197225298" watchObservedRunningTime="2025-07-14 21:25:32.061671727 +0000 UTC m=+32.197249658" Jul 14 21:25:32.319478 systemd[1]: Started sshd@8-10.0.0.115:22-10.0.0.1:38486.service - OpenSSH per-connection server daemon (10.0.0.1:38486). Jul 14 21:25:32.366776 sshd[3995]: Accepted publickey for core from 10.0.0.1 port 38486 ssh2: RSA SHA256:qod1KuvK/X/ss8Lj9PYiIkvlhHqhxXhx4UqduPTMsgY Jul 14 21:25:32.368149 sshd-session[3995]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 14 21:25:32.372735 systemd-logind[1436]: New session 9 of user core. Jul 14 21:25:32.381291 systemd[1]: Started session-9.scope - Session 9 of User core. Jul 14 21:25:32.498971 sshd[3997]: Connection closed by 10.0.0.1 port 38486 Jul 14 21:25:32.499793 sshd-session[3995]: pam_unix(sshd:session): session closed for user core Jul 14 21:25:32.503121 systemd[1]: sshd@8-10.0.0.115:22-10.0.0.1:38486.service: Deactivated successfully. Jul 14 21:25:32.505878 systemd[1]: session-9.scope: Deactivated successfully. Jul 14 21:25:32.506712 systemd-logind[1436]: Session 9 logged out. Waiting for processes to exit. Jul 14 21:25:32.507894 systemd-logind[1436]: Removed session 9. Jul 14 21:25:37.510602 systemd[1]: Started sshd@9-10.0.0.115:22-10.0.0.1:51832.service - OpenSSH per-connection server daemon (10.0.0.1:51832). 
Jul 14 21:25:37.551069 sshd[4012]: Accepted publickey for core from 10.0.0.1 port 51832 ssh2: RSA SHA256:qod1KuvK/X/ss8Lj9PYiIkvlhHqhxXhx4UqduPTMsgY Jul 14 21:25:37.552250 sshd-session[4012]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 14 21:25:37.555751 systemd-logind[1436]: New session 10 of user core. Jul 14 21:25:37.569304 systemd[1]: Started session-10.scope - Session 10 of User core. Jul 14 21:25:37.682062 sshd[4014]: Connection closed by 10.0.0.1 port 51832 Jul 14 21:25:37.682588 sshd-session[4012]: pam_unix(sshd:session): session closed for user core Jul 14 21:25:37.691461 systemd[1]: sshd@9-10.0.0.115:22-10.0.0.1:51832.service: Deactivated successfully. Jul 14 21:25:37.692980 systemd[1]: session-10.scope: Deactivated successfully. Jul 14 21:25:37.693683 systemd-logind[1436]: Session 10 logged out. Waiting for processes to exit. Jul 14 21:25:37.705441 systemd[1]: Started sshd@10-10.0.0.115:22-10.0.0.1:51844.service - OpenSSH per-connection server daemon (10.0.0.1:51844). Jul 14 21:25:37.707157 systemd-logind[1436]: Removed session 10. Jul 14 21:25:37.744930 sshd[4027]: Accepted publickey for core from 10.0.0.1 port 51844 ssh2: RSA SHA256:qod1KuvK/X/ss8Lj9PYiIkvlhHqhxXhx4UqduPTMsgY Jul 14 21:25:37.745972 sshd-session[4027]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 14 21:25:37.750358 systemd-logind[1436]: New session 11 of user core. Jul 14 21:25:37.762242 systemd[1]: Started session-11.scope - Session 11 of User core. Jul 14 21:25:37.917216 sshd[4030]: Connection closed by 10.0.0.1 port 51844 Jul 14 21:25:37.917587 sshd-session[4027]: pam_unix(sshd:session): session closed for user core Jul 14 21:25:37.937025 systemd[1]: sshd@10-10.0.0.115:22-10.0.0.1:51844.service: Deactivated successfully. Jul 14 21:25:37.940822 systemd[1]: session-11.scope: Deactivated successfully. Jul 14 21:25:37.941721 systemd-logind[1436]: Session 11 logged out. Waiting for processes to exit. 
Jul 14 21:25:37.949380 systemd[1]: Started sshd@11-10.0.0.115:22-10.0.0.1:51846.service - OpenSSH per-connection server daemon (10.0.0.1:51846). Jul 14 21:25:37.949981 systemd-logind[1436]: Removed session 11. Jul 14 21:25:37.993356 sshd[4041]: Accepted publickey for core from 10.0.0.1 port 51846 ssh2: RSA SHA256:qod1KuvK/X/ss8Lj9PYiIkvlhHqhxXhx4UqduPTMsgY Jul 14 21:25:37.994861 sshd-session[4041]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 14 21:25:37.999157 systemd-logind[1436]: New session 12 of user core. Jul 14 21:25:38.007296 systemd[1]: Started session-12.scope - Session 12 of User core. Jul 14 21:25:38.120579 sshd[4044]: Connection closed by 10.0.0.1 port 51846 Jul 14 21:25:38.120924 sshd-session[4041]: pam_unix(sshd:session): session closed for user core Jul 14 21:25:38.124351 systemd[1]: sshd@11-10.0.0.115:22-10.0.0.1:51846.service: Deactivated successfully. Jul 14 21:25:38.126838 systemd[1]: session-12.scope: Deactivated successfully. Jul 14 21:25:38.127513 systemd-logind[1436]: Session 12 logged out. Waiting for processes to exit. Jul 14 21:25:38.128877 systemd-logind[1436]: Removed session 12. Jul 14 21:25:43.135484 systemd[1]: Started sshd@12-10.0.0.115:22-10.0.0.1:33596.service - OpenSSH per-connection server daemon (10.0.0.1:33596). Jul 14 21:25:43.175541 sshd[4061]: Accepted publickey for core from 10.0.0.1 port 33596 ssh2: RSA SHA256:qod1KuvK/X/ss8Lj9PYiIkvlhHqhxXhx4UqduPTMsgY Jul 14 21:25:43.176652 sshd-session[4061]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 14 21:25:43.180160 systemd-logind[1436]: New session 13 of user core. Jul 14 21:25:43.193323 systemd[1]: Started session-13.scope - Session 13 of User core. 
Jul 14 21:25:43.299471 sshd[4063]: Connection closed by 10.0.0.1 port 33596 Jul 14 21:25:43.299947 sshd-session[4061]: pam_unix(sshd:session): session closed for user core Jul 14 21:25:43.303288 systemd[1]: sshd@12-10.0.0.115:22-10.0.0.1:33596.service: Deactivated successfully. Jul 14 21:25:43.305015 systemd[1]: session-13.scope: Deactivated successfully. Jul 14 21:25:43.307593 systemd-logind[1436]: Session 13 logged out. Waiting for processes to exit. Jul 14 21:25:43.308460 systemd-logind[1436]: Removed session 13. Jul 14 21:25:48.311535 systemd[1]: Started sshd@13-10.0.0.115:22-10.0.0.1:33602.service - OpenSSH per-connection server daemon (10.0.0.1:33602). Jul 14 21:25:48.357337 sshd[4076]: Accepted publickey for core from 10.0.0.1 port 33602 ssh2: RSA SHA256:qod1KuvK/X/ss8Lj9PYiIkvlhHqhxXhx4UqduPTMsgY Jul 14 21:25:48.358583 sshd-session[4076]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 14 21:25:48.362944 systemd-logind[1436]: New session 14 of user core. Jul 14 21:25:48.377314 systemd[1]: Started session-14.scope - Session 14 of User core. Jul 14 21:25:48.485611 sshd[4078]: Connection closed by 10.0.0.1 port 33602 Jul 14 21:25:48.486177 sshd-session[4076]: pam_unix(sshd:session): session closed for user core Jul 14 21:25:48.500646 systemd[1]: sshd@13-10.0.0.115:22-10.0.0.1:33602.service: Deactivated successfully. Jul 14 21:25:48.502518 systemd[1]: session-14.scope: Deactivated successfully. Jul 14 21:25:48.503300 systemd-logind[1436]: Session 14 logged out. Waiting for processes to exit. Jul 14 21:25:48.526388 systemd[1]: Started sshd@14-10.0.0.115:22-10.0.0.1:33618.service - OpenSSH per-connection server daemon (10.0.0.1:33618). Jul 14 21:25:48.527737 systemd-logind[1436]: Removed session 14. 
Jul 14 21:25:48.562321 sshd[4091]: Accepted publickey for core from 10.0.0.1 port 33618 ssh2: RSA SHA256:qod1KuvK/X/ss8Lj9PYiIkvlhHqhxXhx4UqduPTMsgY Jul 14 21:25:48.563384 sshd-session[4091]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 14 21:25:48.566883 systemd-logind[1436]: New session 15 of user core. Jul 14 21:25:48.578228 systemd[1]: Started session-15.scope - Session 15 of User core. Jul 14 21:25:48.799370 sshd[4094]: Connection closed by 10.0.0.1 port 33618 Jul 14 21:25:48.800016 sshd-session[4091]: pam_unix(sshd:session): session closed for user core Jul 14 21:25:48.813472 systemd[1]: sshd@14-10.0.0.115:22-10.0.0.1:33618.service: Deactivated successfully. Jul 14 21:25:48.815204 systemd[1]: session-15.scope: Deactivated successfully. Jul 14 21:25:48.815881 systemd-logind[1436]: Session 15 logged out. Waiting for processes to exit. Jul 14 21:25:48.823458 systemd[1]: Started sshd@15-10.0.0.115:22-10.0.0.1:33632.service - OpenSSH per-connection server daemon (10.0.0.1:33632). Jul 14 21:25:48.824457 systemd-logind[1436]: Removed session 15. Jul 14 21:25:48.868664 sshd[4104]: Accepted publickey for core from 10.0.0.1 port 33632 ssh2: RSA SHA256:qod1KuvK/X/ss8Lj9PYiIkvlhHqhxXhx4UqduPTMsgY Jul 14 21:25:48.870004 sshd-session[4104]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 14 21:25:48.873981 systemd-logind[1436]: New session 16 of user core. Jul 14 21:25:48.885283 systemd[1]: Started session-16.scope - Session 16 of User core. Jul 14 21:25:50.049304 sshd[4107]: Connection closed by 10.0.0.1 port 33632 Jul 14 21:25:50.049878 sshd-session[4104]: pam_unix(sshd:session): session closed for user core Jul 14 21:25:50.057927 systemd[1]: sshd@15-10.0.0.115:22-10.0.0.1:33632.service: Deactivated successfully. Jul 14 21:25:50.061593 systemd[1]: session-16.scope: Deactivated successfully. Jul 14 21:25:50.064365 systemd-logind[1436]: Session 16 logged out. Waiting for processes to exit. 
Jul 14 21:25:50.071460 systemd[1]: Started sshd@16-10.0.0.115:22-10.0.0.1:33644.service - OpenSSH per-connection server daemon (10.0.0.1:33644). Jul 14 21:25:50.074435 systemd-logind[1436]: Removed session 16. Jul 14 21:25:50.114281 sshd[4126]: Accepted publickey for core from 10.0.0.1 port 33644 ssh2: RSA SHA256:qod1KuvK/X/ss8Lj9PYiIkvlhHqhxXhx4UqduPTMsgY Jul 14 21:25:50.115509 sshd-session[4126]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 14 21:25:50.119397 systemd-logind[1436]: New session 17 of user core. Jul 14 21:25:50.129258 systemd[1]: Started session-17.scope - Session 17 of User core. Jul 14 21:25:50.340456 sshd[4130]: Connection closed by 10.0.0.1 port 33644 Jul 14 21:25:50.339514 sshd-session[4126]: pam_unix(sshd:session): session closed for user core Jul 14 21:25:50.351207 systemd[1]: sshd@16-10.0.0.115:22-10.0.0.1:33644.service: Deactivated successfully. Jul 14 21:25:50.352991 systemd[1]: session-17.scope: Deactivated successfully. Jul 14 21:25:50.355305 systemd-logind[1436]: Session 17 logged out. Waiting for processes to exit. Jul 14 21:25:50.368418 systemd[1]: Started sshd@17-10.0.0.115:22-10.0.0.1:33646.service - OpenSSH per-connection server daemon (10.0.0.1:33646). Jul 14 21:25:50.369134 systemd-logind[1436]: Removed session 17. Jul 14 21:25:50.406006 sshd[4141]: Accepted publickey for core from 10.0.0.1 port 33646 ssh2: RSA SHA256:qod1KuvK/X/ss8Lj9PYiIkvlhHqhxXhx4UqduPTMsgY Jul 14 21:25:50.407397 sshd-session[4141]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 14 21:25:50.412352 systemd-logind[1436]: New session 18 of user core. Jul 14 21:25:50.424337 systemd[1]: Started session-18.scope - Session 18 of User core. 
Jul 14 21:25:50.540305 sshd[4144]: Connection closed by 10.0.0.1 port 33646 Jul 14 21:25:50.540656 sshd-session[4141]: pam_unix(sshd:session): session closed for user core Jul 14 21:25:50.544320 systemd[1]: sshd@17-10.0.0.115:22-10.0.0.1:33646.service: Deactivated successfully. Jul 14 21:25:50.546985 systemd[1]: session-18.scope: Deactivated successfully. Jul 14 21:25:50.548143 systemd-logind[1436]: Session 18 logged out. Waiting for processes to exit. Jul 14 21:25:50.549543 systemd-logind[1436]: Removed session 18. Jul 14 21:25:55.552619 systemd[1]: Started sshd@18-10.0.0.115:22-10.0.0.1:33868.service - OpenSSH per-connection server daemon (10.0.0.1:33868). Jul 14 21:25:55.593324 sshd[4160]: Accepted publickey for core from 10.0.0.1 port 33868 ssh2: RSA SHA256:qod1KuvK/X/ss8Lj9PYiIkvlhHqhxXhx4UqduPTMsgY Jul 14 21:25:55.594526 sshd-session[4160]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 14 21:25:55.598032 systemd-logind[1436]: New session 19 of user core. Jul 14 21:25:55.609280 systemd[1]: Started session-19.scope - Session 19 of User core. Jul 14 21:25:55.721135 sshd[4162]: Connection closed by 10.0.0.1 port 33868 Jul 14 21:25:55.722908 sshd-session[4160]: pam_unix(sshd:session): session closed for user core Jul 14 21:25:55.726072 systemd[1]: sshd@18-10.0.0.115:22-10.0.0.1:33868.service: Deactivated successfully. Jul 14 21:25:55.728031 systemd[1]: session-19.scope: Deactivated successfully. Jul 14 21:25:55.728818 systemd-logind[1436]: Session 19 logged out. Waiting for processes to exit. Jul 14 21:25:55.729694 systemd-logind[1436]: Removed session 19. Jul 14 21:26:00.734896 systemd[1]: Started sshd@19-10.0.0.115:22-10.0.0.1:33878.service - OpenSSH per-connection server daemon (10.0.0.1:33878). 
Jul 14 21:26:00.777298 sshd[4179]: Accepted publickey for core from 10.0.0.1 port 33878 ssh2: RSA SHA256:qod1KuvK/X/ss8Lj9PYiIkvlhHqhxXhx4UqduPTMsgY Jul 14 21:26:00.778637 sshd-session[4179]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 14 21:26:00.782575 systemd-logind[1436]: New session 20 of user core. Jul 14 21:26:00.790360 systemd[1]: Started session-20.scope - Session 20 of User core. Jul 14 21:26:00.897926 sshd[4181]: Connection closed by 10.0.0.1 port 33878 Jul 14 21:26:00.899320 sshd-session[4179]: pam_unix(sshd:session): session closed for user core Jul 14 21:26:00.902811 systemd[1]: sshd@19-10.0.0.115:22-10.0.0.1:33878.service: Deactivated successfully. Jul 14 21:26:00.904764 systemd[1]: session-20.scope: Deactivated successfully. Jul 14 21:26:00.905556 systemd-logind[1436]: Session 20 logged out. Waiting for processes to exit. Jul 14 21:26:00.906401 systemd-logind[1436]: Removed session 20. Jul 14 21:26:05.925510 systemd[1]: Started sshd@20-10.0.0.115:22-10.0.0.1:33146.service - OpenSSH per-connection server daemon (10.0.0.1:33146). Jul 14 21:26:05.977706 sshd[4195]: Accepted publickey for core from 10.0.0.1 port 33146 ssh2: RSA SHA256:qod1KuvK/X/ss8Lj9PYiIkvlhHqhxXhx4UqduPTMsgY Jul 14 21:26:05.978254 sshd-session[4195]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 14 21:26:05.986108 systemd-logind[1436]: New session 21 of user core. Jul 14 21:26:05.995385 systemd[1]: Started session-21.scope - Session 21 of User core. Jul 14 21:26:06.136544 sshd[4197]: Connection closed by 10.0.0.1 port 33146 Jul 14 21:26:06.136874 sshd-session[4195]: pam_unix(sshd:session): session closed for user core Jul 14 21:26:06.153945 systemd[1]: sshd@20-10.0.0.115:22-10.0.0.1:33146.service: Deactivated successfully. Jul 14 21:26:06.155809 systemd[1]: session-21.scope: Deactivated successfully. Jul 14 21:26:06.156772 systemd-logind[1436]: Session 21 logged out. Waiting for processes to exit. 
Jul 14 21:26:06.170502 systemd[1]: Started sshd@21-10.0.0.115:22-10.0.0.1:33156.service - OpenSSH per-connection server daemon (10.0.0.1:33156). Jul 14 21:26:06.171602 systemd-logind[1436]: Removed session 21. Jul 14 21:26:06.218236 sshd[4209]: Accepted publickey for core from 10.0.0.1 port 33156 ssh2: RSA SHA256:qod1KuvK/X/ss8Lj9PYiIkvlhHqhxXhx4UqduPTMsgY Jul 14 21:26:06.219234 sshd-session[4209]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 14 21:26:06.223689 systemd-logind[1436]: New session 22 of user core. Jul 14 21:26:06.231329 systemd[1]: Started session-22.scope - Session 22 of User core. Jul 14 21:26:08.197925 containerd[1460]: time="2025-07-14T21:26:08.197870130Z" level=info msg="StopContainer for \"9eccac98485fc3e560159eb0df0d0574874f3ece6d72d7ca24c8faff76273322\" with timeout 30 (s)" Jul 14 21:26:08.199564 containerd[1460]: time="2025-07-14T21:26:08.199112503Z" level=info msg="Stop container \"9eccac98485fc3e560159eb0df0d0574874f3ece6d72d7ca24c8faff76273322\" with signal terminated" Jul 14 21:26:08.214655 systemd[1]: cri-containerd-9eccac98485fc3e560159eb0df0d0574874f3ece6d72d7ca24c8faff76273322.scope: Deactivated successfully. 
Jul 14 21:26:08.230031 containerd[1460]: time="2025-07-14T21:26:08.229795022Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 14 21:26:08.236063 containerd[1460]: time="2025-07-14T21:26:08.236020287Z" level=info msg="StopContainer for \"d143eb942aeec8c04c566df88492003528c208e100ce7951f721c321a96691e8\" with timeout 2 (s)" Jul 14 21:26:08.236513 containerd[1460]: time="2025-07-14T21:26:08.236481212Z" level=info msg="Stop container \"d143eb942aeec8c04c566df88492003528c208e100ce7951f721c321a96691e8\" with signal terminated" Jul 14 21:26:08.245566 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9eccac98485fc3e560159eb0df0d0574874f3ece6d72d7ca24c8faff76273322-rootfs.mount: Deactivated successfully. Jul 14 21:26:08.245845 systemd-networkd[1395]: lxc_health: Link DOWN Jul 14 21:26:08.245848 systemd-networkd[1395]: lxc_health: Lost carrier Jul 14 21:26:08.267203 systemd[1]: cri-containerd-d143eb942aeec8c04c566df88492003528c208e100ce7951f721c321a96691e8.scope: Deactivated successfully. Jul 14 21:26:08.267513 systemd[1]: cri-containerd-d143eb942aeec8c04c566df88492003528c208e100ce7951f721c321a96691e8.scope: Consumed 6.596s CPU time, 124.8M memory peak, 140K read from disk, 12.9M written to disk. 
Jul 14 21:26:08.279134 containerd[1460]: time="2025-07-14T21:26:08.278832852Z" level=info msg="shim disconnected" id=9eccac98485fc3e560159eb0df0d0574874f3ece6d72d7ca24c8faff76273322 namespace=k8s.io Jul 14 21:26:08.279134 containerd[1460]: time="2025-07-14T21:26:08.278984374Z" level=warning msg="cleaning up after shim disconnected" id=9eccac98485fc3e560159eb0df0d0574874f3ece6d72d7ca24c8faff76273322 namespace=k8s.io Jul 14 21:26:08.279134 containerd[1460]: time="2025-07-14T21:26:08.278998294Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 14 21:26:08.288974 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d143eb942aeec8c04c566df88492003528c208e100ce7951f721c321a96691e8-rootfs.mount: Deactivated successfully. Jul 14 21:26:08.299604 containerd[1460]: time="2025-07-14T21:26:08.299451786Z" level=info msg="shim disconnected" id=d143eb942aeec8c04c566df88492003528c208e100ce7951f721c321a96691e8 namespace=k8s.io Jul 14 21:26:08.299604 containerd[1460]: time="2025-07-14T21:26:08.299597468Z" level=warning msg="cleaning up after shim disconnected" id=d143eb942aeec8c04c566df88492003528c208e100ce7951f721c321a96691e8 namespace=k8s.io Jul 14 21:26:08.299604 containerd[1460]: time="2025-07-14T21:26:08.299608428Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 14 21:26:08.335897 containerd[1460]: time="2025-07-14T21:26:08.335584722Z" level=info msg="StopContainer for \"9eccac98485fc3e560159eb0df0d0574874f3ece6d72d7ca24c8faff76273322\" returns successfully" Jul 14 21:26:08.336343 containerd[1460]: time="2025-07-14T21:26:08.336255249Z" level=info msg="StopPodSandbox for \"5d06e566d7826e11cf24ecd3d8fcba6aa1c5a3d43a685d314256a09a4817befe\"" Jul 14 21:26:08.337506 containerd[1460]: time="2025-07-14T21:26:08.337452421Z" level=info msg="StopContainer for \"d143eb942aeec8c04c566df88492003528c208e100ce7951f721c321a96691e8\" returns successfully" Jul 14 21:26:08.338019 containerd[1460]: time="2025-07-14T21:26:08.337973907Z" level=info 
msg="StopPodSandbox for \"6617df5cb8e4d10ed01111a58d0188d3fddef9e4627e9b0f60244b924c83b1d2\"" Jul 14 21:26:08.338082 containerd[1460]: time="2025-07-14T21:26:08.338036348Z" level=info msg="Container to stop \"d143eb942aeec8c04c566df88492003528c208e100ce7951f721c321a96691e8\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 14 21:26:08.338082 containerd[1460]: time="2025-07-14T21:26:08.338049748Z" level=info msg="Container to stop \"9641e1f13aeb97c01263f336db6df7e9a1941e9bf87aa7f5159e57e1eff7b52a\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 14 21:26:08.338082 containerd[1460]: time="2025-07-14T21:26:08.338059028Z" level=info msg="Container to stop \"77e5cce98dbffecfd6dae4a9074ed867d8fb700aff4cee61ce3de4df955b3513\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 14 21:26:08.338082 containerd[1460]: time="2025-07-14T21:26:08.338067468Z" level=info msg="Container to stop \"86436a74d465b692c02fa50a6780b60eceee77b70d9acae6cf348c4e7e312fab\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 14 21:26:08.338082 containerd[1460]: time="2025-07-14T21:26:08.338076308Z" level=info msg="Container to stop \"02c74240babedf33499f9471b443f84e56134d72f39431a7e9ca088341be4b79\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 14 21:26:08.339986 containerd[1460]: time="2025-07-14T21:26:08.339935087Z" level=info msg="Container to stop \"9eccac98485fc3e560159eb0df0d0574874f3ece6d72d7ca24c8faff76273322\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 14 21:26:08.344164 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-5d06e566d7826e11cf24ecd3d8fcba6aa1c5a3d43a685d314256a09a4817befe-shm.mount: Deactivated successfully. Jul 14 21:26:08.344290 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-6617df5cb8e4d10ed01111a58d0188d3fddef9e4627e9b0f60244b924c83b1d2-shm.mount: Deactivated successfully. 
Jul 14 21:26:08.349907 systemd[1]: cri-containerd-6617df5cb8e4d10ed01111a58d0188d3fddef9e4627e9b0f60244b924c83b1d2.scope: Deactivated successfully. Jul 14 21:26:08.351687 systemd[1]: cri-containerd-5d06e566d7826e11cf24ecd3d8fcba6aa1c5a3d43a685d314256a09a4817befe.scope: Deactivated successfully. Jul 14 21:26:08.381954 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6617df5cb8e4d10ed01111a58d0188d3fddef9e4627e9b0f60244b924c83b1d2-rootfs.mount: Deactivated successfully. Jul 14 21:26:08.385398 containerd[1460]: time="2025-07-14T21:26:08.385321519Z" level=info msg="shim disconnected" id=6617df5cb8e4d10ed01111a58d0188d3fddef9e4627e9b0f60244b924c83b1d2 namespace=k8s.io Jul 14 21:26:08.385398 containerd[1460]: time="2025-07-14T21:26:08.385384680Z" level=warning msg="cleaning up after shim disconnected" id=6617df5cb8e4d10ed01111a58d0188d3fddef9e4627e9b0f60244b924c83b1d2 namespace=k8s.io Jul 14 21:26:08.385398 containerd[1460]: time="2025-07-14T21:26:08.385395720Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 14 21:26:08.388419 containerd[1460]: time="2025-07-14T21:26:08.388366711Z" level=info msg="shim disconnected" id=5d06e566d7826e11cf24ecd3d8fcba6aa1c5a3d43a685d314256a09a4817befe namespace=k8s.io Jul 14 21:26:08.388419 containerd[1460]: time="2025-07-14T21:26:08.388418751Z" level=warning msg="cleaning up after shim disconnected" id=5d06e566d7826e11cf24ecd3d8fcba6aa1c5a3d43a685d314256a09a4817befe namespace=k8s.io Jul 14 21:26:08.388419 containerd[1460]: time="2025-07-14T21:26:08.388426431Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 14 21:26:08.400793 containerd[1460]: time="2025-07-14T21:26:08.400626118Z" level=info msg="TearDown network for sandbox \"6617df5cb8e4d10ed01111a58d0188d3fddef9e4627e9b0f60244b924c83b1d2\" successfully" Jul 14 21:26:08.400793 containerd[1460]: time="2025-07-14T21:26:08.400663719Z" level=info msg="StopPodSandbox for \"6617df5cb8e4d10ed01111a58d0188d3fddef9e4627e9b0f60244b924c83b1d2\" returns 
successfully" Jul 14 21:26:08.406799 containerd[1460]: time="2025-07-14T21:26:08.406686341Z" level=info msg="TearDown network for sandbox \"5d06e566d7826e11cf24ecd3d8fcba6aa1c5a3d43a685d314256a09a4817befe\" successfully" Jul 14 21:26:08.406799 containerd[1460]: time="2025-07-14T21:26:08.406798063Z" level=info msg="StopPodSandbox for \"5d06e566d7826e11cf24ecd3d8fcba6aa1c5a3d43a685d314256a09a4817befe\" returns successfully" Jul 14 21:26:08.543618 kubelet[2561]: I0714 21:26:08.543487 2561 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1b45ef3c-20a2-4b5e-a3e8-6e09d146d810-xtables-lock\") pod \"1b45ef3c-20a2-4b5e-a3e8-6e09d146d810\" (UID: \"1b45ef3c-20a2-4b5e-a3e8-6e09d146d810\") " Jul 14 21:26:08.543618 kubelet[2561]: I0714 21:26:08.543532 2561 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/1b45ef3c-20a2-4b5e-a3e8-6e09d146d810-etc-cni-netd\") pod \"1b45ef3c-20a2-4b5e-a3e8-6e09d146d810\" (UID: \"1b45ef3c-20a2-4b5e-a3e8-6e09d146d810\") " Jul 14 21:26:08.543618 kubelet[2561]: I0714 21:26:08.543565 2561 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/1b45ef3c-20a2-4b5e-a3e8-6e09d146d810-cilium-config-path\") pod \"1b45ef3c-20a2-4b5e-a3e8-6e09d146d810\" (UID: \"1b45ef3c-20a2-4b5e-a3e8-6e09d146d810\") " Jul 14 21:26:08.543618 kubelet[2561]: I0714 21:26:08.543589 2561 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/1b45ef3c-20a2-4b5e-a3e8-6e09d146d810-cilium-cgroup\") pod \"1b45ef3c-20a2-4b5e-a3e8-6e09d146d810\" (UID: \"1b45ef3c-20a2-4b5e-a3e8-6e09d146d810\") " Jul 14 21:26:08.543618 kubelet[2561]: I0714 21:26:08.543606 2561 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-74jff\" 
(UniqueName: \"kubernetes.io/projected/1b45ef3c-20a2-4b5e-a3e8-6e09d146d810-kube-api-access-74jff\") pod \"1b45ef3c-20a2-4b5e-a3e8-6e09d146d810\" (UID: \"1b45ef3c-20a2-4b5e-a3e8-6e09d146d810\") " Jul 14 21:26:08.543618 kubelet[2561]: I0714 21:26:08.543624 2561 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2dlfg\" (UniqueName: \"kubernetes.io/projected/3821cadb-0744-47a5-9884-f28b496ce748-kube-api-access-2dlfg\") pod \"3821cadb-0744-47a5-9884-f28b496ce748\" (UID: \"3821cadb-0744-47a5-9884-f28b496ce748\") " Jul 14 21:26:08.544422 kubelet[2561]: I0714 21:26:08.543642 2561 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/1b45ef3c-20a2-4b5e-a3e8-6e09d146d810-cilium-run\") pod \"1b45ef3c-20a2-4b5e-a3e8-6e09d146d810\" (UID: \"1b45ef3c-20a2-4b5e-a3e8-6e09d146d810\") " Jul 14 21:26:08.544422 kubelet[2561]: I0714 21:26:08.543663 2561 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/1b45ef3c-20a2-4b5e-a3e8-6e09d146d810-clustermesh-secrets\") pod \"1b45ef3c-20a2-4b5e-a3e8-6e09d146d810\" (UID: \"1b45ef3c-20a2-4b5e-a3e8-6e09d146d810\") " Jul 14 21:26:08.544422 kubelet[2561]: I0714 21:26:08.543679 2561 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/1b45ef3c-20a2-4b5e-a3e8-6e09d146d810-host-proc-sys-kernel\") pod \"1b45ef3c-20a2-4b5e-a3e8-6e09d146d810\" (UID: \"1b45ef3c-20a2-4b5e-a3e8-6e09d146d810\") " Jul 14 21:26:08.544422 kubelet[2561]: I0714 21:26:08.543695 2561 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/3821cadb-0744-47a5-9884-f28b496ce748-cilium-config-path\") pod \"3821cadb-0744-47a5-9884-f28b496ce748\" (UID: \"3821cadb-0744-47a5-9884-f28b496ce748\") " Jul 14 
21:26:08.544422 kubelet[2561]: I0714 21:26:08.543710 2561 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1b45ef3c-20a2-4b5e-a3e8-6e09d146d810-lib-modules\") pod \"1b45ef3c-20a2-4b5e-a3e8-6e09d146d810\" (UID: \"1b45ef3c-20a2-4b5e-a3e8-6e09d146d810\") " Jul 14 21:26:08.544422 kubelet[2561]: I0714 21:26:08.543724 2561 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/1b45ef3c-20a2-4b5e-a3e8-6e09d146d810-host-proc-sys-net\") pod \"1b45ef3c-20a2-4b5e-a3e8-6e09d146d810\" (UID: \"1b45ef3c-20a2-4b5e-a3e8-6e09d146d810\") " Jul 14 21:26:08.544554 kubelet[2561]: I0714 21:26:08.543739 2561 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/1b45ef3c-20a2-4b5e-a3e8-6e09d146d810-bpf-maps\") pod \"1b45ef3c-20a2-4b5e-a3e8-6e09d146d810\" (UID: \"1b45ef3c-20a2-4b5e-a3e8-6e09d146d810\") " Jul 14 21:26:08.544554 kubelet[2561]: I0714 21:26:08.543754 2561 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/1b45ef3c-20a2-4b5e-a3e8-6e09d146d810-cni-path\") pod \"1b45ef3c-20a2-4b5e-a3e8-6e09d146d810\" (UID: \"1b45ef3c-20a2-4b5e-a3e8-6e09d146d810\") " Jul 14 21:26:08.544554 kubelet[2561]: I0714 21:26:08.543767 2561 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/1b45ef3c-20a2-4b5e-a3e8-6e09d146d810-hostproc\") pod \"1b45ef3c-20a2-4b5e-a3e8-6e09d146d810\" (UID: \"1b45ef3c-20a2-4b5e-a3e8-6e09d146d810\") " Jul 14 21:26:08.544554 kubelet[2561]: I0714 21:26:08.543784 2561 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/1b45ef3c-20a2-4b5e-a3e8-6e09d146d810-hubble-tls\") pod 
\"1b45ef3c-20a2-4b5e-a3e8-6e09d146d810\" (UID: \"1b45ef3c-20a2-4b5e-a3e8-6e09d146d810\") " Jul 14 21:26:08.548987 kubelet[2561]: I0714 21:26:08.548552 2561 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1b45ef3c-20a2-4b5e-a3e8-6e09d146d810-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "1b45ef3c-20a2-4b5e-a3e8-6e09d146d810" (UID: "1b45ef3c-20a2-4b5e-a3e8-6e09d146d810"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 14 21:26:08.548987 kubelet[2561]: I0714 21:26:08.548789 2561 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1b45ef3c-20a2-4b5e-a3e8-6e09d146d810-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "1b45ef3c-20a2-4b5e-a3e8-6e09d146d810" (UID: "1b45ef3c-20a2-4b5e-a3e8-6e09d146d810"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 14 21:26:08.548987 kubelet[2561]: I0714 21:26:08.548844 2561 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1b45ef3c-20a2-4b5e-a3e8-6e09d146d810-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "1b45ef3c-20a2-4b5e-a3e8-6e09d146d810" (UID: "1b45ef3c-20a2-4b5e-a3e8-6e09d146d810"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 14 21:26:08.550131 kubelet[2561]: I0714 21:26:08.550058 2561 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1b45ef3c-20a2-4b5e-a3e8-6e09d146d810-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "1b45ef3c-20a2-4b5e-a3e8-6e09d146d810" (UID: "1b45ef3c-20a2-4b5e-a3e8-6e09d146d810"). InnerVolumeSpecName "host-proc-sys-kernel". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 14 21:26:08.550762 kubelet[2561]: I0714 21:26:08.550725 2561 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1b45ef3c-20a2-4b5e-a3e8-6e09d146d810-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "1b45ef3c-20a2-4b5e-a3e8-6e09d146d810" (UID: "1b45ef3c-20a2-4b5e-a3e8-6e09d146d810"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jul 14 21:26:08.552434 kubelet[2561]: I0714 21:26:08.551175 2561 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1b45ef3c-20a2-4b5e-a3e8-6e09d146d810-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "1b45ef3c-20a2-4b5e-a3e8-6e09d146d810" (UID: "1b45ef3c-20a2-4b5e-a3e8-6e09d146d810"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 14 21:26:08.552434 kubelet[2561]: I0714 21:26:08.551274 2561 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1b45ef3c-20a2-4b5e-a3e8-6e09d146d810-kube-api-access-74jff" (OuterVolumeSpecName: "kube-api-access-74jff") pod "1b45ef3c-20a2-4b5e-a3e8-6e09d146d810" (UID: "1b45ef3c-20a2-4b5e-a3e8-6e09d146d810"). InnerVolumeSpecName "kube-api-access-74jff". PluginName "kubernetes.io/projected", VolumeGidValue "" Jul 14 21:26:08.552434 kubelet[2561]: I0714 21:26:08.551315 2561 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1b45ef3c-20a2-4b5e-a3e8-6e09d146d810-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "1b45ef3c-20a2-4b5e-a3e8-6e09d146d810" (UID: "1b45ef3c-20a2-4b5e-a3e8-6e09d146d810"). InnerVolumeSpecName "bpf-maps". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 14 21:26:08.552434 kubelet[2561]: I0714 21:26:08.551335 2561 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1b45ef3c-20a2-4b5e-a3e8-6e09d146d810-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "1b45ef3c-20a2-4b5e-a3e8-6e09d146d810" (UID: "1b45ef3c-20a2-4b5e-a3e8-6e09d146d810"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 14 21:26:08.552434 kubelet[2561]: I0714 21:26:08.551353 2561 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1b45ef3c-20a2-4b5e-a3e8-6e09d146d810-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "1b45ef3c-20a2-4b5e-a3e8-6e09d146d810" (UID: "1b45ef3c-20a2-4b5e-a3e8-6e09d146d810"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 14 21:26:08.552619 kubelet[2561]: I0714 21:26:08.551387 2561 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1b45ef3c-20a2-4b5e-a3e8-6e09d146d810-cni-path" (OuterVolumeSpecName: "cni-path") pod "1b45ef3c-20a2-4b5e-a3e8-6e09d146d810" (UID: "1b45ef3c-20a2-4b5e-a3e8-6e09d146d810"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 14 21:26:08.552619 kubelet[2561]: I0714 21:26:08.551404 2561 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1b45ef3c-20a2-4b5e-a3e8-6e09d146d810-hostproc" (OuterVolumeSpecName: "hostproc") pod "1b45ef3c-20a2-4b5e-a3e8-6e09d146d810" (UID: "1b45ef3c-20a2-4b5e-a3e8-6e09d146d810"). InnerVolumeSpecName "hostproc". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 14 21:26:08.552619 kubelet[2561]: I0714 21:26:08.551970 2561 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3821cadb-0744-47a5-9884-f28b496ce748-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "3821cadb-0744-47a5-9884-f28b496ce748" (UID: "3821cadb-0744-47a5-9884-f28b496ce748"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jul 14 21:26:08.553259 kubelet[2561]: I0714 21:26:08.553219 2561 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3821cadb-0744-47a5-9884-f28b496ce748-kube-api-access-2dlfg" (OuterVolumeSpecName: "kube-api-access-2dlfg") pod "3821cadb-0744-47a5-9884-f28b496ce748" (UID: "3821cadb-0744-47a5-9884-f28b496ce748"). InnerVolumeSpecName "kube-api-access-2dlfg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jul 14 21:26:08.553329 kubelet[2561]: I0714 21:26:08.553254 2561 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1b45ef3c-20a2-4b5e-a3e8-6e09d146d810-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "1b45ef3c-20a2-4b5e-a3e8-6e09d146d810" (UID: "1b45ef3c-20a2-4b5e-a3e8-6e09d146d810"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jul 14 21:26:08.553994 kubelet[2561]: I0714 21:26:08.553964 2561 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1b45ef3c-20a2-4b5e-a3e8-6e09d146d810-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "1b45ef3c-20a2-4b5e-a3e8-6e09d146d810" (UID: "1b45ef3c-20a2-4b5e-a3e8-6e09d146d810"). InnerVolumeSpecName "clustermesh-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jul 14 21:26:08.644337 kubelet[2561]: I0714 21:26:08.644289 2561 reconciler_common.go:293] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/1b45ef3c-20a2-4b5e-a3e8-6e09d146d810-bpf-maps\") on node \"localhost\" DevicePath \"\"" Jul 14 21:26:08.644337 kubelet[2561]: I0714 21:26:08.644323 2561 reconciler_common.go:293] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/1b45ef3c-20a2-4b5e-a3e8-6e09d146d810-cni-path\") on node \"localhost\" DevicePath \"\"" Jul 14 21:26:08.644337 kubelet[2561]: I0714 21:26:08.644331 2561 reconciler_common.go:293] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/1b45ef3c-20a2-4b5e-a3e8-6e09d146d810-hubble-tls\") on node \"localhost\" DevicePath \"\"" Jul 14 21:26:08.644337 kubelet[2561]: I0714 21:26:08.644341 2561 reconciler_common.go:293] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/1b45ef3c-20a2-4b5e-a3e8-6e09d146d810-hostproc\") on node \"localhost\" DevicePath \"\"" Jul 14 21:26:08.644337 kubelet[2561]: I0714 21:26:08.644349 2561 reconciler_common.go:293] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1b45ef3c-20a2-4b5e-a3e8-6e09d146d810-xtables-lock\") on node \"localhost\" DevicePath \"\"" Jul 14 21:26:08.644548 kubelet[2561]: I0714 21:26:08.644356 2561 reconciler_common.go:293] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/1b45ef3c-20a2-4b5e-a3e8-6e09d146d810-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Jul 14 21:26:08.644548 kubelet[2561]: I0714 21:26:08.644364 2561 reconciler_common.go:293] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/1b45ef3c-20a2-4b5e-a3e8-6e09d146d810-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Jul 14 21:26:08.644548 kubelet[2561]: I0714 21:26:08.644372 2561 reconciler_common.go:293] 
"Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/1b45ef3c-20a2-4b5e-a3e8-6e09d146d810-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Jul 14 21:26:08.644548 kubelet[2561]: I0714 21:26:08.644380 2561 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-74jff\" (UniqueName: \"kubernetes.io/projected/1b45ef3c-20a2-4b5e-a3e8-6e09d146d810-kube-api-access-74jff\") on node \"localhost\" DevicePath \"\"" Jul 14 21:26:08.644548 kubelet[2561]: I0714 21:26:08.644388 2561 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2dlfg\" (UniqueName: \"kubernetes.io/projected/3821cadb-0744-47a5-9884-f28b496ce748-kube-api-access-2dlfg\") on node \"localhost\" DevicePath \"\"" Jul 14 21:26:08.644548 kubelet[2561]: I0714 21:26:08.644396 2561 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/1b45ef3c-20a2-4b5e-a3e8-6e09d146d810-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Jul 14 21:26:08.644548 kubelet[2561]: I0714 21:26:08.644403 2561 reconciler_common.go:293] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/1b45ef3c-20a2-4b5e-a3e8-6e09d146d810-cilium-run\") on node \"localhost\" DevicePath \"\"" Jul 14 21:26:08.644548 kubelet[2561]: I0714 21:26:08.644411 2561 reconciler_common.go:293] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/1b45ef3c-20a2-4b5e-a3e8-6e09d146d810-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Jul 14 21:26:08.644701 kubelet[2561]: I0714 21:26:08.644418 2561 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/1b45ef3c-20a2-4b5e-a3e8-6e09d146d810-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Jul 14 21:26:08.644701 kubelet[2561]: I0714 21:26:08.644428 2561 reconciler_common.go:293] "Volume detached for volume \"cilium-config-path\" 
(UniqueName: \"kubernetes.io/configmap/3821cadb-0744-47a5-9884-f28b496ce748-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Jul 14 21:26:08.644701 kubelet[2561]: I0714 21:26:08.644435 2561 reconciler_common.go:293] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1b45ef3c-20a2-4b5e-a3e8-6e09d146d810-lib-modules\") on node \"localhost\" DevicePath \"\"" Jul 14 21:26:09.115355 kubelet[2561]: I0714 21:26:09.115220 2561 scope.go:117] "RemoveContainer" containerID="9eccac98485fc3e560159eb0df0d0574874f3ece6d72d7ca24c8faff76273322" Jul 14 21:26:09.119188 containerd[1460]: time="2025-07-14T21:26:09.118975868Z" level=info msg="RemoveContainer for \"9eccac98485fc3e560159eb0df0d0574874f3ece6d72d7ca24c8faff76273322\"" Jul 14 21:26:09.121074 systemd[1]: Removed slice kubepods-besteffort-pod3821cadb_0744_47a5_9884_f28b496ce748.slice - libcontainer container kubepods-besteffort-pod3821cadb_0744_47a5_9884_f28b496ce748.slice. Jul 14 21:26:09.124892 containerd[1460]: time="2025-07-14T21:26:09.124856807Z" level=info msg="RemoveContainer for \"9eccac98485fc3e560159eb0df0d0574874f3ece6d72d7ca24c8faff76273322\" returns successfully" Jul 14 21:26:09.125993 kubelet[2561]: I0714 21:26:09.125427 2561 scope.go:117] "RemoveContainer" containerID="9eccac98485fc3e560159eb0df0d0574874f3ece6d72d7ca24c8faff76273322" Jul 14 21:26:09.125941 systemd[1]: Removed slice kubepods-burstable-pod1b45ef3c_20a2_4b5e_a3e8_6e09d146d810.slice - libcontainer container kubepods-burstable-pod1b45ef3c_20a2_4b5e_a3e8_6e09d146d810.slice. 
Jul 14 21:26:09.127281 containerd[1460]: time="2025-07-14T21:26:09.126558304Z" level=error msg="ContainerStatus for \"9eccac98485fc3e560159eb0df0d0574874f3ece6d72d7ca24c8faff76273322\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"9eccac98485fc3e560159eb0df0d0574874f3ece6d72d7ca24c8faff76273322\": not found" Jul 14 21:26:09.126672 systemd[1]: kubepods-burstable-pod1b45ef3c_20a2_4b5e_a3e8_6e09d146d810.slice: Consumed 6.752s CPU time, 125.2M memory peak, 160K read from disk, 12.9M written to disk. Jul 14 21:26:09.134787 kubelet[2561]: E0714 21:26:09.134680 2561 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"9eccac98485fc3e560159eb0df0d0574874f3ece6d72d7ca24c8faff76273322\": not found" containerID="9eccac98485fc3e560159eb0df0d0574874f3ece6d72d7ca24c8faff76273322" Jul 14 21:26:09.134909 kubelet[2561]: I0714 21:26:09.134729 2561 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"9eccac98485fc3e560159eb0df0d0574874f3ece6d72d7ca24c8faff76273322"} err="failed to get container status \"9eccac98485fc3e560159eb0df0d0574874f3ece6d72d7ca24c8faff76273322\": rpc error: code = NotFound desc = an error occurred when try to find container \"9eccac98485fc3e560159eb0df0d0574874f3ece6d72d7ca24c8faff76273322\": not found" Jul 14 21:26:09.134909 kubelet[2561]: I0714 21:26:09.134820 2561 scope.go:117] "RemoveContainer" containerID="d143eb942aeec8c04c566df88492003528c208e100ce7951f721c321a96691e8" Jul 14 21:26:09.136440 containerd[1460]: time="2025-07-14T21:26:09.136401763Z" level=info msg="RemoveContainer for \"d143eb942aeec8c04c566df88492003528c208e100ce7951f721c321a96691e8\"" Jul 14 21:26:09.143970 containerd[1460]: time="2025-07-14T21:26:09.143920359Z" level=info msg="RemoveContainer for \"d143eb942aeec8c04c566df88492003528c208e100ce7951f721c321a96691e8\" returns successfully" Jul 14 
21:26:09.144264 kubelet[2561]: I0714 21:26:09.144227 2561 scope.go:117] "RemoveContainer" containerID="77e5cce98dbffecfd6dae4a9074ed867d8fb700aff4cee61ce3de4df955b3513" Jul 14 21:26:09.146429 containerd[1460]: time="2025-07-14T21:26:09.146308903Z" level=info msg="RemoveContainer for \"77e5cce98dbffecfd6dae4a9074ed867d8fb700aff4cee61ce3de4df955b3513\"" Jul 14 21:26:09.149364 containerd[1460]: time="2025-07-14T21:26:09.149327693Z" level=info msg="RemoveContainer for \"77e5cce98dbffecfd6dae4a9074ed867d8fb700aff4cee61ce3de4df955b3513\" returns successfully" Jul 14 21:26:09.150314 kubelet[2561]: I0714 21:26:09.150146 2561 scope.go:117] "RemoveContainer" containerID="9641e1f13aeb97c01263f336db6df7e9a1941e9bf87aa7f5159e57e1eff7b52a" Jul 14 21:26:09.151453 containerd[1460]: time="2025-07-14T21:26:09.151427034Z" level=info msg="RemoveContainer for \"9641e1f13aeb97c01263f336db6df7e9a1941e9bf87aa7f5159e57e1eff7b52a\"" Jul 14 21:26:09.154436 containerd[1460]: time="2025-07-14T21:26:09.154398224Z" level=info msg="RemoveContainer for \"9641e1f13aeb97c01263f336db6df7e9a1941e9bf87aa7f5159e57e1eff7b52a\" returns successfully" Jul 14 21:26:09.154712 kubelet[2561]: I0714 21:26:09.154684 2561 scope.go:117] "RemoveContainer" containerID="02c74240babedf33499f9471b443f84e56134d72f39431a7e9ca088341be4b79" Jul 14 21:26:09.156560 containerd[1460]: time="2025-07-14T21:26:09.156525845Z" level=info msg="RemoveContainer for \"02c74240babedf33499f9471b443f84e56134d72f39431a7e9ca088341be4b79\"" Jul 14 21:26:09.160292 containerd[1460]: time="2025-07-14T21:26:09.160251083Z" level=info msg="RemoveContainer for \"02c74240babedf33499f9471b443f84e56134d72f39431a7e9ca088341be4b79\" returns successfully" Jul 14 21:26:09.160894 kubelet[2561]: I0714 21:26:09.160533 2561 scope.go:117] "RemoveContainer" containerID="86436a74d465b692c02fa50a6780b60eceee77b70d9acae6cf348c4e7e312fab" Jul 14 21:26:09.161655 containerd[1460]: time="2025-07-14T21:26:09.161616657Z" level=info msg="RemoveContainer for 
\"86436a74d465b692c02fa50a6780b60eceee77b70d9acae6cf348c4e7e312fab\"" Jul 14 21:26:09.164026 containerd[1460]: time="2025-07-14T21:26:09.163986760Z" level=info msg="RemoveContainer for \"86436a74d465b692c02fa50a6780b60eceee77b70d9acae6cf348c4e7e312fab\" returns successfully" Jul 14 21:26:09.164308 kubelet[2561]: I0714 21:26:09.164282 2561 scope.go:117] "RemoveContainer" containerID="d143eb942aeec8c04c566df88492003528c208e100ce7951f721c321a96691e8" Jul 14 21:26:09.164597 containerd[1460]: time="2025-07-14T21:26:09.164555286Z" level=error msg="ContainerStatus for \"d143eb942aeec8c04c566df88492003528c208e100ce7951f721c321a96691e8\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"d143eb942aeec8c04c566df88492003528c208e100ce7951f721c321a96691e8\": not found" Jul 14 21:26:09.164834 kubelet[2561]: E0714 21:26:09.164721 2561 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"d143eb942aeec8c04c566df88492003528c208e100ce7951f721c321a96691e8\": not found" containerID="d143eb942aeec8c04c566df88492003528c208e100ce7951f721c321a96691e8" Jul 14 21:26:09.164834 kubelet[2561]: I0714 21:26:09.164776 2561 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"d143eb942aeec8c04c566df88492003528c208e100ce7951f721c321a96691e8"} err="failed to get container status \"d143eb942aeec8c04c566df88492003528c208e100ce7951f721c321a96691e8\": rpc error: code = NotFound desc = an error occurred when try to find container \"d143eb942aeec8c04c566df88492003528c208e100ce7951f721c321a96691e8\": not found" Jul 14 21:26:09.164834 kubelet[2561]: I0714 21:26:09.164800 2561 scope.go:117] "RemoveContainer" containerID="77e5cce98dbffecfd6dae4a9074ed867d8fb700aff4cee61ce3de4df955b3513" Jul 14 21:26:09.165146 containerd[1460]: time="2025-07-14T21:26:09.165105732Z" level=error msg="ContainerStatus for 
\"77e5cce98dbffecfd6dae4a9074ed867d8fb700aff4cee61ce3de4df955b3513\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"77e5cce98dbffecfd6dae4a9074ed867d8fb700aff4cee61ce3de4df955b3513\": not found" Jul 14 21:26:09.165301 kubelet[2561]: E0714 21:26:09.165260 2561 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"77e5cce98dbffecfd6dae4a9074ed867d8fb700aff4cee61ce3de4df955b3513\": not found" containerID="77e5cce98dbffecfd6dae4a9074ed867d8fb700aff4cee61ce3de4df955b3513" Jul 14 21:26:09.165351 kubelet[2561]: I0714 21:26:09.165315 2561 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"77e5cce98dbffecfd6dae4a9074ed867d8fb700aff4cee61ce3de4df955b3513"} err="failed to get container status \"77e5cce98dbffecfd6dae4a9074ed867d8fb700aff4cee61ce3de4df955b3513\": rpc error: code = NotFound desc = an error occurred when try to find container \"77e5cce98dbffecfd6dae4a9074ed867d8fb700aff4cee61ce3de4df955b3513\": not found" Jul 14 21:26:09.165351 kubelet[2561]: I0714 21:26:09.165335 2561 scope.go:117] "RemoveContainer" containerID="9641e1f13aeb97c01263f336db6df7e9a1941e9bf87aa7f5159e57e1eff7b52a" Jul 14 21:26:09.165640 containerd[1460]: time="2025-07-14T21:26:09.165594697Z" level=error msg="ContainerStatus for \"9641e1f13aeb97c01263f336db6df7e9a1941e9bf87aa7f5159e57e1eff7b52a\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"9641e1f13aeb97c01263f336db6df7e9a1941e9bf87aa7f5159e57e1eff7b52a\": not found" Jul 14 21:26:09.165760 kubelet[2561]: E0714 21:26:09.165735 2561 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"9641e1f13aeb97c01263f336db6df7e9a1941e9bf87aa7f5159e57e1eff7b52a\": not found" 
containerID="9641e1f13aeb97c01263f336db6df7e9a1941e9bf87aa7f5159e57e1eff7b52a" Jul 14 21:26:09.165789 kubelet[2561]: I0714 21:26:09.165761 2561 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"9641e1f13aeb97c01263f336db6df7e9a1941e9bf87aa7f5159e57e1eff7b52a"} err="failed to get container status \"9641e1f13aeb97c01263f336db6df7e9a1941e9bf87aa7f5159e57e1eff7b52a\": rpc error: code = NotFound desc = an error occurred when try to find container \"9641e1f13aeb97c01263f336db6df7e9a1941e9bf87aa7f5159e57e1eff7b52a\": not found" Jul 14 21:26:09.165789 kubelet[2561]: I0714 21:26:09.165778 2561 scope.go:117] "RemoveContainer" containerID="02c74240babedf33499f9471b443f84e56134d72f39431a7e9ca088341be4b79" Jul 14 21:26:09.166044 containerd[1460]: time="2025-07-14T21:26:09.165960580Z" level=error msg="ContainerStatus for \"02c74240babedf33499f9471b443f84e56134d72f39431a7e9ca088341be4b79\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"02c74240babedf33499f9471b443f84e56134d72f39431a7e9ca088341be4b79\": not found" Jul 14 21:26:09.166088 kubelet[2561]: E0714 21:26:09.166067 2561 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"02c74240babedf33499f9471b443f84e56134d72f39431a7e9ca088341be4b79\": not found" containerID="02c74240babedf33499f9471b443f84e56134d72f39431a7e9ca088341be4b79" Jul 14 21:26:09.166128 kubelet[2561]: I0714 21:26:09.166105 2561 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"02c74240babedf33499f9471b443f84e56134d72f39431a7e9ca088341be4b79"} err="failed to get container status \"02c74240babedf33499f9471b443f84e56134d72f39431a7e9ca088341be4b79\": rpc error: code = NotFound desc = an error occurred when try to find container \"02c74240babedf33499f9471b443f84e56134d72f39431a7e9ca088341be4b79\": not found" Jul 14 
21:26:09.166128 kubelet[2561]: I0714 21:26:09.166122 2561 scope.go:117] "RemoveContainer" containerID="86436a74d465b692c02fa50a6780b60eceee77b70d9acae6cf348c4e7e312fab" Jul 14 21:26:09.166322 containerd[1460]: time="2025-07-14T21:26:09.166290544Z" level=error msg="ContainerStatus for \"86436a74d465b692c02fa50a6780b60eceee77b70d9acae6cf348c4e7e312fab\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"86436a74d465b692c02fa50a6780b60eceee77b70d9acae6cf348c4e7e312fab\": not found" Jul 14 21:26:09.166483 kubelet[2561]: E0714 21:26:09.166464 2561 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"86436a74d465b692c02fa50a6780b60eceee77b70d9acae6cf348c4e7e312fab\": not found" containerID="86436a74d465b692c02fa50a6780b60eceee77b70d9acae6cf348c4e7e312fab" Jul 14 21:26:09.166527 kubelet[2561]: I0714 21:26:09.166484 2561 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"86436a74d465b692c02fa50a6780b60eceee77b70d9acae6cf348c4e7e312fab"} err="failed to get container status \"86436a74d465b692c02fa50a6780b60eceee77b70d9acae6cf348c4e7e312fab\": rpc error: code = NotFound desc = an error occurred when try to find container \"86436a74d465b692c02fa50a6780b60eceee77b70d9acae6cf348c4e7e312fab\": not found" Jul 14 21:26:09.207979 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5d06e566d7826e11cf24ecd3d8fcba6aa1c5a3d43a685d314256a09a4817befe-rootfs.mount: Deactivated successfully. Jul 14 21:26:09.208082 systemd[1]: var-lib-kubelet-pods-3821cadb\x2d0744\x2d47a5\x2d9884\x2df28b496ce748-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d2dlfg.mount: Deactivated successfully. Jul 14 21:26:09.208183 systemd[1]: var-lib-kubelet-pods-1b45ef3c\x2d20a2\x2d4b5e\x2da3e8\x2d6e09d146d810-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d74jff.mount: Deactivated successfully. 
Jul 14 21:26:09.208237 systemd[1]: var-lib-kubelet-pods-1b45ef3c\x2d20a2\x2d4b5e\x2da3e8\x2d6e09d146d810-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Jul 14 21:26:09.208289 systemd[1]: var-lib-kubelet-pods-1b45ef3c\x2d20a2\x2d4b5e\x2da3e8\x2d6e09d146d810-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Jul 14 21:26:09.951506 kubelet[2561]: I0714 21:26:09.951457 2561 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1b45ef3c-20a2-4b5e-a3e8-6e09d146d810" path="/var/lib/kubelet/pods/1b45ef3c-20a2-4b5e-a3e8-6e09d146d810/volumes" Jul 14 21:26:09.952011 kubelet[2561]: I0714 21:26:09.951978 2561 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3821cadb-0744-47a5-9884-f28b496ce748" path="/var/lib/kubelet/pods/3821cadb-0744-47a5-9884-f28b496ce748/volumes" Jul 14 21:26:09.989978 kubelet[2561]: E0714 21:26:09.989930 2561 kubelet.go:2902] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jul 14 21:26:10.146053 sshd[4212]: Connection closed by 10.0.0.1 port 33156 Jul 14 21:26:10.147330 sshd-session[4209]: pam_unix(sshd:session): session closed for user core Jul 14 21:26:10.157626 systemd[1]: sshd@21-10.0.0.115:22-10.0.0.1:33156.service: Deactivated successfully. Jul 14 21:26:10.159460 systemd[1]: session-22.scope: Deactivated successfully. Jul 14 21:26:10.161219 systemd[1]: session-22.scope: Consumed 1.285s CPU time, 28.9M memory peak. Jul 14 21:26:10.161750 systemd-logind[1436]: Session 22 logged out. Waiting for processes to exit. Jul 14 21:26:10.170727 systemd[1]: Started sshd@22-10.0.0.115:22-10.0.0.1:33170.service - OpenSSH per-connection server daemon (10.0.0.1:33170). Jul 14 21:26:10.173204 systemd-logind[1436]: Removed session 22. 
Jul 14 21:26:10.209017 sshd[4374]: Accepted publickey for core from 10.0.0.1 port 33170 ssh2: RSA SHA256:qod1KuvK/X/ss8Lj9PYiIkvlhHqhxXhx4UqduPTMsgY Jul 14 21:26:10.210290 sshd-session[4374]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 14 21:26:10.214240 systemd-logind[1436]: New session 23 of user core. Jul 14 21:26:10.224314 systemd[1]: Started session-23.scope - Session 23 of User core. Jul 14 21:26:11.579445 sshd[4377]: Connection closed by 10.0.0.1 port 33170 Jul 14 21:26:11.579841 sshd-session[4374]: pam_unix(sshd:session): session closed for user core Jul 14 21:26:11.596574 systemd[1]: sshd@22-10.0.0.115:22-10.0.0.1:33170.service: Deactivated successfully. Jul 14 21:26:11.597820 kubelet[2561]: E0714 21:26:11.597358 2561 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="1b45ef3c-20a2-4b5e-a3e8-6e09d146d810" containerName="mount-cgroup" Jul 14 21:26:11.597820 kubelet[2561]: E0714 21:26:11.597384 2561 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="1b45ef3c-20a2-4b5e-a3e8-6e09d146d810" containerName="apply-sysctl-overwrites" Jul 14 21:26:11.597820 kubelet[2561]: E0714 21:26:11.597390 2561 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="3821cadb-0744-47a5-9884-f28b496ce748" containerName="cilium-operator" Jul 14 21:26:11.597820 kubelet[2561]: E0714 21:26:11.597396 2561 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="1b45ef3c-20a2-4b5e-a3e8-6e09d146d810" containerName="mount-bpf-fs" Jul 14 21:26:11.597820 kubelet[2561]: E0714 21:26:11.597402 2561 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="1b45ef3c-20a2-4b5e-a3e8-6e09d146d810" containerName="clean-cilium-state" Jul 14 21:26:11.597820 kubelet[2561]: E0714 21:26:11.597410 2561 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="1b45ef3c-20a2-4b5e-a3e8-6e09d146d810" containerName="cilium-agent" Jul 14 21:26:11.597820 kubelet[2561]: I0714 21:26:11.597431 2561 
memory_manager.go:354] "RemoveStaleState removing state" podUID="3821cadb-0744-47a5-9884-f28b496ce748" containerName="cilium-operator" Jul 14 21:26:11.597820 kubelet[2561]: I0714 21:26:11.597439 2561 memory_manager.go:354] "RemoveStaleState removing state" podUID="1b45ef3c-20a2-4b5e-a3e8-6e09d146d810" containerName="cilium-agent" Jul 14 21:26:11.598350 systemd[1]: session-23.scope: Deactivated successfully. Jul 14 21:26:11.598556 systemd[1]: session-23.scope: Consumed 1.258s CPU time, 24.3M memory peak. Jul 14 21:26:11.602508 systemd-logind[1436]: Session 23 logged out. Waiting for processes to exit. Jul 14 21:26:11.611500 systemd[1]: Started sshd@23-10.0.0.115:22-10.0.0.1:33176.service - OpenSSH per-connection server daemon (10.0.0.1:33176). Jul 14 21:26:11.613662 systemd-logind[1436]: Removed session 23. Jul 14 21:26:11.622592 systemd[1]: Created slice kubepods-burstable-podebf72b74_0015_49fd_bdca_c86c8211d5c4.slice - libcontainer container kubepods-burstable-podebf72b74_0015_49fd_bdca_c86c8211d5c4.slice. Jul 14 21:26:11.665151 sshd[4388]: Accepted publickey for core from 10.0.0.1 port 33176 ssh2: RSA SHA256:qod1KuvK/X/ss8Lj9PYiIkvlhHqhxXhx4UqduPTMsgY Jul 14 21:26:11.665679 sshd-session[4388]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 14 21:26:11.670167 systemd-logind[1436]: New session 24 of user core. Jul 14 21:26:11.685324 systemd[1]: Started session-24.scope - Session 24 of User core. Jul 14 21:26:11.734385 sshd[4391]: Connection closed by 10.0.0.1 port 33176 Jul 14 21:26:11.735600 sshd-session[4388]: pam_unix(sshd:session): session closed for user core Jul 14 21:26:11.753818 systemd[1]: sshd@23-10.0.0.115:22-10.0.0.1:33176.service: Deactivated successfully. Jul 14 21:26:11.755634 systemd[1]: session-24.scope: Deactivated successfully. Jul 14 21:26:11.756414 systemd-logind[1436]: Session 24 logged out. Waiting for processes to exit. 
Jul 14 21:26:11.762265 kubelet[2561]: I0714 21:26:11.762234 2561 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/ebf72b74-0015-49fd-bdca-c86c8211d5c4-cilium-cgroup\") pod \"cilium-blv6m\" (UID: \"ebf72b74-0015-49fd-bdca-c86c8211d5c4\") " pod="kube-system/cilium-blv6m" Jul 14 21:26:11.762411 kubelet[2561]: I0714 21:26:11.762393 2561 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/ebf72b74-0015-49fd-bdca-c86c8211d5c4-cni-path\") pod \"cilium-blv6m\" (UID: \"ebf72b74-0015-49fd-bdca-c86c8211d5c4\") " pod="kube-system/cilium-blv6m" Jul 14 21:26:11.762479 kubelet[2561]: I0714 21:26:11.762468 2561 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ebf72b74-0015-49fd-bdca-c86c8211d5c4-etc-cni-netd\") pod \"cilium-blv6m\" (UID: \"ebf72b74-0015-49fd-bdca-c86c8211d5c4\") " pod="kube-system/cilium-blv6m" Jul 14 21:26:11.762716 kubelet[2561]: I0714 21:26:11.762530 2561 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/ebf72b74-0015-49fd-bdca-c86c8211d5c4-cilium-ipsec-secrets\") pod \"cilium-blv6m\" (UID: \"ebf72b74-0015-49fd-bdca-c86c8211d5c4\") " pod="kube-system/cilium-blv6m" Jul 14 21:26:11.762716 kubelet[2561]: I0714 21:26:11.762552 2561 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/ebf72b74-0015-49fd-bdca-c86c8211d5c4-hubble-tls\") pod \"cilium-blv6m\" (UID: \"ebf72b74-0015-49fd-bdca-c86c8211d5c4\") " pod="kube-system/cilium-blv6m" Jul 14 21:26:11.762716 kubelet[2561]: I0714 21:26:11.762570 2561 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/ebf72b74-0015-49fd-bdca-c86c8211d5c4-bpf-maps\") pod \"cilium-blv6m\" (UID: \"ebf72b74-0015-49fd-bdca-c86c8211d5c4\") " pod="kube-system/cilium-blv6m" Jul 14 21:26:11.762716 kubelet[2561]: I0714 21:26:11.762586 2561 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ebf72b74-0015-49fd-bdca-c86c8211d5c4-xtables-lock\") pod \"cilium-blv6m\" (UID: \"ebf72b74-0015-49fd-bdca-c86c8211d5c4\") " pod="kube-system/cilium-blv6m" Jul 14 21:26:11.762716 kubelet[2561]: I0714 21:26:11.762600 2561 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/ebf72b74-0015-49fd-bdca-c86c8211d5c4-clustermesh-secrets\") pod \"cilium-blv6m\" (UID: \"ebf72b74-0015-49fd-bdca-c86c8211d5c4\") " pod="kube-system/cilium-blv6m" Jul 14 21:26:11.762716 kubelet[2561]: I0714 21:26:11.762616 2561 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/ebf72b74-0015-49fd-bdca-c86c8211d5c4-host-proc-sys-net\") pod \"cilium-blv6m\" (UID: \"ebf72b74-0015-49fd-bdca-c86c8211d5c4\") " pod="kube-system/cilium-blv6m" Jul 14 21:26:11.762857 kubelet[2561]: I0714 21:26:11.762637 2561 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/ebf72b74-0015-49fd-bdca-c86c8211d5c4-cilium-run\") pod \"cilium-blv6m\" (UID: \"ebf72b74-0015-49fd-bdca-c86c8211d5c4\") " pod="kube-system/cilium-blv6m" Jul 14 21:26:11.762857 kubelet[2561]: I0714 21:26:11.762651 2561 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ebf72b74-0015-49fd-bdca-c86c8211d5c4-lib-modules\") pod 
\"cilium-blv6m\" (UID: \"ebf72b74-0015-49fd-bdca-c86c8211d5c4\") " pod="kube-system/cilium-blv6m" Jul 14 21:26:11.762857 kubelet[2561]: I0714 21:26:11.762669 2561 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ebf72b74-0015-49fd-bdca-c86c8211d5c4-cilium-config-path\") pod \"cilium-blv6m\" (UID: \"ebf72b74-0015-49fd-bdca-c86c8211d5c4\") " pod="kube-system/cilium-blv6m" Jul 14 21:26:11.762857 kubelet[2561]: I0714 21:26:11.762685 2561 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/ebf72b74-0015-49fd-bdca-c86c8211d5c4-hostproc\") pod \"cilium-blv6m\" (UID: \"ebf72b74-0015-49fd-bdca-c86c8211d5c4\") " pod="kube-system/cilium-blv6m" Jul 14 21:26:11.762857 kubelet[2561]: I0714 21:26:11.762702 2561 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/ebf72b74-0015-49fd-bdca-c86c8211d5c4-host-proc-sys-kernel\") pod \"cilium-blv6m\" (UID: \"ebf72b74-0015-49fd-bdca-c86c8211d5c4\") " pod="kube-system/cilium-blv6m" Jul 14 21:26:11.762857 kubelet[2561]: I0714 21:26:11.762749 2561 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9d2d8\" (UniqueName: \"kubernetes.io/projected/ebf72b74-0015-49fd-bdca-c86c8211d5c4-kube-api-access-9d2d8\") pod \"cilium-blv6m\" (UID: \"ebf72b74-0015-49fd-bdca-c86c8211d5c4\") " pod="kube-system/cilium-blv6m" Jul 14 21:26:11.769441 systemd[1]: Started sshd@24-10.0.0.115:22-10.0.0.1:33188.service - OpenSSH per-connection server daemon (10.0.0.1:33188). Jul 14 21:26:11.770602 systemd-logind[1436]: Removed session 24. 
Jul 14 21:26:11.806669 sshd[4397]: Accepted publickey for core from 10.0.0.1 port 33188 ssh2: RSA SHA256:qod1KuvK/X/ss8Lj9PYiIkvlhHqhxXhx4UqduPTMsgY Jul 14 21:26:11.807867 sshd-session[4397]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 14 21:26:11.811649 systemd-logind[1436]: New session 25 of user core. Jul 14 21:26:11.829312 systemd[1]: Started session-25.scope - Session 25 of User core. Jul 14 21:26:11.927803 containerd[1460]: time="2025-07-14T21:26:11.927752751Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-blv6m,Uid:ebf72b74-0015-49fd-bdca-c86c8211d5c4,Namespace:kube-system,Attempt:0,}" Jul 14 21:26:11.953956 containerd[1460]: time="2025-07-14T21:26:11.953874477Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 14 21:26:11.953956 containerd[1460]: time="2025-07-14T21:26:11.953946078Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 14 21:26:11.954332 containerd[1460]: time="2025-07-14T21:26:11.953963638Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 14 21:26:11.954332 containerd[1460]: time="2025-07-14T21:26:11.954067079Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 14 21:26:11.979307 systemd[1]: Started cri-containerd-ff752971da8aa5d3ccf96fb7591f40031720603fa8a2766ffe5e47652e326491.scope - libcontainer container ff752971da8aa5d3ccf96fb7591f40031720603fa8a2766ffe5e47652e326491. 
Jul 14 21:26:11.999548 containerd[1460]: time="2025-07-14T21:26:11.999506786Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-blv6m,Uid:ebf72b74-0015-49fd-bdca-c86c8211d5c4,Namespace:kube-system,Attempt:0,} returns sandbox id \"ff752971da8aa5d3ccf96fb7591f40031720603fa8a2766ffe5e47652e326491\"" Jul 14 21:26:12.006228 containerd[1460]: time="2025-07-14T21:26:12.006193848Z" level=info msg="CreateContainer within sandbox \"ff752971da8aa5d3ccf96fb7591f40031720603fa8a2766ffe5e47652e326491\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jul 14 21:26:12.015725 containerd[1460]: time="2025-07-14T21:26:12.015587333Z" level=info msg="CreateContainer within sandbox \"ff752971da8aa5d3ccf96fb7591f40031720603fa8a2766ffe5e47652e326491\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"a259ba5926c76c5f48f648fc845b30f62777193723e4d2d8bf18f83f79d040da\"" Jul 14 21:26:12.016252 containerd[1460]: time="2025-07-14T21:26:12.016219339Z" level=info msg="StartContainer for \"a259ba5926c76c5f48f648fc845b30f62777193723e4d2d8bf18f83f79d040da\"" Jul 14 21:26:12.042332 systemd[1]: Started cri-containerd-a259ba5926c76c5f48f648fc845b30f62777193723e4d2d8bf18f83f79d040da.scope - libcontainer container a259ba5926c76c5f48f648fc845b30f62777193723e4d2d8bf18f83f79d040da. Jul 14 21:26:12.088992 systemd[1]: cri-containerd-a259ba5926c76c5f48f648fc845b30f62777193723e4d2d8bf18f83f79d040da.scope: Deactivated successfully. 
Jul 14 21:26:12.091167 containerd[1460]: time="2025-07-14T21:26:12.089531606Z" level=info msg="StartContainer for \"a259ba5926c76c5f48f648fc845b30f62777193723e4d2d8bf18f83f79d040da\" returns successfully"
Jul 14 21:26:12.138581 containerd[1460]: time="2025-07-14T21:26:12.138389931Z" level=info msg="shim disconnected" id=a259ba5926c76c5f48f648fc845b30f62777193723e4d2d8bf18f83f79d040da namespace=k8s.io
Jul 14 21:26:12.138581 containerd[1460]: time="2025-07-14T21:26:12.138445372Z" level=warning msg="cleaning up after shim disconnected" id=a259ba5926c76c5f48f648fc845b30f62777193723e4d2d8bf18f83f79d040da namespace=k8s.io
Jul 14 21:26:12.138581 containerd[1460]: time="2025-07-14T21:26:12.138453772Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 14 21:26:12.170843 kubelet[2561]: I0714 21:26:12.170785 2561 setters.go:600] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-07-14T21:26:12Z","lastTransitionTime":"2025-07-14T21:26:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Jul 14 21:26:13.133709 containerd[1460]: time="2025-07-14T21:26:13.133667711Z" level=info msg="CreateContainer within sandbox \"ff752971da8aa5d3ccf96fb7591f40031720603fa8a2766ffe5e47652e326491\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Jul 14 21:26:13.143369 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2554669591.mount: Deactivated successfully.
Jul 14 21:26:13.144420 containerd[1460]: time="2025-07-14T21:26:13.144381726Z" level=info msg="CreateContainer within sandbox \"ff752971da8aa5d3ccf96fb7591f40031720603fa8a2766ffe5e47652e326491\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"a39587417b6ece92eef521f2021c31aecabd69ea7d82fe2334944f3e86782331\""
Jul 14 21:26:13.145040 containerd[1460]: time="2025-07-14T21:26:13.145011051Z" level=info msg="StartContainer for \"a39587417b6ece92eef521f2021c31aecabd69ea7d82fe2334944f3e86782331\""
Jul 14 21:26:13.178255 systemd[1]: Started cri-containerd-a39587417b6ece92eef521f2021c31aecabd69ea7d82fe2334944f3e86782331.scope - libcontainer container a39587417b6ece92eef521f2021c31aecabd69ea7d82fe2334944f3e86782331.
Jul 14 21:26:13.198948 containerd[1460]: time="2025-07-14T21:26:13.198843965Z" level=info msg="StartContainer for \"a39587417b6ece92eef521f2021c31aecabd69ea7d82fe2334944f3e86782331\" returns successfully"
Jul 14 21:26:13.205008 systemd[1]: cri-containerd-a39587417b6ece92eef521f2021c31aecabd69ea7d82fe2334944f3e86782331.scope: Deactivated successfully.
Jul 14 21:26:13.224099 containerd[1460]: time="2025-07-14T21:26:13.224033307Z" level=info msg="shim disconnected" id=a39587417b6ece92eef521f2021c31aecabd69ea7d82fe2334944f3e86782331 namespace=k8s.io
Jul 14 21:26:13.224099 containerd[1460]: time="2025-07-14T21:26:13.224088787Z" level=warning msg="cleaning up after shim disconnected" id=a39587417b6ece92eef521f2021c31aecabd69ea7d82fe2334944f3e86782331 namespace=k8s.io
Jul 14 21:26:13.224297 containerd[1460]: time="2025-07-14T21:26:13.224109228Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 14 21:26:13.868268 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a39587417b6ece92eef521f2021c31aecabd69ea7d82fe2334944f3e86782331-rootfs.mount: Deactivated successfully.
Jul 14 21:26:14.148111 containerd[1460]: time="2025-07-14T21:26:14.147989839Z" level=info msg="CreateContainer within sandbox \"ff752971da8aa5d3ccf96fb7591f40031720603fa8a2766ffe5e47652e326491\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Jul 14 21:26:14.177457 containerd[1460]: time="2025-07-14T21:26:14.177318729Z" level=info msg="CreateContainer within sandbox \"ff752971da8aa5d3ccf96fb7591f40031720603fa8a2766ffe5e47652e326491\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"3be1486bc4c2089ac34acf5208cfb1ab9c2b219e8d6d2021ac4349db938dd2e1\""
Jul 14 21:26:14.177818 containerd[1460]: time="2025-07-14T21:26:14.177781893Z" level=info msg="StartContainer for \"3be1486bc4c2089ac34acf5208cfb1ab9c2b219e8d6d2021ac4349db938dd2e1\""
Jul 14 21:26:14.208374 systemd[1]: Started cri-containerd-3be1486bc4c2089ac34acf5208cfb1ab9c2b219e8d6d2021ac4349db938dd2e1.scope - libcontainer container 3be1486bc4c2089ac34acf5208cfb1ab9c2b219e8d6d2021ac4349db938dd2e1.
Jul 14 21:26:14.235784 containerd[1460]: time="2025-07-14T21:26:14.235733466Z" level=info msg="StartContainer for \"3be1486bc4c2089ac34acf5208cfb1ab9c2b219e8d6d2021ac4349db938dd2e1\" returns successfully"
Jul 14 21:26:14.236748 systemd[1]: cri-containerd-3be1486bc4c2089ac34acf5208cfb1ab9c2b219e8d6d2021ac4349db938dd2e1.scope: Deactivated successfully.
Jul 14 21:26:14.263460 containerd[1460]: time="2025-07-14T21:26:14.263402262Z" level=info msg="shim disconnected" id=3be1486bc4c2089ac34acf5208cfb1ab9c2b219e8d6d2021ac4349db938dd2e1 namespace=k8s.io
Jul 14 21:26:14.263460 containerd[1460]: time="2025-07-14T21:26:14.263454382Z" level=warning msg="cleaning up after shim disconnected" id=3be1486bc4c2089ac34acf5208cfb1ab9c2b219e8d6d2021ac4349db938dd2e1 namespace=k8s.io
Jul 14 21:26:14.263460 containerd[1460]: time="2025-07-14T21:26:14.263463582Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 14 21:26:14.868368 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3be1486bc4c2089ac34acf5208cfb1ab9c2b219e8d6d2021ac4349db938dd2e1-rootfs.mount: Deactivated successfully.
Jul 14 21:26:14.990968 kubelet[2561]: E0714 21:26:14.990933 2561 kubelet.go:2902] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Jul 14 21:26:15.140513 containerd[1460]: time="2025-07-14T21:26:15.140084807Z" level=info msg="CreateContainer within sandbox \"ff752971da8aa5d3ccf96fb7591f40031720603fa8a2766ffe5e47652e326491\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Jul 14 21:26:15.155798 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2838607908.mount: Deactivated successfully.
Jul 14 21:26:15.164543 containerd[1460]: time="2025-07-14T21:26:15.164500648Z" level=info msg="CreateContainer within sandbox \"ff752971da8aa5d3ccf96fb7591f40031720603fa8a2766ffe5e47652e326491\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"10572cc7aa3c1b3b29b41fea4d01e95d86a930cad5b065963ee089f2d1067195\""
Jul 14 21:26:15.166070 containerd[1460]: time="2025-07-14T21:26:15.165266574Z" level=info msg="StartContainer for \"10572cc7aa3c1b3b29b41fea4d01e95d86a930cad5b065963ee089f2d1067195\""
Jul 14 21:26:15.194270 systemd[1]: Started cri-containerd-10572cc7aa3c1b3b29b41fea4d01e95d86a930cad5b065963ee089f2d1067195.scope - libcontainer container 10572cc7aa3c1b3b29b41fea4d01e95d86a930cad5b065963ee089f2d1067195.
Jul 14 21:26:15.214148 systemd[1]: cri-containerd-10572cc7aa3c1b3b29b41fea4d01e95d86a930cad5b065963ee089f2d1067195.scope: Deactivated successfully.
Jul 14 21:26:15.216995 containerd[1460]: time="2025-07-14T21:26:15.216902240Z" level=info msg="StartContainer for \"10572cc7aa3c1b3b29b41fea4d01e95d86a930cad5b065963ee089f2d1067195\" returns successfully"
Jul 14 21:26:15.236961 containerd[1460]: time="2025-07-14T21:26:15.236850924Z" level=info msg="shim disconnected" id=10572cc7aa3c1b3b29b41fea4d01e95d86a930cad5b065963ee089f2d1067195 namespace=k8s.io
Jul 14 21:26:15.236961 containerd[1460]: time="2025-07-14T21:26:15.236959285Z" level=warning msg="cleaning up after shim disconnected" id=10572cc7aa3c1b3b29b41fea4d01e95d86a930cad5b065963ee089f2d1067195 namespace=k8s.io
Jul 14 21:26:15.237223 containerd[1460]: time="2025-07-14T21:26:15.236978805Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 14 21:26:15.868513 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-10572cc7aa3c1b3b29b41fea4d01e95d86a930cad5b065963ee089f2d1067195-rootfs.mount: Deactivated successfully.
Jul 14 21:26:16.165040 containerd[1460]: time="2025-07-14T21:26:16.164927201Z" level=info msg="CreateContainer within sandbox \"ff752971da8aa5d3ccf96fb7591f40031720603fa8a2766ffe5e47652e326491\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Jul 14 21:26:16.191884 containerd[1460]: time="2025-07-14T21:26:16.191827815Z" level=info msg="CreateContainer within sandbox \"ff752971da8aa5d3ccf96fb7591f40031720603fa8a2766ffe5e47652e326491\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"7e60ba8ca63f446f271ba96b8840c57dd566300bef3c7dd93f1105b41ef13803\""
Jul 14 21:26:16.192419 containerd[1460]: time="2025-07-14T21:26:16.192395300Z" level=info msg="StartContainer for \"7e60ba8ca63f446f271ba96b8840c57dd566300bef3c7dd93f1105b41ef13803\""
Jul 14 21:26:16.217356 systemd[1]: Started cri-containerd-7e60ba8ca63f446f271ba96b8840c57dd566300bef3c7dd93f1105b41ef13803.scope - libcontainer container 7e60ba8ca63f446f271ba96b8840c57dd566300bef3c7dd93f1105b41ef13803.
Jul 14 21:26:16.243465 containerd[1460]: time="2025-07-14T21:26:16.243415626Z" level=info msg="StartContainer for \"7e60ba8ca63f446f271ba96b8840c57dd566300bef3c7dd93f1105b41ef13803\" returns successfully"
Jul 14 21:26:16.533113 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce))
Jul 14 21:26:17.175290 kubelet[2561]: I0714 21:26:17.175232 2561 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-blv6m" podStartSLOduration=6.175211199 podStartE2EDuration="6.175211199s" podCreationTimestamp="2025-07-14 21:26:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-14 21:26:17.174463194 +0000 UTC m=+77.310041165" watchObservedRunningTime="2025-07-14 21:26:17.175211199 +0000 UTC m=+77.310789170"
Jul 14 21:26:19.371032 systemd-networkd[1395]: lxc_health: Link UP
Jul 14 21:26:19.378893 systemd-networkd[1395]: lxc_health: Gained carrier
Jul 14 21:26:21.354265 systemd-networkd[1395]: lxc_health: Gained IPv6LL
Jul 14 21:26:24.704924 sshd[4400]: Connection closed by 10.0.0.1 port 33188
Jul 14 21:26:24.705453 sshd-session[4397]: pam_unix(sshd:session): session closed for user core
Jul 14 21:26:24.708308 systemd[1]: sshd@24-10.0.0.115:22-10.0.0.1:33188.service: Deactivated successfully.
Jul 14 21:26:24.710660 systemd[1]: session-25.scope: Deactivated successfully.
Jul 14 21:26:24.711813 systemd-logind[1436]: Session 25 logged out. Waiting for processes to exit.
Jul 14 21:26:24.713438 systemd-logind[1436]: Removed session 25.