Jul 6 23:28:12.822382 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1] Jul 6 23:28:12.822403 kernel: Linux version 6.12.35-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT Sun Jul 6 21:57:11 -00 2025 Jul 6 23:28:12.822412 kernel: KASLR enabled Jul 6 23:28:12.822418 kernel: efi: EFI v2.7 by EDK II Jul 6 23:28:12.822431 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdb228018 ACPI 2.0=0xdb9b8018 RNG=0xdb9b8a18 MEMRESERVE=0xdb21fd18 Jul 6 23:28:12.822438 kernel: random: crng init done Jul 6 23:28:12.822445 kernel: Kernel is locked down from EFI Secure Boot; see man kernel_lockdown.7 Jul 6 23:28:12.822451 kernel: secureboot: Secure boot enabled Jul 6 23:28:12.822456 kernel: ACPI: Early table checksum verification disabled Jul 6 23:28:12.822463 kernel: ACPI: RSDP 0x00000000DB9B8018 000024 (v02 BOCHS ) Jul 6 23:28:12.822469 kernel: ACPI: XSDT 0x00000000DB9B8F18 000064 (v01 BOCHS BXPC 00000001 01000013) Jul 6 23:28:12.822475 kernel: ACPI: FACP 0x00000000DB9B8B18 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001) Jul 6 23:28:12.822481 kernel: ACPI: DSDT 0x00000000DB904018 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001) Jul 6 23:28:12.822487 kernel: ACPI: APIC 0x00000000DB9B8C98 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001) Jul 6 23:28:12.822494 kernel: ACPI: PPTT 0x00000000DB9B8098 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001) Jul 6 23:28:12.822501 kernel: ACPI: GTDT 0x00000000DB9B8818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001) Jul 6 23:28:12.822507 kernel: ACPI: MCFG 0x00000000DB9B8A98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Jul 6 23:28:12.822513 kernel: ACPI: SPCR 0x00000000DB9B8918 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001) Jul 6 23:28:12.822519 kernel: ACPI: DBG2 0x00000000DB9B8998 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001) Jul 6 23:28:12.822525 kernel: ACPI: IORT 0x00000000DB9B8198 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001) Jul 6 23:28:12.822531 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600 Jul 6 23:28:12.822537 kernel: ACPI: Use ACPI SPCR as default console: Yes Jul 6 23:28:12.822544 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff] Jul 6 23:28:12.822549 kernel: NODE_DATA(0) allocated [mem 0xdc737a00-0xdc73efff] Jul 6 23:28:12.822555 kernel: Zone ranges: Jul 6 23:28:12.822562 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff] Jul 6 23:28:12.822568 kernel: DMA32 empty Jul 6 23:28:12.822574 kernel: Normal empty Jul 6 23:28:12.822580 kernel: Device empty Jul 6 23:28:12.822586 kernel: Movable zone start for each node Jul 6 23:28:12.822592 kernel: Early memory node ranges Jul 6 23:28:12.822598 kernel: node 0: [mem 0x0000000040000000-0x00000000dbb4ffff] Jul 6 23:28:12.822604 kernel: node 0: [mem 0x00000000dbb50000-0x00000000dbe7ffff] Jul 6 23:28:12.822610 kernel: node 0: [mem 0x00000000dbe80000-0x00000000dbe9ffff] Jul 6 23:28:12.822616 kernel: node 0: [mem 0x00000000dbea0000-0x00000000dbedffff] Jul 6 23:28:12.822622 kernel: node 0: [mem 0x00000000dbee0000-0x00000000dbf1ffff] Jul 6 23:28:12.822627 kernel: node 0: [mem 0x00000000dbf20000-0x00000000dbf6ffff] Jul 6 23:28:12.822635 kernel: node 0: [mem 0x00000000dbf70000-0x00000000dcbfffff] Jul 6 23:28:12.822641 kernel: node 0: [mem 0x00000000dcc00000-0x00000000dcfdffff] Jul 6 23:28:12.822647 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff] Jul 6 23:28:12.822655 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff] Jul 6 
23:28:12.822662 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges Jul 6 23:28:12.822668 kernel: cma: Reserved 16 MiB at 0x00000000d7a00000 on node -1 Jul 6 23:28:12.822675 kernel: psci: probing for conduit method from ACPI. Jul 6 23:28:12.822682 kernel: psci: PSCIv1.1 detected in firmware. Jul 6 23:28:12.822688 kernel: psci: Using standard PSCI v0.2 function IDs Jul 6 23:28:12.822695 kernel: psci: Trusted OS migration not required Jul 6 23:28:12.822701 kernel: psci: SMC Calling Convention v1.1 Jul 6 23:28:12.822708 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003) Jul 6 23:28:12.822714 kernel: percpu: Embedded 33 pages/cpu s98200 r8192 d28776 u135168 Jul 6 23:28:12.822720 kernel: pcpu-alloc: s98200 r8192 d28776 u135168 alloc=33*4096 Jul 6 23:28:12.822727 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3 Jul 6 23:28:12.822733 kernel: Detected PIPT I-cache on CPU0 Jul 6 23:28:12.822741 kernel: CPU features: detected: GIC system register CPU interface Jul 6 23:28:12.822747 kernel: CPU features: detected: Spectre-v4 Jul 6 23:28:12.822753 kernel: CPU features: detected: Spectre-BHB Jul 6 23:28:12.822760 kernel: CPU features: kernel page table isolation forced ON by KASLR Jul 6 23:28:12.822766 kernel: CPU features: detected: Kernel page table isolation (KPTI) Jul 6 23:28:12.822773 kernel: CPU features: detected: ARM erratum 1418040 Jul 6 23:28:12.822779 kernel: CPU features: detected: SSBS not fully self-synchronizing Jul 6 23:28:12.822785 kernel: alternatives: applying boot alternatives Jul 6 23:28:12.822793 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=d1bbaf8ae8f23de11dc703e14022523825f85f007c0c35003d7559228cbdda22 Jul 6 23:28:12.822799 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Jul 6 23:28:12.822806 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Jul 6 23:28:12.822813 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jul 6 23:28:12.822820 kernel: Fallback order for Node 0: 0 Jul 6 23:28:12.822826 kernel: Built 1 zonelists, mobility grouping on. Total pages: 643072 Jul 6 23:28:12.822833 kernel: Policy zone: DMA Jul 6 23:28:12.822839 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jul 6 23:28:12.822845 kernel: software IO TLB: SWIOTLB bounce buffer size adjusted to 2MB Jul 6 23:28:12.822852 kernel: software IO TLB: area num 4. Jul 6 23:28:12.822858 kernel: software IO TLB: SWIOTLB bounce buffer size roundup to 4MB Jul 6 23:28:12.822864 kernel: software IO TLB: mapped [mem 0x00000000db504000-0x00000000db904000] (4MB) Jul 6 23:28:12.822871 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Jul 6 23:28:12.822877 kernel: rcu: Preemptible hierarchical RCU implementation. Jul 6 23:28:12.822885 kernel: rcu: RCU event tracing is enabled. Jul 6 23:28:12.822893 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Jul 6 23:28:12.822899 kernel: Trampoline variant of Tasks RCU enabled. Jul 6 23:28:12.822906 kernel: Tracing variant of Tasks RCU enabled. Jul 6 23:28:12.822912 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
Jul 6 23:28:12.822919 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Jul 6 23:28:12.822925 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Jul 6 23:28:12.822931 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Jul 6 23:28:12.822938 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 Jul 6 23:28:12.822948 kernel: GICv3: 256 SPIs implemented Jul 6 23:28:12.822954 kernel: GICv3: 0 Extended SPIs implemented Jul 6 23:28:12.822961 kernel: Root IRQ handler: gic_handle_irq Jul 6 23:28:12.822969 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI Jul 6 23:28:12.822975 kernel: GICv3: GICD_CTRL.DS=1, SCR_EL3.FIQ=0 Jul 6 23:28:12.822982 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000 Jul 6 23:28:12.822988 kernel: ITS [mem 0x08080000-0x0809ffff] Jul 6 23:28:12.822995 kernel: ITS@0x0000000008080000: allocated 8192 Devices @40110000 (indirect, esz 8, psz 64K, shr 1) Jul 6 23:28:12.823001 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @40120000 (flat, esz 8, psz 64K, shr 1) Jul 6 23:28:12.823008 kernel: GICv3: using LPI property table @0x0000000040130000 Jul 6 23:28:12.823014 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040140000 Jul 6 23:28:12.823020 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Jul 6 23:28:12.823027 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jul 6 23:28:12.823033 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt). Jul 6 23:28:12.823040 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns Jul 6 23:28:12.823051 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns Jul 6 23:28:12.823057 kernel: arm-pv: using stolen time PV Jul 6 23:28:12.823064 kernel: Console: colour dummy device 80x25 Jul 6 23:28:12.823070 kernel: ACPI: Core revision 20240827 Jul 6 23:28:12.823077 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000) Jul 6 23:28:12.823084 kernel: pid_max: default: 32768 minimum: 301 Jul 6 23:28:12.823090 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima Jul 6 23:28:12.823097 kernel: landlock: Up and running. Jul 6 23:28:12.823103 kernel: SELinux: Initializing. Jul 6 23:28:12.823112 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jul 6 23:28:12.823119 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jul 6 23:28:12.823190 kernel: rcu: Hierarchical SRCU implementation. Jul 6 23:28:12.823197 kernel: rcu: Max phase no-delay instances is 400. Jul 6 23:28:12.823204 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level Jul 6 23:28:12.823211 kernel: Remapping and enabling EFI services. Jul 6 23:28:12.823217 kernel: smp: Bringing up secondary CPUs ... 
Jul 6 23:28:12.823224 kernel: Detected PIPT I-cache on CPU1 Jul 6 23:28:12.823230 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000 Jul 6 23:28:12.823237 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040150000 Jul 6 23:28:12.823256 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jul 6 23:28:12.823263 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1] Jul 6 23:28:12.823271 kernel: Detected PIPT I-cache on CPU2 Jul 6 23:28:12.823278 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000 Jul 6 23:28:12.823344 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040160000 Jul 6 23:28:12.823352 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jul 6 23:28:12.823359 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1] Jul 6 23:28:12.823366 kernel: Detected PIPT I-cache on CPU3 Jul 6 23:28:12.823377 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000 Jul 6 23:28:12.823384 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040170000 Jul 6 23:28:12.823392 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jul 6 23:28:12.823398 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1] Jul 6 23:28:12.823544 kernel: smp: Brought up 1 node, 4 CPUs Jul 6 23:28:12.823553 kernel: SMP: Total of 4 processors activated. Jul 6 23:28:12.823560 kernel: CPU: All CPU(s) started at EL1 Jul 6 23:28:12.823568 kernel: CPU features: detected: 32-bit EL0 Support Jul 6 23:28:12.823575 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence Jul 6 23:28:12.823588 kernel: CPU features: detected: Common not Private translations Jul 6 23:28:12.823595 kernel: CPU features: detected: CRC32 instructions Jul 6 23:28:12.823602 kernel: CPU features: detected: Enhanced Virtualization Traps Jul 6 23:28:12.823609 kernel: CPU features: detected: RCpc load-acquire (LDAPR) Jul 6 23:28:12.823617 kernel: CPU features: detected: LSE atomic instructions Jul 6 23:28:12.823633 kernel: CPU features: detected: Privileged Access Never Jul 6 23:28:12.823641 kernel: CPU features: detected: RAS Extension Support Jul 6 23:28:12.823649 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS) Jul 6 23:28:12.823657 kernel: alternatives: applying system-wide alternatives Jul 6 23:28:12.823665 kernel: CPU features: detected: Hardware dirty bit management on CPU0-3 Jul 6 23:28:12.823672 kernel: Memory: 2421860K/2572288K available (11136K kernel code, 2436K rwdata, 9076K rodata, 39488K init, 1038K bss, 128092K reserved, 16384K cma-reserved) Jul 6 23:28:12.823679 kernel: devtmpfs: initialized Jul 6 23:28:12.823686 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jul 6 23:28:12.823694 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Jul 6 23:28:12.823701 kernel: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL Jul 6 23:28:12.823708 kernel: 0 pages in range for non-PLT usage Jul 6 23:28:12.823715 kernel: 508432 pages in range for PLT usage Jul 6 23:28:12.823722 kernel: pinctrl core: initialized pinctrl subsystem Jul 6 23:28:12.823730 kernel: SMBIOS 3.0.0 present. 
Jul 6 23:28:12.823737 kernel: DMI: QEMU KVM Virtual Machine, BIOS unknown 02/02/2022 Jul 6 23:28:12.823743 kernel: DMI: Memory slots populated: 1/1 Jul 6 23:28:12.823750 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jul 6 23:28:12.823758 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations Jul 6 23:28:12.823765 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations Jul 6 23:28:12.823772 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations Jul 6 23:28:12.823779 kernel: audit: initializing netlink subsys (disabled) Jul 6 23:28:12.823786 kernel: audit: type=2000 audit(0.025:1): state=initialized audit_enabled=0 res=1 Jul 6 23:28:12.823795 kernel: thermal_sys: Registered thermal governor 'step_wise' Jul 6 23:28:12.823802 kernel: cpuidle: using governor menu Jul 6 23:28:12.823809 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. Jul 6 23:28:12.823816 kernel: ASID allocator initialised with 32768 entries Jul 6 23:28:12.823823 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jul 6 23:28:12.823829 kernel: Serial: AMBA PL011 UART driver Jul 6 23:28:12.823837 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jul 6 23:28:12.823843 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page Jul 6 23:28:12.823851 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages Jul 6 23:28:12.823859 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page Jul 6 23:28:12.823865 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jul 6 23:28:12.823872 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page Jul 6 23:28:12.823880 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages Jul 6 23:28:12.823887 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page Jul 6 23:28:12.823893 kernel: ACPI: Added _OSI(Module Device) Jul 6 23:28:12.823900 kernel: ACPI: Added _OSI(Processor Device) Jul 6 23:28:12.823907 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jul 6 23:28:12.823914 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Jul 6 23:28:12.823922 kernel: ACPI: Interpreter enabled Jul 6 23:28:12.823929 kernel: ACPI: Using GIC for interrupt routing Jul 6 23:28:12.823936 kernel: ACPI: MCFG table detected, 1 entries Jul 6 23:28:12.823943 kernel: ACPI: CPU0 has been hot-added Jul 6 23:28:12.823949 kernel: ACPI: CPU1 has been hot-added Jul 6 23:28:12.823956 kernel: ACPI: CPU2 has been hot-added Jul 6 23:28:12.823963 kernel: ACPI: CPU3 has been hot-added Jul 6 23:28:12.823970 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA Jul 6 23:28:12.823977 kernel: printk: legacy console [ttyAMA0] enabled Jul 6 23:28:12.823985 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Jul 6 23:28:12.824157 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Jul 6 23:28:12.824235 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR] Jul 6 23:28:12.824294 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability] Jul 6 23:28:12.824351 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00 Jul 6 23:28:12.824407 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff] Jul 6 23:28:12.824416 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window] Jul 6 23:28:12.824460 kernel: PCI host bridge to bus 0000:00 Jul 6 
23:28:12.824540 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window] Jul 6 23:28:12.824598 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window] Jul 6 23:28:12.824652 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window] Jul 6 23:28:12.824714 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Jul 6 23:28:12.824811 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000 conventional PCI endpoint Jul 6 23:28:12.824889 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint Jul 6 23:28:12.824964 kernel: pci 0000:00:01.0: BAR 0 [io 0x0000-0x001f] Jul 6 23:28:12.825023 kernel: pci 0000:00:01.0: BAR 1 [mem 0x10000000-0x10000fff] Jul 6 23:28:12.825114 kernel: pci 0000:00:01.0: BAR 4 [mem 0x8000000000-0x8000003fff 64bit pref] Jul 6 23:28:12.825203 kernel: pci 0000:00:01.0: BAR 4 [mem 0x8000000000-0x8000003fff 64bit pref]: assigned Jul 6 23:28:12.825267 kernel: pci 0000:00:01.0: BAR 1 [mem 0x10000000-0x10000fff]: assigned Jul 6 23:28:12.825328 kernel: pci 0000:00:01.0: BAR 0 [io 0x1000-0x101f]: assigned Jul 6 23:28:12.825396 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window] Jul 6 23:28:12.825463 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window] Jul 6 23:28:12.825520 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window] Jul 6 23:28:12.825529 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35 Jul 6 23:28:12.825537 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36 Jul 6 23:28:12.825544 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37 Jul 6 23:28:12.825551 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38 Jul 6 23:28:12.825558 kernel: iommu: Default domain type: Translated Jul 6 23:28:12.825566 kernel: iommu: DMA domain TLB invalidation policy: strict mode Jul 6 23:28:12.825575 kernel: efivars: Registered efivars operations Jul 6 23:28:12.825583 kernel: vgaarb: loaded Jul 6 23:28:12.825590 kernel: clocksource: Switched to clocksource arch_sys_counter Jul 6 23:28:12.825597 kernel: VFS: Disk quotas dquot_6.6.0 Jul 6 23:28:12.825604 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jul 6 23:28:12.825611 kernel: pnp: PnP ACPI init Jul 6 23:28:12.825691 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved Jul 6 23:28:12.825702 kernel: pnp: PnP ACPI: found 1 devices Jul 6 23:28:12.825711 kernel: NET: Registered PF_INET protocol family Jul 6 23:28:12.825725 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Jul 6 23:28:12.825733 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Jul 6 23:28:12.825740 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jul 6 23:28:12.825747 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Jul 6 23:28:12.825755 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Jul 6 23:28:12.825762 kernel: TCP: Hash tables configured (established 32768 bind 32768) Jul 6 23:28:12.825769 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Jul 6 23:28:12.825777 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Jul 6 23:28:12.825785 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jul 6 23:28:12.825792 kernel: PCI: CLS 0 bytes, default 64 Jul 6 23:28:12.825799 kernel: kvm [1]: HYP mode not available Jul 6 23:28:12.825806 kernel: Initialise system 
trusted keyrings Jul 6 23:28:12.825814 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Jul 6 23:28:12.825821 kernel: Key type asymmetric registered Jul 6 23:28:12.825828 kernel: Asymmetric key parser 'x509' registered Jul 6 23:28:12.825835 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) Jul 6 23:28:12.825842 kernel: io scheduler mq-deadline registered Jul 6 23:28:12.825850 kernel: io scheduler kyber registered Jul 6 23:28:12.825858 kernel: io scheduler bfq registered Jul 6 23:28:12.825865 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0 Jul 6 23:28:12.825872 kernel: ACPI: button: Power Button [PWRB] Jul 6 23:28:12.825880 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 Jul 6 23:28:12.825944 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007) Jul 6 23:28:12.825954 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jul 6 23:28:12.825961 kernel: thunder_xcv, ver 1.0 Jul 6 23:28:12.825968 kernel: thunder_bgx, ver 1.0 Jul 6 23:28:12.825977 kernel: nicpf, ver 1.0 Jul 6 23:28:12.825984 kernel: nicvf, ver 1.0 Jul 6 23:28:12.826056 kernel: rtc-efi rtc-efi.0: registered as rtc0 Jul 6 23:28:12.826116 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-07-06T23:28:12 UTC (1751844492) Jul 6 23:28:12.826138 kernel: hid: raw HID events driver (C) Jiri Kosina Jul 6 23:28:12.826146 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 (0,8000003f) counters available Jul 6 23:28:12.826153 kernel: watchdog: NMI not fully supported Jul 6 23:28:12.826160 kernel: watchdog: Hard watchdog permanently disabled Jul 6 23:28:12.826169 kernel: NET: Registered PF_INET6 protocol family Jul 6 23:28:12.826177 kernel: Segment Routing with IPv6 Jul 6 23:28:12.826184 kernel: In-situ OAM (IOAM) with IPv6 Jul 6 23:28:12.826191 kernel: NET: Registered PF_PACKET protocol family Jul 6 23:28:12.826198 kernel: Key type dns_resolver registered Jul 6 23:28:12.826205 kernel: registered taskstats version 1 Jul 6 23:28:12.826212 kernel: Loading compiled-in X.509 certificates Jul 6 23:28:12.826219 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.35-flatcar: f8c1d02496b1c3f2ac4a0c4b5b2a55d3dc0ca718' Jul 6 23:28:12.826288 kernel: Demotion targets for Node 0: null Jul 6 23:28:12.826299 kernel: Key type .fscrypt registered Jul 6 23:28:12.826306 kernel: Key type fscrypt-provisioning registered Jul 6 23:28:12.826313 kernel: ima: No TPM chip found, activating TPM-bypass! Jul 6 23:28:12.826320 kernel: ima: Allocated hash algorithm: sha1 Jul 6 23:28:12.826327 kernel: ima: No architecture policies found Jul 6 23:28:12.826334 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Jul 6 23:28:12.826563 kernel: clk: Disabling unused clocks Jul 6 23:28:12.826575 kernel: PM: genpd: Disabling unused power domains Jul 6 23:28:12.826582 kernel: Warning: unable to open an initial console. Jul 6 23:28:12.826595 kernel: Freeing unused kernel memory: 39488K Jul 6 23:28:12.826603 kernel: Run /init as init process Jul 6 23:28:12.826610 kernel: with arguments: Jul 6 23:28:12.826620 kernel: /init Jul 6 23:28:12.826630 kernel: with environment: Jul 6 23:28:12.826637 kernel: HOME=/ Jul 6 23:28:12.826644 kernel: TERM=linux Jul 6 23:28:12.826651 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jul 6 23:28:12.826659 systemd[1]: Successfully made /usr/ read-only. 
Jul 6 23:28:12.826672 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jul 6 23:28:12.826680 systemd[1]: Detected virtualization kvm. Jul 6 23:28:12.826688 systemd[1]: Detected architecture arm64. Jul 6 23:28:12.826695 systemd[1]: Running in initrd. Jul 6 23:28:12.826703 systemd[1]: No hostname configured, using default hostname. Jul 6 23:28:12.826711 systemd[1]: Hostname set to . Jul 6 23:28:12.826718 systemd[1]: Initializing machine ID from VM UUID. Jul 6 23:28:12.826727 systemd[1]: Queued start job for default target initrd.target. Jul 6 23:28:12.826735 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jul 6 23:28:12.826743 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 6 23:28:12.826752 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jul 6 23:28:12.826760 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jul 6 23:28:12.826767 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jul 6 23:28:12.826776 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jul 6 23:28:12.826786 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jul 6 23:28:12.826794 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jul 6 23:28:12.826802 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jul 6 23:28:12.826809 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jul 6 23:28:12.826817 systemd[1]: Reached target paths.target - Path Units. Jul 6 23:28:12.826825 systemd[1]: Reached target slices.target - Slice Units. Jul 6 23:28:12.826832 systemd[1]: Reached target swap.target - Swaps. Jul 6 23:28:12.826840 systemd[1]: Reached target timers.target - Timer Units. Jul 6 23:28:12.826848 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jul 6 23:28:12.826856 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jul 6 23:28:12.826864 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jul 6 23:28:12.826872 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Jul 6 23:28:12.826880 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jul 6 23:28:12.826887 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jul 6 23:28:12.826895 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jul 6 23:28:12.826903 systemd[1]: Reached target sockets.target - Socket Units. Jul 6 23:28:12.826910 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jul 6 23:28:12.826920 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jul 6 23:28:12.826927 systemd[1]: Finished network-cleanup.service - Network Cleanup. 
Jul 6 23:28:12.826935 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Jul 6 23:28:12.826943 systemd[1]: Starting systemd-fsck-usr.service... Jul 6 23:28:12.826951 systemd[1]: Starting systemd-journald.service - Journal Service... Jul 6 23:28:12.826958 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jul 6 23:28:12.826966 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 6 23:28:12.826974 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jul 6 23:28:12.826983 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jul 6 23:28:12.826991 systemd[1]: Finished systemd-fsck-usr.service. Jul 6 23:28:12.826999 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jul 6 23:28:12.827037 systemd-journald[245]: Collecting audit messages is disabled. Jul 6 23:28:12.827070 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jul 6 23:28:12.827080 systemd-journald[245]: Journal started Jul 6 23:28:12.827101 systemd-journald[245]: Runtime Journal (/run/log/journal/b05096c1fb864eea8b0d9bb7322fd7e4) is 6M, max 48.5M, 42.4M free. Jul 6 23:28:12.812092 systemd-modules-load[246]: Inserted module 'overlay' Jul 6 23:28:12.829775 systemd[1]: Started systemd-journald.service - Journal Service. Jul 6 23:28:12.833168 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jul 6 23:28:12.835728 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jul 6 23:28:12.838727 kernel: Bridge firewalling registered Jul 6 23:28:12.837387 systemd-modules-load[246]: Inserted module 'br_netfilter' Jul 6 23:28:12.837977 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jul 6 23:28:12.842164 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jul 6 23:28:12.855035 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jul 6 23:28:12.857752 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jul 6 23:28:12.858308 systemd-tmpfiles[266]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Jul 6 23:28:12.860240 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jul 6 23:28:12.862408 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jul 6 23:28:12.868683 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jul 6 23:28:12.873537 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jul 6 23:28:12.874721 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 6 23:28:12.883271 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jul 6 23:28:12.885714 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... 
Jul 6 23:28:12.910037 dracut-cmdline[292]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=d1bbaf8ae8f23de11dc703e14022523825f85f007c0c35003d7559228cbdda22 Jul 6 23:28:12.915130 systemd-resolved[287]: Positive Trust Anchors: Jul 6 23:28:12.915146 systemd-resolved[287]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jul 6 23:28:12.915178 systemd-resolved[287]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jul 6 23:28:12.920779 systemd-resolved[287]: Defaulting to hostname 'linux'. Jul 6 23:28:12.921768 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jul 6 23:28:12.929566 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jul 6 23:28:12.985168 kernel: SCSI subsystem initialized Jul 6 23:28:12.991143 kernel: Loading iSCSI transport class v2.0-870. Jul 6 23:28:13.004735 kernel: iscsi: registered transport (tcp) Jul 6 23:28:13.018143 kernel: iscsi: registered transport (qla4xxx) Jul 6 23:28:13.018158 kernel: QLogic iSCSI HBA Driver Jul 6 23:28:13.036985 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jul 6 23:28:13.062682 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jul 6 23:28:13.065052 systemd[1]: Reached target network-pre.target - Preparation for Network. Jul 6 23:28:13.117190 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jul 6 23:28:13.120396 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jul 6 23:28:13.189161 kernel: raid6: neonx8 gen() 15773 MB/s Jul 6 23:28:13.206151 kernel: raid6: neonx4 gen() 15811 MB/s Jul 6 23:28:13.223167 kernel: raid6: neonx2 gen() 13009 MB/s Jul 6 23:28:13.240158 kernel: raid6: neonx1 gen() 7754 MB/s Jul 6 23:28:13.257152 kernel: raid6: int64x8 gen() 6635 MB/s Jul 6 23:28:13.274149 kernel: raid6: int64x4 gen() 7242 MB/s Jul 6 23:28:13.291147 kernel: raid6: int64x2 gen() 6070 MB/s Jul 6 23:28:13.308316 kernel: raid6: int64x1 gen() 4911 MB/s Jul 6 23:28:13.308333 kernel: raid6: using algorithm neonx4 gen() 15811 MB/s Jul 6 23:28:13.326223 kernel: raid6: .... xor() 12326 MB/s, rmw enabled Jul 6 23:28:13.326238 kernel: raid6: using neon recovery algorithm Jul 6 23:28:13.334167 kernel: xor: measuring software checksum speed Jul 6 23:28:13.335543 kernel: 8regs : 11687 MB/sec Jul 6 23:28:13.335558 kernel: 32regs : 21658 MB/sec Jul 6 23:28:13.336260 kernel: arm64_neon : 27908 MB/sec Jul 6 23:28:13.336273 kernel: xor: using function: arm64_neon (27908 MB/sec) Jul 6 23:28:13.393151 kernel: Btrfs loaded, zoned=no, fsverity=no Jul 6 23:28:13.401196 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. 
Jul 6 23:28:13.406514 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 6 23:28:13.433185 systemd-udevd[500]: Using default interface naming scheme 'v255'. Jul 6 23:28:13.438756 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 6 23:28:13.440736 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jul 6 23:28:13.464417 dracut-pre-trigger[506]: rd.md=0: removing MD RAID activation Jul 6 23:28:13.493009 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jul 6 23:28:13.495488 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jul 6 23:28:13.546161 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jul 6 23:28:13.549488 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jul 6 23:28:13.599149 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues Jul 6 23:28:13.602095 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Jul 6 23:28:13.606285 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jul 6 23:28:13.606320 kernel: GPT:9289727 != 19775487 Jul 6 23:28:13.606374 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 6 23:28:13.610294 kernel: GPT:Alternate GPT header not at the end of the disk. Jul 6 23:28:13.610316 kernel: GPT:9289727 != 19775487 Jul 6 23:28:13.610332 kernel: GPT: Use GNU Parted to correct GPT errors. Jul 6 23:28:13.610342 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jul 6 23:28:13.607318 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jul 6 23:28:13.613563 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jul 6 23:28:13.615432 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 6 23:28:13.636502 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jul 6 23:28:13.674470 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Jul 6 23:28:13.675992 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jul 6 23:28:13.684184 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Jul 6 23:28:13.691072 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Jul 6 23:28:13.692292 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Jul 6 23:28:13.701582 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jul 6 23:28:13.702871 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jul 6 23:28:13.704907 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 6 23:28:13.707084 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jul 6 23:28:13.710110 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jul 6 23:28:13.711948 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jul 6 23:28:13.731003 disk-uuid[591]: Primary Header is updated. Jul 6 23:28:13.731003 disk-uuid[591]: Secondary Entries is updated. Jul 6 23:28:13.731003 disk-uuid[591]: Secondary Header is updated. 
Jul 6 23:28:13.736140 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jul 6 23:28:13.739994 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jul 6 23:28:14.749173 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jul 6 23:28:14.750456 disk-uuid[594]: The operation has completed successfully. Jul 6 23:28:14.790434 systemd[1]: disk-uuid.service: Deactivated successfully. Jul 6 23:28:14.791710 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jul 6 23:28:14.821850 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jul 6 23:28:14.838826 sh[610]: Success Jul 6 23:28:14.852467 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jul 6 23:28:14.852513 kernel: device-mapper: uevent: version 1.0.3 Jul 6 23:28:14.854266 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Jul 6 23:28:14.861145 kernel: device-mapper: verity: sha256 using shash "sha256-ce" Jul 6 23:28:14.883018 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jul 6 23:28:14.886497 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jul 6 23:28:14.903149 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Jul 6 23:28:14.911275 kernel: BTRFS info: 'norecovery' is for compatibility only, recommended to use 'rescue=nologreplay' Jul 6 23:28:14.911313 kernel: BTRFS: device fsid 2cfafe0a-eb24-4e1d-b9c9-dec7de7e4c4d devid 1 transid 38 /dev/mapper/usr (253:0) scanned by mount (622) Jul 6 23:28:14.912941 kernel: BTRFS info (device dm-0): first mount of filesystem 2cfafe0a-eb24-4e1d-b9c9-dec7de7e4c4d Jul 6 23:28:14.912970 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Jul 6 23:28:14.914505 kernel: BTRFS info (device dm-0): using free-space-tree Jul 6 23:28:14.918159 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jul 6 23:28:14.919317 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Jul 6 23:28:14.920844 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jul 6 23:28:14.925322 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jul 6 23:28:14.927771 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jul 6 23:28:14.951910 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 (254:6) scanned by mount (655) Jul 6 23:28:14.951953 kernel: BTRFS info (device vda6): first mount of filesystem f2591801-6ba1-4aa7-8261-bdb292e2060d Jul 6 23:28:14.952994 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Jul 6 23:28:14.953023 kernel: BTRFS info (device vda6): using free-space-tree Jul 6 23:28:14.960149 kernel: BTRFS info (device vda6): last unmount of filesystem f2591801-6ba1-4aa7-8261-bdb292e2060d Jul 6 23:28:14.960837 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jul 6 23:28:14.965170 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jul 6 23:28:15.038103 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jul 6 23:28:15.044147 systemd[1]: Starting systemd-networkd.service - Network Configuration... 
Jul 6 23:28:15.093284 systemd-networkd[795]: lo: Link UP Jul 6 23:28:15.093294 systemd-networkd[795]: lo: Gained carrier Jul 6 23:28:15.094062 systemd-networkd[795]: Enumeration completed Jul 6 23:28:15.094259 systemd[1]: Started systemd-networkd.service - Network Configuration. Jul 6 23:28:15.094587 systemd-networkd[795]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 6 23:28:15.094591 systemd-networkd[795]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 6 23:28:15.095165 systemd-networkd[795]: eth0: Link UP Jul 6 23:28:15.095168 systemd-networkd[795]: eth0: Gained carrier Jul 6 23:28:15.095176 systemd-networkd[795]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 6 23:28:15.096288 systemd[1]: Reached target network.target - Network. Jul 6 23:28:15.117197 systemd-networkd[795]: eth0: DHCPv4 address 10.0.0.47/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jul 6 23:28:15.135467 ignition[702]: Ignition 2.21.0 Jul 6 23:28:15.135479 ignition[702]: Stage: fetch-offline Jul 6 23:28:15.135509 ignition[702]: no configs at "/usr/lib/ignition/base.d" Jul 6 23:28:15.135516 ignition[702]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 6 23:28:15.135697 ignition[702]: parsed url from cmdline: "" Jul 6 23:28:15.135700 ignition[702]: no config URL provided Jul 6 23:28:15.135704 ignition[702]: reading system config file "/usr/lib/ignition/user.ign" Jul 6 23:28:15.135710 ignition[702]: no config at "/usr/lib/ignition/user.ign" Jul 6 23:28:15.135729 ignition[702]: op(1): [started] loading QEMU firmware config module Jul 6 23:28:15.135733 ignition[702]: op(1): executing: "modprobe" "qemu_fw_cfg" Jul 6 23:28:15.141153 ignition[702]: op(1): [finished] loading QEMU firmware config module Jul 6 23:28:15.141172 ignition[702]: QEMU firmware config was not found. Ignoring... Jul 6 23:28:15.181290 ignition[702]: parsing config with SHA512: 40415700ddc539a5fdb178752ff3dba3e0c0e6f3e3faebfaf9c2f812bf27b7489c0bf14b28a10c594c2bab8cc94ea81b4a40c0f3cd9424d9a971ac81da8c8779 Jul 6 23:28:15.187214 unknown[702]: fetched base config from "system" Jul 6 23:28:15.187228 unknown[702]: fetched user config from "qemu" Jul 6 23:28:15.187675 ignition[702]: fetch-offline: fetch-offline passed Jul 6 23:28:15.187731 ignition[702]: Ignition finished successfully Jul 6 23:28:15.190075 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jul 6 23:28:15.192155 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Jul 6 23:28:15.192966 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jul 6 23:28:15.230227 ignition[809]: Ignition 2.21.0 Jul 6 23:28:15.230243 ignition[809]: Stage: kargs Jul 6 23:28:15.230389 ignition[809]: no configs at "/usr/lib/ignition/base.d" Jul 6 23:28:15.230398 ignition[809]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 6 23:28:15.232031 ignition[809]: kargs: kargs passed Jul 6 23:28:15.234920 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jul 6 23:28:15.232098 ignition[809]: Ignition finished successfully Jul 6 23:28:15.236993 systemd[1]: Starting ignition-disks.service - Ignition (disks)... 
Jul 6 23:28:15.264952 ignition[817]: Ignition 2.21.0 Jul 6 23:28:15.264969 ignition[817]: Stage: disks Jul 6 23:28:15.265144 ignition[817]: no configs at "/usr/lib/ignition/base.d" Jul 6 23:28:15.265153 ignition[817]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 6 23:28:15.268526 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jul 6 23:28:15.266673 ignition[817]: disks: disks passed Jul 6 23:28:15.270580 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jul 6 23:28:15.266737 ignition[817]: Ignition finished successfully Jul 6 23:28:15.272052 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jul 6 23:28:15.273686 systemd[1]: Reached target local-fs.target - Local File Systems. Jul 6 23:28:15.275656 systemd[1]: Reached target sysinit.target - System Initialization. Jul 6 23:28:15.278530 systemd[1]: Reached target basic.target - Basic System. Jul 6 23:28:15.281254 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jul 6 23:28:15.318074 systemd-fsck[827]: ROOT: clean, 15/553520 files, 52789/553472 blocks Jul 6 23:28:15.322304 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jul 6 23:28:15.326543 systemd[1]: Mounting sysroot.mount - /sysroot... Jul 6 23:28:15.393155 kernel: EXT4-fs (vda9): mounted filesystem 8d88df29-f94d-4ab8-8fb6-af875603e6d4 r/w with ordered data mode. Quota mode: none. Jul 6 23:28:15.393848 systemd[1]: Mounted sysroot.mount - /sysroot. Jul 6 23:28:15.395091 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jul 6 23:28:15.398334 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jul 6 23:28:15.410675 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jul 6 23:28:15.411701 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Jul 6 23:28:15.411756 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jul 6 23:28:15.423400 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 (254:6) scanned by mount (835) Jul 6 23:28:15.423438 kernel: BTRFS info (device vda6): first mount of filesystem f2591801-6ba1-4aa7-8261-bdb292e2060d Jul 6 23:28:15.423450 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Jul 6 23:28:15.423460 kernel: BTRFS info (device vda6): using free-space-tree Jul 6 23:28:15.411780 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jul 6 23:28:15.417708 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jul 6 23:28:15.420665 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jul 6 23:28:15.428537 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jul 6 23:28:15.470720 initrd-setup-root[861]: cut: /sysroot/etc/passwd: No such file or directory Jul 6 23:28:15.475656 initrd-setup-root[868]: cut: /sysroot/etc/group: No such file or directory Jul 6 23:28:15.479809 initrd-setup-root[875]: cut: /sysroot/etc/shadow: No such file or directory Jul 6 23:28:15.483197 initrd-setup-root[882]: cut: /sysroot/etc/gshadow: No such file or directory Jul 6 23:28:15.557850 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jul 6 23:28:15.561022 systemd[1]: Starting ignition-mount.service - Ignition (mount)... 
Jul 6 23:28:15.562581 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jul 6 23:28:15.578147 kernel: BTRFS info (device vda6): last unmount of filesystem f2591801-6ba1-4aa7-8261-bdb292e2060d Jul 6 23:28:15.589061 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jul 6 23:28:15.599061 ignition[951]: INFO : Ignition 2.21.0 Jul 6 23:28:15.599061 ignition[951]: INFO : Stage: mount Jul 6 23:28:15.600708 ignition[951]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 6 23:28:15.600708 ignition[951]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 6 23:28:15.600708 ignition[951]: INFO : mount: mount passed Jul 6 23:28:15.600708 ignition[951]: INFO : Ignition finished successfully Jul 6 23:28:15.602335 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jul 6 23:28:15.605138 systemd[1]: Starting ignition-files.service - Ignition (files)... Jul 6 23:28:15.910199 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jul 6 23:28:15.911836 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jul 6 23:28:15.939093 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 (254:6) scanned by mount (963) Jul 6 23:28:15.939135 kernel: BTRFS info (device vda6): first mount of filesystem f2591801-6ba1-4aa7-8261-bdb292e2060d Jul 6 23:28:15.940168 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Jul 6 23:28:15.940183 kernel: BTRFS info (device vda6): using free-space-tree Jul 6 23:28:15.943821 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jul 6 23:28:15.975530 ignition[980]: INFO : Ignition 2.21.0 Jul 6 23:28:15.975530 ignition[980]: INFO : Stage: files Jul 6 23:28:15.975530 ignition[980]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 6 23:28:15.975530 ignition[980]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 6 23:28:15.979607 ignition[980]: DEBUG : files: compiled without relabeling support, skipping Jul 6 23:28:15.979607 ignition[980]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jul 6 23:28:15.979607 ignition[980]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jul 6 23:28:15.983788 ignition[980]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jul 6 23:28:15.983788 ignition[980]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jul 6 23:28:15.983788 ignition[980]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jul 6 23:28:15.983788 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz" Jul 6 23:28:15.983788 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-arm64.tar.gz: attempt #1 Jul 6 23:28:15.980766 unknown[980]: wrote ssh authorized keys file for user: core Jul 6 23:28:16.033634 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jul 6 23:28:16.305568 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz" Jul 6 23:28:16.305568 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Jul 6 23:28:16.309575 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET 
https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1 Jul 6 23:28:16.591866 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Jul 6 23:28:16.688345 systemd-networkd[795]: eth0: Gained IPv6LL Jul 6 23:28:16.716697 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Jul 6 23:28:16.716697 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Jul 6 23:28:16.721289 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Jul 6 23:28:16.721289 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Jul 6 23:28:16.721289 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Jul 6 23:28:16.721289 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jul 6 23:28:16.721289 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jul 6 23:28:16.721289 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jul 6 23:28:16.721289 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jul 6 23:28:16.721289 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Jul 6 23:28:16.721289 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jul 6 23:28:16.721289 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw" Jul 6 23:28:16.748980 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw" Jul 6 23:28:16.748980 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw" Jul 6 23:28:16.748980 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-arm64.raw: attempt #1 Jul 6 23:28:17.167389 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Jul 6 23:28:17.859737 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw" Jul 6 23:28:17.859737 ignition[980]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Jul 6 23:28:17.863623 ignition[980]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jul 6 23:28:17.863623 ignition[980]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jul 6 23:28:17.863623 ignition[980]: INFO : files: op(c): 
[finished] processing unit "prepare-helm.service" Jul 6 23:28:17.863623 ignition[980]: INFO : files: op(e): [started] processing unit "coreos-metadata.service" Jul 6 23:28:17.863623 ignition[980]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jul 6 23:28:17.863623 ignition[980]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jul 6 23:28:17.863623 ignition[980]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service" Jul 6 23:28:17.863623 ignition[980]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service" Jul 6 23:28:17.881550 ignition[980]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service" Jul 6 23:28:17.884686 ignition[980]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service" Jul 6 23:28:17.887724 ignition[980]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service" Jul 6 23:28:17.887724 ignition[980]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service" Jul 6 23:28:17.887724 ignition[980]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service" Jul 6 23:28:17.887724 ignition[980]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json" Jul 6 23:28:17.887724 ignition[980]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json" Jul 6 23:28:17.887724 ignition[980]: INFO : files: files passed Jul 6 23:28:17.887724 ignition[980]: INFO : Ignition finished successfully Jul 6 23:28:17.888828 systemd[1]: Finished ignition-files.service - Ignition (files). Jul 6 23:28:17.892259 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jul 6 23:28:17.894797 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jul 6 23:28:17.912642 initrd-setup-root-after-ignition[1009]: grep: /sysroot/oem/oem-release: No such file or directory Jul 6 23:28:17.911780 systemd[1]: ignition-quench.service: Deactivated successfully. Jul 6 23:28:17.911882 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jul 6 23:28:17.916944 initrd-setup-root-after-ignition[1011]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jul 6 23:28:17.916944 initrd-setup-root-after-ignition[1011]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jul 6 23:28:17.920360 initrd-setup-root-after-ignition[1015]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jul 6 23:28:17.919525 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jul 6 23:28:17.921718 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jul 6 23:28:17.926299 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jul 6 23:28:17.970456 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jul 6 23:28:17.971200 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jul 6 23:28:17.972967 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jul 6 23:28:17.974968 systemd[1]: Reached target initrd.target - Initrd Default Target. 
Jul 6 23:28:17.976955 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jul 6 23:28:17.977819 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jul 6 23:28:17.997107 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jul 6 23:28:17.999815 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jul 6 23:28:18.018110 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jul 6 23:28:18.020576 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 6 23:28:18.021927 systemd[1]: Stopped target timers.target - Timer Units. Jul 6 23:28:18.023842 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jul 6 23:28:18.023997 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jul 6 23:28:18.026671 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jul 6 23:28:18.028825 systemd[1]: Stopped target basic.target - Basic System. Jul 6 23:28:18.030487 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jul 6 23:28:18.032318 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jul 6 23:28:18.034321 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jul 6 23:28:18.036484 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Jul 6 23:28:18.038613 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jul 6 23:28:18.040513 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jul 6 23:28:18.042562 systemd[1]: Stopped target sysinit.target - System Initialization. Jul 6 23:28:18.044698 systemd[1]: Stopped target local-fs.target - Local File Systems. Jul 6 23:28:18.046549 systemd[1]: Stopped target swap.target - Swaps. Jul 6 23:28:18.048207 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jul 6 23:28:18.048354 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jul 6 23:28:18.050867 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jul 6 23:28:18.052980 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jul 6 23:28:18.055105 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jul 6 23:28:18.056169 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jul 6 23:28:18.057547 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jul 6 23:28:18.057676 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jul 6 23:28:18.060696 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jul 6 23:28:18.060820 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jul 6 23:28:18.062898 systemd[1]: Stopped target paths.target - Path Units. Jul 6 23:28:18.064660 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jul 6 23:28:18.068741 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 6 23:28:18.070181 systemd[1]: Stopped target slices.target - Slice Units. Jul 6 23:28:18.072486 systemd[1]: Stopped target sockets.target - Socket Units. Jul 6 23:28:18.074146 systemd[1]: iscsid.socket: Deactivated successfully. Jul 6 23:28:18.074231 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. 
Jul 6 23:28:18.075826 systemd[1]: iscsiuio.socket: Deactivated successfully. Jul 6 23:28:18.075911 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jul 6 23:28:18.077468 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jul 6 23:28:18.077590 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jul 6 23:28:18.079378 systemd[1]: ignition-files.service: Deactivated successfully. Jul 6 23:28:18.079493 systemd[1]: Stopped ignition-files.service - Ignition (files). Jul 6 23:28:18.081857 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jul 6 23:28:18.083847 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jul 6 23:28:18.083984 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jul 6 23:28:18.104866 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jul 6 23:28:18.105837 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jul 6 23:28:18.105984 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jul 6 23:28:18.108089 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jul 6 23:28:18.108213 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jul 6 23:28:18.115256 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jul 6 23:28:18.115352 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jul 6 23:28:18.122440 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jul 6 23:28:18.125747 ignition[1035]: INFO : Ignition 2.21.0 Jul 6 23:28:18.125747 ignition[1035]: INFO : Stage: umount Jul 6 23:28:18.127955 ignition[1035]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 6 23:28:18.127955 ignition[1035]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 6 23:28:18.127955 ignition[1035]: INFO : umount: umount passed Jul 6 23:28:18.127955 ignition[1035]: INFO : Ignition finished successfully Jul 6 23:28:18.129412 systemd[1]: ignition-mount.service: Deactivated successfully. Jul 6 23:28:18.129554 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jul 6 23:28:18.131712 systemd[1]: sysroot-boot.service: Deactivated successfully. Jul 6 23:28:18.133199 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jul 6 23:28:18.134766 systemd[1]: Stopped target network.target - Network. Jul 6 23:28:18.136039 systemd[1]: ignition-disks.service: Deactivated successfully. Jul 6 23:28:18.136144 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jul 6 23:28:18.138176 systemd[1]: ignition-kargs.service: Deactivated successfully. Jul 6 23:28:18.138229 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jul 6 23:28:18.140026 systemd[1]: ignition-setup.service: Deactivated successfully. Jul 6 23:28:18.140094 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jul 6 23:28:18.141894 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jul 6 23:28:18.141942 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jul 6 23:28:18.143829 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jul 6 23:28:18.143896 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jul 6 23:28:18.147860 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jul 6 23:28:18.150071 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... 
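With the Ignition umount stage above finished, the result file written earlier by op(13) and the journal entries tagged by Ignition are the two places to check the outcome from the booted system; an illustrative pair of commands:

  cat /etc/.ignition-result.json   # summary written at the end of the file stage
  journalctl -t ignition -b        # all ignition[...] lines for the current boot
  journalctl -t ignition -b -1     # same, for the previous boot when debugging after a reboot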
Jul 6 23:28:18.157397 systemd[1]: systemd-resolved.service: Deactivated successfully. Jul 6 23:28:18.157501 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jul 6 23:28:18.161247 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Jul 6 23:28:18.161513 systemd[1]: systemd-networkd.service: Deactivated successfully. Jul 6 23:28:18.161619 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jul 6 23:28:18.164996 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Jul 6 23:28:18.165697 systemd[1]: Stopped target network-pre.target - Preparation for Network. Jul 6 23:28:18.167678 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jul 6 23:28:18.167715 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jul 6 23:28:18.170987 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jul 6 23:28:18.171935 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jul 6 23:28:18.172007 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jul 6 23:28:18.174376 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jul 6 23:28:18.174441 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jul 6 23:28:18.179190 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jul 6 23:28:18.179241 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jul 6 23:28:18.181368 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jul 6 23:28:18.181425 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jul 6 23:28:18.183519 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 6 23:28:18.186352 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Jul 6 23:28:18.186417 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Jul 6 23:28:18.194838 systemd[1]: systemd-udevd.service: Deactivated successfully. Jul 6 23:28:18.202289 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 6 23:28:18.203935 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jul 6 23:28:18.203973 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jul 6 23:28:18.205975 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jul 6 23:28:18.206016 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jul 6 23:28:18.208138 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jul 6 23:28:18.208194 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jul 6 23:28:18.211131 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jul 6 23:28:18.211188 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jul 6 23:28:18.214020 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jul 6 23:28:18.214075 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jul 6 23:28:18.217202 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jul 6 23:28:18.218507 systemd[1]: systemd-network-generator.service: Deactivated successfully. Jul 6 23:28:18.218570 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. 
Jul 6 23:28:18.221535 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jul 6 23:28:18.221576 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 6 23:28:18.224952 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 6 23:28:18.224994 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jul 6 23:28:18.229732 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully. Jul 6 23:28:18.229779 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Jul 6 23:28:18.229809 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Jul 6 23:28:18.230062 systemd[1]: network-cleanup.service: Deactivated successfully. Jul 6 23:28:18.240186 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jul 6 23:28:18.245196 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jul 6 23:28:18.245299 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jul 6 23:28:18.247898 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jul 6 23:28:18.250218 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jul 6 23:28:18.267806 systemd[1]: Switching root. Jul 6 23:28:18.293238 systemd-journald[245]: Journal stopped Jul 6 23:28:19.155455 systemd-journald[245]: Received SIGTERM from PID 1 (systemd). Jul 6 23:28:19.159829 kernel: SELinux: policy capability network_peer_controls=1 Jul 6 23:28:19.159849 kernel: SELinux: policy capability open_perms=1 Jul 6 23:28:19.159859 kernel: SELinux: policy capability extended_socket_class=1 Jul 6 23:28:19.159868 kernel: SELinux: policy capability always_check_network=0 Jul 6 23:28:19.159878 kernel: SELinux: policy capability cgroup_seclabel=1 Jul 6 23:28:19.159887 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jul 6 23:28:19.159900 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jul 6 23:28:19.159911 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jul 6 23:28:19.159922 kernel: SELinux: policy capability userspace_initial_context=0 Jul 6 23:28:19.159931 kernel: audit: type=1403 audit(1751844498.476:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jul 6 23:28:19.159946 systemd[1]: Successfully loaded SELinux policy in 44.321ms. Jul 6 23:28:19.159965 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 9.406ms. Jul 6 23:28:19.159978 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jul 6 23:28:19.159989 systemd[1]: Detected virtualization kvm. Jul 6 23:28:19.160000 systemd[1]: Detected architecture arm64. Jul 6 23:28:19.160010 systemd[1]: Detected first boot. Jul 6 23:28:19.160021 systemd[1]: Initializing machine ID from VM UUID. Jul 6 23:28:19.160030 kernel: NET: Registered PF_VSOCK protocol family Jul 6 23:28:19.160040 zram_generator::config[1082]: No configuration found. Jul 6 23:28:19.160052 systemd[1]: Populated /etc with preset unit settings. Jul 6 23:28:19.160067 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. 
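The block above covers the switch root and the first systemd messages from the real root: the SELinux policy load, detection of KVM virtualization on arm64, and the first-boot machine-id initialization from the VM UUID. Each of these can be confirmed interactively on the running host; a short illustrative sketch (assumes selinuxfs is mounted, as it is here):

  systemd-detect-virt                   # prints "kvm" on this guest
  cat /etc/machine-id                   # the ID initialized from the VM UUID on first boot
  cat /sys/fs/selinux/enforce           # 0 = permissive, 1 = enforcing
  journalctl -k -b | grep -i selinux    # the policy capability lines quoted above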
Jul 6 23:28:19.160079 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jul 6 23:28:19.160091 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jul 6 23:28:19.160101 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jul 6 23:28:19.160112 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jul 6 23:28:19.160132 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jul 6 23:28:19.160143 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jul 6 23:28:19.160153 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jul 6 23:28:19.160164 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jul 6 23:28:19.160176 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jul 6 23:28:19.160186 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jul 6 23:28:19.160196 systemd[1]: Created slice user.slice - User and Session Slice. Jul 6 23:28:19.160208 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jul 6 23:28:19.160219 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 6 23:28:19.160231 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jul 6 23:28:19.160241 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jul 6 23:28:19.160252 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jul 6 23:28:19.160263 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jul 6 23:28:19.160275 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Jul 6 23:28:19.160285 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jul 6 23:28:19.160296 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jul 6 23:28:19.160306 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jul 6 23:28:19.160317 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jul 6 23:28:19.160327 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jul 6 23:28:19.160337 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jul 6 23:28:19.160347 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 6 23:28:19.160359 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jul 6 23:28:19.160369 systemd[1]: Reached target slices.target - Slice Units. Jul 6 23:28:19.160379 systemd[1]: Reached target swap.target - Swaps. Jul 6 23:28:19.160389 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jul 6 23:28:19.160399 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jul 6 23:28:19.160417 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Jul 6 23:28:19.160429 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jul 6 23:28:19.160441 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jul 6 23:28:19.160452 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jul 6 23:28:19.160465 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. 
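"Populated /etc with preset unit settings" is systemd applying preset files on this first boot, the same mechanism the earlier Ignition ops (10) and (12) used to mark coreos-metadata.service disabled and prepare-helm.service enabled. A preset file is a plain list of enable/disable directives; for illustration (the file name below is hypothetical):

  # /etc/systemd/system-preset/20-ignition.preset  (hypothetical name)
  enable prepare-helm.service
  disable coreos-metadata.service

  # apply presets to all installed units, as systemd does on first boot
  systemctl preset-all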
Jul 6 23:28:19.160475 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jul 6 23:28:19.160486 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jul 6 23:28:19.160496 systemd[1]: Mounting media.mount - External Media Directory... Jul 6 23:28:19.160506 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jul 6 23:28:19.160518 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jul 6 23:28:19.160528 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jul 6 23:28:19.160539 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jul 6 23:28:19.160549 systemd[1]: Reached target machines.target - Containers. Jul 6 23:28:19.160561 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jul 6 23:28:19.160572 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 6 23:28:19.160582 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jul 6 23:28:19.160593 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jul 6 23:28:19.160604 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 6 23:28:19.160614 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jul 6 23:28:19.160625 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 6 23:28:19.160639 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jul 6 23:28:19.160651 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 6 23:28:19.160665 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jul 6 23:28:19.160675 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jul 6 23:28:19.160686 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jul 6 23:28:19.160697 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jul 6 23:28:19.160707 systemd[1]: Stopped systemd-fsck-usr.service. Jul 6 23:28:19.160718 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jul 6 23:28:19.160729 systemd[1]: Starting systemd-journald.service - Journal Service... Jul 6 23:28:19.160740 kernel: fuse: init (API version 7.41) Jul 6 23:28:19.160749 kernel: loop: module loaded Jul 6 23:28:19.160761 kernel: ACPI: bus type drm_connector registered Jul 6 23:28:19.160771 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jul 6 23:28:19.160781 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jul 6 23:28:19.160792 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jul 6 23:28:19.160802 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Jul 6 23:28:19.160812 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jul 6 23:28:19.160824 systemd[1]: verity-setup.service: Deactivated successfully. Jul 6 23:28:19.160834 systemd[1]: Stopped verity-setup.service. 
Jul 6 23:28:19.160844 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jul 6 23:28:19.160854 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jul 6 23:28:19.160864 systemd[1]: Mounted media.mount - External Media Directory. Jul 6 23:28:19.160875 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jul 6 23:28:19.160887 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jul 6 23:28:19.160897 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jul 6 23:28:19.160909 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jul 6 23:28:19.160952 systemd-journald[1155]: Collecting audit messages is disabled. Jul 6 23:28:19.160975 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jul 6 23:28:19.160987 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jul 6 23:28:19.160998 systemd-journald[1155]: Journal started Jul 6 23:28:19.161019 systemd-journald[1155]: Runtime Journal (/run/log/journal/b05096c1fb864eea8b0d9bb7322fd7e4) is 6M, max 48.5M, 42.4M free. Jul 6 23:28:18.880460 systemd[1]: Queued start job for default target multi-user.target. Jul 6 23:28:18.903456 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Jul 6 23:28:18.904487 systemd[1]: systemd-journald.service: Deactivated successfully. Jul 6 23:28:19.162644 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jul 6 23:28:19.165980 systemd[1]: Started systemd-journald.service - Journal Service. Jul 6 23:28:19.166786 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 6 23:28:19.168173 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 6 23:28:19.169542 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 6 23:28:19.169695 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jul 6 23:28:19.171053 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 6 23:28:19.171243 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 6 23:28:19.172692 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jul 6 23:28:19.172862 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jul 6 23:28:19.174284 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 6 23:28:19.174459 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 6 23:28:19.176064 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jul 6 23:28:19.177524 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jul 6 23:28:19.180300 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jul 6 23:28:19.181828 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Jul 6 23:28:19.189937 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jul 6 23:28:19.197550 systemd[1]: Reached target network-pre.target - Preparation for Network. Jul 6 23:28:19.200208 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jul 6 23:28:19.202722 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jul 6 23:28:19.204063 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). 
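The modprobe@configfs, modprobe@dm_mod, modprobe@drm, modprobe@efi_pstore, modprobe@fuse and modprobe@loop services started and finished above are all instances of systemd's modprobe@.service template: the text after the '@' is the instance name, which the template hands to modprobe as the module to load. To see or exercise that by hand, for illustration:

  systemctl cat modprobe@.service         # the template; ExecStart runs modprobe on the instance name
  systemctl start modprobe@loop.service   # loads the "loop" module through the template
  lsmod | grep -E 'loop|fuse|dm_mod'      # confirm the modules the log shows being loaded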
Jul 6 23:28:19.204107 systemd[1]: Reached target local-fs.target - Local File Systems. Jul 6 23:28:19.206235 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Jul 6 23:28:19.213394 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jul 6 23:28:19.214713 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 6 23:28:19.219112 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jul 6 23:28:19.221373 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jul 6 23:28:19.222729 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 6 23:28:19.226288 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jul 6 23:28:19.227714 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jul 6 23:28:19.229522 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jul 6 23:28:19.230149 systemd-journald[1155]: Time spent on flushing to /var/log/journal/b05096c1fb864eea8b0d9bb7322fd7e4 is 20.525ms for 886 entries. Jul 6 23:28:19.230149 systemd-journald[1155]: System Journal (/var/log/journal/b05096c1fb864eea8b0d9bb7322fd7e4) is 8M, max 195.6M, 187.6M free. Jul 6 23:28:19.264501 systemd-journald[1155]: Received client request to flush runtime journal. Jul 6 23:28:19.264629 kernel: loop0: detected capacity change from 0 to 107312 Jul 6 23:28:19.233110 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jul 6 23:28:19.237433 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jul 6 23:28:19.240965 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jul 6 23:28:19.242985 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jul 6 23:28:19.264468 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jul 6 23:28:19.268817 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jul 6 23:28:19.271861 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jul 6 23:28:19.277753 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Jul 6 23:28:19.279484 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jul 6 23:28:19.281189 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jul 6 23:28:19.303181 kernel: loop1: detected capacity change from 0 to 138376 Jul 6 23:28:19.303631 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jul 6 23:28:19.307577 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Jul 6 23:28:19.315890 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jul 6 23:28:19.333153 kernel: loop2: detected capacity change from 0 to 211168 Jul 6 23:28:19.345712 systemd-tmpfiles[1218]: ACLs are not supported, ignoring. Jul 6 23:28:19.345732 systemd-tmpfiles[1218]: ACLs are not supported, ignoring. Jul 6 23:28:19.352389 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. 
Jul 6 23:28:19.375178 kernel: loop3: detected capacity change from 0 to 107312 Jul 6 23:28:19.382160 kernel: loop4: detected capacity change from 0 to 138376 Jul 6 23:28:19.391159 kernel: loop5: detected capacity change from 0 to 211168 Jul 6 23:28:19.396601 (sd-merge)[1223]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Jul 6 23:28:19.397060 (sd-merge)[1223]: Merged extensions into '/usr'. Jul 6 23:28:19.401274 systemd[1]: Reload requested from client PID 1199 ('systemd-sysext') (unit systemd-sysext.service)... Jul 6 23:28:19.401293 systemd[1]: Reloading... Jul 6 23:28:19.470493 zram_generator::config[1248]: No configuration found. Jul 6 23:28:19.561604 ldconfig[1194]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jul 6 23:28:19.570078 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 6 23:28:19.646330 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jul 6 23:28:19.646755 systemd[1]: Reloading finished in 243 ms. Jul 6 23:28:19.680071 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jul 6 23:28:19.681573 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jul 6 23:28:19.696706 systemd[1]: Starting ensure-sysext.service... Jul 6 23:28:19.698862 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jul 6 23:28:19.715467 systemd-tmpfiles[1284]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Jul 6 23:28:19.715510 systemd-tmpfiles[1284]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Jul 6 23:28:19.715761 systemd-tmpfiles[1284]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jul 6 23:28:19.715984 systemd-tmpfiles[1284]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jul 6 23:28:19.716747 systemd-tmpfiles[1284]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jul 6 23:28:19.717005 systemd-tmpfiles[1284]: ACLs are not supported, ignoring. Jul 6 23:28:19.717055 systemd-tmpfiles[1284]: ACLs are not supported, ignoring. Jul 6 23:28:19.719838 systemd-tmpfiles[1284]: Detected autofs mount point /boot during canonicalization of boot. Jul 6 23:28:19.719850 systemd-tmpfiles[1284]: Skipping /boot Jul 6 23:28:19.723558 systemd[1]: Reload requested from client PID 1283 ('systemctl') (unit ensure-sysext.service)... Jul 6 23:28:19.723579 systemd[1]: Reloading... Jul 6 23:28:19.729097 systemd-tmpfiles[1284]: Detected autofs mount point /boot during canonicalization of boot. Jul 6 23:28:19.729115 systemd-tmpfiles[1284]: Skipping /boot Jul 6 23:28:19.769167 zram_generator::config[1311]: No configuration found. Jul 6 23:28:19.848244 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 6 23:28:19.921471 systemd[1]: Reloading finished in 197 ms. Jul 6 23:28:19.945938 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jul 6 23:28:19.952916 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. 
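The '(sd-merge)' lines above are systemd-sysext overlaying the containerd-flatcar, docker-flatcar and kubernetes extension images onto /usr; the kubernetes one is exactly the .raw image and /etc/extensions symlink that Ignition wrote earlier. The same mechanism can be inspected or re-run on the live host; an illustrative sketch:

  systemd-sysext status      # lists merged extensions and the hierarchies they affect
  systemd-sysext refresh     # unmerge and re-merge after adding or removing images
  ls -l /etc/extensions /run/extensions /var/lib/extensions 2>/dev/null   # directories searched for .raw images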
Jul 6 23:28:19.967411 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jul 6 23:28:19.969932 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jul 6 23:28:19.986223 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jul 6 23:28:19.990356 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jul 6 23:28:19.993601 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 6 23:28:19.998917 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jul 6 23:28:20.008232 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jul 6 23:28:20.011888 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 6 23:28:20.013397 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 6 23:28:20.017855 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 6 23:28:20.021672 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 6 23:28:20.023233 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 6 23:28:20.023377 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jul 6 23:28:20.026336 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 6 23:28:20.026531 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 6 23:28:20.056179 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jul 6 23:28:20.059878 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 6 23:28:20.060104 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 6 23:28:20.062455 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 6 23:28:20.062614 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 6 23:28:20.077574 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jul 6 23:28:20.078948 systemd-udevd[1354]: Using default interface naming scheme 'v255'. Jul 6 23:28:20.080376 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 6 23:28:20.083423 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 6 23:28:20.086494 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 6 23:28:20.096859 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 6 23:28:20.098157 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 6 23:28:20.098295 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jul 6 23:28:20.099548 augenrules[1383]: No rules Jul 6 23:28:20.099709 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jul 6 23:28:20.102221 systemd[1]: audit-rules.service: Deactivated successfully. 
Jul 6 23:28:20.102478 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jul 6 23:28:20.105421 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jul 6 23:28:20.107261 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 6 23:28:20.107424 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 6 23:28:20.109107 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 6 23:28:20.109271 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 6 23:28:20.113220 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jul 6 23:28:20.114851 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 6 23:28:20.115017 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 6 23:28:20.121869 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jul 6 23:28:20.132716 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 6 23:28:20.139452 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jul 6 23:28:20.140665 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 6 23:28:20.142913 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 6 23:28:20.145343 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jul 6 23:28:20.158989 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 6 23:28:20.162508 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 6 23:28:20.165008 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 6 23:28:20.165053 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jul 6 23:28:20.167632 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jul 6 23:28:20.168776 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jul 6 23:28:20.170277 systemd[1]: Finished ensure-sysext.service. Jul 6 23:28:20.171489 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 6 23:28:20.174007 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jul 6 23:28:20.175714 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 6 23:28:20.175880 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 6 23:28:20.187599 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 6 23:28:20.187809 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 6 23:28:20.203491 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 6 23:28:20.203684 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 6 23:28:20.212871 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 6 23:28:20.212941 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. 
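audit-rules.service comes up empty here (augenrules reports 'No rules') because nothing was dropped into /etc/audit/rules.d/. If rules were wanted, the flow is the standard auditd one; a minimal illustrative example, where the watch rule is an arbitrary sample rather than anything this host configures:

  # /etc/audit/rules.d/10-example.rules  (hypothetical file)
  -w /etc/passwd -p wa -k passwd_changes

  # regenerate /etc/audit/audit.rules from rules.d and load it, then list what is active
  augenrules --load
  auditctl -l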
Jul 6 23:28:20.216766 augenrules[1417]: /sbin/augenrules: No change Jul 6 23:28:20.216848 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jul 6 23:28:20.219562 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. Jul 6 23:28:20.230079 augenrules[1461]: No rules Jul 6 23:28:20.231631 systemd[1]: audit-rules.service: Deactivated successfully. Jul 6 23:28:20.245692 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jul 6 23:28:20.290330 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jul 6 23:28:20.294142 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jul 6 23:28:20.360611 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jul 6 23:28:20.363581 systemd-resolved[1351]: Positive Trust Anchors: Jul 6 23:28:20.363597 systemd-resolved[1351]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jul 6 23:28:20.363630 systemd-resolved[1351]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jul 6 23:28:20.370683 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jul 6 23:28:20.371950 systemd-networkd[1434]: lo: Link UP Jul 6 23:28:20.372253 systemd-networkd[1434]: lo: Gained carrier Jul 6 23:28:20.373149 systemd[1]: Reached target time-set.target - System Time Set. Jul 6 23:28:20.373514 systemd-networkd[1434]: Enumeration completed Jul 6 23:28:20.374414 systemd-networkd[1434]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 6 23:28:20.375455 systemd[1]: Started systemd-networkd.service - Network Configuration. Jul 6 23:28:20.375498 systemd-networkd[1434]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 6 23:28:20.376042 systemd-networkd[1434]: eth0: Link UP Jul 6 23:28:20.376172 systemd-networkd[1434]: eth0: Gained carrier Jul 6 23:28:20.376187 systemd-networkd[1434]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 6 23:28:20.379690 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Jul 6 23:28:20.383496 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jul 6 23:28:20.390151 systemd-resolved[1351]: Defaulting to hostname 'linux'. Jul 6 23:28:20.397995 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jul 6 23:28:20.400015 systemd[1]: Reached target network.target - Network. Jul 6 23:28:20.400257 systemd-networkd[1434]: eth0: DHCPv4 address 10.0.0.47/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jul 6 23:28:20.400879 systemd-timesyncd[1454]: Network configuration changed, trying to establish connection. Jul 6 23:28:20.401110 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. 
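eth0 above is matched by the catch-all /usr/lib/systemd/network/zz-default.network and picks up 10.0.0.47/16 with gateway 10.0.0.1 over DHCP. Overriding that behaviour means dropping a higher-priority .network file into /etc/systemd/network; a minimal DHCP example for illustration (the actual contents of zz-default.network are not shown in the log):

  # /etc/systemd/network/10-eth0.network  (hypothetical override)
  [Match]
  Name=eth0

  [Network]
  DHCP=yes

  # then reload and inspect: networkctl reload && networkctl status eth0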
Jul 6 23:28:20.403312 systemd[1]: Reached target sysinit.target - System Initialization. Jul 6 23:28:20.403734 systemd-timesyncd[1454]: Contacted time server 10.0.0.1:123 (10.0.0.1). Jul 6 23:28:20.403798 systemd-timesyncd[1454]: Initial clock synchronization to Sun 2025-07-06 23:28:20.079418 UTC. Jul 6 23:28:20.404538 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jul 6 23:28:20.411274 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jul 6 23:28:20.412909 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jul 6 23:28:20.414174 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jul 6 23:28:20.415490 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jul 6 23:28:20.416792 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jul 6 23:28:20.416834 systemd[1]: Reached target paths.target - Path Units. Jul 6 23:28:20.417781 systemd[1]: Reached target timers.target - Timer Units. Jul 6 23:28:20.419830 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jul 6 23:28:20.422473 systemd[1]: Starting docker.socket - Docker Socket for the API... Jul 6 23:28:20.429215 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Jul 6 23:28:20.433081 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Jul 6 23:28:20.434452 systemd[1]: Reached target ssh-access.target - SSH Access Available. Jul 6 23:28:20.440970 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jul 6 23:28:20.443825 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Jul 6 23:28:20.446254 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Jul 6 23:28:20.447911 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jul 6 23:28:20.457955 systemd[1]: Reached target sockets.target - Socket Units. Jul 6 23:28:20.459073 systemd[1]: Reached target basic.target - Basic System. Jul 6 23:28:20.460144 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jul 6 23:28:20.460182 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jul 6 23:28:20.461512 systemd[1]: Starting containerd.service - containerd container runtime... Jul 6 23:28:20.463781 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jul 6 23:28:20.475327 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jul 6 23:28:20.477792 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jul 6 23:28:20.480238 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jul 6 23:28:20.481364 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jul 6 23:28:20.484287 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jul 6 23:28:20.488791 jq[1497]: false Jul 6 23:28:20.490270 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... 
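systemd-timesyncd above synchronizes against 10.0.0.1:123, the local gateway in this QEMU setup; the log does not show whether that server came from DHCP or from a fallback list. Pinning NTP servers explicitly is done through timesyncd.conf or a drop-in; for illustration:

  # /etc/systemd/timesyncd.conf.d/ntp.conf  (hypothetical drop-in)
  [Time]
  NTP=0.pool.ntp.org 1.pool.ntp.org

  # then: systemctl restart systemd-timesyncd && timedatectl timesync-status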
Jul 6 23:28:20.493471 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jul 6 23:28:20.497034 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jul 6 23:28:20.505316 extend-filesystems[1498]: Found /dev/vda6 Jul 6 23:28:20.511836 extend-filesystems[1498]: Found /dev/vda9 Jul 6 23:28:20.513956 extend-filesystems[1498]: Checking size of /dev/vda9 Jul 6 23:28:20.514770 systemd[1]: Starting systemd-logind.service - User Login Management... Jul 6 23:28:20.517753 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 6 23:28:20.520511 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jul 6 23:28:20.521104 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jul 6 23:28:20.523342 systemd[1]: Starting update-engine.service - Update Engine... Jul 6 23:28:20.526319 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jul 6 23:28:20.532168 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jul 6 23:28:20.533804 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jul 6 23:28:20.535791 extend-filesystems[1498]: Resized partition /dev/vda9 Jul 6 23:28:20.536789 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jul 6 23:28:20.538623 systemd[1]: motdgen.service: Deactivated successfully. Jul 6 23:28:20.538812 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jul 6 23:28:20.540460 jq[1520]: true Jul 6 23:28:20.542542 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jul 6 23:28:20.543012 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jul 6 23:28:20.543957 extend-filesystems[1526]: resize2fs 1.47.2 (1-Jan-2025) Jul 6 23:28:20.563544 (ntainerd)[1529]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jul 6 23:28:20.570611 jq[1528]: true Jul 6 23:28:20.575046 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Jul 6 23:28:20.599078 tar[1527]: linux-arm64/LICENSE Jul 6 23:28:20.599078 tar[1527]: linux-arm64/helm Jul 6 23:28:20.604617 update_engine[1518]: I20250706 23:28:20.604451 1518 main.cc:92] Flatcar Update Engine starting Jul 6 23:28:20.615943 dbus-daemon[1495]: [system] SELinux support is enabled Jul 6 23:28:20.616791 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jul 6 23:28:20.622147 update_engine[1518]: I20250706 23:28:20.620406 1518 update_check_scheduler.cc:74] Next update check in 9m4s Jul 6 23:28:20.622880 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jul 6 23:28:20.622924 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jul 6 23:28:20.625138 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jul 6 23:28:20.625162 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. 
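extend-filesystems.service above is growing the root filesystem in place: the kernel reports /dev/vda9 being resized from 553472 to 1864699 4k blocks using resize2fs 1.47.2 while / stays mounted (the resize completes a few lines below). Done by hand, the equivalent online grow is, for illustration (assuming the partition itself has already been enlarged):

  lsblk /dev/vda             # compare partition size against filesystem size
  sudo resize2fs /dev/vda9   # online-grow the mounted ext4 filesystem to fill the partition
  df -h /                    # confirm the new size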
Jul 6 23:28:20.627242 systemd[1]: Started update-engine.service - Update Engine. Jul 6 23:28:20.630982 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jul 6 23:28:20.631044 systemd-logind[1514]: Watching system buttons on /dev/input/event0 (Power Button) Jul 6 23:28:20.633429 systemd-logind[1514]: New seat seat0. Jul 6 23:28:20.634314 systemd[1]: Started systemd-logind.service - User Login Management. Jul 6 23:28:20.643171 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Jul 6 23:28:20.667017 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jul 6 23:28:20.671162 extend-filesystems[1526]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jul 6 23:28:20.671162 extend-filesystems[1526]: old_desc_blocks = 1, new_desc_blocks = 1 Jul 6 23:28:20.671162 extend-filesystems[1526]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Jul 6 23:28:20.675650 extend-filesystems[1498]: Resized filesystem in /dev/vda9 Jul 6 23:28:20.674302 systemd[1]: extend-filesystems.service: Deactivated successfully. Jul 6 23:28:20.674558 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jul 6 23:28:20.699611 bash[1559]: Updated "/home/core/.ssh/authorized_keys" Jul 6 23:28:20.701146 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jul 6 23:28:20.703743 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Jul 6 23:28:20.732825 locksmithd[1558]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jul 6 23:28:20.849224 containerd[1529]: time="2025-07-06T23:28:20Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Jul 6 23:28:20.849870 containerd[1529]: time="2025-07-06T23:28:20.849830760Z" level=info msg="starting containerd" revision=06b99ca80cdbfbc6cc8bd567021738c9af2b36ce version=v2.0.4 Jul 6 23:28:20.860130 containerd[1529]: time="2025-07-06T23:28:20.860020160Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="10.04µs" Jul 6 23:28:20.860130 containerd[1529]: time="2025-07-06T23:28:20.860054680Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Jul 6 23:28:20.860130 containerd[1529]: time="2025-07-06T23:28:20.860072160Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Jul 6 23:28:20.861443 containerd[1529]: time="2025-07-06T23:28:20.860244800Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Jul 6 23:28:20.861443 containerd[1529]: time="2025-07-06T23:28:20.860277160Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Jul 6 23:28:20.861443 containerd[1529]: time="2025-07-06T23:28:20.860300640Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jul 6 23:28:20.861443 containerd[1529]: time="2025-07-06T23:28:20.860352560Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jul 6 23:28:20.861443 containerd[1529]: time="2025-07-06T23:28:20.860363400Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs 
type=io.containerd.snapshotter.v1 Jul 6 23:28:20.861443 containerd[1529]: time="2025-07-06T23:28:20.860579240Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jul 6 23:28:20.861443 containerd[1529]: time="2025-07-06T23:28:20.860596160Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jul 6 23:28:20.861443 containerd[1529]: time="2025-07-06T23:28:20.860607440Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jul 6 23:28:20.861443 containerd[1529]: time="2025-07-06T23:28:20.860616160Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Jul 6 23:28:20.861443 containerd[1529]: time="2025-07-06T23:28:20.860684240Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Jul 6 23:28:20.861443 containerd[1529]: time="2025-07-06T23:28:20.860873200Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jul 6 23:28:20.861690 containerd[1529]: time="2025-07-06T23:28:20.860899480Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jul 6 23:28:20.861690 containerd[1529]: time="2025-07-06T23:28:20.860911080Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Jul 6 23:28:20.861690 containerd[1529]: time="2025-07-06T23:28:20.861435880Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Jul 6 23:28:20.862283 containerd[1529]: time="2025-07-06T23:28:20.862251320Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Jul 6 23:28:20.862371 containerd[1529]: time="2025-07-06T23:28:20.862346880Z" level=info msg="metadata content store policy set" policy=shared Jul 6 23:28:20.869086 containerd[1529]: time="2025-07-06T23:28:20.868989080Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Jul 6 23:28:20.869086 containerd[1529]: time="2025-07-06T23:28:20.869046840Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Jul 6 23:28:20.869086 containerd[1529]: time="2025-07-06T23:28:20.869062200Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Jul 6 23:28:20.869086 containerd[1529]: time="2025-07-06T23:28:20.869073800Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Jul 6 23:28:20.869086 containerd[1529]: time="2025-07-06T23:28:20.869085360Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Jul 6 23:28:20.869242 containerd[1529]: time="2025-07-06T23:28:20.869096480Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Jul 6 23:28:20.869242 containerd[1529]: time="2025-07-06T23:28:20.869108280Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service 
type=io.containerd.service.v1 Jul 6 23:28:20.869242 containerd[1529]: time="2025-07-06T23:28:20.869135400Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Jul 6 23:28:20.869242 containerd[1529]: time="2025-07-06T23:28:20.869160920Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Jul 6 23:28:20.869242 containerd[1529]: time="2025-07-06T23:28:20.869171200Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Jul 6 23:28:20.869242 containerd[1529]: time="2025-07-06T23:28:20.869181800Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Jul 6 23:28:20.869242 containerd[1529]: time="2025-07-06T23:28:20.869194000Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Jul 6 23:28:20.869355 containerd[1529]: time="2025-07-06T23:28:20.869330440Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Jul 6 23:28:20.869355 containerd[1529]: time="2025-07-06T23:28:20.869350920Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Jul 6 23:28:20.869389 containerd[1529]: time="2025-07-06T23:28:20.869371400Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Jul 6 23:28:20.869389 containerd[1529]: time="2025-07-06T23:28:20.869383200Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Jul 6 23:28:20.869437 containerd[1529]: time="2025-07-06T23:28:20.869393320Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Jul 6 23:28:20.869437 containerd[1529]: time="2025-07-06T23:28:20.869416320Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Jul 6 23:28:20.869437 containerd[1529]: time="2025-07-06T23:28:20.869433680Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Jul 6 23:28:20.869490 containerd[1529]: time="2025-07-06T23:28:20.869444360Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Jul 6 23:28:20.869490 containerd[1529]: time="2025-07-06T23:28:20.869458000Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Jul 6 23:28:20.869490 containerd[1529]: time="2025-07-06T23:28:20.869468560Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Jul 6 23:28:20.869490 containerd[1529]: time="2025-07-06T23:28:20.869479320Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Jul 6 23:28:20.869818 containerd[1529]: time="2025-07-06T23:28:20.869800160Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Jul 6 23:28:20.869856 containerd[1529]: time="2025-07-06T23:28:20.869821720Z" level=info msg="Start snapshots syncer" Jul 6 23:28:20.869856 containerd[1529]: time="2025-07-06T23:28:20.869845960Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Jul 6 23:28:20.870131 containerd[1529]: time="2025-07-06T23:28:20.870065680Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Jul 6 23:28:20.870228 containerd[1529]: time="2025-07-06T23:28:20.870118960Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Jul 6 23:28:20.870228 containerd[1529]: time="2025-07-06T23:28:20.870211840Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Jul 6 23:28:20.870336 containerd[1529]: time="2025-07-06T23:28:20.870312600Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Jul 6 23:28:20.870384 containerd[1529]: time="2025-07-06T23:28:20.870347320Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Jul 6 23:28:20.870384 containerd[1529]: time="2025-07-06T23:28:20.870359440Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Jul 6 23:28:20.870384 containerd[1529]: time="2025-07-06T23:28:20.870370920Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Jul 6 23:28:20.870384 containerd[1529]: time="2025-07-06T23:28:20.870382880Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Jul 6 23:28:20.870468 containerd[1529]: time="2025-07-06T23:28:20.870393960Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Jul 6 23:28:20.870468 containerd[1529]: time="2025-07-06T23:28:20.870414760Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Jul 6 23:28:20.870468 containerd[1529]: time="2025-07-06T23:28:20.870438560Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Jul 6 23:28:20.870468 containerd[1529]: 
time="2025-07-06T23:28:20.870451320Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Jul 6 23:28:20.870468 containerd[1529]: time="2025-07-06T23:28:20.870461800Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Jul 6 23:28:20.870561 containerd[1529]: time="2025-07-06T23:28:20.870500240Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jul 6 23:28:20.870561 containerd[1529]: time="2025-07-06T23:28:20.870515120Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jul 6 23:28:20.870561 containerd[1529]: time="2025-07-06T23:28:20.870524120Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jul 6 23:28:20.870561 containerd[1529]: time="2025-07-06T23:28:20.870533760Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jul 6 23:28:20.870561 containerd[1529]: time="2025-07-06T23:28:20.870546680Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Jul 6 23:28:20.870561 containerd[1529]: time="2025-07-06T23:28:20.870561240Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Jul 6 23:28:20.870658 containerd[1529]: time="2025-07-06T23:28:20.870576600Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Jul 6 23:28:20.870722 containerd[1529]: time="2025-07-06T23:28:20.870709600Z" level=info msg="runtime interface created" Jul 6 23:28:20.870722 containerd[1529]: time="2025-07-06T23:28:20.870717720Z" level=info msg="created NRI interface" Jul 6 23:28:20.870853 containerd[1529]: time="2025-07-06T23:28:20.870727000Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Jul 6 23:28:20.870853 containerd[1529]: time="2025-07-06T23:28:20.870737960Z" level=info msg="Connect containerd service" Jul 6 23:28:20.870853 containerd[1529]: time="2025-07-06T23:28:20.870785800Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jul 6 23:28:20.871483 containerd[1529]: time="2025-07-06T23:28:20.871458160Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 6 23:28:20.986154 containerd[1529]: time="2025-07-06T23:28:20.986071720Z" level=info msg="Start subscribing containerd event" Jul 6 23:28:20.986413 containerd[1529]: time="2025-07-06T23:28:20.986295160Z" level=info msg="Start recovering state" Jul 6 23:28:20.986525 containerd[1529]: time="2025-07-06T23:28:20.986442880Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jul 6 23:28:20.986525 containerd[1529]: time="2025-07-06T23:28:20.986491200Z" level=info msg=serving... 
address=/run/containerd/containerd.sock Jul 6 23:28:20.986865 containerd[1529]: time="2025-07-06T23:28:20.986823360Z" level=info msg="Start event monitor" Jul 6 23:28:20.986950 containerd[1529]: time="2025-07-06T23:28:20.986938560Z" level=info msg="Start cni network conf syncer for default" Jul 6 23:28:20.987072 containerd[1529]: time="2025-07-06T23:28:20.987013800Z" level=info msg="Start streaming server" Jul 6 23:28:20.987072 containerd[1529]: time="2025-07-06T23:28:20.987029880Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Jul 6 23:28:20.987072 containerd[1529]: time="2025-07-06T23:28:20.987037560Z" level=info msg="runtime interface starting up..." Jul 6 23:28:20.987072 containerd[1529]: time="2025-07-06T23:28:20.987043360Z" level=info msg="starting plugins..." Jul 6 23:28:20.987413 containerd[1529]: time="2025-07-06T23:28:20.987061000Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Jul 6 23:28:20.987741 containerd[1529]: time="2025-07-06T23:28:20.987719600Z" level=info msg="containerd successfully booted in 0.138881s" Jul 6 23:28:20.987847 systemd[1]: Started containerd.service - containerd container runtime. Jul 6 23:28:21.042225 tar[1527]: linux-arm64/README.md Jul 6 23:28:21.068156 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jul 6 23:28:21.460573 sshd_keygen[1524]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jul 6 23:28:21.480191 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jul 6 23:28:21.483178 systemd[1]: Starting issuegen.service - Generate /run/issue... Jul 6 23:28:21.502795 systemd[1]: issuegen.service: Deactivated successfully. Jul 6 23:28:21.503036 systemd[1]: Finished issuegen.service - Generate /run/issue. Jul 6 23:28:21.505991 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jul 6 23:28:21.530424 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jul 6 23:28:21.533375 systemd[1]: Started getty@tty1.service - Getty on tty1. Jul 6 23:28:21.535679 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Jul 6 23:28:21.537000 systemd[1]: Reached target getty.target - Login Prompts. Jul 6 23:28:21.616251 systemd-networkd[1434]: eth0: Gained IPv6LL Jul 6 23:28:21.622792 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jul 6 23:28:21.624633 systemd[1]: Reached target network-online.target - Network is Online. Jul 6 23:28:21.627241 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Jul 6 23:28:21.629782 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 6 23:28:21.643723 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jul 6 23:28:21.663055 systemd[1]: coreos-metadata.service: Deactivated successfully. Jul 6 23:28:21.663282 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Jul 6 23:28:21.665426 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jul 6 23:28:21.668206 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jul 6 23:28:22.225289 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 6 23:28:22.227030 systemd[1]: Reached target multi-user.target - Multi-User System. 
Jul 6 23:28:22.229908 (kubelet)[1635]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 6 23:28:22.230421 systemd[1]: Startup finished in 2.189s (kernel) + 5.853s (initrd) + 3.803s (userspace) = 11.846s. Jul 6 23:28:22.679252 kubelet[1635]: E0706 23:28:22.679116 1635 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 6 23:28:22.681445 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 6 23:28:22.681582 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 6 23:28:22.681914 systemd[1]: kubelet.service: Consumed 839ms CPU time, 257.9M memory peak. Jul 6 23:28:25.610458 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jul 6 23:28:25.611698 systemd[1]: Started sshd@0-10.0.0.47:22-10.0.0.1:46746.service - OpenSSH per-connection server daemon (10.0.0.1:46746). Jul 6 23:28:25.682259 sshd[1649]: Accepted publickey for core from 10.0.0.1 port 46746 ssh2: RSA SHA256:CSJlI8/o3cgAW3JnP3N/e8VY57OeBgyk25K3mGio6wo Jul 6 23:28:25.683878 sshd-session[1649]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:28:25.697096 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jul 6 23:28:25.698045 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jul 6 23:28:25.703787 systemd-logind[1514]: New session 1 of user core. Jul 6 23:28:25.722275 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jul 6 23:28:25.725255 systemd[1]: Starting user@500.service - User Manager for UID 500... Jul 6 23:28:25.744180 (systemd)[1653]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jul 6 23:28:25.746298 systemd-logind[1514]: New session c1 of user core. Jul 6 23:28:25.849768 systemd[1653]: Queued start job for default target default.target. Jul 6 23:28:25.860004 systemd[1653]: Created slice app.slice - User Application Slice. Jul 6 23:28:25.860034 systemd[1653]: Reached target paths.target - Paths. Jul 6 23:28:25.860070 systemd[1653]: Reached target timers.target - Timers. Jul 6 23:28:25.861249 systemd[1653]: Starting dbus.socket - D-Bus User Message Bus Socket... Jul 6 23:28:25.869788 systemd[1653]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jul 6 23:28:25.869852 systemd[1653]: Reached target sockets.target - Sockets. Jul 6 23:28:25.869890 systemd[1653]: Reached target basic.target - Basic System. Jul 6 23:28:25.869917 systemd[1653]: Reached target default.target - Main User Target. Jul 6 23:28:25.869946 systemd[1653]: Startup finished in 118ms. Jul 6 23:28:25.870287 systemd[1]: Started user@500.service - User Manager for UID 500. Jul 6 23:28:25.871777 systemd[1]: Started session-1.scope - Session 1 of User core. Jul 6 23:28:25.926423 systemd[1]: Started sshd@1-10.0.0.47:22-10.0.0.1:46754.service - OpenSSH per-connection server daemon (10.0.0.1:46754). 
Jul 6 23:28:25.966202 sshd[1664]: Accepted publickey for core from 10.0.0.1 port 46754 ssh2: RSA SHA256:CSJlI8/o3cgAW3JnP3N/e8VY57OeBgyk25K3mGio6wo Jul 6 23:28:25.967368 sshd-session[1664]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:28:25.971712 systemd-logind[1514]: New session 2 of user core. Jul 6 23:28:25.980295 systemd[1]: Started session-2.scope - Session 2 of User core. Jul 6 23:28:26.033699 sshd[1666]: Connection closed by 10.0.0.1 port 46754 Jul 6 23:28:26.034185 sshd-session[1664]: pam_unix(sshd:session): session closed for user core Jul 6 23:28:26.050642 systemd[1]: sshd@1-10.0.0.47:22-10.0.0.1:46754.service: Deactivated successfully. Jul 6 23:28:26.053578 systemd[1]: session-2.scope: Deactivated successfully. Jul 6 23:28:26.054334 systemd-logind[1514]: Session 2 logged out. Waiting for processes to exit. Jul 6 23:28:26.056982 systemd[1]: Started sshd@2-10.0.0.47:22-10.0.0.1:46770.service - OpenSSH per-connection server daemon (10.0.0.1:46770). Jul 6 23:28:26.057659 systemd-logind[1514]: Removed session 2. Jul 6 23:28:26.118717 sshd[1672]: Accepted publickey for core from 10.0.0.1 port 46770 ssh2: RSA SHA256:CSJlI8/o3cgAW3JnP3N/e8VY57OeBgyk25K3mGio6wo Jul 6 23:28:26.119635 sshd-session[1672]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:28:26.124096 systemd-logind[1514]: New session 3 of user core. Jul 6 23:28:26.130281 systemd[1]: Started session-3.scope - Session 3 of User core. Jul 6 23:28:26.177988 sshd[1674]: Connection closed by 10.0.0.1 port 46770 Jul 6 23:28:26.178399 sshd-session[1672]: pam_unix(sshd:session): session closed for user core Jul 6 23:28:26.191421 systemd[1]: sshd@2-10.0.0.47:22-10.0.0.1:46770.service: Deactivated successfully. Jul 6 23:28:26.192924 systemd[1]: session-3.scope: Deactivated successfully. Jul 6 23:28:26.194626 systemd-logind[1514]: Session 3 logged out. Waiting for processes to exit. Jul 6 23:28:26.196816 systemd[1]: Started sshd@3-10.0.0.47:22-10.0.0.1:46774.service - OpenSSH per-connection server daemon (10.0.0.1:46774). Jul 6 23:28:26.197780 systemd-logind[1514]: Removed session 3. Jul 6 23:28:26.249530 sshd[1680]: Accepted publickey for core from 10.0.0.1 port 46774 ssh2: RSA SHA256:CSJlI8/o3cgAW3JnP3N/e8VY57OeBgyk25K3mGio6wo Jul 6 23:28:26.250819 sshd-session[1680]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:28:26.254351 systemd-logind[1514]: New session 4 of user core. Jul 6 23:28:26.267285 systemd[1]: Started session-4.scope - Session 4 of User core. Jul 6 23:28:26.316385 sshd[1682]: Connection closed by 10.0.0.1 port 46774 Jul 6 23:28:26.316697 sshd-session[1680]: pam_unix(sshd:session): session closed for user core Jul 6 23:28:26.332329 systemd[1]: sshd@3-10.0.0.47:22-10.0.0.1:46774.service: Deactivated successfully. Jul 6 23:28:26.335549 systemd[1]: session-4.scope: Deactivated successfully. Jul 6 23:28:26.336282 systemd-logind[1514]: Session 4 logged out. Waiting for processes to exit. Jul 6 23:28:26.338590 systemd[1]: Started sshd@4-10.0.0.47:22-10.0.0.1:46776.service - OpenSSH per-connection server daemon (10.0.0.1:46776). Jul 6 23:28:26.339164 systemd-logind[1514]: Removed session 4. 
Jul 6 23:28:26.377941 sshd[1688]: Accepted publickey for core from 10.0.0.1 port 46776 ssh2: RSA SHA256:CSJlI8/o3cgAW3JnP3N/e8VY57OeBgyk25K3mGio6wo Jul 6 23:28:26.379228 sshd-session[1688]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:28:26.383213 systemd-logind[1514]: New session 5 of user core. Jul 6 23:28:26.398309 systemd[1]: Started session-5.scope - Session 5 of User core. Jul 6 23:28:26.464283 sudo[1691]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jul 6 23:28:26.464561 sudo[1691]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 6 23:28:26.492776 sudo[1691]: pam_unix(sudo:session): session closed for user root Jul 6 23:28:26.494568 sshd[1690]: Connection closed by 10.0.0.1 port 46776 Jul 6 23:28:26.495105 sshd-session[1688]: pam_unix(sshd:session): session closed for user core Jul 6 23:28:26.503358 systemd[1]: sshd@4-10.0.0.47:22-10.0.0.1:46776.service: Deactivated successfully. Jul 6 23:28:26.505914 systemd[1]: session-5.scope: Deactivated successfully. Jul 6 23:28:26.506884 systemd-logind[1514]: Session 5 logged out. Waiting for processes to exit. Jul 6 23:28:26.510064 systemd[1]: Started sshd@5-10.0.0.47:22-10.0.0.1:46784.service - OpenSSH per-connection server daemon (10.0.0.1:46784). Jul 6 23:28:26.510908 systemd-logind[1514]: Removed session 5. Jul 6 23:28:26.555945 sshd[1697]: Accepted publickey for core from 10.0.0.1 port 46784 ssh2: RSA SHA256:CSJlI8/o3cgAW3JnP3N/e8VY57OeBgyk25K3mGio6wo Jul 6 23:28:26.557350 sshd-session[1697]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:28:26.561913 systemd-logind[1514]: New session 6 of user core. Jul 6 23:28:26.574304 systemd[1]: Started session-6.scope - Session 6 of User core. Jul 6 23:28:26.624883 sudo[1701]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jul 6 23:28:26.625510 sudo[1701]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 6 23:28:26.630282 sudo[1701]: pam_unix(sudo:session): session closed for user root Jul 6 23:28:26.635387 sudo[1700]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Jul 6 23:28:26.635653 sudo[1700]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 6 23:28:26.644139 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jul 6 23:28:26.679816 augenrules[1723]: No rules Jul 6 23:28:26.681447 systemd[1]: audit-rules.service: Deactivated successfully. Jul 6 23:28:26.682219 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jul 6 23:28:26.683794 sudo[1700]: pam_unix(sudo:session): session closed for user root Jul 6 23:28:26.685328 sshd[1699]: Connection closed by 10.0.0.1 port 46784 Jul 6 23:28:26.686353 sshd-session[1697]: pam_unix(sshd:session): session closed for user core Jul 6 23:28:26.700727 systemd[1]: sshd@5-10.0.0.47:22-10.0.0.1:46784.service: Deactivated successfully. Jul 6 23:28:26.702321 systemd[1]: session-6.scope: Deactivated successfully. Jul 6 23:28:26.703102 systemd-logind[1514]: Session 6 logged out. Waiting for processes to exit. Jul 6 23:28:26.705861 systemd[1]: Started sshd@6-10.0.0.47:22-10.0.0.1:46788.service - OpenSSH per-connection server daemon (10.0.0.1:46788). Jul 6 23:28:26.706587 systemd-logind[1514]: Removed session 6. 
Jul 6 23:28:26.762322 sshd[1732]: Accepted publickey for core from 10.0.0.1 port 46788 ssh2: RSA SHA256:CSJlI8/o3cgAW3JnP3N/e8VY57OeBgyk25K3mGio6wo Jul 6 23:28:26.763604 sshd-session[1732]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:28:26.767928 systemd-logind[1514]: New session 7 of user core. Jul 6 23:28:26.778312 systemd[1]: Started session-7.scope - Session 7 of User core. Jul 6 23:28:26.828644 sudo[1735]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jul 6 23:28:26.828927 sudo[1735]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 6 23:28:27.174536 systemd[1]: Starting docker.service - Docker Application Container Engine... Jul 6 23:28:27.188481 (dockerd)[1755]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jul 6 23:28:27.440954 dockerd[1755]: time="2025-07-06T23:28:27.440829966Z" level=info msg="Starting up" Jul 6 23:28:27.443755 dockerd[1755]: time="2025-07-06T23:28:27.443719484Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Jul 6 23:28:27.555882 dockerd[1755]: time="2025-07-06T23:28:27.555838351Z" level=info msg="Loading containers: start." Jul 6 23:28:27.567148 kernel: Initializing XFRM netlink socket Jul 6 23:28:27.774245 systemd-networkd[1434]: docker0: Link UP Jul 6 23:28:27.777430 dockerd[1755]: time="2025-07-06T23:28:27.777386274Z" level=info msg="Loading containers: done." Jul 6 23:28:27.801932 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3133983386-merged.mount: Deactivated successfully. Jul 6 23:28:27.806377 dockerd[1755]: time="2025-07-06T23:28:27.806322180Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jul 6 23:28:27.806512 dockerd[1755]: time="2025-07-06T23:28:27.806489525Z" level=info msg="Docker daemon" commit=bbd0a17ccc67e48d4a69393287b7fcc4f0578683 containerd-snapshotter=false storage-driver=overlay2 version=28.0.1 Jul 6 23:28:27.806667 dockerd[1755]: time="2025-07-06T23:28:27.806637312Z" level=info msg="Initializing buildkit" Jul 6 23:28:27.827701 dockerd[1755]: time="2025-07-06T23:28:27.827618856Z" level=info msg="Completed buildkit initialization" Jul 6 23:28:27.833794 dockerd[1755]: time="2025-07-06T23:28:27.833737648Z" level=info msg="Daemon has completed initialization" Jul 6 23:28:27.834296 dockerd[1755]: time="2025-07-06T23:28:27.833793024Z" level=info msg="API listen on /run/docker.sock" Jul 6 23:28:27.833984 systemd[1]: Started docker.service - Docker Application Container Engine. Jul 6 23:28:28.414912 containerd[1529]: time="2025-07-06T23:28:28.414873122Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.2\"" Jul 6 23:28:29.170374 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3360562091.mount: Deactivated successfully. 
Jul 6 23:28:30.097885 containerd[1529]: time="2025-07-06T23:28:30.097828000Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:28:30.098448 containerd[1529]: time="2025-07-06T23:28:30.098413391Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.2: active requests=0, bytes read=27351718" Jul 6 23:28:30.099098 containerd[1529]: time="2025-07-06T23:28:30.099057183Z" level=info msg="ImageCreate event name:\"sha256:04ac773cca35cc457f24a6501b6b308d63a2cddd1aec14fe95559bccca3010a4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:28:30.101666 containerd[1529]: time="2025-07-06T23:28:30.101618361Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:e8ae58675899e946fabe38425f2b3bfd33120b7930d05b5898de97c81a7f6137\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:28:30.102736 containerd[1529]: time="2025-07-06T23:28:30.102688582Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.2\" with image id \"sha256:04ac773cca35cc457f24a6501b6b308d63a2cddd1aec14fe95559bccca3010a4\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.2\", repo digest \"registry.k8s.io/kube-apiserver@sha256:e8ae58675899e946fabe38425f2b3bfd33120b7930d05b5898de97c81a7f6137\", size \"27348516\" in 1.687773663s" Jul 6 23:28:30.103034 containerd[1529]: time="2025-07-06T23:28:30.102804751Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.2\" returns image reference \"sha256:04ac773cca35cc457f24a6501b6b308d63a2cddd1aec14fe95559bccca3010a4\"" Jul 6 23:28:30.105846 containerd[1529]: time="2025-07-06T23:28:30.105814762Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.2\"" Jul 6 23:28:31.094044 containerd[1529]: time="2025-07-06T23:28:31.093604164Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:28:31.094684 containerd[1529]: time="2025-07-06T23:28:31.094658460Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.2: active requests=0, bytes read=23537625" Jul 6 23:28:31.095660 containerd[1529]: time="2025-07-06T23:28:31.095634123Z" level=info msg="ImageCreate event name:\"sha256:99a259072231375ad69a369cdf5620d60cdff72d450951c603fad8a94667af65\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:28:31.098521 containerd[1529]: time="2025-07-06T23:28:31.098482517Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:2236e72a4be5dcc9c04600353ff8849db1557f5364947c520ff05471ae719081\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:28:31.099331 containerd[1529]: time="2025-07-06T23:28:31.099294817Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.2\" with image id \"sha256:99a259072231375ad69a369cdf5620d60cdff72d450951c603fad8a94667af65\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.2\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:2236e72a4be5dcc9c04600353ff8849db1557f5364947c520ff05471ae719081\", size \"25092541\" in 993.445587ms" Jul 6 23:28:31.099331 containerd[1529]: time="2025-07-06T23:28:31.099329444Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.2\" returns image reference \"sha256:99a259072231375ad69a369cdf5620d60cdff72d450951c603fad8a94667af65\"" Jul 6 23:28:31.100285 containerd[1529]: 
time="2025-07-06T23:28:31.100258805Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.2\"" Jul 6 23:28:32.084566 containerd[1529]: time="2025-07-06T23:28:32.084508189Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:28:32.085069 containerd[1529]: time="2025-07-06T23:28:32.085038325Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.2: active requests=0, bytes read=18293517" Jul 6 23:28:32.085849 containerd[1529]: time="2025-07-06T23:28:32.085815419Z" level=info msg="ImageCreate event name:\"sha256:bb3da57746ca4726b669d35145eb9b4085643c61bbc80b9df3bf1e6021ba9eaf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:28:32.088181 containerd[1529]: time="2025-07-06T23:28:32.088152329Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:304c28303133be7d927973bc9bd6c83945b3735c59d283c25b63d5b9ed53bca3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:28:32.090067 containerd[1529]: time="2025-07-06T23:28:32.090032888Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.2\" with image id \"sha256:bb3da57746ca4726b669d35145eb9b4085643c61bbc80b9df3bf1e6021ba9eaf\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.2\", repo digest \"registry.k8s.io/kube-scheduler@sha256:304c28303133be7d927973bc9bd6c83945b3735c59d283c25b63d5b9ed53bca3\", size \"19848451\" in 989.741225ms" Jul 6 23:28:32.090067 containerd[1529]: time="2025-07-06T23:28:32.090066374Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.2\" returns image reference \"sha256:bb3da57746ca4726b669d35145eb9b4085643c61bbc80b9df3bf1e6021ba9eaf\"" Jul 6 23:28:32.090731 containerd[1529]: time="2025-07-06T23:28:32.090515749Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.2\"" Jul 6 23:28:32.830145 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jul 6 23:28:32.833232 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 6 23:28:32.984573 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 6 23:28:32.988431 (kubelet)[2039]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 6 23:28:33.030498 kubelet[2039]: E0706 23:28:33.030432 2039 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 6 23:28:33.033724 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 6 23:28:33.033848 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 6 23:28:33.034173 systemd[1]: kubelet.service: Consumed 155ms CPU time, 106.4M memory peak. Jul 6 23:28:33.135360 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1770239668.mount: Deactivated successfully. 
Jul 6 23:28:33.596624 containerd[1529]: time="2025-07-06T23:28:33.596494820Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:28:33.597066 containerd[1529]: time="2025-07-06T23:28:33.597031093Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.2: active requests=0, bytes read=28199474" Jul 6 23:28:33.598032 containerd[1529]: time="2025-07-06T23:28:33.597700552Z" level=info msg="ImageCreate event name:\"sha256:c26522e54bad2e6bfbb1bf11500833c94433076a3fa38436a2ec496a422c5455\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:28:33.599697 containerd[1529]: time="2025-07-06T23:28:33.599653585Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:4796ef3e43efa5ed2a5b015c18f81d3c2fe3aea36f555ea643cc01827eb65e51\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:28:33.600364 containerd[1529]: time="2025-07-06T23:28:33.600337406Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.2\" with image id \"sha256:c26522e54bad2e6bfbb1bf11500833c94433076a3fa38436a2ec496a422c5455\", repo tag \"registry.k8s.io/kube-proxy:v1.33.2\", repo digest \"registry.k8s.io/kube-proxy@sha256:4796ef3e43efa5ed2a5b015c18f81d3c2fe3aea36f555ea643cc01827eb65e51\", size \"28198491\" in 1.509787526s" Jul 6 23:28:33.600443 containerd[1529]: time="2025-07-06T23:28:33.600428775Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.2\" returns image reference \"sha256:c26522e54bad2e6bfbb1bf11500833c94433076a3fa38436a2ec496a422c5455\"" Jul 6 23:28:33.601037 containerd[1529]: time="2025-07-06T23:28:33.601009603Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" Jul 6 23:28:34.373596 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount292864512.mount: Deactivated successfully. 
Jul 6 23:28:35.304388 containerd[1529]: time="2025-07-06T23:28:35.304328078Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:28:35.304833 containerd[1529]: time="2025-07-06T23:28:35.304784254Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=19152119" Jul 6 23:28:35.305793 containerd[1529]: time="2025-07-06T23:28:35.305769110Z" level=info msg="ImageCreate event name:\"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:28:35.308441 containerd[1529]: time="2025-07-06T23:28:35.308366243Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:28:35.309497 containerd[1529]: time="2025-07-06T23:28:35.309466853Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"19148915\" in 1.708419541s" Jul 6 23:28:35.309556 containerd[1529]: time="2025-07-06T23:28:35.309501873Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\"" Jul 6 23:28:35.310143 containerd[1529]: time="2025-07-06T23:28:35.309929826Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jul 6 23:28:35.758716 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3353224592.mount: Deactivated successfully. 
Jul 6 23:28:35.767764 containerd[1529]: time="2025-07-06T23:28:35.767652328Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 6 23:28:35.768455 containerd[1529]: time="2025-07-06T23:28:35.768419193Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268705" Jul 6 23:28:35.769372 containerd[1529]: time="2025-07-06T23:28:35.769323992Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 6 23:28:35.771875 containerd[1529]: time="2025-07-06T23:28:35.771833038Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 6 23:28:35.773125 containerd[1529]: time="2025-07-06T23:28:35.773082751Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 463.121443ms" Jul 6 23:28:35.773175 containerd[1529]: time="2025-07-06T23:28:35.773131286Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Jul 6 23:28:35.773573 containerd[1529]: time="2025-07-06T23:28:35.773533362Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\"" Jul 6 23:28:36.225471 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount68196588.mount: Deactivated successfully. 
Jul 6 23:28:37.815755 containerd[1529]: time="2025-07-06T23:28:37.815683833Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:28:37.816217 containerd[1529]: time="2025-07-06T23:28:37.816185943Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=69334601" Jul 6 23:28:37.817258 containerd[1529]: time="2025-07-06T23:28:37.817224000Z" level=info msg="ImageCreate event name:\"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:28:37.820867 containerd[1529]: time="2025-07-06T23:28:37.820805767Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:28:37.821957 containerd[1529]: time="2025-07-06T23:28:37.821911499Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"70026017\" in 2.048348631s" Jul 6 23:28:37.822027 containerd[1529]: time="2025-07-06T23:28:37.821962017Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\"" Jul 6 23:28:43.284257 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jul 6 23:28:43.285782 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 6 23:28:43.420231 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 6 23:28:43.423662 (kubelet)[2196]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 6 23:28:43.455475 kubelet[2196]: E0706 23:28:43.455422 2196 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 6 23:28:43.458106 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 6 23:28:43.458368 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 6 23:28:43.459007 systemd[1]: kubelet.service: Consumed 130ms CPU time, 105.4M memory peak. Jul 6 23:28:44.117196 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 6 23:28:44.117482 systemd[1]: kubelet.service: Consumed 130ms CPU time, 105.4M memory peak. Jul 6 23:28:44.120092 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 6 23:28:44.144943 systemd[1]: Reload requested from client PID 2211 ('systemctl') (unit session-7.scope)... Jul 6 23:28:44.144966 systemd[1]: Reloading... Jul 6 23:28:44.209145 zram_generator::config[2254]: No configuration found. Jul 6 23:28:44.398392 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 6 23:28:44.493345 systemd[1]: Reloading finished in 348 ms. 
Jul 6 23:28:44.549702 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jul 6 23:28:44.549785 systemd[1]: kubelet.service: Failed with result 'signal'. Jul 6 23:28:44.550081 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 6 23:28:44.550154 systemd[1]: kubelet.service: Consumed 90ms CPU time, 95.1M memory peak. Jul 6 23:28:44.551889 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 6 23:28:44.679856 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 6 23:28:44.690424 (kubelet)[2298]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 6 23:28:44.730903 kubelet[2298]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 6 23:28:44.730903 kubelet[2298]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jul 6 23:28:44.730903 kubelet[2298]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 6 23:28:44.731344 kubelet[2298]: I0706 23:28:44.730941 2298 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 6 23:28:46.590717 kubelet[2298]: I0706 23:28:46.590664 2298 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Jul 6 23:28:46.590717 kubelet[2298]: I0706 23:28:46.590698 2298 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 6 23:28:46.591068 kubelet[2298]: I0706 23:28:46.590925 2298 server.go:956] "Client rotation is on, will bootstrap in background" Jul 6 23:28:46.632746 kubelet[2298]: E0706 23:28:46.632707 2298 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.47:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.47:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Jul 6 23:28:46.633847 kubelet[2298]: I0706 23:28:46.633785 2298 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 6 23:28:46.646462 kubelet[2298]: I0706 23:28:46.646431 2298 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jul 6 23:28:46.659973 kubelet[2298]: I0706 23:28:46.659941 2298 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jul 6 23:28:46.665691 kubelet[2298]: I0706 23:28:46.665634 2298 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 6 23:28:46.665871 kubelet[2298]: I0706 23:28:46.665687 2298 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jul 6 23:28:46.666060 kubelet[2298]: I0706 23:28:46.666035 2298 topology_manager.go:138] "Creating topology manager with none policy" Jul 6 23:28:46.666060 kubelet[2298]: I0706 23:28:46.666048 2298 container_manager_linux.go:303] "Creating device plugin manager" Jul 6 23:28:46.666344 kubelet[2298]: I0706 23:28:46.666317 2298 state_mem.go:36] "Initialized new in-memory state store" Jul 6 23:28:46.676196 kubelet[2298]: I0706 23:28:46.676164 2298 kubelet.go:480] "Attempting to sync node with API server" Jul 6 23:28:46.676196 kubelet[2298]: I0706 23:28:46.676191 2298 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 6 23:28:46.678174 kubelet[2298]: I0706 23:28:46.678155 2298 kubelet.go:386] "Adding apiserver pod source" Jul 6 23:28:46.679609 kubelet[2298]: I0706 23:28:46.679515 2298 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 6 23:28:46.683095 kubelet[2298]: E0706 23:28:46.683057 2298 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.47:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.47:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jul 6 23:28:46.683220 kubelet[2298]: E0706 23:28:46.683183 2298 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.47:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.47:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jul 6 23:28:46.683982 kubelet[2298]: 
I0706 23:28:46.683912 2298 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" Jul 6 23:28:46.684729 kubelet[2298]: I0706 23:28:46.684685 2298 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Jul 6 23:28:46.684833 kubelet[2298]: W0706 23:28:46.684816 2298 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jul 6 23:28:46.690719 kubelet[2298]: I0706 23:28:46.690679 2298 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jul 6 23:28:46.690783 kubelet[2298]: I0706 23:28:46.690736 2298 server.go:1289] "Started kubelet" Jul 6 23:28:46.690885 kubelet[2298]: I0706 23:28:46.690840 2298 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Jul 6 23:28:46.691944 kubelet[2298]: I0706 23:28:46.691922 2298 server.go:317] "Adding debug handlers to kubelet server" Jul 6 23:28:46.696095 kubelet[2298]: I0706 23:28:46.694447 2298 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 6 23:28:46.696095 kubelet[2298]: I0706 23:28:46.694783 2298 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 6 23:28:46.696975 kubelet[2298]: I0706 23:28:46.696645 2298 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 6 23:28:46.697858 kubelet[2298]: I0706 23:28:46.697844 2298 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 6 23:28:46.698158 kubelet[2298]: E0706 23:28:46.693921 2298 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.47:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.47:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.184fcd50bb1c8340 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-07-06 23:28:46.690698048 +0000 UTC m=+1.995338176,LastTimestamp:2025-07-06 23:28:46.690698048 +0000 UTC m=+1.995338176,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jul 6 23:28:46.698487 kubelet[2298]: I0706 23:28:46.698470 2298 volume_manager.go:297] "Starting Kubelet Volume Manager" Jul 6 23:28:46.698564 kubelet[2298]: I0706 23:28:46.698549 2298 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jul 6 23:28:46.698625 kubelet[2298]: I0706 23:28:46.698611 2298 reconciler.go:26] "Reconciler: start to sync state" Jul 6 23:28:46.698821 kubelet[2298]: E0706 23:28:46.698798 2298 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 6 23:28:46.699134 kubelet[2298]: E0706 23:28:46.698917 2298 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.47:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.47:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jul 6 23:28:46.699134 
kubelet[2298]: E0706 23:28:46.698945 2298 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.47:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.47:6443: connect: connection refused" interval="200ms" Jul 6 23:28:46.699647 kubelet[2298]: E0706 23:28:46.699629 2298 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 6 23:28:46.701041 kubelet[2298]: I0706 23:28:46.701005 2298 factory.go:223] Registration of the containerd container factory successfully Jul 6 23:28:46.701041 kubelet[2298]: I0706 23:28:46.701025 2298 factory.go:223] Registration of the systemd container factory successfully Jul 6 23:28:46.701158 kubelet[2298]: I0706 23:28:46.701110 2298 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 6 23:28:46.713547 kubelet[2298]: I0706 23:28:46.713523 2298 cpu_manager.go:221] "Starting CPU manager" policy="none" Jul 6 23:28:46.713547 kubelet[2298]: I0706 23:28:46.713540 2298 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jul 6 23:28:46.713547 kubelet[2298]: I0706 23:28:46.713559 2298 state_mem.go:36] "Initialized new in-memory state store" Jul 6 23:28:46.798567 kubelet[2298]: I0706 23:28:46.798517 2298 policy_none.go:49] "None policy: Start" Jul 6 23:28:46.798567 kubelet[2298]: I0706 23:28:46.798547 2298 memory_manager.go:186] "Starting memorymanager" policy="None" Jul 6 23:28:46.798567 kubelet[2298]: I0706 23:28:46.798568 2298 state_mem.go:35] "Initializing new in-memory state store" Jul 6 23:28:46.799320 kubelet[2298]: E0706 23:28:46.799285 2298 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 6 23:28:46.804398 kubelet[2298]: I0706 23:28:46.804320 2298 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Jul 6 23:28:46.806034 kubelet[2298]: I0706 23:28:46.805969 2298 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Jul 6 23:28:46.806034 kubelet[2298]: I0706 23:28:46.805991 2298 status_manager.go:230] "Starting to sync pod status with apiserver" Jul 6 23:28:46.806034 kubelet[2298]: I0706 23:28:46.806011 2298 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jul 6 23:28:46.806034 kubelet[2298]: I0706 23:28:46.806020 2298 kubelet.go:2436] "Starting kubelet main sync loop" Jul 6 23:28:46.806848 kubelet[2298]: E0706 23:28:46.806056 2298 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 6 23:28:46.806388 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jul 6 23:28:46.807819 kubelet[2298]: E0706 23:28:46.807757 2298 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.47:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.47:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jul 6 23:28:46.817777 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. 
Jul 6 23:28:46.820667 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jul 6 23:28:46.833912 kubelet[2298]: E0706 23:28:46.833868 2298 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jul 6 23:28:46.834089 kubelet[2298]: I0706 23:28:46.834074 2298 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 6 23:28:46.834151 kubelet[2298]: I0706 23:28:46.834091 2298 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 6 23:28:46.834564 kubelet[2298]: I0706 23:28:46.834534 2298 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 6 23:28:46.835290 kubelet[2298]: E0706 23:28:46.835263 2298 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jul 6 23:28:46.835355 kubelet[2298]: E0706 23:28:46.835302 2298 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jul 6 23:28:46.899918 kubelet[2298]: E0706 23:28:46.899784 2298 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.47:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.47:6443: connect: connection refused" interval="400ms" Jul 6 23:28:46.920795 systemd[1]: Created slice kubepods-burstable-pod84b858ec27c8b2738b1d9ff9927e0dcb.slice - libcontainer container kubepods-burstable-pod84b858ec27c8b2738b1d9ff9927e0dcb.slice. Jul 6 23:28:46.935653 kubelet[2298]: I0706 23:28:46.935582 2298 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jul 6 23:28:46.936038 kubelet[2298]: E0706 23:28:46.936007 2298 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.47:6443/api/v1/nodes\": dial tcp 10.0.0.47:6443: connect: connection refused" node="localhost" Jul 6 23:28:46.947706 kubelet[2298]: E0706 23:28:46.947667 2298 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 6 23:28:46.950537 systemd[1]: Created slice kubepods-burstable-pod834ee54f1daa06092e339273649eb5ea.slice - libcontainer container kubepods-burstable-pod834ee54f1daa06092e339273649eb5ea.slice. Jul 6 23:28:46.970258 kubelet[2298]: E0706 23:28:46.970228 2298 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 6 23:28:46.972946 systemd[1]: Created slice kubepods-burstable-pod7cea89690cdc0acd0ac33f42783bbffe.slice - libcontainer container kubepods-burstable-pod7cea89690cdc0acd0ac33f42783bbffe.slice. 
Jul 6 23:28:46.974760 kubelet[2298]: E0706 23:28:46.974613 2298 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 6 23:28:47.099971 kubelet[2298]: I0706 23:28:47.099937 2298 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/834ee54f1daa06092e339273649eb5ea-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"834ee54f1daa06092e339273649eb5ea\") " pod="kube-system/kube-scheduler-localhost" Jul 6 23:28:47.100368 kubelet[2298]: I0706 23:28:47.100185 2298 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/7cea89690cdc0acd0ac33f42783bbffe-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"7cea89690cdc0acd0ac33f42783bbffe\") " pod="kube-system/kube-apiserver-localhost" Jul 6 23:28:47.100368 kubelet[2298]: I0706 23:28:47.100210 2298 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/7cea89690cdc0acd0ac33f42783bbffe-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"7cea89690cdc0acd0ac33f42783bbffe\") " pod="kube-system/kube-apiserver-localhost" Jul 6 23:28:47.100368 kubelet[2298]: I0706 23:28:47.100230 2298 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/84b858ec27c8b2738b1d9ff9927e0dcb-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"84b858ec27c8b2738b1d9ff9927e0dcb\") " pod="kube-system/kube-controller-manager-localhost" Jul 6 23:28:47.100368 kubelet[2298]: I0706 23:28:47.100246 2298 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/84b858ec27c8b2738b1d9ff9927e0dcb-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"84b858ec27c8b2738b1d9ff9927e0dcb\") " pod="kube-system/kube-controller-manager-localhost" Jul 6 23:28:47.100368 kubelet[2298]: I0706 23:28:47.100262 2298 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/7cea89690cdc0acd0ac33f42783bbffe-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"7cea89690cdc0acd0ac33f42783bbffe\") " pod="kube-system/kube-apiserver-localhost" Jul 6 23:28:47.100521 kubelet[2298]: I0706 23:28:47.100275 2298 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/84b858ec27c8b2738b1d9ff9927e0dcb-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"84b858ec27c8b2738b1d9ff9927e0dcb\") " pod="kube-system/kube-controller-manager-localhost" Jul 6 23:28:47.100521 kubelet[2298]: I0706 23:28:47.100289 2298 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/84b858ec27c8b2738b1d9ff9927e0dcb-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"84b858ec27c8b2738b1d9ff9927e0dcb\") " pod="kube-system/kube-controller-manager-localhost" Jul 6 23:28:47.100521 kubelet[2298]: I0706 23:28:47.100304 2298 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/84b858ec27c8b2738b1d9ff9927e0dcb-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"84b858ec27c8b2738b1d9ff9927e0dcb\") " pod="kube-system/kube-controller-manager-localhost" Jul 6 23:28:47.138256 kubelet[2298]: I0706 23:28:47.138172 2298 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jul 6 23:28:47.138676 kubelet[2298]: E0706 23:28:47.138649 2298 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.47:6443/api/v1/nodes\": dial tcp 10.0.0.47:6443: connect: connection refused" node="localhost" Jul 6 23:28:47.249268 containerd[1529]: time="2025-07-06T23:28:47.249167458Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:84b858ec27c8b2738b1d9ff9927e0dcb,Namespace:kube-system,Attempt:0,}" Jul 6 23:28:47.266147 containerd[1529]: time="2025-07-06T23:28:47.266084530Z" level=info msg="connecting to shim 92163058692229f5aeb1dc97dd44c9b3e80d04493e6322af9ffdeed89be9fd8a" address="unix:///run/containerd/s/543055c76276d4bf5dfd96dc47d63d9a96739970aef08cd14424f0b3bdd5e785" namespace=k8s.io protocol=ttrpc version=3 Jul 6 23:28:47.272079 containerd[1529]: time="2025-07-06T23:28:47.271787234Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:834ee54f1daa06092e339273649eb5ea,Namespace:kube-system,Attempt:0,}" Jul 6 23:28:47.276142 containerd[1529]: time="2025-07-06T23:28:47.276081710Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:7cea89690cdc0acd0ac33f42783bbffe,Namespace:kube-system,Attempt:0,}" Jul 6 23:28:47.294749 containerd[1529]: time="2025-07-06T23:28:47.294701040Z" level=info msg="connecting to shim 5f36754297a1d38cc87c5666ff2f675b4797b6278a88f4703f5cb0cc671de709" address="unix:///run/containerd/s/fab19bd8770d35035a54a3ab71ccc5402e0ec42e0e179061e224ee1451052936" namespace=k8s.io protocol=ttrpc version=3 Jul 6 23:28:47.296477 systemd[1]: Started cri-containerd-92163058692229f5aeb1dc97dd44c9b3e80d04493e6322af9ffdeed89be9fd8a.scope - libcontainer container 92163058692229f5aeb1dc97dd44c9b3e80d04493e6322af9ffdeed89be9fd8a. Jul 6 23:28:47.300413 kubelet[2298]: E0706 23:28:47.300366 2298 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.47:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.47:6443: connect: connection refused" interval="800ms" Jul 6 23:28:47.312555 containerd[1529]: time="2025-07-06T23:28:47.312492252Z" level=info msg="connecting to shim f19c1ffcd7c3f5392697779c2f6094935c5f200764f26f1be8a3a567df5ad130" address="unix:///run/containerd/s/0daad1fddf7336d3cce8b778e5c0fa1e35b5de6b71b193f4db7e1449ff30b3e4" namespace=k8s.io protocol=ttrpc version=3 Jul 6 23:28:47.328310 systemd[1]: Started cri-containerd-5f36754297a1d38cc87c5666ff2f675b4797b6278a88f4703f5cb0cc671de709.scope - libcontainer container 5f36754297a1d38cc87c5666ff2f675b4797b6278a88f4703f5cb0cc671de709. Jul 6 23:28:47.345283 systemd[1]: Started cri-containerd-f19c1ffcd7c3f5392697779c2f6094935c5f200764f26f1be8a3a567df5ad130.scope - libcontainer container f19c1ffcd7c3f5392697779c2f6094935c5f200764f26f1be8a3a567df5ad130. 
Jul 6 23:28:47.363152 containerd[1529]: time="2025-07-06T23:28:47.363097293Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:84b858ec27c8b2738b1d9ff9927e0dcb,Namespace:kube-system,Attempt:0,} returns sandbox id \"92163058692229f5aeb1dc97dd44c9b3e80d04493e6322af9ffdeed89be9fd8a\"" Jul 6 23:28:47.368453 containerd[1529]: time="2025-07-06T23:28:47.368409328Z" level=info msg="CreateContainer within sandbox \"92163058692229f5aeb1dc97dd44c9b3e80d04493e6322af9ffdeed89be9fd8a\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jul 6 23:28:47.378893 containerd[1529]: time="2025-07-06T23:28:47.378845795Z" level=info msg="Container 708e60d499be887cf3b9f42239d56d570c46be4ef61db2b3ca8b86ee34964d58: CDI devices from CRI Config.CDIDevices: []" Jul 6 23:28:47.386752 containerd[1529]: time="2025-07-06T23:28:47.386702629Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:834ee54f1daa06092e339273649eb5ea,Namespace:kube-system,Attempt:0,} returns sandbox id \"5f36754297a1d38cc87c5666ff2f675b4797b6278a88f4703f5cb0cc671de709\"" Jul 6 23:28:47.386921 containerd[1529]: time="2025-07-06T23:28:47.386714654Z" level=info msg="CreateContainer within sandbox \"92163058692229f5aeb1dc97dd44c9b3e80d04493e6322af9ffdeed89be9fd8a\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"708e60d499be887cf3b9f42239d56d570c46be4ef61db2b3ca8b86ee34964d58\"" Jul 6 23:28:47.388338 containerd[1529]: time="2025-07-06T23:28:47.388303894Z" level=info msg="StartContainer for \"708e60d499be887cf3b9f42239d56d570c46be4ef61db2b3ca8b86ee34964d58\"" Jul 6 23:28:47.390586 containerd[1529]: time="2025-07-06T23:28:47.390551306Z" level=info msg="connecting to shim 708e60d499be887cf3b9f42239d56d570c46be4ef61db2b3ca8b86ee34964d58" address="unix:///run/containerd/s/543055c76276d4bf5dfd96dc47d63d9a96739970aef08cd14424f0b3bdd5e785" protocol=ttrpc version=3 Jul 6 23:28:47.392880 containerd[1529]: time="2025-07-06T23:28:47.392505566Z" level=info msg="CreateContainer within sandbox \"5f36754297a1d38cc87c5666ff2f675b4797b6278a88f4703f5cb0cc671de709\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jul 6 23:28:47.393970 containerd[1529]: time="2025-07-06T23:28:47.393936046Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:7cea89690cdc0acd0ac33f42783bbffe,Namespace:kube-system,Attempt:0,} returns sandbox id \"f19c1ffcd7c3f5392697779c2f6094935c5f200764f26f1be8a3a567df5ad130\"" Jul 6 23:28:47.402449 containerd[1529]: time="2025-07-06T23:28:47.402379901Z" level=info msg="CreateContainer within sandbox \"f19c1ffcd7c3f5392697779c2f6094935c5f200764f26f1be8a3a567df5ad130\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jul 6 23:28:47.404469 containerd[1529]: time="2025-07-06T23:28:47.404433677Z" level=info msg="Container 2159459cdee9b98dcbceca8c81da9fdc0c0445204d6afe26c2b9c8ce5e05f732: CDI devices from CRI Config.CDIDevices: []" Jul 6 23:28:47.410313 systemd[1]: Started cri-containerd-708e60d499be887cf3b9f42239d56d570c46be4ef61db2b3ca8b86ee34964d58.scope - libcontainer container 708e60d499be887cf3b9f42239d56d570c46be4ef61db2b3ca8b86ee34964d58. 
Jul 6 23:28:47.412998 containerd[1529]: time="2025-07-06T23:28:47.412961386Z" level=info msg="Container c10751067019bda6fa7b9a73ddc907a4e2873f1c6e5cb04b39ee69d324a1b67c: CDI devices from CRI Config.CDIDevices: []" Jul 6 23:28:47.416976 containerd[1529]: time="2025-07-06T23:28:47.416917847Z" level=info msg="CreateContainer within sandbox \"5f36754297a1d38cc87c5666ff2f675b4797b6278a88f4703f5cb0cc671de709\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"2159459cdee9b98dcbceca8c81da9fdc0c0445204d6afe26c2b9c8ce5e05f732\"" Jul 6 23:28:47.417566 containerd[1529]: time="2025-07-06T23:28:47.417541582Z" level=info msg="StartContainer for \"2159459cdee9b98dcbceca8c81da9fdc0c0445204d6afe26c2b9c8ce5e05f732\"" Jul 6 23:28:47.419145 containerd[1529]: time="2025-07-06T23:28:47.418979692Z" level=info msg="connecting to shim 2159459cdee9b98dcbceca8c81da9fdc0c0445204d6afe26c2b9c8ce5e05f732" address="unix:///run/containerd/s/fab19bd8770d35035a54a3ab71ccc5402e0ec42e0e179061e224ee1451052936" protocol=ttrpc version=3 Jul 6 23:28:47.422406 containerd[1529]: time="2025-07-06T23:28:47.422369027Z" level=info msg="CreateContainer within sandbox \"f19c1ffcd7c3f5392697779c2f6094935c5f200764f26f1be8a3a567df5ad130\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"c10751067019bda6fa7b9a73ddc907a4e2873f1c6e5cb04b39ee69d324a1b67c\"" Jul 6 23:28:47.423510 containerd[1529]: time="2025-07-06T23:28:47.423484424Z" level=info msg="StartContainer for \"c10751067019bda6fa7b9a73ddc907a4e2873f1c6e5cb04b39ee69d324a1b67c\"" Jul 6 23:28:47.425276 containerd[1529]: time="2025-07-06T23:28:47.425192315Z" level=info msg="connecting to shim c10751067019bda6fa7b9a73ddc907a4e2873f1c6e5cb04b39ee69d324a1b67c" address="unix:///run/containerd/s/0daad1fddf7336d3cce8b778e5c0fa1e35b5de6b71b193f4db7e1449ff30b3e4" protocol=ttrpc version=3 Jul 6 23:28:47.439486 systemd[1]: Started cri-containerd-2159459cdee9b98dcbceca8c81da9fdc0c0445204d6afe26c2b9c8ce5e05f732.scope - libcontainer container 2159459cdee9b98dcbceca8c81da9fdc0c0445204d6afe26c2b9c8ce5e05f732. Jul 6 23:28:47.448313 systemd[1]: Started cri-containerd-c10751067019bda6fa7b9a73ddc907a4e2873f1c6e5cb04b39ee69d324a1b67c.scope - libcontainer container c10751067019bda6fa7b9a73ddc907a4e2873f1c6e5cb04b39ee69d324a1b67c. 
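The containerd entries around here trace the same three-step CRI flow for each static pod: RunPodSandbox returns a sandbox id, CreateContainer within that sandbox returns a container id, and StartContainer launches it, with each step "connecting to shim" over a per-sandbox ttrpc socket under /run/containerd/s/. A stripped-down Go sketch of that call sequence against a hypothetical client interface, not the real CRI API types:

package main

import "fmt"

// runtime is a stand-in for the CRI runtime service; the real interface
// (RuntimeServiceClient) takes request/response structs, not plain strings.
type runtime interface {
	RunPodSandbox(name, namespace, uid string) (sandboxID string, err error)
	CreateContainer(sandboxID, name string) (containerID string, err error)
	StartContainer(containerID string) error
}

// startStaticPod mirrors the order of operations in the log above.
func startStaticPod(rt runtime, name, uid string) error {
	sb, err := rt.RunPodSandbox(name, "kube-system", uid)
	if err != nil {
		return err
	}
	ctr, err := rt.CreateContainer(sb, name)
	if err != nil {
		return err
	}
	return rt.StartContainer(ctr)
}

// fakeRuntime lets the sketch run without a node; it just hands out ids.
type fakeRuntime struct{ n int }

func (f *fakeRuntime) RunPodSandbox(name, ns, uid string) (string, error) {
	f.n++
	return fmt.Sprintf("sandbox-%d", f.n), nil
}
func (f *fakeRuntime) CreateContainer(sb, name string) (string, error) {
	f.n++
	return fmt.Sprintf("container-%d", f.n), nil
}
func (f *fakeRuntime) StartContainer(id string) error {
	fmt.Println("started", id)
	return nil
}

func main() {
	_ = startStaticPod(&fakeRuntime{}, "kube-apiserver-localhost", "7cea89690cdc0acd0ac33f42783bbffe")
}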
Jul 6 23:28:47.459952 containerd[1529]: time="2025-07-06T23:28:47.458959743Z" level=info msg="StartContainer for \"708e60d499be887cf3b9f42239d56d570c46be4ef61db2b3ca8b86ee34964d58\" returns successfully" Jul 6 23:28:47.502548 containerd[1529]: time="2025-07-06T23:28:47.502415220Z" level=info msg="StartContainer for \"2159459cdee9b98dcbceca8c81da9fdc0c0445204d6afe26c2b9c8ce5e05f732\" returns successfully" Jul 6 23:28:47.528131 containerd[1529]: time="2025-07-06T23:28:47.524343906Z" level=info msg="StartContainer for \"c10751067019bda6fa7b9a73ddc907a4e2873f1c6e5cb04b39ee69d324a1b67c\" returns successfully" Jul 6 23:28:47.542257 kubelet[2298]: I0706 23:28:47.540338 2298 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jul 6 23:28:47.542257 kubelet[2298]: E0706 23:28:47.540651 2298 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.47:6443/api/v1/nodes\": dial tcp 10.0.0.47:6443: connect: connection refused" node="localhost" Jul 6 23:28:47.648347 kubelet[2298]: E0706 23:28:47.648166 2298 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.47:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.47:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jul 6 23:28:47.666636 kubelet[2298]: E0706 23:28:47.666596 2298 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.47:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.47:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jul 6 23:28:47.815331 kubelet[2298]: E0706 23:28:47.814857 2298 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 6 23:28:47.816664 kubelet[2298]: E0706 23:28:47.816641 2298 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 6 23:28:47.820033 kubelet[2298]: E0706 23:28:47.820015 2298 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 6 23:28:48.342538 kubelet[2298]: I0706 23:28:48.342490 2298 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jul 6 23:28:48.822957 kubelet[2298]: E0706 23:28:48.822600 2298 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 6 23:28:48.822957 kubelet[2298]: E0706 23:28:48.822750 2298 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 6 23:28:49.344005 kubelet[2298]: E0706 23:28:49.343968 2298 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Jul 6 23:28:49.502892 kubelet[2298]: I0706 23:28:49.502785 2298 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Jul 6 23:28:49.502892 kubelet[2298]: E0706 23:28:49.502830 2298 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Jul 6 23:28:49.514920 
kubelet[2298]: E0706 23:28:49.514861 2298 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 6 23:28:49.616066 kubelet[2298]: E0706 23:28:49.615954 2298 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 6 23:28:49.699330 kubelet[2298]: I0706 23:28:49.699298 2298 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jul 6 23:28:49.705310 kubelet[2298]: E0706 23:28:49.705261 2298 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Jul 6 23:28:49.705310 kubelet[2298]: I0706 23:28:49.705302 2298 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Jul 6 23:28:49.707426 kubelet[2298]: E0706 23:28:49.707287 2298 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Jul 6 23:28:49.707426 kubelet[2298]: I0706 23:28:49.707313 2298 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jul 6 23:28:49.708978 kubelet[2298]: E0706 23:28:49.708953 2298 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Jul 6 23:28:50.682252 kubelet[2298]: I0706 23:28:50.682204 2298 apiserver.go:52] "Watching apiserver" Jul 6 23:28:50.699540 kubelet[2298]: I0706 23:28:50.699513 2298 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jul 6 23:28:50.866987 kubelet[2298]: I0706 23:28:50.866942 2298 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Jul 6 23:28:51.326585 kubelet[2298]: I0706 23:28:51.326398 2298 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jul 6 23:28:51.657393 systemd[1]: Reload requested from client PID 2582 ('systemctl') (unit session-7.scope)... Jul 6 23:28:51.657409 systemd[1]: Reloading... Jul 6 23:28:51.726256 zram_generator::config[2625]: No configuration found. Jul 6 23:28:51.796743 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 6 23:28:51.905395 systemd[1]: Reloading finished in 247 ms. Jul 6 23:28:51.926052 kubelet[2298]: I0706 23:28:51.925766 2298 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 6 23:28:51.925947 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jul 6 23:28:51.944868 systemd[1]: kubelet.service: Deactivated successfully. Jul 6 23:28:51.945156 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 6 23:28:51.945216 systemd[1]: kubelet.service: Consumed 2.389s CPU time, 129M memory peak. Jul 6 23:28:51.947055 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 6 23:28:52.116931 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
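The "Failed creating a mirror pod ... no PriorityClass with name system-node-critical was found" errors are transient: the static pods reference that built-in priority class before the freshly started API server has recreated it. For reference, a sketch of the object the errors refer to, emitted as JSON from Go rather than as a real manifest; the value 2000001000 is the conventional one for system-node-critical and is assumed here, not taken from this log:

package main

import (
	"encoding/json"
	"fmt"
)

// priorityClass mirrors the handful of PriorityClass fields relevant here.
type priorityClass struct {
	APIVersion string `json:"apiVersion"`
	Kind       string `json:"kind"`
	Metadata   struct {
		Name string `json:"name"`
	} `json:"metadata"`
	Value         int32  `json:"value"`
	GlobalDefault bool   `json:"globalDefault"`
	Description   string `json:"description"`
}

func main() {
	pc := priorityClass{
		APIVersion:  "scheduling.k8s.io/v1",
		Kind:        "PriorityClass",
		Value:       2000001000, // conventional value for system-node-critical
		Description: "Used for system critical pods that must not be moved from their current node.",
	}
	pc.Metadata.Name = "system-node-critical"
	out, _ := json.MarshalIndent(pc, "", "  ")
	fmt.Println(string(out))
}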
Jul 6 23:28:52.120982 (kubelet)[2667]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 6 23:28:52.163806 kubelet[2667]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 6 23:28:52.163806 kubelet[2667]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jul 6 23:28:52.163806 kubelet[2667]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 6 23:28:52.164245 kubelet[2667]: I0706 23:28:52.163907 2667 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 6 23:28:52.172834 kubelet[2667]: I0706 23:28:52.172675 2667 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Jul 6 23:28:52.172834 kubelet[2667]: I0706 23:28:52.172708 2667 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 6 23:28:52.173243 kubelet[2667]: I0706 23:28:52.173219 2667 server.go:956] "Client rotation is on, will bootstrap in background" Jul 6 23:28:52.175168 kubelet[2667]: I0706 23:28:52.175148 2667 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Jul 6 23:28:52.177936 kubelet[2667]: I0706 23:28:52.177847 2667 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 6 23:28:52.184212 kubelet[2667]: I0706 23:28:52.184186 2667 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jul 6 23:28:52.187312 kubelet[2667]: I0706 23:28:52.187035 2667 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jul 6 23:28:52.188176 kubelet[2667]: I0706 23:28:52.187515 2667 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 6 23:28:52.188176 kubelet[2667]: I0706 23:28:52.187549 2667 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jul 6 23:28:52.188176 kubelet[2667]: I0706 23:28:52.187774 2667 topology_manager.go:138] "Creating topology manager with none policy" Jul 6 23:28:52.188176 kubelet[2667]: I0706 23:28:52.187785 2667 container_manager_linux.go:303] "Creating device plugin manager" Jul 6 23:28:52.189561 kubelet[2667]: I0706 23:28:52.189533 2667 state_mem.go:36] "Initialized new in-memory state store" Jul 6 23:28:52.189977 kubelet[2667]: I0706 23:28:52.189953 2667 kubelet.go:480] "Attempting to sync node with API server" Jul 6 23:28:52.189977 kubelet[2667]: I0706 23:28:52.189971 2667 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 6 23:28:52.190659 kubelet[2667]: I0706 23:28:52.189996 2667 kubelet.go:386] "Adding apiserver pod source" Jul 6 23:28:52.190659 kubelet[2667]: I0706 23:28:52.190010 2667 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 6 23:28:52.191057 kubelet[2667]: I0706 23:28:52.191004 2667 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" Jul 6 23:28:52.192148 kubelet[2667]: I0706 23:28:52.191980 2667 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Jul 6 23:28:52.200278 kubelet[2667]: I0706 23:28:52.200230 2667 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jul 6 23:28:52.200278 kubelet[2667]: I0706 23:28:52.200283 2667 server.go:1289] "Started kubelet" Jul 6 23:28:52.204761 kubelet[2667]: I0706 23:28:52.204699 2667 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 6 23:28:52.210997 kubelet[2667]: E0706 23:28:52.209271 2667 
kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 6 23:28:52.210997 kubelet[2667]: I0706 23:28:52.210307 2667 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Jul 6 23:28:52.211264 kubelet[2667]: I0706 23:28:52.211243 2667 server.go:317] "Adding debug handlers to kubelet server" Jul 6 23:28:52.213617 kubelet[2667]: I0706 23:28:52.213561 2667 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 6 23:28:52.214086 kubelet[2667]: I0706 23:28:52.214068 2667 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 6 23:28:52.214607 kubelet[2667]: I0706 23:28:52.214591 2667 volume_manager.go:297] "Starting Kubelet Volume Manager" Jul 6 23:28:52.214789 kubelet[2667]: I0706 23:28:52.214775 2667 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jul 6 23:28:52.214995 kubelet[2667]: I0706 23:28:52.214982 2667 reconciler.go:26] "Reconciler: start to sync state" Jul 6 23:28:52.216548 kubelet[2667]: I0706 23:28:52.216517 2667 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 6 23:28:52.217118 kubelet[2667]: I0706 23:28:52.217089 2667 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 6 23:28:52.220346 kubelet[2667]: I0706 23:28:52.220313 2667 factory.go:223] Registration of the containerd container factory successfully Jul 6 23:28:52.220346 kubelet[2667]: I0706 23:28:52.220337 2667 factory.go:223] Registration of the systemd container factory successfully Jul 6 23:28:52.231532 kubelet[2667]: I0706 23:28:52.231484 2667 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Jul 6 23:28:52.233776 kubelet[2667]: I0706 23:28:52.233749 2667 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Jul 6 23:28:52.233908 kubelet[2667]: I0706 23:28:52.233897 2667 status_manager.go:230] "Starting to sync pod status with apiserver" Jul 6 23:28:52.233984 kubelet[2667]: I0706 23:28:52.233973 2667 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
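The container-manager config dumped above lists the kubelet's hard eviction thresholds (memory.available < 100Mi, nodefs.available < 10%, nodefs.inodesFree < 5%, imagefs.available < 15%, imagefs.inodesFree < 5%, all with zero grace period). A small sketch of how such a mixed quantity/percentage threshold can be evaluated; illustrative only, not the eviction manager's implementation:

package main

import "fmt"

// threshold mirrors one entry of the HardEvictionThresholds list above:
// either an absolute quantity in bytes (memory.available < 100Mi) or a
// fraction of capacity (nodefs.available < 10%).
type threshold struct {
	signal   string
	quantity uint64  // bytes; 0 if the threshold is percentage-based
	percent  float64 // fraction of capacity; 0 if quantity-based
}

// breached reports whether the observed availability trips the threshold.
// Sketch only; the real eviction manager also handles grace periods and
// min-reclaim, which the config above leaves at zero/nil.
func breached(t threshold, available, capacity uint64) bool {
	limit := t.quantity
	if t.percent > 0 {
		limit = uint64(float64(capacity) * t.percent)
	}
	return available < limit
}

func main() {
	mem := threshold{signal: "memory.available", quantity: 100 << 20} // 100Mi
	nodefs := threshold{signal: "nodefs.available", percent: 0.1}     // 10%

	fmt.Println(breached(mem, 80<<20, 4<<30))      // true: only 80Mi left
	fmt.Println(breached(nodefs, 20<<30, 100<<30)) // false: 20% of the disk still free
}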
Jul 6 23:28:52.234034 kubelet[2667]: I0706 23:28:52.234026 2667 kubelet.go:2436] "Starting kubelet main sync loop" Jul 6 23:28:52.234149 kubelet[2667]: E0706 23:28:52.234105 2667 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 6 23:28:52.261067 kubelet[2667]: I0706 23:28:52.261038 2667 cpu_manager.go:221] "Starting CPU manager" policy="none" Jul 6 23:28:52.261220 kubelet[2667]: I0706 23:28:52.261205 2667 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jul 6 23:28:52.261300 kubelet[2667]: I0706 23:28:52.261291 2667 state_mem.go:36] "Initialized new in-memory state store" Jul 6 23:28:52.261519 kubelet[2667]: I0706 23:28:52.261503 2667 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jul 6 23:28:52.261589 kubelet[2667]: I0706 23:28:52.261568 2667 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jul 6 23:28:52.261709 kubelet[2667]: I0706 23:28:52.261698 2667 policy_none.go:49] "None policy: Start" Jul 6 23:28:52.262813 kubelet[2667]: I0706 23:28:52.262532 2667 memory_manager.go:186] "Starting memorymanager" policy="None" Jul 6 23:28:52.262813 kubelet[2667]: I0706 23:28:52.262557 2667 state_mem.go:35] "Initializing new in-memory state store" Jul 6 23:28:52.262813 kubelet[2667]: I0706 23:28:52.262682 2667 state_mem.go:75] "Updated machine memory state" Jul 6 23:28:52.269590 kubelet[2667]: E0706 23:28:52.269231 2667 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jul 6 23:28:52.269590 kubelet[2667]: I0706 23:28:52.269422 2667 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 6 23:28:52.269590 kubelet[2667]: I0706 23:28:52.269446 2667 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 6 23:28:52.269740 kubelet[2667]: I0706 23:28:52.269650 2667 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 6 23:28:52.272450 kubelet[2667]: E0706 23:28:52.272422 2667 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jul 6 23:28:52.335049 kubelet[2667]: I0706 23:28:52.335012 2667 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jul 6 23:28:52.335257 kubelet[2667]: I0706 23:28:52.335073 2667 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jul 6 23:28:52.335257 kubelet[2667]: I0706 23:28:52.335147 2667 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Jul 6 23:28:52.374865 kubelet[2667]: I0706 23:28:52.374838 2667 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jul 6 23:28:52.412815 kubelet[2667]: E0706 23:28:52.412754 2667 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Jul 6 23:28:52.412943 kubelet[2667]: E0706 23:28:52.412887 2667 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Jul 6 23:28:52.416089 kubelet[2667]: I0706 23:28:52.416041 2667 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/84b858ec27c8b2738b1d9ff9927e0dcb-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"84b858ec27c8b2738b1d9ff9927e0dcb\") " pod="kube-system/kube-controller-manager-localhost" Jul 6 23:28:52.416089 kubelet[2667]: I0706 23:28:52.416088 2667 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/834ee54f1daa06092e339273649eb5ea-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"834ee54f1daa06092e339273649eb5ea\") " pod="kube-system/kube-scheduler-localhost" Jul 6 23:28:52.416563 kubelet[2667]: I0706 23:28:52.416441 2667 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/7cea89690cdc0acd0ac33f42783bbffe-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"7cea89690cdc0acd0ac33f42783bbffe\") " pod="kube-system/kube-apiserver-localhost" Jul 6 23:28:52.416627 kubelet[2667]: I0706 23:28:52.416584 2667 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/7cea89690cdc0acd0ac33f42783bbffe-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"7cea89690cdc0acd0ac33f42783bbffe\") " pod="kube-system/kube-apiserver-localhost" Jul 6 23:28:52.416795 kubelet[2667]: I0706 23:28:52.416771 2667 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/84b858ec27c8b2738b1d9ff9927e0dcb-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"84b858ec27c8b2738b1d9ff9927e0dcb\") " pod="kube-system/kube-controller-manager-localhost" Jul 6 23:28:52.416842 kubelet[2667]: I0706 23:28:52.416821 2667 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/84b858ec27c8b2738b1d9ff9927e0dcb-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"84b858ec27c8b2738b1d9ff9927e0dcb\") " pod="kube-system/kube-controller-manager-localhost" Jul 6 23:28:52.416881 kubelet[2667]: 
I0706 23:28:52.416852 2667 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/7cea89690cdc0acd0ac33f42783bbffe-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"7cea89690cdc0acd0ac33f42783bbffe\") " pod="kube-system/kube-apiserver-localhost" Jul 6 23:28:52.416904 kubelet[2667]: I0706 23:28:52.416897 2667 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/84b858ec27c8b2738b1d9ff9927e0dcb-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"84b858ec27c8b2738b1d9ff9927e0dcb\") " pod="kube-system/kube-controller-manager-localhost" Jul 6 23:28:52.416926 kubelet[2667]: I0706 23:28:52.416917 2667 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/84b858ec27c8b2738b1d9ff9927e0dcb-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"84b858ec27c8b2738b1d9ff9927e0dcb\") " pod="kube-system/kube-controller-manager-localhost" Jul 6 23:28:52.417150 kubelet[2667]: I0706 23:28:52.417115 2667 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Jul 6 23:28:52.417250 kubelet[2667]: I0706 23:28:52.417216 2667 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Jul 6 23:28:52.696763 sudo[2706]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Jul 6 23:28:52.697392 sudo[2706]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Jul 6 23:28:53.159892 sudo[2706]: pam_unix(sudo:session): session closed for user root Jul 6 23:28:53.190754 kubelet[2667]: I0706 23:28:53.190709 2667 apiserver.go:52] "Watching apiserver" Jul 6 23:28:53.215452 kubelet[2667]: I0706 23:28:53.215403 2667 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jul 6 23:28:53.216699 kubelet[2667]: I0706 23:28:53.216619 2667 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=2.216603608 podStartE2EDuration="2.216603608s" podCreationTimestamp="2025-07-06 23:28:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-06 23:28:53.21655856 +0000 UTC m=+1.092206614" watchObservedRunningTime="2025-07-06 23:28:53.216603608 +0000 UTC m=+1.092251702" Jul 6 23:28:53.225427 kubelet[2667]: I0706 23:28:53.225298 2667 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=3.225282081 podStartE2EDuration="3.225282081s" podCreationTimestamp="2025-07-06 23:28:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-06 23:28:53.225072404 +0000 UTC m=+1.100720458" watchObservedRunningTime="2025-07-06 23:28:53.225282081 +0000 UTC m=+1.100930175" Jul 6 23:28:53.246853 kubelet[2667]: I0706 23:28:53.245542 2667 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jul 6 23:28:53.246853 kubelet[2667]: I0706 23:28:53.245595 2667 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jul 6 23:28:53.250055 kubelet[2667]: I0706 23:28:53.249990 2667 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.249974981 podStartE2EDuration="1.249974981s" podCreationTimestamp="2025-07-06 23:28:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-06 23:28:53.237092075 +0000 UTC m=+1.112740169" watchObservedRunningTime="2025-07-06 23:28:53.249974981 +0000 UTC m=+1.125623075" Jul 6 23:28:53.251814 kubelet[2667]: E0706 23:28:53.251789 2667 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Jul 6 23:28:53.254356 kubelet[2667]: E0706 23:28:53.254174 2667 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Jul 6 23:28:54.928572 sudo[1735]: pam_unix(sudo:session): session closed for user root Jul 6 23:28:54.930640 sshd[1734]: Connection closed by 10.0.0.1 port 46788 Jul 6 23:28:54.932050 sshd-session[1732]: pam_unix(sshd:session): session closed for user core Jul 6 23:28:54.937264 systemd[1]: sshd@6-10.0.0.47:22-10.0.0.1:46788.service: Deactivated successfully. Jul 6 23:28:54.941993 systemd[1]: session-7.scope: Deactivated successfully. Jul 6 23:28:54.942407 systemd[1]: session-7.scope: Consumed 8.846s CPU time, 258.9M memory peak. Jul 6 23:28:54.944351 systemd-logind[1514]: Session 7 logged out. Waiting for processes to exit. Jul 6 23:28:54.948024 systemd-logind[1514]: Removed session 7. Jul 6 23:28:56.531621 kubelet[2667]: I0706 23:28:56.531343 2667 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jul 6 23:28:56.532424 containerd[1529]: time="2025-07-06T23:28:56.532217880Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jul 6 23:28:56.532661 kubelet[2667]: I0706 23:28:56.532424 2667 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jul 6 23:28:57.883885 systemd[1]: Created slice kubepods-besteffort-pod3ec03224_1529_44cc_8910_1283d9aa21cc.slice - libcontainer container kubepods-besteffort-pod3ec03224_1529_44cc_8910_1283d9aa21cc.slice. Jul 6 23:28:57.900351 systemd[1]: Created slice kubepods-burstable-podd20e950a_9322_4244_a16a_b65570f06454.slice - libcontainer container kubepods-burstable-podd20e950a_9322_4244_a16a_b65570f06454.slice. 
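The pod_startup_latency_tracker entries report podStartSLOduration alongside podCreationTimestamp and observedRunningTime; with no image pulls involved (the pulling timestamps are the zero value), the reported duration is essentially the gap between those two timestamps. Re-doing that arithmetic on the values copied from the kube-apiserver-localhost entry above:

package main

import (
	"fmt"
	"time"
)

func main() {
	// Layout matches Go's time.Time.String() output used in the log;
	// time.Parse also accepts the fractional seconds in the second value.
	const layout = "2006-01-02 15:04:05 -0700 MST"

	created, err := time.Parse(layout, "2025-07-06 23:28:51 +0000 UTC")
	if err != nil {
		panic(err)
	}
	running, err := time.Parse(layout, "2025-07-06 23:28:53.21655856 +0000 UTC")
	if err != nil {
		panic(err)
	}

	// Prints 2.21655856s, within a fraction of a millisecond of the logged
	// podStartSLOduration=2.216603608s (the tracker uses its own clock reads).
	fmt.Println(running.Sub(created))
}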
Jul 6 23:28:57.956105 kubelet[2667]: I0706 23:28:57.956068 2667 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/3ec03224-1529-44cc-8910-1283d9aa21cc-kube-proxy\") pod \"kube-proxy-66phn\" (UID: \"3ec03224-1529-44cc-8910-1283d9aa21cc\") " pod="kube-system/kube-proxy-66phn" Jul 6 23:28:57.956105 kubelet[2667]: I0706 23:28:57.956107 2667 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4vmg2\" (UniqueName: \"kubernetes.io/projected/3ec03224-1529-44cc-8910-1283d9aa21cc-kube-api-access-4vmg2\") pod \"kube-proxy-66phn\" (UID: \"3ec03224-1529-44cc-8910-1283d9aa21cc\") " pod="kube-system/kube-proxy-66phn" Jul 6 23:28:57.956603 kubelet[2667]: I0706 23:28:57.956167 2667 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/d20e950a-9322-4244-a16a-b65570f06454-cilium-run\") pod \"cilium-mgz2x\" (UID: \"d20e950a-9322-4244-a16a-b65570f06454\") " pod="kube-system/cilium-mgz2x" Jul 6 23:28:57.956603 kubelet[2667]: I0706 23:28:57.956185 2667 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/d20e950a-9322-4244-a16a-b65570f06454-clustermesh-secrets\") pod \"cilium-mgz2x\" (UID: \"d20e950a-9322-4244-a16a-b65570f06454\") " pod="kube-system/cilium-mgz2x" Jul 6 23:28:57.956603 kubelet[2667]: I0706 23:28:57.956201 2667 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d20e950a-9322-4244-a16a-b65570f06454-cilium-config-path\") pod \"cilium-mgz2x\" (UID: \"d20e950a-9322-4244-a16a-b65570f06454\") " pod="kube-system/cilium-mgz2x" Jul 6 23:28:57.956603 kubelet[2667]: I0706 23:28:57.956221 2667 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/d20e950a-9322-4244-a16a-b65570f06454-bpf-maps\") pod \"cilium-mgz2x\" (UID: \"d20e950a-9322-4244-a16a-b65570f06454\") " pod="kube-system/cilium-mgz2x" Jul 6 23:28:57.956603 kubelet[2667]: I0706 23:28:57.956238 2667 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/d20e950a-9322-4244-a16a-b65570f06454-cni-path\") pod \"cilium-mgz2x\" (UID: \"d20e950a-9322-4244-a16a-b65570f06454\") " pod="kube-system/cilium-mgz2x" Jul 6 23:28:57.956603 kubelet[2667]: I0706 23:28:57.956256 2667 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d20e950a-9322-4244-a16a-b65570f06454-etc-cni-netd\") pod \"cilium-mgz2x\" (UID: \"d20e950a-9322-4244-a16a-b65570f06454\") " pod="kube-system/cilium-mgz2x" Jul 6 23:28:57.956914 kubelet[2667]: I0706 23:28:57.956270 2667 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d20e950a-9322-4244-a16a-b65570f06454-xtables-lock\") pod \"cilium-mgz2x\" (UID: \"d20e950a-9322-4244-a16a-b65570f06454\") " pod="kube-system/cilium-mgz2x" Jul 6 23:28:57.956914 kubelet[2667]: I0706 23:28:57.956305 2667 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: 
\"kubernetes.io/host-path/3ec03224-1529-44cc-8910-1283d9aa21cc-xtables-lock\") pod \"kube-proxy-66phn\" (UID: \"3ec03224-1529-44cc-8910-1283d9aa21cc\") " pod="kube-system/kube-proxy-66phn" Jul 6 23:28:57.956914 kubelet[2667]: I0706 23:28:57.956342 2667 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3ec03224-1529-44cc-8910-1283d9aa21cc-lib-modules\") pod \"kube-proxy-66phn\" (UID: \"3ec03224-1529-44cc-8910-1283d9aa21cc\") " pod="kube-system/kube-proxy-66phn" Jul 6 23:28:57.956914 kubelet[2667]: I0706 23:28:57.956357 2667 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/d20e950a-9322-4244-a16a-b65570f06454-hostproc\") pod \"cilium-mgz2x\" (UID: \"d20e950a-9322-4244-a16a-b65570f06454\") " pod="kube-system/cilium-mgz2x" Jul 6 23:28:57.956914 kubelet[2667]: I0706 23:28:57.956375 2667 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/d20e950a-9322-4244-a16a-b65570f06454-host-proc-sys-net\") pod \"cilium-mgz2x\" (UID: \"d20e950a-9322-4244-a16a-b65570f06454\") " pod="kube-system/cilium-mgz2x" Jul 6 23:28:57.956914 kubelet[2667]: I0706 23:28:57.956389 2667 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/d20e950a-9322-4244-a16a-b65570f06454-host-proc-sys-kernel\") pod \"cilium-mgz2x\" (UID: \"d20e950a-9322-4244-a16a-b65570f06454\") " pod="kube-system/cilium-mgz2x" Jul 6 23:28:57.957034 kubelet[2667]: I0706 23:28:57.956403 2667 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/d20e950a-9322-4244-a16a-b65570f06454-hubble-tls\") pod \"cilium-mgz2x\" (UID: \"d20e950a-9322-4244-a16a-b65570f06454\") " pod="kube-system/cilium-mgz2x" Jul 6 23:28:57.957034 kubelet[2667]: I0706 23:28:57.956427 2667 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9ck6c\" (UniqueName: \"kubernetes.io/projected/d20e950a-9322-4244-a16a-b65570f06454-kube-api-access-9ck6c\") pod \"cilium-mgz2x\" (UID: \"d20e950a-9322-4244-a16a-b65570f06454\") " pod="kube-system/cilium-mgz2x" Jul 6 23:28:57.957034 kubelet[2667]: I0706 23:28:57.956460 2667 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/d20e950a-9322-4244-a16a-b65570f06454-cilium-cgroup\") pod \"cilium-mgz2x\" (UID: \"d20e950a-9322-4244-a16a-b65570f06454\") " pod="kube-system/cilium-mgz2x" Jul 6 23:28:57.957034 kubelet[2667]: I0706 23:28:57.956493 2667 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d20e950a-9322-4244-a16a-b65570f06454-lib-modules\") pod \"cilium-mgz2x\" (UID: \"d20e950a-9322-4244-a16a-b65570f06454\") " pod="kube-system/cilium-mgz2x" Jul 6 23:28:58.053225 systemd[1]: Created slice kubepods-besteffort-pod12a5dfa2_c7a8_49e7_a1dd_322a31a1246e.slice - libcontainer container kubepods-besteffort-pod12a5dfa2_c7a8_49e7_a1dd_322a31a1246e.slice. 
Jul 6 23:28:58.056877 kubelet[2667]: I0706 23:28:58.056843 2667 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-khh2q\" (UniqueName: \"kubernetes.io/projected/12a5dfa2-c7a8-49e7-a1dd-322a31a1246e-kube-api-access-khh2q\") pod \"cilium-operator-6c4d7847fc-cdrqn\" (UID: \"12a5dfa2-c7a8-49e7-a1dd-322a31a1246e\") " pod="kube-system/cilium-operator-6c4d7847fc-cdrqn" Jul 6 23:28:58.057178 kubelet[2667]: I0706 23:28:58.057153 2667 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/12a5dfa2-c7a8-49e7-a1dd-322a31a1246e-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-cdrqn\" (UID: \"12a5dfa2-c7a8-49e7-a1dd-322a31a1246e\") " pod="kube-system/cilium-operator-6c4d7847fc-cdrqn" Jul 6 23:28:58.197175 containerd[1529]: time="2025-07-06T23:28:58.197028475Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-66phn,Uid:3ec03224-1529-44cc-8910-1283d9aa21cc,Namespace:kube-system,Attempt:0,}" Jul 6 23:28:58.203858 containerd[1529]: time="2025-07-06T23:28:58.203819752Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-mgz2x,Uid:d20e950a-9322-4244-a16a-b65570f06454,Namespace:kube-system,Attempt:0,}" Jul 6 23:28:58.285205 containerd[1529]: time="2025-07-06T23:28:58.285152143Z" level=info msg="connecting to shim edc57f02fc52190339d712a96c25c8a591ca613ebfafefe90ea12fddd2a0607d" address="unix:///run/containerd/s/4f19ffffc0b34e08dcd133535629c543f5a400e8ddd6b01d404bf63cae5bf24a" namespace=k8s.io protocol=ttrpc version=3 Jul 6 23:28:58.291737 containerd[1529]: time="2025-07-06T23:28:58.291702948Z" level=info msg="connecting to shim 2ed24b36ef6e6d33c5c239fb3dc496a382fa1ef39a690559e856ebdc1b13a4d3" address="unix:///run/containerd/s/33df6039c4abb558f6fc3d1f7151b6f2e5ad67a49d43a11b24a3c10dc2d3fb56" namespace=k8s.io protocol=ttrpc version=3 Jul 6 23:28:58.311298 systemd[1]: Started cri-containerd-edc57f02fc52190339d712a96c25c8a591ca613ebfafefe90ea12fddd2a0607d.scope - libcontainer container edc57f02fc52190339d712a96c25c8a591ca613ebfafefe90ea12fddd2a0607d. Jul 6 23:28:58.321822 systemd[1]: Started cri-containerd-2ed24b36ef6e6d33c5c239fb3dc496a382fa1ef39a690559e856ebdc1b13a4d3.scope - libcontainer container 2ed24b36ef6e6d33c5c239fb3dc496a382fa1ef39a690559e856ebdc1b13a4d3. 
Jul 6 23:28:58.345399 containerd[1529]: time="2025-07-06T23:28:58.345350437Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-mgz2x,Uid:d20e950a-9322-4244-a16a-b65570f06454,Namespace:kube-system,Attempt:0,} returns sandbox id \"edc57f02fc52190339d712a96c25c8a591ca613ebfafefe90ea12fddd2a0607d\"" Jul 6 23:28:58.346956 containerd[1529]: time="2025-07-06T23:28:58.346910528Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Jul 6 23:28:58.355560 containerd[1529]: time="2025-07-06T23:28:58.355514410Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-66phn,Uid:3ec03224-1529-44cc-8910-1283d9aa21cc,Namespace:kube-system,Attempt:0,} returns sandbox id \"2ed24b36ef6e6d33c5c239fb3dc496a382fa1ef39a690559e856ebdc1b13a4d3\"" Jul 6 23:28:58.356394 containerd[1529]: time="2025-07-06T23:28:58.356371846Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-cdrqn,Uid:12a5dfa2-c7a8-49e7-a1dd-322a31a1246e,Namespace:kube-system,Attempt:0,}" Jul 6 23:28:58.360398 containerd[1529]: time="2025-07-06T23:28:58.360312019Z" level=info msg="CreateContainer within sandbox \"2ed24b36ef6e6d33c5c239fb3dc496a382fa1ef39a690559e856ebdc1b13a4d3\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jul 6 23:28:58.400412 containerd[1529]: time="2025-07-06T23:28:58.400156883Z" level=info msg="Container 4932392002b2dfd3d41332d2c715b8ea950bb807483d4541bb83f084e90bd89d: CDI devices from CRI Config.CDIDevices: []" Jul 6 23:28:58.402672 containerd[1529]: time="2025-07-06T23:28:58.402626576Z" level=info msg="connecting to shim ce0ecee5d795dea95e2198976e23c574d3d0ef5aedacf957f2c5df9a59e9faf8" address="unix:///run/containerd/s/b88dce68fc11a8c1c1afd1886b31b1aa2b83c58b34eb9336b143554f0f39a603" namespace=k8s.io protocol=ttrpc version=3 Jul 6 23:28:58.416991 containerd[1529]: time="2025-07-06T23:28:58.416948912Z" level=info msg="CreateContainer within sandbox \"2ed24b36ef6e6d33c5c239fb3dc496a382fa1ef39a690559e856ebdc1b13a4d3\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"4932392002b2dfd3d41332d2c715b8ea950bb807483d4541bb83f084e90bd89d\"" Jul 6 23:28:58.417951 containerd[1529]: time="2025-07-06T23:28:58.417926004Z" level=info msg="StartContainer for \"4932392002b2dfd3d41332d2c715b8ea950bb807483d4541bb83f084e90bd89d\"" Jul 6 23:28:58.419954 containerd[1529]: time="2025-07-06T23:28:58.419871027Z" level=info msg="connecting to shim 4932392002b2dfd3d41332d2c715b8ea950bb807483d4541bb83f084e90bd89d" address="unix:///run/containerd/s/33df6039c4abb558f6fc3d1f7151b6f2e5ad67a49d43a11b24a3c10dc2d3fb56" protocol=ttrpc version=3 Jul 6 23:28:58.431490 systemd[1]: Started cri-containerd-ce0ecee5d795dea95e2198976e23c574d3d0ef5aedacf957f2c5df9a59e9faf8.scope - libcontainer container ce0ecee5d795dea95e2198976e23c574d3d0ef5aedacf957f2c5df9a59e9faf8. Jul 6 23:28:58.437244 systemd[1]: Started cri-containerd-4932392002b2dfd3d41332d2c715b8ea950bb807483d4541bb83f084e90bd89d.scope - libcontainer container 4932392002b2dfd3d41332d2c715b8ea950bb807483d4541bb83f084e90bd89d. 
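The PullImage entry above names the cilium image by tag and digest at once: quay.io/cilium/cilium:v1.12.5@sha256:06ce2b.... Splitting such a reference into repository, tag, and digest is plain string work; a small sketch, not containerd's reference parser, which handles many more cases:

package main

import (
	"fmt"
	"strings"
)

// splitRef breaks "repo:tag@sha256:..." into its three parts. Real parsers
// also handle registry ports, missing tags and normalization; this only
// covers the shape seen in the log.
func splitRef(ref string) (repo, tag, digest string) {
	if i := strings.Index(ref, "@"); i >= 0 {
		ref, digest = ref[:i], ref[i+1:]
	}
	if i := strings.LastIndex(ref, ":"); i >= 0 {
		ref, tag = ref[:i], ref[i+1:]
	}
	return ref, tag, digest
}

func main() {
	repo, tag, digest := splitRef("quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5")
	fmt.Println(repo)   // quay.io/cilium/cilium
	fmt.Println(tag)    // v1.12.5
	fmt.Println(digest) // sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5
}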
Jul 6 23:28:58.486221 containerd[1529]: time="2025-07-06T23:28:58.486096095Z" level=info msg="StartContainer for \"4932392002b2dfd3d41332d2c715b8ea950bb807483d4541bb83f084e90bd89d\" returns successfully" Jul 6 23:28:58.501959 containerd[1529]: time="2025-07-06T23:28:58.501922674Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-cdrqn,Uid:12a5dfa2-c7a8-49e7-a1dd-322a31a1246e,Namespace:kube-system,Attempt:0,} returns sandbox id \"ce0ecee5d795dea95e2198976e23c574d3d0ef5aedacf957f2c5df9a59e9faf8\"" Jul 6 23:29:00.341053 kubelet[2667]: I0706 23:29:00.340992 2667 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-66phn" podStartSLOduration=3.340976396 podStartE2EDuration="3.340976396s" podCreationTimestamp="2025-07-06 23:28:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-06 23:28:59.273915386 +0000 UTC m=+7.149563520" watchObservedRunningTime="2025-07-06 23:29:00.340976396 +0000 UTC m=+8.216624490" Jul 6 23:29:01.457201 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4222040272.mount: Deactivated successfully. Jul 6 23:29:05.391171 update_engine[1518]: I20250706 23:29:05.391086 1518 update_attempter.cc:509] Updating boot flags... Jul 6 23:29:05.419514 containerd[1529]: time="2025-07-06T23:29:05.419382777Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:29:05.420158 containerd[1529]: time="2025-07-06T23:29:05.420113365Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157646710" Jul 6 23:29:05.422286 containerd[1529]: time="2025-07-06T23:29:05.422222362Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:29:05.425144 containerd[1529]: time="2025-07-06T23:29:05.424912573Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 7.07796324s" Jul 6 23:29:05.425144 containerd[1529]: time="2025-07-06T23:29:05.424957297Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" Jul 6 23:29:05.427791 containerd[1529]: time="2025-07-06T23:29:05.427760078Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Jul 6 23:29:05.451510 containerd[1529]: time="2025-07-06T23:29:05.451475409Z" level=info msg="CreateContainer within sandbox \"edc57f02fc52190339d712a96c25c8a591ca613ebfafefe90ea12fddd2a0607d\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jul 6 23:29:05.467816 containerd[1529]: time="2025-07-06T23:29:05.467132189Z" level=info msg="Container 397376a21c5394265e5884842615c8eb79e0621942c168f570498ce55a0d4c24: CDI devices from 
CRI Config.CDIDevices: []" Jul 6 23:29:05.476078 containerd[1529]: time="2025-07-06T23:29:05.476022658Z" level=info msg="CreateContainer within sandbox \"edc57f02fc52190339d712a96c25c8a591ca613ebfafefe90ea12fddd2a0607d\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"397376a21c5394265e5884842615c8eb79e0621942c168f570498ce55a0d4c24\"" Jul 6 23:29:05.483213 containerd[1529]: time="2025-07-06T23:29:05.482790609Z" level=info msg="StartContainer for \"397376a21c5394265e5884842615c8eb79e0621942c168f570498ce55a0d4c24\"" Jul 6 23:29:05.483862 containerd[1529]: time="2025-07-06T23:29:05.483836587Z" level=info msg="connecting to shim 397376a21c5394265e5884842615c8eb79e0621942c168f570498ce55a0d4c24" address="unix:///run/containerd/s/4f19ffffc0b34e08dcd133535629c543f5a400e8ddd6b01d404bf63cae5bf24a" protocol=ttrpc version=3 Jul 6 23:29:05.513381 systemd[1]: Started cri-containerd-397376a21c5394265e5884842615c8eb79e0621942c168f570498ce55a0d4c24.scope - libcontainer container 397376a21c5394265e5884842615c8eb79e0621942c168f570498ce55a0d4c24. Jul 6 23:29:05.556224 containerd[1529]: time="2025-07-06T23:29:05.552890025Z" level=info msg="StartContainer for \"397376a21c5394265e5884842615c8eb79e0621942c168f570498ce55a0d4c24\" returns successfully" Jul 6 23:29:05.637885 systemd[1]: cri-containerd-397376a21c5394265e5884842615c8eb79e0621942c168f570498ce55a0d4c24.scope: Deactivated successfully. Jul 6 23:29:05.663993 containerd[1529]: time="2025-07-06T23:29:05.663860053Z" level=info msg="TaskExit event in podsandbox handler container_id:\"397376a21c5394265e5884842615c8eb79e0621942c168f570498ce55a0d4c24\" id:\"397376a21c5394265e5884842615c8eb79e0621942c168f570498ce55a0d4c24\" pid:3109 exited_at:{seconds:1751844545 nanos:653342032}" Jul 6 23:29:05.664680 containerd[1529]: time="2025-07-06T23:29:05.664627044Z" level=info msg="received exit event container_id:\"397376a21c5394265e5884842615c8eb79e0621942c168f570498ce55a0d4c24\" id:\"397376a21c5394265e5884842615c8eb79e0621942c168f570498ce55a0d4c24\" pid:3109 exited_at:{seconds:1751844545 nanos:653342032}" Jul 6 23:29:05.702808 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-397376a21c5394265e5884842615c8eb79e0621942c168f570498ce55a0d4c24-rootfs.mount: Deactivated successfully. 
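The pull above reports 157636062 bytes in 7.07796324s, and the mount-cgroup init container's TaskExit event carries a raw epoch timestamp (seconds:1751844545). Both are easy to sanity-check from the logged numbers; a quick sketch of that arithmetic:

package main

import (
	"fmt"
	"time"
)

func main() {
	// Image pull throughput from the "Pulled image ... in 7.07796324s" entry above.
	const bytesPulled = 157636062
	const pullSeconds = 7.07796324
	fmt.Printf("cilium image pull: %.1f MiB/s\n", bytesPulled/pullSeconds/(1<<20)) // ≈ 21.2 MiB/s

	// exited_at is a plain Unix timestamp; it lands on the same second as the
	// surrounding journal entries (23:29:05 UTC).
	fmt.Println(time.Unix(1751844545, 653342032).UTC())
}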
Jul 6 23:29:06.367455 containerd[1529]: time="2025-07-06T23:29:06.367371766Z" level=info msg="CreateContainer within sandbox \"edc57f02fc52190339d712a96c25c8a591ca613ebfafefe90ea12fddd2a0607d\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jul 6 23:29:06.388993 containerd[1529]: time="2025-07-06T23:29:06.388930077Z" level=info msg="Container ccc42c4d89639f1a5a42da2c47ec77c1f9a8195ead3c1669f247635b2cd9e80c: CDI devices from CRI Config.CDIDevices: []" Jul 6 23:29:06.398776 containerd[1529]: time="2025-07-06T23:29:06.398700664Z" level=info msg="CreateContainer within sandbox \"edc57f02fc52190339d712a96c25c8a591ca613ebfafefe90ea12fddd2a0607d\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"ccc42c4d89639f1a5a42da2c47ec77c1f9a8195ead3c1669f247635b2cd9e80c\"" Jul 6 23:29:06.399760 containerd[1529]: time="2025-07-06T23:29:06.399721354Z" level=info msg="StartContainer for \"ccc42c4d89639f1a5a42da2c47ec77c1f9a8195ead3c1669f247635b2cd9e80c\"" Jul 6 23:29:06.403392 containerd[1529]: time="2025-07-06T23:29:06.403336035Z" level=info msg="connecting to shim ccc42c4d89639f1a5a42da2c47ec77c1f9a8195ead3c1669f247635b2cd9e80c" address="unix:///run/containerd/s/4f19ffffc0b34e08dcd133535629c543f5a400e8ddd6b01d404bf63cae5bf24a" protocol=ttrpc version=3 Jul 6 23:29:06.435352 systemd[1]: Started cri-containerd-ccc42c4d89639f1a5a42da2c47ec77c1f9a8195ead3c1669f247635b2cd9e80c.scope - libcontainer container ccc42c4d89639f1a5a42da2c47ec77c1f9a8195ead3c1669f247635b2cd9e80c. Jul 6 23:29:06.472950 containerd[1529]: time="2025-07-06T23:29:06.470913426Z" level=info msg="StartContainer for \"ccc42c4d89639f1a5a42da2c47ec77c1f9a8195ead3c1669f247635b2cd9e80c\" returns successfully" Jul 6 23:29:06.494962 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jul 6 23:29:06.495455 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jul 6 23:29:06.495664 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Jul 6 23:29:06.498496 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jul 6 23:29:06.501143 containerd[1529]: time="2025-07-06T23:29:06.500925647Z" level=info msg="received exit event container_id:\"ccc42c4d89639f1a5a42da2c47ec77c1f9a8195ead3c1669f247635b2cd9e80c\" id:\"ccc42c4d89639f1a5a42da2c47ec77c1f9a8195ead3c1669f247635b2cd9e80c\" pid:3162 exited_at:{seconds:1751844546 nanos:500712228}" Jul 6 23:29:06.501186 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Jul 6 23:29:06.501510 containerd[1529]: time="2025-07-06T23:29:06.501275438Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ccc42c4d89639f1a5a42da2c47ec77c1f9a8195ead3c1669f247635b2cd9e80c\" id:\"ccc42c4d89639f1a5a42da2c47ec77c1f9a8195ead3c1669f247635b2cd9e80c\" pid:3162 exited_at:{seconds:1751844546 nanos:500712228}" Jul 6 23:29:06.501661 systemd[1]: cri-containerd-ccc42c4d89639f1a5a42da2c47ec77c1f9a8195ead3c1669f247635b2cd9e80c.scope: Deactivated successfully. Jul 6 23:29:06.533779 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ccc42c4d89639f1a5a42da2c47ec77c1f9a8195ead3c1669f247635b2cd9e80c-rootfs.mount: Deactivated successfully. Jul 6 23:29:06.536669 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
Jul 6 23:29:06.754636 containerd[1529]: time="2025-07-06T23:29:06.754227104Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:29:06.754941 containerd[1529]: time="2025-07-06T23:29:06.754906804Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17135306" Jul 6 23:29:06.755530 containerd[1529]: time="2025-07-06T23:29:06.755503617Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:29:06.756786 containerd[1529]: time="2025-07-06T23:29:06.756758408Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 1.328726705s" Jul 6 23:29:06.756865 containerd[1529]: time="2025-07-06T23:29:06.756788571Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" Jul 6 23:29:06.761310 containerd[1529]: time="2025-07-06T23:29:06.761258247Z" level=info msg="CreateContainer within sandbox \"ce0ecee5d795dea95e2198976e23c574d3d0ef5aedacf957f2c5df9a59e9faf8\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Jul 6 23:29:06.770546 containerd[1529]: time="2025-07-06T23:29:06.770498387Z" level=info msg="Container 8dbe19fb273c2622f88eae2324594c5f4bcd7370138bb6eb6105de5ba28ff241: CDI devices from CRI Config.CDIDevices: []" Jul 6 23:29:06.776476 containerd[1529]: time="2025-07-06T23:29:06.776418351Z" level=info msg="CreateContainer within sandbox \"ce0ecee5d795dea95e2198976e23c574d3d0ef5aedacf957f2c5df9a59e9faf8\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"8dbe19fb273c2622f88eae2324594c5f4bcd7370138bb6eb6105de5ba28ff241\"" Jul 6 23:29:06.777036 containerd[1529]: time="2025-07-06T23:29:06.777006484Z" level=info msg="StartContainer for \"8dbe19fb273c2622f88eae2324594c5f4bcd7370138bb6eb6105de5ba28ff241\"" Jul 6 23:29:06.779607 containerd[1529]: time="2025-07-06T23:29:06.779561310Z" level=info msg="connecting to shim 8dbe19fb273c2622f88eae2324594c5f4bcd7370138bb6eb6105de5ba28ff241" address="unix:///run/containerd/s/b88dce68fc11a8c1c1afd1886b31b1aa2b83c58b34eb9336b143554f0f39a603" protocol=ttrpc version=3 Jul 6 23:29:06.809353 systemd[1]: Started cri-containerd-8dbe19fb273c2622f88eae2324594c5f4bcd7370138bb6eb6105de5ba28ff241.scope - libcontainer container 8dbe19fb273c2622f88eae2324594c5f4bcd7370138bb6eb6105de5ba28ff241. 
Jul 6 23:29:06.840936 containerd[1529]: time="2025-07-06T23:29:06.840887427Z" level=info msg="StartContainer for \"8dbe19fb273c2622f88eae2324594c5f4bcd7370138bb6eb6105de5ba28ff241\" returns successfully" Jul 6 23:29:07.372482 containerd[1529]: time="2025-07-06T23:29:07.372424717Z" level=info msg="CreateContainer within sandbox \"edc57f02fc52190339d712a96c25c8a591ca613ebfafefe90ea12fddd2a0607d\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jul 6 23:29:07.379022 kubelet[2667]: I0706 23:29:07.378946 2667 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-cdrqn" podStartSLOduration=2.126054724 podStartE2EDuration="10.378926825s" podCreationTimestamp="2025-07-06 23:28:57 +0000 UTC" firstStartedPulling="2025-07-06 23:28:58.50492684 +0000 UTC m=+6.380574894" lastFinishedPulling="2025-07-06 23:29:06.757798901 +0000 UTC m=+14.633446995" observedRunningTime="2025-07-06 23:29:07.378447385 +0000 UTC m=+15.254095479" watchObservedRunningTime="2025-07-06 23:29:07.378926825 +0000 UTC m=+15.254574959" Jul 6 23:29:07.390137 containerd[1529]: time="2025-07-06T23:29:07.390074445Z" level=info msg="Container e95a4aec3b6acef46a5191ecf5bd57f4433d8e42c7358b565c234c760c7e6915: CDI devices from CRI Config.CDIDevices: []" Jul 6 23:29:07.403779 containerd[1529]: time="2025-07-06T23:29:07.403711676Z" level=info msg="CreateContainer within sandbox \"edc57f02fc52190339d712a96c25c8a591ca613ebfafefe90ea12fddd2a0607d\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"e95a4aec3b6acef46a5191ecf5bd57f4433d8e42c7358b565c234c760c7e6915\"" Jul 6 23:29:07.404663 containerd[1529]: time="2025-07-06T23:29:07.404627393Z" level=info msg="StartContainer for \"e95a4aec3b6acef46a5191ecf5bd57f4433d8e42c7358b565c234c760c7e6915\"" Jul 6 23:29:07.406202 containerd[1529]: time="2025-07-06T23:29:07.406149522Z" level=info msg="connecting to shim e95a4aec3b6acef46a5191ecf5bd57f4433d8e42c7358b565c234c760c7e6915" address="unix:///run/containerd/s/4f19ffffc0b34e08dcd133535629c543f5a400e8ddd6b01d404bf63cae5bf24a" protocol=ttrpc version=3 Jul 6 23:29:07.435350 systemd[1]: Started cri-containerd-e95a4aec3b6acef46a5191ecf5bd57f4433d8e42c7358b565c234c760c7e6915.scope - libcontainer container e95a4aec3b6acef46a5191ecf5bd57f4433d8e42c7358b565c234c760c7e6915. Jul 6 23:29:07.512356 containerd[1529]: time="2025-07-06T23:29:07.512303437Z" level=info msg="StartContainer for \"e95a4aec3b6acef46a5191ecf5bd57f4433d8e42c7358b565c234c760c7e6915\" returns successfully" Jul 6 23:29:07.539693 systemd[1]: cri-containerd-e95a4aec3b6acef46a5191ecf5bd57f4433d8e42c7358b565c234c760c7e6915.scope: Deactivated successfully. Jul 6 23:29:07.541192 containerd[1529]: time="2025-07-06T23:29:07.541153311Z" level=info msg="received exit event container_id:\"e95a4aec3b6acef46a5191ecf5bd57f4433d8e42c7358b565c234c760c7e6915\" id:\"e95a4aec3b6acef46a5191ecf5bd57f4433d8e42c7358b565c234c760c7e6915\" pid:3253 exited_at:{seconds:1751844547 nanos:540313600}" Jul 6 23:29:07.544770 containerd[1529]: time="2025-07-06T23:29:07.544717091Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e95a4aec3b6acef46a5191ecf5bd57f4433d8e42c7358b565c234c760c7e6915\" id:\"e95a4aec3b6acef46a5191ecf5bd57f4433d8e42c7358b565c234c760c7e6915\" pid:3253 exited_at:{seconds:1751844547 nanos:540313600}" Jul 6 23:29:07.565734 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e95a4aec3b6acef46a5191ecf5bd57f4433d8e42c7358b565c234c760c7e6915-rootfs.mount: Deactivated successfully. 
Jul 6 23:29:08.381772 containerd[1529]: time="2025-07-06T23:29:08.381595638Z" level=info msg="CreateContainer within sandbox \"edc57f02fc52190339d712a96c25c8a591ca613ebfafefe90ea12fddd2a0607d\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jul 6 23:29:08.406274 containerd[1529]: time="2025-07-06T23:29:08.406219176Z" level=info msg="Container 271209b3628789e731acfb0c226c602e5a56a502b83623412421a026ffa81463: CDI devices from CRI Config.CDIDevices: []" Jul 6 23:29:08.408923 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2319621173.mount: Deactivated successfully. Jul 6 23:29:08.417312 containerd[1529]: time="2025-07-06T23:29:08.417259223Z" level=info msg="CreateContainer within sandbox \"edc57f02fc52190339d712a96c25c8a591ca613ebfafefe90ea12fddd2a0607d\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"271209b3628789e731acfb0c226c602e5a56a502b83623412421a026ffa81463\"" Jul 6 23:29:08.418093 containerd[1529]: time="2025-07-06T23:29:08.418055967Z" level=info msg="StartContainer for \"271209b3628789e731acfb0c226c602e5a56a502b83623412421a026ffa81463\"" Jul 6 23:29:08.419548 containerd[1529]: time="2025-07-06T23:29:08.419508844Z" level=info msg="connecting to shim 271209b3628789e731acfb0c226c602e5a56a502b83623412421a026ffa81463" address="unix:///run/containerd/s/4f19ffffc0b34e08dcd133535629c543f5a400e8ddd6b01d404bf63cae5bf24a" protocol=ttrpc version=3 Jul 6 23:29:08.446538 systemd[1]: Started cri-containerd-271209b3628789e731acfb0c226c602e5a56a502b83623412421a026ffa81463.scope - libcontainer container 271209b3628789e731acfb0c226c602e5a56a502b83623412421a026ffa81463. Jul 6 23:29:08.488474 systemd[1]: cri-containerd-271209b3628789e731acfb0c226c602e5a56a502b83623412421a026ffa81463.scope: Deactivated successfully. Jul 6 23:29:08.491430 containerd[1529]: time="2025-07-06T23:29:08.491305651Z" level=info msg="received exit event container_id:\"271209b3628789e731acfb0c226c602e5a56a502b83623412421a026ffa81463\" id:\"271209b3628789e731acfb0c226c602e5a56a502b83623412421a026ffa81463\" pid:3295 exited_at:{seconds:1751844548 nanos:491044750}" Jul 6 23:29:08.491892 containerd[1529]: time="2025-07-06T23:29:08.491799771Z" level=info msg="StartContainer for \"271209b3628789e731acfb0c226c602e5a56a502b83623412421a026ffa81463\" returns successfully" Jul 6 23:29:08.492411 containerd[1529]: time="2025-07-06T23:29:08.492159840Z" level=info msg="TaskExit event in podsandbox handler container_id:\"271209b3628789e731acfb0c226c602e5a56a502b83623412421a026ffa81463\" id:\"271209b3628789e731acfb0c226c602e5a56a502b83623412421a026ffa81463\" pid:3295 exited_at:{seconds:1751844548 nanos:491044750}" Jul 6 23:29:08.513333 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-271209b3628789e731acfb0c226c602e5a56a502b83623412421a026ffa81463-rootfs.mount: Deactivated successfully. 
Jul 6 23:29:09.390972 containerd[1529]: time="2025-07-06T23:29:09.390916166Z" level=info msg="CreateContainer within sandbox \"edc57f02fc52190339d712a96c25c8a591ca613ebfafefe90ea12fddd2a0607d\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jul 6 23:29:09.403998 containerd[1529]: time="2025-07-06T23:29:09.403164944Z" level=info msg="Container 314340126df67576202e5e2aae3ea712baafb998a376d66399c60c5b331b05e6: CDI devices from CRI Config.CDIDevices: []" Jul 6 23:29:09.411112 containerd[1529]: time="2025-07-06T23:29:09.411070589Z" level=info msg="CreateContainer within sandbox \"edc57f02fc52190339d712a96c25c8a591ca613ebfafefe90ea12fddd2a0607d\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"314340126df67576202e5e2aae3ea712baafb998a376d66399c60c5b331b05e6\"" Jul 6 23:29:09.411812 containerd[1529]: time="2025-07-06T23:29:09.411739920Z" level=info msg="StartContainer for \"314340126df67576202e5e2aae3ea712baafb998a376d66399c60c5b331b05e6\"" Jul 6 23:29:09.413327 containerd[1529]: time="2025-07-06T23:29:09.413283478Z" level=info msg="connecting to shim 314340126df67576202e5e2aae3ea712baafb998a376d66399c60c5b331b05e6" address="unix:///run/containerd/s/4f19ffffc0b34e08dcd133535629c543f5a400e8ddd6b01d404bf63cae5bf24a" protocol=ttrpc version=3 Jul 6 23:29:09.436325 systemd[1]: Started cri-containerd-314340126df67576202e5e2aae3ea712baafb998a376d66399c60c5b331b05e6.scope - libcontainer container 314340126df67576202e5e2aae3ea712baafb998a376d66399c60c5b331b05e6. Jul 6 23:29:09.466896 containerd[1529]: time="2025-07-06T23:29:09.466846459Z" level=info msg="StartContainer for \"314340126df67576202e5e2aae3ea712baafb998a376d66399c60c5b331b05e6\" returns successfully" Jul 6 23:29:09.560631 containerd[1529]: time="2025-07-06T23:29:09.560581395Z" level=info msg="TaskExit event in podsandbox handler container_id:\"314340126df67576202e5e2aae3ea712baafb998a376d66399c60c5b331b05e6\" id:\"da4fefce2a689222222a098d4a3979ab79d862a2bb97823b92595055f01a108b\" pid:3362 exited_at:{seconds:1751844549 nanos:560271131}" Jul 6 23:29:09.644346 kubelet[2667]: I0706 23:29:09.644244 2667 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Jul 6 23:29:09.723399 systemd[1]: Created slice kubepods-burstable-pod694a860f_921d_40bc_a136_69ab9be0a319.slice - libcontainer container kubepods-burstable-pod694a860f_921d_40bc_a136_69ab9be0a319.slice. Jul 6 23:29:09.733835 systemd[1]: Created slice kubepods-burstable-pod7c1f4495_d264_4514_ad2c_773b4faf98bd.slice - libcontainer container kubepods-burstable-pod7c1f4495_d264_4514_ad2c_773b4faf98bd.slice. 
Jul 6 23:29:09.754602 kubelet[2667]: I0706 23:29:09.754501 2667 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8lkbr\" (UniqueName: \"kubernetes.io/projected/7c1f4495-d264-4514-ad2c-773b4faf98bd-kube-api-access-8lkbr\") pod \"coredns-674b8bbfcf-2j2jb\" (UID: \"7c1f4495-d264-4514-ad2c-773b4faf98bd\") " pod="kube-system/coredns-674b8bbfcf-2j2jb" Jul 6 23:29:09.754926 kubelet[2667]: I0706 23:29:09.754909 2667 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tzrwf\" (UniqueName: \"kubernetes.io/projected/694a860f-921d-40bc-a136-69ab9be0a319-kube-api-access-tzrwf\") pod \"coredns-674b8bbfcf-lj2dt\" (UID: \"694a860f-921d-40bc-a136-69ab9be0a319\") " pod="kube-system/coredns-674b8bbfcf-lj2dt" Jul 6 23:29:09.754968 kubelet[2667]: I0706 23:29:09.754939 2667 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/694a860f-921d-40bc-a136-69ab9be0a319-config-volume\") pod \"coredns-674b8bbfcf-lj2dt\" (UID: \"694a860f-921d-40bc-a136-69ab9be0a319\") " pod="kube-system/coredns-674b8bbfcf-lj2dt" Jul 6 23:29:09.754968 kubelet[2667]: I0706 23:29:09.754957 2667 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7c1f4495-d264-4514-ad2c-773b4faf98bd-config-volume\") pod \"coredns-674b8bbfcf-2j2jb\" (UID: \"7c1f4495-d264-4514-ad2c-773b4faf98bd\") " pod="kube-system/coredns-674b8bbfcf-2j2jb" Jul 6 23:29:10.028819 containerd[1529]: time="2025-07-06T23:29:10.028704894Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-lj2dt,Uid:694a860f-921d-40bc-a136-69ab9be0a319,Namespace:kube-system,Attempt:0,}" Jul 6 23:29:10.040259 containerd[1529]: time="2025-07-06T23:29:10.040213374Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-2j2jb,Uid:7c1f4495-d264-4514-ad2c-773b4faf98bd,Namespace:kube-system,Attempt:0,}" Jul 6 23:29:10.451690 kubelet[2667]: I0706 23:29:10.451559 2667 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-mgz2x" podStartSLOduration=6.371460758 podStartE2EDuration="13.451544327s" podCreationTimestamp="2025-07-06 23:28:57 +0000 UTC" firstStartedPulling="2025-07-06 23:28:58.346686417 +0000 UTC m=+6.222334511" lastFinishedPulling="2025-07-06 23:29:05.426769986 +0000 UTC m=+13.302418080" observedRunningTime="2025-07-06 23:29:10.450477569 +0000 UTC m=+18.326125663" watchObservedRunningTime="2025-07-06 23:29:10.451544327 +0000 UTC m=+18.327192421" Jul 6 23:29:11.356626 systemd-networkd[1434]: cilium_host: Link UP Jul 6 23:29:11.356747 systemd-networkd[1434]: cilium_net: Link UP Jul 6 23:29:11.356953 systemd-networkd[1434]: cilium_net: Gained carrier Jul 6 23:29:11.357091 systemd-networkd[1434]: cilium_host: Gained carrier Jul 6 23:29:11.478939 systemd-networkd[1434]: cilium_vxlan: Link UP Jul 6 23:29:11.478948 systemd-networkd[1434]: cilium_vxlan: Gained carrier Jul 6 23:29:11.664247 systemd-networkd[1434]: cilium_net: Gained IPv6LL Jul 6 23:29:11.830151 kernel: NET: Registered PF_ALG protocol family Jul 6 23:29:12.304259 systemd-networkd[1434]: cilium_host: Gained IPv6LL Jul 6 23:29:12.449416 systemd-networkd[1434]: lxc_health: Link UP Jul 6 23:29:12.460327 systemd-networkd[1434]: lxc_health: Gained carrier Jul 6 23:29:12.632797 systemd-networkd[1434]: lxcd99860356212: Link UP Jul 6 
23:29:12.648086 systemd-networkd[1434]: lxcea590f435728: Link UP Jul 6 23:29:12.649142 kernel: eth0: renamed from tmpdaea7 Jul 6 23:29:12.649652 systemd-networkd[1434]: lxcd99860356212: Gained carrier Jul 6 23:29:12.653194 kernel: eth0: renamed from tmp2d040 Jul 6 23:29:12.654051 systemd-networkd[1434]: lxcea590f435728: Gained carrier Jul 6 23:29:13.456320 systemd-networkd[1434]: cilium_vxlan: Gained IPv6LL Jul 6 23:29:13.776421 systemd-networkd[1434]: lxcea590f435728: Gained IPv6LL Jul 6 23:29:14.160263 systemd-networkd[1434]: lxcd99860356212: Gained IPv6LL Jul 6 23:29:14.161007 systemd-networkd[1434]: lxc_health: Gained IPv6LL Jul 6 23:29:16.243974 containerd[1529]: time="2025-07-06T23:29:16.243884946Z" level=info msg="connecting to shim 2d040ff57966a1be3bd073f52252698d2970ceb29863a4148e3287063b4be7af" address="unix:///run/containerd/s/a1f50f3cc172b6696356130f56393af789c919ef3ad74412b5a5be32ddf01c9e" namespace=k8s.io protocol=ttrpc version=3 Jul 6 23:29:16.245672 containerd[1529]: time="2025-07-06T23:29:16.245632083Z" level=info msg="connecting to shim daea7d9ccbbeef6b85785cad760229f5f19f775893c7da2f9115356656b9063c" address="unix:///run/containerd/s/40f134b8a8040e75ea2c9b0be14ab642eb8cb32ab79e4b9d6647b9d531785b3f" namespace=k8s.io protocol=ttrpc version=3 Jul 6 23:29:16.269344 systemd[1]: Started cri-containerd-2d040ff57966a1be3bd073f52252698d2970ceb29863a4148e3287063b4be7af.scope - libcontainer container 2d040ff57966a1be3bd073f52252698d2970ceb29863a4148e3287063b4be7af. Jul 6 23:29:16.272862 systemd[1]: Started cri-containerd-daea7d9ccbbeef6b85785cad760229f5f19f775893c7da2f9115356656b9063c.scope - libcontainer container daea7d9ccbbeef6b85785cad760229f5f19f775893c7da2f9115356656b9063c. Jul 6 23:29:16.285231 systemd-resolved[1351]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 6 23:29:16.285854 systemd-resolved[1351]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 6 23:29:16.308765 containerd[1529]: time="2025-07-06T23:29:16.308729614Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-2j2jb,Uid:7c1f4495-d264-4514-ad2c-773b4faf98bd,Namespace:kube-system,Attempt:0,} returns sandbox id \"2d040ff57966a1be3bd073f52252698d2970ceb29863a4148e3287063b4be7af\"" Jul 6 23:29:16.314471 containerd[1529]: time="2025-07-06T23:29:16.314413292Z" level=info msg="CreateContainer within sandbox \"2d040ff57966a1be3bd073f52252698d2970ceb29863a4148e3287063b4be7af\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 6 23:29:16.322431 containerd[1529]: time="2025-07-06T23:29:16.322366657Z" level=info msg="Container b6bace5f0612c933a1d33eda80edc013e81a6ea28c730a2caa9c4b34c31dfd8b: CDI devices from CRI Config.CDIDevices: []" Jul 6 23:29:16.338763 containerd[1529]: time="2025-07-06T23:29:16.338716932Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-lj2dt,Uid:694a860f-921d-40bc-a136-69ab9be0a319,Namespace:kube-system,Attempt:0,} returns sandbox id \"daea7d9ccbbeef6b85785cad760229f5f19f775893c7da2f9115356656b9063c\"" Jul 6 23:29:16.342082 containerd[1529]: time="2025-07-06T23:29:16.342031238Z" level=info msg="CreateContainer within sandbox \"2d040ff57966a1be3bd073f52252698d2970ceb29863a4148e3287063b4be7af\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"b6bace5f0612c933a1d33eda80edc013e81a6ea28c730a2caa9c4b34c31dfd8b\"" Jul 6 23:29:16.342703 containerd[1529]: time="2025-07-06T23:29:16.342674434Z" level=info 
msg="StartContainer for \"b6bace5f0612c933a1d33eda80edc013e81a6ea28c730a2caa9c4b34c31dfd8b\"" Jul 6 23:29:16.343628 containerd[1529]: time="2025-07-06T23:29:16.343602646Z" level=info msg="connecting to shim b6bace5f0612c933a1d33eda80edc013e81a6ea28c730a2caa9c4b34c31dfd8b" address="unix:///run/containerd/s/a1f50f3cc172b6696356130f56393af789c919ef3ad74412b5a5be32ddf01c9e" protocol=ttrpc version=3 Jul 6 23:29:16.344802 containerd[1529]: time="2025-07-06T23:29:16.344767151Z" level=info msg="CreateContainer within sandbox \"daea7d9ccbbeef6b85785cad760229f5f19f775893c7da2f9115356656b9063c\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 6 23:29:16.352661 containerd[1529]: time="2025-07-06T23:29:16.352631191Z" level=info msg="Container 75b57561108bd11bd83bc3219f2b42c2982dade39465747b63ac50b252c20d26: CDI devices from CRI Config.CDIDevices: []" Jul 6 23:29:16.359061 containerd[1529]: time="2025-07-06T23:29:16.359013548Z" level=info msg="CreateContainer within sandbox \"daea7d9ccbbeef6b85785cad760229f5f19f775893c7da2f9115356656b9063c\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"75b57561108bd11bd83bc3219f2b42c2982dade39465747b63ac50b252c20d26\"" Jul 6 23:29:16.360296 containerd[1529]: time="2025-07-06T23:29:16.360267938Z" level=info msg="StartContainer for \"75b57561108bd11bd83bc3219f2b42c2982dade39465747b63ac50b252c20d26\"" Jul 6 23:29:16.361088 containerd[1529]: time="2025-07-06T23:29:16.361052902Z" level=info msg="connecting to shim 75b57561108bd11bd83bc3219f2b42c2982dade39465747b63ac50b252c20d26" address="unix:///run/containerd/s/40f134b8a8040e75ea2c9b0be14ab642eb8cb32ab79e4b9d6647b9d531785b3f" protocol=ttrpc version=3 Jul 6 23:29:16.364339 systemd[1]: Started cri-containerd-b6bace5f0612c933a1d33eda80edc013e81a6ea28c730a2caa9c4b34c31dfd8b.scope - libcontainer container b6bace5f0612c933a1d33eda80edc013e81a6ea28c730a2caa9c4b34c31dfd8b. Jul 6 23:29:16.388296 systemd[1]: Started cri-containerd-75b57561108bd11bd83bc3219f2b42c2982dade39465747b63ac50b252c20d26.scope - libcontainer container 75b57561108bd11bd83bc3219f2b42c2982dade39465747b63ac50b252c20d26. 
Jul 6 23:29:16.418528 containerd[1529]: time="2025-07-06T23:29:16.418253463Z" level=info msg="StartContainer for \"b6bace5f0612c933a1d33eda80edc013e81a6ea28c730a2caa9c4b34c31dfd8b\" returns successfully" Jul 6 23:29:16.437281 containerd[1529]: time="2025-07-06T23:29:16.434481411Z" level=info msg="StartContainer for \"75b57561108bd11bd83bc3219f2b42c2982dade39465747b63ac50b252c20d26\" returns successfully" Jul 6 23:29:17.426724 kubelet[2667]: I0706 23:29:17.426458 2667 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-2j2jb" podStartSLOduration=19.426338358 podStartE2EDuration="19.426338358s" podCreationTimestamp="2025-07-06 23:28:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-06 23:29:17.424503179 +0000 UTC m=+25.300151273" watchObservedRunningTime="2025-07-06 23:29:17.426338358 +0000 UTC m=+25.301986452" Jul 6 23:29:17.456055 kubelet[2667]: I0706 23:29:17.455523 2667 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-lj2dt" podStartSLOduration=20.455506124 podStartE2EDuration="20.455506124s" podCreationTimestamp="2025-07-06 23:28:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-06 23:29:17.455414559 +0000 UTC m=+25.331062653" watchObservedRunningTime="2025-07-06 23:29:17.455506124 +0000 UTC m=+25.331154178" Jul 6 23:29:22.454531 systemd[1]: Started sshd@7-10.0.0.47:22-10.0.0.1:58930.service - OpenSSH per-connection server daemon (10.0.0.1:58930). Jul 6 23:29:22.509283 sshd[4019]: Accepted publickey for core from 10.0.0.1 port 58930 ssh2: RSA SHA256:CSJlI8/o3cgAW3JnP3N/e8VY57OeBgyk25K3mGio6wo Jul 6 23:29:22.510715 sshd-session[4019]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:29:22.518879 systemd-logind[1514]: New session 8 of user core. Jul 6 23:29:22.537347 systemd[1]: Started session-8.scope - Session 8 of User core. Jul 6 23:29:22.704657 sshd[4021]: Connection closed by 10.0.0.1 port 58930 Jul 6 23:29:22.705202 sshd-session[4019]: pam_unix(sshd:session): session closed for user core Jul 6 23:29:22.709407 systemd[1]: sshd@7-10.0.0.47:22-10.0.0.1:58930.service: Deactivated successfully. Jul 6 23:29:22.711372 systemd[1]: session-8.scope: Deactivated successfully. Jul 6 23:29:22.713721 systemd-logind[1514]: Session 8 logged out. Waiting for processes to exit. Jul 6 23:29:22.715164 systemd-logind[1514]: Removed session 8. Jul 6 23:29:27.730593 systemd[1]: Started sshd@8-10.0.0.47:22-10.0.0.1:42244.service - OpenSSH per-connection server daemon (10.0.0.1:42244). Jul 6 23:29:27.786637 sshd[4036]: Accepted publickey for core from 10.0.0.1 port 42244 ssh2: RSA SHA256:CSJlI8/o3cgAW3JnP3N/e8VY57OeBgyk25K3mGio6wo Jul 6 23:29:27.788043 sshd-session[4036]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:29:27.794533 systemd-logind[1514]: New session 9 of user core. Jul 6 23:29:27.804413 systemd[1]: Started session-9.scope - Session 9 of User core. Jul 6 23:29:27.930937 sshd[4038]: Connection closed by 10.0.0.1 port 42244 Jul 6 23:29:27.931732 sshd-session[4036]: pam_unix(sshd:session): session closed for user core Jul 6 23:29:27.935304 systemd[1]: sshd@8-10.0.0.47:22-10.0.0.1:42244.service: Deactivated successfully. Jul 6 23:29:27.938212 systemd[1]: session-9.scope: Deactivated successfully. 
Jul 6 23:29:27.941964 systemd-logind[1514]: Session 9 logged out. Waiting for processes to exit. Jul 6 23:29:27.943508 systemd-logind[1514]: Removed session 9. Jul 6 23:29:32.950995 systemd[1]: Started sshd@9-10.0.0.47:22-10.0.0.1:56480.service - OpenSSH per-connection server daemon (10.0.0.1:56480). Jul 6 23:29:33.015416 sshd[4055]: Accepted publickey for core from 10.0.0.1 port 56480 ssh2: RSA SHA256:CSJlI8/o3cgAW3JnP3N/e8VY57OeBgyk25K3mGio6wo Jul 6 23:29:33.017624 sshd-session[4055]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:29:33.030401 systemd-logind[1514]: New session 10 of user core. Jul 6 23:29:33.037300 systemd[1]: Started session-10.scope - Session 10 of User core. Jul 6 23:29:33.173385 sshd[4057]: Connection closed by 10.0.0.1 port 56480 Jul 6 23:29:33.174410 sshd-session[4055]: pam_unix(sshd:session): session closed for user core Jul 6 23:29:33.181595 systemd-logind[1514]: Session 10 logged out. Waiting for processes to exit. Jul 6 23:29:33.182263 systemd[1]: sshd@9-10.0.0.47:22-10.0.0.1:56480.service: Deactivated successfully. Jul 6 23:29:33.184997 systemd[1]: session-10.scope: Deactivated successfully. Jul 6 23:29:33.186767 systemd-logind[1514]: Removed session 10. Jul 6 23:29:38.193513 systemd[1]: Started sshd@10-10.0.0.47:22-10.0.0.1:56496.service - OpenSSH per-connection server daemon (10.0.0.1:56496). Jul 6 23:29:38.235028 sshd[4071]: Accepted publickey for core from 10.0.0.1 port 56496 ssh2: RSA SHA256:CSJlI8/o3cgAW3JnP3N/e8VY57OeBgyk25K3mGio6wo Jul 6 23:29:38.236571 sshd-session[4071]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:29:38.241325 systemd-logind[1514]: New session 11 of user core. Jul 6 23:29:38.251359 systemd[1]: Started session-11.scope - Session 11 of User core. Jul 6 23:29:38.374368 sshd[4073]: Connection closed by 10.0.0.1 port 56496 Jul 6 23:29:38.374424 sshd-session[4071]: pam_unix(sshd:session): session closed for user core Jul 6 23:29:38.389535 systemd[1]: sshd@10-10.0.0.47:22-10.0.0.1:56496.service: Deactivated successfully. Jul 6 23:29:38.392709 systemd[1]: session-11.scope: Deactivated successfully. Jul 6 23:29:38.393467 systemd-logind[1514]: Session 11 logged out. Waiting for processes to exit. Jul 6 23:29:38.396386 systemd[1]: Started sshd@11-10.0.0.47:22-10.0.0.1:56510.service - OpenSSH per-connection server daemon (10.0.0.1:56510). Jul 6 23:29:38.397151 systemd-logind[1514]: Removed session 11. Jul 6 23:29:38.443282 sshd[4087]: Accepted publickey for core from 10.0.0.1 port 56510 ssh2: RSA SHA256:CSJlI8/o3cgAW3JnP3N/e8VY57OeBgyk25K3mGio6wo Jul 6 23:29:38.444692 sshd-session[4087]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:29:38.449833 systemd-logind[1514]: New session 12 of user core. Jul 6 23:29:38.456318 systemd[1]: Started session-12.scope - Session 12 of User core. Jul 6 23:29:38.620653 sshd[4089]: Connection closed by 10.0.0.1 port 56510 Jul 6 23:29:38.621233 sshd-session[4087]: pam_unix(sshd:session): session closed for user core Jul 6 23:29:38.633644 systemd[1]: sshd@11-10.0.0.47:22-10.0.0.1:56510.service: Deactivated successfully. Jul 6 23:29:38.636828 systemd[1]: session-12.scope: Deactivated successfully. Jul 6 23:29:38.637833 systemd-logind[1514]: Session 12 logged out. Waiting for processes to exit. Jul 6 23:29:38.642074 systemd[1]: Started sshd@12-10.0.0.47:22-10.0.0.1:56524.service - OpenSSH per-connection server daemon (10.0.0.1:56524). Jul 6 23:29:38.642745 systemd-logind[1514]: Removed session 12. 
Jul 6 23:29:38.703457 sshd[4101]: Accepted publickey for core from 10.0.0.1 port 56524 ssh2: RSA SHA256:CSJlI8/o3cgAW3JnP3N/e8VY57OeBgyk25K3mGio6wo Jul 6 23:29:38.704769 sshd-session[4101]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:29:38.709419 systemd-logind[1514]: New session 13 of user core. Jul 6 23:29:38.719326 systemd[1]: Started session-13.scope - Session 13 of User core. Jul 6 23:29:38.839135 sshd[4103]: Connection closed by 10.0.0.1 port 56524 Jul 6 23:29:38.839485 sshd-session[4101]: pam_unix(sshd:session): session closed for user core Jul 6 23:29:38.843258 systemd[1]: sshd@12-10.0.0.47:22-10.0.0.1:56524.service: Deactivated successfully. Jul 6 23:29:38.845358 systemd[1]: session-13.scope: Deactivated successfully. Jul 6 23:29:38.846263 systemd-logind[1514]: Session 13 logged out. Waiting for processes to exit. Jul 6 23:29:38.847339 systemd-logind[1514]: Removed session 13. Jul 6 23:29:43.866518 systemd[1]: Started sshd@13-10.0.0.47:22-10.0.0.1:56074.service - OpenSSH per-connection server daemon (10.0.0.1:56074). Jul 6 23:29:43.946713 sshd[4116]: Accepted publickey for core from 10.0.0.1 port 56074 ssh2: RSA SHA256:CSJlI8/o3cgAW3JnP3N/e8VY57OeBgyk25K3mGio6wo Jul 6 23:29:43.948367 sshd-session[4116]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:29:43.956032 systemd-logind[1514]: New session 14 of user core. Jul 6 23:29:43.970427 systemd[1]: Started session-14.scope - Session 14 of User core. Jul 6 23:29:44.133371 sshd[4118]: Connection closed by 10.0.0.1 port 56074 Jul 6 23:29:44.134606 sshd-session[4116]: pam_unix(sshd:session): session closed for user core Jul 6 23:29:44.142493 systemd[1]: sshd@13-10.0.0.47:22-10.0.0.1:56074.service: Deactivated successfully. Jul 6 23:29:44.148685 systemd[1]: session-14.scope: Deactivated successfully. Jul 6 23:29:44.150070 systemd-logind[1514]: Session 14 logged out. Waiting for processes to exit. Jul 6 23:29:44.151730 systemd-logind[1514]: Removed session 14. Jul 6 23:29:49.154857 systemd[1]: Started sshd@14-10.0.0.47:22-10.0.0.1:56078.service - OpenSSH per-connection server daemon (10.0.0.1:56078). Jul 6 23:29:49.227034 sshd[4134]: Accepted publickey for core from 10.0.0.1 port 56078 ssh2: RSA SHA256:CSJlI8/o3cgAW3JnP3N/e8VY57OeBgyk25K3mGio6wo Jul 6 23:29:49.228917 sshd-session[4134]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:29:49.234705 systemd-logind[1514]: New session 15 of user core. Jul 6 23:29:49.254415 systemd[1]: Started session-15.scope - Session 15 of User core. Jul 6 23:29:49.394223 sshd[4136]: Connection closed by 10.0.0.1 port 56078 Jul 6 23:29:49.395392 sshd-session[4134]: pam_unix(sshd:session): session closed for user core Jul 6 23:29:49.405374 systemd[1]: sshd@14-10.0.0.47:22-10.0.0.1:56078.service: Deactivated successfully. Jul 6 23:29:49.407847 systemd[1]: session-15.scope: Deactivated successfully. Jul 6 23:29:49.409400 systemd-logind[1514]: Session 15 logged out. Waiting for processes to exit. Jul 6 23:29:49.413585 systemd[1]: Started sshd@15-10.0.0.47:22-10.0.0.1:56094.service - OpenSSH per-connection server daemon (10.0.0.1:56094). Jul 6 23:29:49.415291 systemd-logind[1514]: Removed session 15. 
Jul 6 23:29:49.477880 sshd[4150]: Accepted publickey for core from 10.0.0.1 port 56094 ssh2: RSA SHA256:CSJlI8/o3cgAW3JnP3N/e8VY57OeBgyk25K3mGio6wo Jul 6 23:29:49.481320 sshd-session[4150]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:29:49.492309 systemd-logind[1514]: New session 16 of user core. Jul 6 23:29:49.502404 systemd[1]: Started session-16.scope - Session 16 of User core. Jul 6 23:29:50.151499 sshd[4152]: Connection closed by 10.0.0.1 port 56094 Jul 6 23:29:50.153206 sshd-session[4150]: pam_unix(sshd:session): session closed for user core Jul 6 23:29:50.168620 systemd[1]: sshd@15-10.0.0.47:22-10.0.0.1:56094.service: Deactivated successfully. Jul 6 23:29:50.172967 systemd[1]: session-16.scope: Deactivated successfully. Jul 6 23:29:50.174270 systemd-logind[1514]: Session 16 logged out. Waiting for processes to exit. Jul 6 23:29:50.179833 systemd[1]: Started sshd@16-10.0.0.47:22-10.0.0.1:56108.service - OpenSSH per-connection server daemon (10.0.0.1:56108). Jul 6 23:29:50.180593 systemd-logind[1514]: Removed session 16. Jul 6 23:29:50.245948 sshd[4164]: Accepted publickey for core from 10.0.0.1 port 56108 ssh2: RSA SHA256:CSJlI8/o3cgAW3JnP3N/e8VY57OeBgyk25K3mGio6wo Jul 6 23:29:50.247919 sshd-session[4164]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:29:50.257864 systemd-logind[1514]: New session 17 of user core. Jul 6 23:29:50.267437 systemd[1]: Started session-17.scope - Session 17 of User core. Jul 6 23:29:51.157870 sshd[4166]: Connection closed by 10.0.0.1 port 56108 Jul 6 23:29:51.159540 sshd-session[4164]: pam_unix(sshd:session): session closed for user core Jul 6 23:29:51.166724 systemd[1]: sshd@16-10.0.0.47:22-10.0.0.1:56108.service: Deactivated successfully. Jul 6 23:29:51.170381 systemd[1]: session-17.scope: Deactivated successfully. Jul 6 23:29:51.172082 systemd-logind[1514]: Session 17 logged out. Waiting for processes to exit. Jul 6 23:29:51.176485 systemd[1]: Started sshd@17-10.0.0.47:22-10.0.0.1:56114.service - OpenSSH per-connection server daemon (10.0.0.1:56114). Jul 6 23:29:51.178934 systemd-logind[1514]: Removed session 17. Jul 6 23:29:51.234356 sshd[4189]: Accepted publickey for core from 10.0.0.1 port 56114 ssh2: RSA SHA256:CSJlI8/o3cgAW3JnP3N/e8VY57OeBgyk25K3mGio6wo Jul 6 23:29:51.235830 sshd-session[4189]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:29:51.241217 systemd-logind[1514]: New session 18 of user core. Jul 6 23:29:51.256353 systemd[1]: Started session-18.scope - Session 18 of User core. Jul 6 23:29:51.510295 sshd[4191]: Connection closed by 10.0.0.1 port 56114 Jul 6 23:29:51.515642 sshd-session[4189]: pam_unix(sshd:session): session closed for user core Jul 6 23:29:51.529799 systemd[1]: sshd@17-10.0.0.47:22-10.0.0.1:56114.service: Deactivated successfully. Jul 6 23:29:51.532652 systemd[1]: session-18.scope: Deactivated successfully. Jul 6 23:29:51.534524 systemd-logind[1514]: Session 18 logged out. Waiting for processes to exit. Jul 6 23:29:51.537544 systemd[1]: Started sshd@18-10.0.0.47:22-10.0.0.1:56128.service - OpenSSH per-connection server daemon (10.0.0.1:56128). Jul 6 23:29:51.541495 systemd-logind[1514]: Removed session 18. 
Jul 6 23:29:51.608989 sshd[4202]: Accepted publickey for core from 10.0.0.1 port 56128 ssh2: RSA SHA256:CSJlI8/o3cgAW3JnP3N/e8VY57OeBgyk25K3mGio6wo Jul 6 23:29:51.610524 sshd-session[4202]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:29:51.615224 systemd-logind[1514]: New session 19 of user core. Jul 6 23:29:51.624369 systemd[1]: Started session-19.scope - Session 19 of User core. Jul 6 23:29:51.745413 sshd[4204]: Connection closed by 10.0.0.1 port 56128 Jul 6 23:29:51.746118 sshd-session[4202]: pam_unix(sshd:session): session closed for user core Jul 6 23:29:51.752470 systemd[1]: sshd@18-10.0.0.47:22-10.0.0.1:56128.service: Deactivated successfully. Jul 6 23:29:51.756165 systemd[1]: session-19.scope: Deactivated successfully. Jul 6 23:29:51.757095 systemd-logind[1514]: Session 19 logged out. Waiting for processes to exit. Jul 6 23:29:51.758615 systemd-logind[1514]: Removed session 19. Jul 6 23:29:56.760237 systemd[1]: Started sshd@19-10.0.0.47:22-10.0.0.1:58036.service - OpenSSH per-connection server daemon (10.0.0.1:58036). Jul 6 23:29:56.821975 sshd[4222]: Accepted publickey for core from 10.0.0.1 port 58036 ssh2: RSA SHA256:CSJlI8/o3cgAW3JnP3N/e8VY57OeBgyk25K3mGio6wo Jul 6 23:29:56.823495 sshd-session[4222]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:29:56.828286 systemd-logind[1514]: New session 20 of user core. Jul 6 23:29:56.835324 systemd[1]: Started session-20.scope - Session 20 of User core. Jul 6 23:29:56.953492 sshd[4224]: Connection closed by 10.0.0.1 port 58036 Jul 6 23:29:56.953829 sshd-session[4222]: pam_unix(sshd:session): session closed for user core Jul 6 23:29:56.957704 systemd[1]: sshd@19-10.0.0.47:22-10.0.0.1:58036.service: Deactivated successfully. Jul 6 23:29:56.959364 systemd[1]: session-20.scope: Deactivated successfully. Jul 6 23:29:56.959988 systemd-logind[1514]: Session 20 logged out. Waiting for processes to exit. Jul 6 23:29:56.961298 systemd-logind[1514]: Removed session 20. Jul 6 23:30:01.966241 systemd[1]: Started sshd@20-10.0.0.47:22-10.0.0.1:58048.service - OpenSSH per-connection server daemon (10.0.0.1:58048). Jul 6 23:30:02.024814 sshd[4241]: Accepted publickey for core from 10.0.0.1 port 58048 ssh2: RSA SHA256:CSJlI8/o3cgAW3JnP3N/e8VY57OeBgyk25K3mGio6wo Jul 6 23:30:02.026364 sshd-session[4241]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:30:02.032229 systemd-logind[1514]: New session 21 of user core. Jul 6 23:30:02.050391 systemd[1]: Started session-21.scope - Session 21 of User core. Jul 6 23:30:02.197703 sshd[4243]: Connection closed by 10.0.0.1 port 58048 Jul 6 23:30:02.198367 sshd-session[4241]: pam_unix(sshd:session): session closed for user core Jul 6 23:30:02.201978 systemd[1]: sshd@20-10.0.0.47:22-10.0.0.1:58048.service: Deactivated successfully. Jul 6 23:30:02.203715 systemd[1]: session-21.scope: Deactivated successfully. Jul 6 23:30:02.205222 systemd-logind[1514]: Session 21 logged out. Waiting for processes to exit. Jul 6 23:30:02.210072 systemd-logind[1514]: Removed session 21. Jul 6 23:30:07.215886 systemd[1]: Started sshd@21-10.0.0.47:22-10.0.0.1:47162.service - OpenSSH per-connection server daemon (10.0.0.1:47162). 
Jul 6 23:30:07.270281 sshd[4256]: Accepted publickey for core from 10.0.0.1 port 47162 ssh2: RSA SHA256:CSJlI8/o3cgAW3JnP3N/e8VY57OeBgyk25K3mGio6wo Jul 6 23:30:07.272436 sshd-session[4256]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:30:07.280754 systemd-logind[1514]: New session 22 of user core. Jul 6 23:30:07.294407 systemd[1]: Started session-22.scope - Session 22 of User core. Jul 6 23:30:07.421246 sshd[4258]: Connection closed by 10.0.0.1 port 47162 Jul 6 23:30:07.421804 sshd-session[4256]: pam_unix(sshd:session): session closed for user core Jul 6 23:30:07.431753 systemd[1]: sshd@21-10.0.0.47:22-10.0.0.1:47162.service: Deactivated successfully. Jul 6 23:30:07.433898 systemd[1]: session-22.scope: Deactivated successfully. Jul 6 23:30:07.436143 systemd-logind[1514]: Session 22 logged out. Waiting for processes to exit. Jul 6 23:30:07.442568 systemd[1]: Started sshd@22-10.0.0.47:22-10.0.0.1:47166.service - OpenSSH per-connection server daemon (10.0.0.1:47166). Jul 6 23:30:07.444674 systemd-logind[1514]: Removed session 22. Jul 6 23:30:07.495304 sshd[4272]: Accepted publickey for core from 10.0.0.1 port 47166 ssh2: RSA SHA256:CSJlI8/o3cgAW3JnP3N/e8VY57OeBgyk25K3mGio6wo Jul 6 23:30:07.497104 sshd-session[4272]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:30:07.502211 systemd-logind[1514]: New session 23 of user core. Jul 6 23:30:07.509290 systemd[1]: Started session-23.scope - Session 23 of User core. Jul 6 23:30:09.082648 containerd[1529]: time="2025-07-06T23:30:09.082417260Z" level=info msg="StopContainer for \"8dbe19fb273c2622f88eae2324594c5f4bcd7370138bb6eb6105de5ba28ff241\" with timeout 30 (s)" Jul 6 23:30:09.086359 containerd[1529]: time="2025-07-06T23:30:09.086275341Z" level=info msg="Stop container \"8dbe19fb273c2622f88eae2324594c5f4bcd7370138bb6eb6105de5ba28ff241\" with signal terminated" Jul 6 23:30:09.100381 systemd[1]: cri-containerd-8dbe19fb273c2622f88eae2324594c5f4bcd7370138bb6eb6105de5ba28ff241.scope: Deactivated successfully. 
Jul 6 23:30:09.102434 containerd[1529]: time="2025-07-06T23:30:09.102397626Z" level=info msg="received exit event container_id:\"8dbe19fb273c2622f88eae2324594c5f4bcd7370138bb6eb6105de5ba28ff241\" id:\"8dbe19fb273c2622f88eae2324594c5f4bcd7370138bb6eb6105de5ba28ff241\" pid:3218 exited_at:{seconds:1751844609 nanos:101700706}" Jul 6 23:30:09.106052 containerd[1529]: time="2025-07-06T23:30:09.105798188Z" level=info msg="TaskExit event in podsandbox handler container_id:\"8dbe19fb273c2622f88eae2324594c5f4bcd7370138bb6eb6105de5ba28ff241\" id:\"8dbe19fb273c2622f88eae2324594c5f4bcd7370138bb6eb6105de5ba28ff241\" pid:3218 exited_at:{seconds:1751844609 nanos:101700706}" Jul 6 23:30:09.106734 containerd[1529]: time="2025-07-06T23:30:09.106692308Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 6 23:30:09.111923 containerd[1529]: time="2025-07-06T23:30:09.111889229Z" level=info msg="TaskExit event in podsandbox handler container_id:\"314340126df67576202e5e2aae3ea712baafb998a376d66399c60c5b331b05e6\" id:\"6db29e02f84f3ec53b2949aec5e2b82cbdd08e660aa674f9418bac5a1cc9e3ef\" pid:4302 exited_at:{seconds:1751844609 nanos:111270949}" Jul 6 23:30:09.113568 containerd[1529]: time="2025-07-06T23:30:09.113539950Z" level=info msg="StopContainer for \"314340126df67576202e5e2aae3ea712baafb998a376d66399c60c5b331b05e6\" with timeout 2 (s)" Jul 6 23:30:09.115235 containerd[1529]: time="2025-07-06T23:30:09.115202951Z" level=info msg="Stop container \"314340126df67576202e5e2aae3ea712baafb998a376d66399c60c5b331b05e6\" with signal terminated" Jul 6 23:30:09.123574 systemd-networkd[1434]: lxc_health: Link DOWN Jul 6 23:30:09.123854 systemd-networkd[1434]: lxc_health: Lost carrier Jul 6 23:30:09.140063 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8dbe19fb273c2622f88eae2324594c5f4bcd7370138bb6eb6105de5ba28ff241-rootfs.mount: Deactivated successfully. Jul 6 23:30:09.143741 systemd[1]: cri-containerd-314340126df67576202e5e2aae3ea712baafb998a376d66399c60c5b331b05e6.scope: Deactivated successfully. Jul 6 23:30:09.144306 systemd[1]: cri-containerd-314340126df67576202e5e2aae3ea712baafb998a376d66399c60c5b331b05e6.scope: Consumed 6.639s CPU time, 122.4M memory peak, 160K read from disk, 12.9M written to disk. 
Jul 6 23:30:09.145256 containerd[1529]: time="2025-07-06T23:30:09.145172280Z" level=info msg="received exit event container_id:\"314340126df67576202e5e2aae3ea712baafb998a376d66399c60c5b331b05e6\" id:\"314340126df67576202e5e2aae3ea712baafb998a376d66399c60c5b331b05e6\" pid:3332 exited_at:{seconds:1751844609 nanos:144891880}" Jul 6 23:30:09.145591 containerd[1529]: time="2025-07-06T23:30:09.145302720Z" level=info msg="TaskExit event in podsandbox handler container_id:\"314340126df67576202e5e2aae3ea712baafb998a376d66399c60c5b331b05e6\" id:\"314340126df67576202e5e2aae3ea712baafb998a376d66399c60c5b331b05e6\" pid:3332 exited_at:{seconds:1751844609 nanos:144891880}" Jul 6 23:30:09.152817 containerd[1529]: time="2025-07-06T23:30:09.152770003Z" level=info msg="StopContainer for \"8dbe19fb273c2622f88eae2324594c5f4bcd7370138bb6eb6105de5ba28ff241\" returns successfully" Jul 6 23:30:09.156859 containerd[1529]: time="2025-07-06T23:30:09.156800684Z" level=info msg="StopPodSandbox for \"ce0ecee5d795dea95e2198976e23c574d3d0ef5aedacf957f2c5df9a59e9faf8\"" Jul 6 23:30:09.167064 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-314340126df67576202e5e2aae3ea712baafb998a376d66399c60c5b331b05e6-rootfs.mount: Deactivated successfully. Jul 6 23:30:09.168799 containerd[1529]: time="2025-07-06T23:30:09.168571528Z" level=info msg="Container to stop \"8dbe19fb273c2622f88eae2324594c5f4bcd7370138bb6eb6105de5ba28ff241\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 6 23:30:09.175599 containerd[1529]: time="2025-07-06T23:30:09.175481890Z" level=info msg="StopContainer for \"314340126df67576202e5e2aae3ea712baafb998a376d66399c60c5b331b05e6\" returns successfully" Jul 6 23:30:09.176956 containerd[1529]: time="2025-07-06T23:30:09.176920451Z" level=info msg="StopPodSandbox for \"edc57f02fc52190339d712a96c25c8a591ca613ebfafefe90ea12fddd2a0607d\"" Jul 6 23:30:09.177090 containerd[1529]: time="2025-07-06T23:30:09.177051611Z" level=info msg="Container to stop \"397376a21c5394265e5884842615c8eb79e0621942c168f570498ce55a0d4c24\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 6 23:30:09.177090 containerd[1529]: time="2025-07-06T23:30:09.177082531Z" level=info msg="Container to stop \"ccc42c4d89639f1a5a42da2c47ec77c1f9a8195ead3c1669f247635b2cd9e80c\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 6 23:30:09.177155 containerd[1529]: time="2025-07-06T23:30:09.177092731Z" level=info msg="Container to stop \"e95a4aec3b6acef46a5191ecf5bd57f4433d8e42c7358b565c234c760c7e6915\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 6 23:30:09.177155 containerd[1529]: time="2025-07-06T23:30:09.177102171Z" level=info msg="Container to stop \"314340126df67576202e5e2aae3ea712baafb998a376d66399c60c5b331b05e6\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 6 23:30:09.177155 containerd[1529]: time="2025-07-06T23:30:09.177111131Z" level=info msg="Container to stop \"271209b3628789e731acfb0c226c602e5a56a502b83623412421a026ffa81463\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 6 23:30:09.184509 systemd[1]: cri-containerd-edc57f02fc52190339d712a96c25c8a591ca613ebfafefe90ea12fddd2a0607d.scope: Deactivated successfully. Jul 6 23:30:09.190086 systemd[1]: cri-containerd-ce0ecee5d795dea95e2198976e23c574d3d0ef5aedacf957f2c5df9a59e9faf8.scope: Deactivated successfully. 
Jul 6 23:30:09.206650 containerd[1529]: time="2025-07-06T23:30:09.190403535Z" level=info msg="TaskExit event in podsandbox handler container_id:\"edc57f02fc52190339d712a96c25c8a591ca613ebfafefe90ea12fddd2a0607d\" id:\"edc57f02fc52190339d712a96c25c8a591ca613ebfafefe90ea12fddd2a0607d\" pid:2811 exit_status:137 exited_at:{seconds:1751844609 nanos:189260215}" Jul 6 23:30:09.214505 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ce0ecee5d795dea95e2198976e23c574d3d0ef5aedacf957f2c5df9a59e9faf8-rootfs.mount: Deactivated successfully. Jul 6 23:30:09.220499 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-edc57f02fc52190339d712a96c25c8a591ca613ebfafefe90ea12fddd2a0607d-rootfs.mount: Deactivated successfully. Jul 6 23:30:09.222530 containerd[1529]: time="2025-07-06T23:30:09.222406305Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ce0ecee5d795dea95e2198976e23c574d3d0ef5aedacf957f2c5df9a59e9faf8\" id:\"ce0ecee5d795dea95e2198976e23c574d3d0ef5aedacf957f2c5df9a59e9faf8\" pid:2890 exit_status:137 exited_at:{seconds:1751844609 nanos:193951376}" Jul 6 23:30:09.224254 containerd[1529]: time="2025-07-06T23:30:09.222667865Z" level=info msg="received exit event sandbox_id:\"edc57f02fc52190339d712a96c25c8a591ca613ebfafefe90ea12fddd2a0607d\" exit_status:137 exited_at:{seconds:1751844609 nanos:189260215}" Jul 6 23:30:09.224254 containerd[1529]: time="2025-07-06T23:30:09.222909905Z" level=info msg="TearDown network for sandbox \"ce0ecee5d795dea95e2198976e23c574d3d0ef5aedacf957f2c5df9a59e9faf8\" successfully" Jul 6 23:30:09.224254 containerd[1529]: time="2025-07-06T23:30:09.222931545Z" level=info msg="StopPodSandbox for \"ce0ecee5d795dea95e2198976e23c574d3d0ef5aedacf957f2c5df9a59e9faf8\" returns successfully" Jul 6 23:30:09.224254 containerd[1529]: time="2025-07-06T23:30:09.222943505Z" level=info msg="shim disconnected" id=ce0ecee5d795dea95e2198976e23c574d3d0ef5aedacf957f2c5df9a59e9faf8 namespace=k8s.io Jul 6 23:30:09.224254 containerd[1529]: time="2025-07-06T23:30:09.222966705Z" level=warning msg="cleaning up after shim disconnected" id=ce0ecee5d795dea95e2198976e23c574d3d0ef5aedacf957f2c5df9a59e9faf8 namespace=k8s.io Jul 6 23:30:09.224254 containerd[1529]: time="2025-07-06T23:30:09.223003665Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 6 23:30:09.224254 containerd[1529]: time="2025-07-06T23:30:09.222911625Z" level=info msg="shim disconnected" id=edc57f02fc52190339d712a96c25c8a591ca613ebfafefe90ea12fddd2a0607d namespace=k8s.io Jul 6 23:30:09.224254 containerd[1529]: time="2025-07-06T23:30:09.223055545Z" level=warning msg="cleaning up after shim disconnected" id=edc57f02fc52190339d712a96c25c8a591ca613ebfafefe90ea12fddd2a0607d namespace=k8s.io Jul 6 23:30:09.224254 containerd[1529]: time="2025-07-06T23:30:09.223077745Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 6 23:30:09.224254 containerd[1529]: time="2025-07-06T23:30:09.223830426Z" level=info msg="TearDown network for sandbox \"edc57f02fc52190339d712a96c25c8a591ca613ebfafefe90ea12fddd2a0607d\" successfully" Jul 6 23:30:09.224254 containerd[1529]: time="2025-07-06T23:30:09.223856906Z" level=info msg="StopPodSandbox for \"edc57f02fc52190339d712a96c25c8a591ca613ebfafefe90ea12fddd2a0607d\" returns successfully" Jul 6 23:30:09.224769 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-ce0ecee5d795dea95e2198976e23c574d3d0ef5aedacf957f2c5df9a59e9faf8-shm.mount: Deactivated successfully. 
Jul 6 23:30:09.224874 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-edc57f02fc52190339d712a96c25c8a591ca613ebfafefe90ea12fddd2a0607d-shm.mount: Deactivated successfully. Jul 6 23:30:09.227903 containerd[1529]: time="2025-07-06T23:30:09.225498866Z" level=info msg="received exit event sandbox_id:\"ce0ecee5d795dea95e2198976e23c574d3d0ef5aedacf957f2c5df9a59e9faf8\" exit_status:137 exited_at:{seconds:1751844609 nanos:193951376}" Jul 6 23:30:09.354603 kubelet[2667]: I0706 23:30:09.353747 2667 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/d20e950a-9322-4244-a16a-b65570f06454-bpf-maps\") pod \"d20e950a-9322-4244-a16a-b65570f06454\" (UID: \"d20e950a-9322-4244-a16a-b65570f06454\") " Jul 6 23:30:09.355424 kubelet[2667]: I0706 23:30:09.354984 2667 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/d20e950a-9322-4244-a16a-b65570f06454-host-proc-sys-net\") pod \"d20e950a-9322-4244-a16a-b65570f06454\" (UID: \"d20e950a-9322-4244-a16a-b65570f06454\") " Jul 6 23:30:09.355424 kubelet[2667]: I0706 23:30:09.355017 2667 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/d20e950a-9322-4244-a16a-b65570f06454-host-proc-sys-kernel\") pod \"d20e950a-9322-4244-a16a-b65570f06454\" (UID: \"d20e950a-9322-4244-a16a-b65570f06454\") " Jul 6 23:30:09.355424 kubelet[2667]: I0706 23:30:09.355039 2667 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/d20e950a-9322-4244-a16a-b65570f06454-clustermesh-secrets\") pod \"d20e950a-9322-4244-a16a-b65570f06454\" (UID: \"d20e950a-9322-4244-a16a-b65570f06454\") " Jul 6 23:30:09.355424 kubelet[2667]: I0706 23:30:09.355056 2667 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9ck6c\" (UniqueName: \"kubernetes.io/projected/d20e950a-9322-4244-a16a-b65570f06454-kube-api-access-9ck6c\") pod \"d20e950a-9322-4244-a16a-b65570f06454\" (UID: \"d20e950a-9322-4244-a16a-b65570f06454\") " Jul 6 23:30:09.355424 kubelet[2667]: I0706 23:30:09.355070 2667 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/d20e950a-9322-4244-a16a-b65570f06454-hostproc\") pod \"d20e950a-9322-4244-a16a-b65570f06454\" (UID: \"d20e950a-9322-4244-a16a-b65570f06454\") " Jul 6 23:30:09.355424 kubelet[2667]: I0706 23:30:09.355086 2667 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/d20e950a-9322-4244-a16a-b65570f06454-cni-path\") pod \"d20e950a-9322-4244-a16a-b65570f06454\" (UID: \"d20e950a-9322-4244-a16a-b65570f06454\") " Jul 6 23:30:09.356142 kubelet[2667]: I0706 23:30:09.355100 2667 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d20e950a-9322-4244-a16a-b65570f06454-etc-cni-netd\") pod \"d20e950a-9322-4244-a16a-b65570f06454\" (UID: \"d20e950a-9322-4244-a16a-b65570f06454\") " Jul 6 23:30:09.356142 kubelet[2667]: I0706 23:30:09.355199 2667 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-khh2q\" (UniqueName: \"kubernetes.io/projected/12a5dfa2-c7a8-49e7-a1dd-322a31a1246e-kube-api-access-khh2q\") pod 
\"12a5dfa2-c7a8-49e7-a1dd-322a31a1246e\" (UID: \"12a5dfa2-c7a8-49e7-a1dd-322a31a1246e\") " Jul 6 23:30:09.356142 kubelet[2667]: I0706 23:30:09.355224 2667 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/12a5dfa2-c7a8-49e7-a1dd-322a31a1246e-cilium-config-path\") pod \"12a5dfa2-c7a8-49e7-a1dd-322a31a1246e\" (UID: \"12a5dfa2-c7a8-49e7-a1dd-322a31a1246e\") " Jul 6 23:30:09.356142 kubelet[2667]: I0706 23:30:09.355245 2667 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d20e950a-9322-4244-a16a-b65570f06454-cilium-config-path\") pod \"d20e950a-9322-4244-a16a-b65570f06454\" (UID: \"d20e950a-9322-4244-a16a-b65570f06454\") " Jul 6 23:30:09.356142 kubelet[2667]: I0706 23:30:09.355259 2667 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/d20e950a-9322-4244-a16a-b65570f06454-cilium-run\") pod \"d20e950a-9322-4244-a16a-b65570f06454\" (UID: \"d20e950a-9322-4244-a16a-b65570f06454\") " Jul 6 23:30:09.356142 kubelet[2667]: I0706 23:30:09.355277 2667 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d20e950a-9322-4244-a16a-b65570f06454-xtables-lock\") pod \"d20e950a-9322-4244-a16a-b65570f06454\" (UID: \"d20e950a-9322-4244-a16a-b65570f06454\") " Jul 6 23:30:09.356285 kubelet[2667]: I0706 23:30:09.355294 2667 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/d20e950a-9322-4244-a16a-b65570f06454-hubble-tls\") pod \"d20e950a-9322-4244-a16a-b65570f06454\" (UID: \"d20e950a-9322-4244-a16a-b65570f06454\") " Jul 6 23:30:09.356285 kubelet[2667]: I0706 23:30:09.355313 2667 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d20e950a-9322-4244-a16a-b65570f06454-lib-modules\") pod \"d20e950a-9322-4244-a16a-b65570f06454\" (UID: \"d20e950a-9322-4244-a16a-b65570f06454\") " Jul 6 23:30:09.356285 kubelet[2667]: I0706 23:30:09.355328 2667 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/d20e950a-9322-4244-a16a-b65570f06454-cilium-cgroup\") pod \"d20e950a-9322-4244-a16a-b65570f06454\" (UID: \"d20e950a-9322-4244-a16a-b65570f06454\") " Jul 6 23:30:09.358648 kubelet[2667]: I0706 23:30:09.357957 2667 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d20e950a-9322-4244-a16a-b65570f06454-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "d20e950a-9322-4244-a16a-b65570f06454" (UID: "d20e950a-9322-4244-a16a-b65570f06454"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 6 23:30:09.358648 kubelet[2667]: I0706 23:30:09.357962 2667 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d20e950a-9322-4244-a16a-b65570f06454-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "d20e950a-9322-4244-a16a-b65570f06454" (UID: "d20e950a-9322-4244-a16a-b65570f06454"). InnerVolumeSpecName "bpf-maps". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 6 23:30:09.358648 kubelet[2667]: I0706 23:30:09.358038 2667 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d20e950a-9322-4244-a16a-b65570f06454-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "d20e950a-9322-4244-a16a-b65570f06454" (UID: "d20e950a-9322-4244-a16a-b65570f06454"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 6 23:30:09.358648 kubelet[2667]: I0706 23:30:09.358470 2667 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d20e950a-9322-4244-a16a-b65570f06454-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "d20e950a-9322-4244-a16a-b65570f06454" (UID: "d20e950a-9322-4244-a16a-b65570f06454"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 6 23:30:09.358648 kubelet[2667]: I0706 23:30:09.358478 2667 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d20e950a-9322-4244-a16a-b65570f06454-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "d20e950a-9322-4244-a16a-b65570f06454" (UID: "d20e950a-9322-4244-a16a-b65570f06454"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 6 23:30:09.358797 kubelet[2667]: I0706 23:30:09.358521 2667 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d20e950a-9322-4244-a16a-b65570f06454-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "d20e950a-9322-4244-a16a-b65570f06454" (UID: "d20e950a-9322-4244-a16a-b65570f06454"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 6 23:30:09.363302 kubelet[2667]: I0706 23:30:09.363260 2667 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/12a5dfa2-c7a8-49e7-a1dd-322a31a1246e-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "12a5dfa2-c7a8-49e7-a1dd-322a31a1246e" (UID: "12a5dfa2-c7a8-49e7-a1dd-322a31a1246e"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jul 6 23:30:09.364595 kubelet[2667]: I0706 23:30:09.364373 2667 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d20e950a-9322-4244-a16a-b65570f06454-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "d20e950a-9322-4244-a16a-b65570f06454" (UID: "d20e950a-9322-4244-a16a-b65570f06454"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jul 6 23:30:09.364900 kubelet[2667]: I0706 23:30:09.364867 2667 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d20e950a-9322-4244-a16a-b65570f06454-cni-path" (OuterVolumeSpecName: "cni-path") pod "d20e950a-9322-4244-a16a-b65570f06454" (UID: "d20e950a-9322-4244-a16a-b65570f06454"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 6 23:30:09.364946 kubelet[2667]: I0706 23:30:09.364905 2667 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d20e950a-9322-4244-a16a-b65570f06454-hostproc" (OuterVolumeSpecName: "hostproc") pod "d20e950a-9322-4244-a16a-b65570f06454" (UID: "d20e950a-9322-4244-a16a-b65570f06454"). InnerVolumeSpecName "hostproc". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 6 23:30:09.364946 kubelet[2667]: I0706 23:30:09.364924 2667 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d20e950a-9322-4244-a16a-b65570f06454-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "d20e950a-9322-4244-a16a-b65570f06454" (UID: "d20e950a-9322-4244-a16a-b65570f06454"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 6 23:30:09.364946 kubelet[2667]: I0706 23:30:09.364942 2667 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d20e950a-9322-4244-a16a-b65570f06454-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "d20e950a-9322-4244-a16a-b65570f06454" (UID: "d20e950a-9322-4244-a16a-b65570f06454"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 6 23:30:09.366673 kubelet[2667]: I0706 23:30:09.366600 2667 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d20e950a-9322-4244-a16a-b65570f06454-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "d20e950a-9322-4244-a16a-b65570f06454" (UID: "d20e950a-9322-4244-a16a-b65570f06454"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jul 6 23:30:09.367090 kubelet[2667]: I0706 23:30:09.367023 2667 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d20e950a-9322-4244-a16a-b65570f06454-kube-api-access-9ck6c" (OuterVolumeSpecName: "kube-api-access-9ck6c") pod "d20e950a-9322-4244-a16a-b65570f06454" (UID: "d20e950a-9322-4244-a16a-b65570f06454"). InnerVolumeSpecName "kube-api-access-9ck6c". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jul 6 23:30:09.367493 kubelet[2667]: I0706 23:30:09.367456 2667 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d20e950a-9322-4244-a16a-b65570f06454-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "d20e950a-9322-4244-a16a-b65570f06454" (UID: "d20e950a-9322-4244-a16a-b65570f06454"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jul 6 23:30:09.369000 kubelet[2667]: I0706 23:30:09.368961 2667 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/12a5dfa2-c7a8-49e7-a1dd-322a31a1246e-kube-api-access-khh2q" (OuterVolumeSpecName: "kube-api-access-khh2q") pod "12a5dfa2-c7a8-49e7-a1dd-322a31a1246e" (UID: "12a5dfa2-c7a8-49e7-a1dd-322a31a1246e"). InnerVolumeSpecName "kube-api-access-khh2q". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jul 6 23:30:09.456199 kubelet[2667]: I0706 23:30:09.456115 2667 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/d20e950a-9322-4244-a16a-b65570f06454-hubble-tls\") on node \"localhost\" DevicePath \"\"" Jul 6 23:30:09.456355 kubelet[2667]: I0706 23:30:09.456235 2667 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d20e950a-9322-4244-a16a-b65570f06454-lib-modules\") on node \"localhost\" DevicePath \"\"" Jul 6 23:30:09.456355 kubelet[2667]: I0706 23:30:09.456250 2667 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/d20e950a-9322-4244-a16a-b65570f06454-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Jul 6 23:30:09.456355 kubelet[2667]: I0706 23:30:09.456259 2667 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/d20e950a-9322-4244-a16a-b65570f06454-bpf-maps\") on node \"localhost\" DevicePath \"\"" Jul 6 23:30:09.456355 kubelet[2667]: I0706 23:30:09.456269 2667 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/d20e950a-9322-4244-a16a-b65570f06454-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Jul 6 23:30:09.456355 kubelet[2667]: I0706 23:30:09.456283 2667 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/d20e950a-9322-4244-a16a-b65570f06454-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Jul 6 23:30:09.456355 kubelet[2667]: I0706 23:30:09.456291 2667 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/d20e950a-9322-4244-a16a-b65570f06454-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Jul 6 23:30:09.456355 kubelet[2667]: I0706 23:30:09.456301 2667 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-9ck6c\" (UniqueName: \"kubernetes.io/projected/d20e950a-9322-4244-a16a-b65570f06454-kube-api-access-9ck6c\") on node \"localhost\" DevicePath \"\"" Jul 6 23:30:09.456355 kubelet[2667]: I0706 23:30:09.456320 2667 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/d20e950a-9322-4244-a16a-b65570f06454-hostproc\") on node \"localhost\" DevicePath \"\"" Jul 6 23:30:09.456520 kubelet[2667]: I0706 23:30:09.456328 2667 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/d20e950a-9322-4244-a16a-b65570f06454-cni-path\") on node \"localhost\" DevicePath \"\"" Jul 6 23:30:09.456520 kubelet[2667]: I0706 23:30:09.456336 2667 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d20e950a-9322-4244-a16a-b65570f06454-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Jul 6 23:30:09.456520 kubelet[2667]: I0706 23:30:09.456348 2667 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-khh2q\" (UniqueName: \"kubernetes.io/projected/12a5dfa2-c7a8-49e7-a1dd-322a31a1246e-kube-api-access-khh2q\") on node \"localhost\" DevicePath \"\"" Jul 6 23:30:09.456520 kubelet[2667]: I0706 23:30:09.456356 2667 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/12a5dfa2-c7a8-49e7-a1dd-322a31a1246e-cilium-config-path\") on node \"localhost\" 
DevicePath \"\"" Jul 6 23:30:09.456520 kubelet[2667]: I0706 23:30:09.456365 2667 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d20e950a-9322-4244-a16a-b65570f06454-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Jul 6 23:30:09.456520 kubelet[2667]: I0706 23:30:09.456373 2667 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/d20e950a-9322-4244-a16a-b65570f06454-cilium-run\") on node \"localhost\" DevicePath \"\"" Jul 6 23:30:09.456520 kubelet[2667]: I0706 23:30:09.456387 2667 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d20e950a-9322-4244-a16a-b65570f06454-xtables-lock\") on node \"localhost\" DevicePath \"\"" Jul 6 23:30:09.530880 kubelet[2667]: I0706 23:30:09.530826 2667 scope.go:117] "RemoveContainer" containerID="8dbe19fb273c2622f88eae2324594c5f4bcd7370138bb6eb6105de5ba28ff241" Jul 6 23:30:09.535155 containerd[1529]: time="2025-07-06T23:30:09.534710286Z" level=info msg="RemoveContainer for \"8dbe19fb273c2622f88eae2324594c5f4bcd7370138bb6eb6105de5ba28ff241\"" Jul 6 23:30:09.540164 systemd[1]: Removed slice kubepods-besteffort-pod12a5dfa2_c7a8_49e7_a1dd_322a31a1246e.slice - libcontainer container kubepods-besteffort-pod12a5dfa2_c7a8_49e7_a1dd_322a31a1246e.slice. Jul 6 23:30:09.545010 containerd[1529]: time="2025-07-06T23:30:09.544498929Z" level=info msg="RemoveContainer for \"8dbe19fb273c2622f88eae2324594c5f4bcd7370138bb6eb6105de5ba28ff241\" returns successfully" Jul 6 23:30:09.557106 kubelet[2667]: I0706 23:30:09.556454 2667 scope.go:117] "RemoveContainer" containerID="8dbe19fb273c2622f88eae2324594c5f4bcd7370138bb6eb6105de5ba28ff241" Jul 6 23:30:09.557659 containerd[1529]: time="2025-07-06T23:30:09.557599334Z" level=error msg="ContainerStatus for \"8dbe19fb273c2622f88eae2324594c5f4bcd7370138bb6eb6105de5ba28ff241\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"8dbe19fb273c2622f88eae2324594c5f4bcd7370138bb6eb6105de5ba28ff241\": not found" Jul 6 23:30:09.558862 systemd[1]: Removed slice kubepods-burstable-podd20e950a_9322_4244_a16a_b65570f06454.slice - libcontainer container kubepods-burstable-podd20e950a_9322_4244_a16a_b65570f06454.slice. Jul 6 23:30:09.558973 systemd[1]: kubepods-burstable-podd20e950a_9322_4244_a16a_b65570f06454.slice: Consumed 6.867s CPU time, 122.7M memory peak, 172K read from disk, 12.9M written to disk. 
Jul 6 23:30:09.565325 kubelet[2667]: E0706 23:30:09.565274 2667 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"8dbe19fb273c2622f88eae2324594c5f4bcd7370138bb6eb6105de5ba28ff241\": not found" containerID="8dbe19fb273c2622f88eae2324594c5f4bcd7370138bb6eb6105de5ba28ff241" Jul 6 23:30:09.565882 kubelet[2667]: I0706 23:30:09.565464 2667 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"8dbe19fb273c2622f88eae2324594c5f4bcd7370138bb6eb6105de5ba28ff241"} err="failed to get container status \"8dbe19fb273c2622f88eae2324594c5f4bcd7370138bb6eb6105de5ba28ff241\": rpc error: code = NotFound desc = an error occurred when try to find container \"8dbe19fb273c2622f88eae2324594c5f4bcd7370138bb6eb6105de5ba28ff241\": not found" Jul 6 23:30:09.565882 kubelet[2667]: I0706 23:30:09.565535 2667 scope.go:117] "RemoveContainer" containerID="314340126df67576202e5e2aae3ea712baafb998a376d66399c60c5b331b05e6" Jul 6 23:30:09.569301 containerd[1529]: time="2025-07-06T23:30:09.569254977Z" level=info msg="RemoveContainer for \"314340126df67576202e5e2aae3ea712baafb998a376d66399c60c5b331b05e6\"" Jul 6 23:30:09.579142 containerd[1529]: time="2025-07-06T23:30:09.579088301Z" level=info msg="RemoveContainer for \"314340126df67576202e5e2aae3ea712baafb998a376d66399c60c5b331b05e6\" returns successfully" Jul 6 23:30:09.579662 kubelet[2667]: I0706 23:30:09.579620 2667 scope.go:117] "RemoveContainer" containerID="271209b3628789e731acfb0c226c602e5a56a502b83623412421a026ffa81463" Jul 6 23:30:09.581552 containerd[1529]: time="2025-07-06T23:30:09.581517181Z" level=info msg="RemoveContainer for \"271209b3628789e731acfb0c226c602e5a56a502b83623412421a026ffa81463\"" Jul 6 23:30:09.585319 containerd[1529]: time="2025-07-06T23:30:09.585268663Z" level=info msg="RemoveContainer for \"271209b3628789e731acfb0c226c602e5a56a502b83623412421a026ffa81463\" returns successfully" Jul 6 23:30:09.585811 kubelet[2667]: I0706 23:30:09.585785 2667 scope.go:117] "RemoveContainer" containerID="e95a4aec3b6acef46a5191ecf5bd57f4433d8e42c7358b565c234c760c7e6915" Jul 6 23:30:09.588508 containerd[1529]: time="2025-07-06T23:30:09.588444184Z" level=info msg="RemoveContainer for \"e95a4aec3b6acef46a5191ecf5bd57f4433d8e42c7358b565c234c760c7e6915\"" Jul 6 23:30:09.609878 containerd[1529]: time="2025-07-06T23:30:09.609712390Z" level=info msg="RemoveContainer for \"e95a4aec3b6acef46a5191ecf5bd57f4433d8e42c7358b565c234c760c7e6915\" returns successfully" Jul 6 23:30:09.610958 kubelet[2667]: I0706 23:30:09.610833 2667 scope.go:117] "RemoveContainer" containerID="ccc42c4d89639f1a5a42da2c47ec77c1f9a8195ead3c1669f247635b2cd9e80c" Jul 6 23:30:09.613537 containerd[1529]: time="2025-07-06T23:30:09.613497032Z" level=info msg="RemoveContainer for \"ccc42c4d89639f1a5a42da2c47ec77c1f9a8195ead3c1669f247635b2cd9e80c\"" Jul 6 23:30:09.617255 containerd[1529]: time="2025-07-06T23:30:09.617210993Z" level=info msg="RemoveContainer for \"ccc42c4d89639f1a5a42da2c47ec77c1f9a8195ead3c1669f247635b2cd9e80c\" returns successfully" Jul 6 23:30:09.617505 kubelet[2667]: I0706 23:30:09.617481 2667 scope.go:117] "RemoveContainer" containerID="397376a21c5394265e5884842615c8eb79e0621942c168f570498ce55a0d4c24" Jul 6 23:30:09.620458 containerd[1529]: time="2025-07-06T23:30:09.620399434Z" level=info msg="RemoveContainer for \"397376a21c5394265e5884842615c8eb79e0621942c168f570498ce55a0d4c24\"" Jul 6 23:30:09.623422 containerd[1529]: 
time="2025-07-06T23:30:09.623377595Z" level=info msg="RemoveContainer for \"397376a21c5394265e5884842615c8eb79e0621942c168f570498ce55a0d4c24\" returns successfully" Jul 6 23:30:09.623734 kubelet[2667]: I0706 23:30:09.623599 2667 scope.go:117] "RemoveContainer" containerID="314340126df67576202e5e2aae3ea712baafb998a376d66399c60c5b331b05e6" Jul 6 23:30:09.623879 containerd[1529]: time="2025-07-06T23:30:09.623846635Z" level=error msg="ContainerStatus for \"314340126df67576202e5e2aae3ea712baafb998a376d66399c60c5b331b05e6\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"314340126df67576202e5e2aae3ea712baafb998a376d66399c60c5b331b05e6\": not found" Jul 6 23:30:09.624016 kubelet[2667]: E0706 23:30:09.623987 2667 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"314340126df67576202e5e2aae3ea712baafb998a376d66399c60c5b331b05e6\": not found" containerID="314340126df67576202e5e2aae3ea712baafb998a376d66399c60c5b331b05e6" Jul 6 23:30:09.624060 kubelet[2667]: I0706 23:30:09.624022 2667 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"314340126df67576202e5e2aae3ea712baafb998a376d66399c60c5b331b05e6"} err="failed to get container status \"314340126df67576202e5e2aae3ea712baafb998a376d66399c60c5b331b05e6\": rpc error: code = NotFound desc = an error occurred when try to find container \"314340126df67576202e5e2aae3ea712baafb998a376d66399c60c5b331b05e6\": not found" Jul 6 23:30:09.624060 kubelet[2667]: I0706 23:30:09.624045 2667 scope.go:117] "RemoveContainer" containerID="271209b3628789e731acfb0c226c602e5a56a502b83623412421a026ffa81463" Jul 6 23:30:09.624317 containerd[1529]: time="2025-07-06T23:30:09.624265955Z" level=error msg="ContainerStatus for \"271209b3628789e731acfb0c226c602e5a56a502b83623412421a026ffa81463\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"271209b3628789e731acfb0c226c602e5a56a502b83623412421a026ffa81463\": not found" Jul 6 23:30:09.624579 kubelet[2667]: E0706 23:30:09.624465 2667 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"271209b3628789e731acfb0c226c602e5a56a502b83623412421a026ffa81463\": not found" containerID="271209b3628789e731acfb0c226c602e5a56a502b83623412421a026ffa81463" Jul 6 23:30:09.624579 kubelet[2667]: I0706 23:30:09.624518 2667 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"271209b3628789e731acfb0c226c602e5a56a502b83623412421a026ffa81463"} err="failed to get container status \"271209b3628789e731acfb0c226c602e5a56a502b83623412421a026ffa81463\": rpc error: code = NotFound desc = an error occurred when try to find container \"271209b3628789e731acfb0c226c602e5a56a502b83623412421a026ffa81463\": not found" Jul 6 23:30:09.624579 kubelet[2667]: I0706 23:30:09.624536 2667 scope.go:117] "RemoveContainer" containerID="e95a4aec3b6acef46a5191ecf5bd57f4433d8e42c7358b565c234c760c7e6915" Jul 6 23:30:09.624828 containerd[1529]: time="2025-07-06T23:30:09.624782595Z" level=error msg="ContainerStatus for \"e95a4aec3b6acef46a5191ecf5bd57f4433d8e42c7358b565c234c760c7e6915\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"e95a4aec3b6acef46a5191ecf5bd57f4433d8e42c7358b565c234c760c7e6915\": not found" Jul 6 23:30:09.624965 kubelet[2667]: E0706 23:30:09.624943 
2667 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"e95a4aec3b6acef46a5191ecf5bd57f4433d8e42c7358b565c234c760c7e6915\": not found" containerID="e95a4aec3b6acef46a5191ecf5bd57f4433d8e42c7358b565c234c760c7e6915" Jul 6 23:30:09.625214 kubelet[2667]: I0706 23:30:09.625103 2667 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"e95a4aec3b6acef46a5191ecf5bd57f4433d8e42c7358b565c234c760c7e6915"} err="failed to get container status \"e95a4aec3b6acef46a5191ecf5bd57f4433d8e42c7358b565c234c760c7e6915\": rpc error: code = NotFound desc = an error occurred when try to find container \"e95a4aec3b6acef46a5191ecf5bd57f4433d8e42c7358b565c234c760c7e6915\": not found" Jul 6 23:30:09.625214 kubelet[2667]: I0706 23:30:09.625159 2667 scope.go:117] "RemoveContainer" containerID="ccc42c4d89639f1a5a42da2c47ec77c1f9a8195ead3c1669f247635b2cd9e80c" Jul 6 23:30:09.625493 containerd[1529]: time="2025-07-06T23:30:09.625436276Z" level=error msg="ContainerStatus for \"ccc42c4d89639f1a5a42da2c47ec77c1f9a8195ead3c1669f247635b2cd9e80c\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"ccc42c4d89639f1a5a42da2c47ec77c1f9a8195ead3c1669f247635b2cd9e80c\": not found" Jul 6 23:30:09.625619 kubelet[2667]: E0706 23:30:09.625593 2667 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"ccc42c4d89639f1a5a42da2c47ec77c1f9a8195ead3c1669f247635b2cd9e80c\": not found" containerID="ccc42c4d89639f1a5a42da2c47ec77c1f9a8195ead3c1669f247635b2cd9e80c" Jul 6 23:30:09.625675 kubelet[2667]: I0706 23:30:09.625626 2667 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"ccc42c4d89639f1a5a42da2c47ec77c1f9a8195ead3c1669f247635b2cd9e80c"} err="failed to get container status \"ccc42c4d89639f1a5a42da2c47ec77c1f9a8195ead3c1669f247635b2cd9e80c\": rpc error: code = NotFound desc = an error occurred when try to find container \"ccc42c4d89639f1a5a42da2c47ec77c1f9a8195ead3c1669f247635b2cd9e80c\": not found" Jul 6 23:30:09.625675 kubelet[2667]: I0706 23:30:09.625647 2667 scope.go:117] "RemoveContainer" containerID="397376a21c5394265e5884842615c8eb79e0621942c168f570498ce55a0d4c24" Jul 6 23:30:09.625850 containerd[1529]: time="2025-07-06T23:30:09.625828676Z" level=error msg="ContainerStatus for \"397376a21c5394265e5884842615c8eb79e0621942c168f570498ce55a0d4c24\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"397376a21c5394265e5884842615c8eb79e0621942c168f570498ce55a0d4c24\": not found" Jul 6 23:30:09.626044 kubelet[2667]: E0706 23:30:09.626019 2667 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"397376a21c5394265e5884842615c8eb79e0621942c168f570498ce55a0d4c24\": not found" containerID="397376a21c5394265e5884842615c8eb79e0621942c168f570498ce55a0d4c24" Jul 6 23:30:09.626152 kubelet[2667]: I0706 23:30:09.626109 2667 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"397376a21c5394265e5884842615c8eb79e0621942c168f570498ce55a0d4c24"} err="failed to get container status \"397376a21c5394265e5884842615c8eb79e0621942c168f570498ce55a0d4c24\": rpc error: code = NotFound desc = an error occurred when try to find container 
\"397376a21c5394265e5884842615c8eb79e0621942c168f570498ce55a0d4c24\": not found" Jul 6 23:30:10.139885 systemd[1]: var-lib-kubelet-pods-12a5dfa2\x2dc7a8\x2d49e7\x2da1dd\x2d322a31a1246e-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dkhh2q.mount: Deactivated successfully. Jul 6 23:30:10.139994 systemd[1]: var-lib-kubelet-pods-d20e950a\x2d9322\x2d4244\x2da16a\x2db65570f06454-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Jul 6 23:30:10.140044 systemd[1]: var-lib-kubelet-pods-d20e950a\x2d9322\x2d4244\x2da16a\x2db65570f06454-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d9ck6c.mount: Deactivated successfully. Jul 6 23:30:10.140091 systemd[1]: var-lib-kubelet-pods-d20e950a\x2d9322\x2d4244\x2da16a\x2db65570f06454-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Jul 6 23:30:10.238009 kubelet[2667]: I0706 23:30:10.237265 2667 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="12a5dfa2-c7a8-49e7-a1dd-322a31a1246e" path="/var/lib/kubelet/pods/12a5dfa2-c7a8-49e7-a1dd-322a31a1246e/volumes" Jul 6 23:30:10.238009 kubelet[2667]: I0706 23:30:10.237634 2667 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d20e950a-9322-4244-a16a-b65570f06454" path="/var/lib/kubelet/pods/d20e950a-9322-4244-a16a-b65570f06454/volumes" Jul 6 23:30:11.026810 sshd[4274]: Connection closed by 10.0.0.1 port 47166 Jul 6 23:30:11.027415 sshd-session[4272]: pam_unix(sshd:session): session closed for user core Jul 6 23:30:11.040248 systemd[1]: sshd@22-10.0.0.47:22-10.0.0.1:47166.service: Deactivated successfully. Jul 6 23:30:11.042525 systemd[1]: session-23.scope: Deactivated successfully. Jul 6 23:30:11.043932 systemd-logind[1514]: Session 23 logged out. Waiting for processes to exit. Jul 6 23:30:11.046769 systemd[1]: Started sshd@23-10.0.0.47:22-10.0.0.1:47174.service - OpenSSH per-connection server daemon (10.0.0.1:47174). Jul 6 23:30:11.048276 systemd-logind[1514]: Removed session 23. Jul 6 23:30:11.103989 sshd[4420]: Accepted publickey for core from 10.0.0.1 port 47174 ssh2: RSA SHA256:CSJlI8/o3cgAW3JnP3N/e8VY57OeBgyk25K3mGio6wo Jul 6 23:30:11.105482 sshd-session[4420]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:30:11.113753 systemd-logind[1514]: New session 24 of user core. Jul 6 23:30:11.127743 systemd[1]: Started session-24.scope - Session 24 of User core. Jul 6 23:30:12.294722 kubelet[2667]: E0706 23:30:12.294671 2667 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jul 6 23:30:12.379163 sshd[4422]: Connection closed by 10.0.0.1 port 47174 Jul 6 23:30:12.380243 sshd-session[4420]: pam_unix(sshd:session): session closed for user core Jul 6 23:30:12.394544 systemd[1]: sshd@23-10.0.0.47:22-10.0.0.1:47174.service: Deactivated successfully. Jul 6 23:30:12.398873 systemd[1]: session-24.scope: Deactivated successfully. Jul 6 23:30:12.399837 systemd[1]: session-24.scope: Consumed 1.118s CPU time, 24.7M memory peak. Jul 6 23:30:12.402339 systemd-logind[1514]: Session 24 logged out. Waiting for processes to exit. Jul 6 23:30:12.407848 systemd[1]: Started sshd@24-10.0.0.47:22-10.0.0.1:47180.service - OpenSSH per-connection server daemon (10.0.0.1:47180). Jul 6 23:30:12.411757 systemd-logind[1514]: Removed session 24. 
Jul 6 23:30:12.423287 systemd[1]: Created slice kubepods-burstable-pod62afafd3_2724_4535_91e6_ef9e8a04ee04.slice - libcontainer container kubepods-burstable-pod62afafd3_2724_4535_91e6_ef9e8a04ee04.slice. Jul 6 23:30:12.472163 kubelet[2667]: I0706 23:30:12.472037 2667 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/62afafd3-2724-4535-91e6-ef9e8a04ee04-host-proc-sys-net\") pod \"cilium-p8t8l\" (UID: \"62afafd3-2724-4535-91e6-ef9e8a04ee04\") " pod="kube-system/cilium-p8t8l" Jul 6 23:30:12.472163 kubelet[2667]: I0706 23:30:12.472081 2667 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/62afafd3-2724-4535-91e6-ef9e8a04ee04-lib-modules\") pod \"cilium-p8t8l\" (UID: \"62afafd3-2724-4535-91e6-ef9e8a04ee04\") " pod="kube-system/cilium-p8t8l" Jul 6 23:30:12.472163 kubelet[2667]: I0706 23:30:12.472101 2667 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/62afafd3-2724-4535-91e6-ef9e8a04ee04-cilium-ipsec-secrets\") pod \"cilium-p8t8l\" (UID: \"62afafd3-2724-4535-91e6-ef9e8a04ee04\") " pod="kube-system/cilium-p8t8l" Jul 6 23:30:12.472163 kubelet[2667]: I0706 23:30:12.472116 2667 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4m2ph\" (UniqueName: \"kubernetes.io/projected/62afafd3-2724-4535-91e6-ef9e8a04ee04-kube-api-access-4m2ph\") pod \"cilium-p8t8l\" (UID: \"62afafd3-2724-4535-91e6-ef9e8a04ee04\") " pod="kube-system/cilium-p8t8l" Jul 6 23:30:12.472163 kubelet[2667]: I0706 23:30:12.472156 2667 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/62afafd3-2724-4535-91e6-ef9e8a04ee04-host-proc-sys-kernel\") pod \"cilium-p8t8l\" (UID: \"62afafd3-2724-4535-91e6-ef9e8a04ee04\") " pod="kube-system/cilium-p8t8l" Jul 6 23:30:12.472389 kubelet[2667]: I0706 23:30:12.472180 2667 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/62afafd3-2724-4535-91e6-ef9e8a04ee04-hubble-tls\") pod \"cilium-p8t8l\" (UID: \"62afafd3-2724-4535-91e6-ef9e8a04ee04\") " pod="kube-system/cilium-p8t8l" Jul 6 23:30:12.472389 kubelet[2667]: I0706 23:30:12.472201 2667 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/62afafd3-2724-4535-91e6-ef9e8a04ee04-cni-path\") pod \"cilium-p8t8l\" (UID: \"62afafd3-2724-4535-91e6-ef9e8a04ee04\") " pod="kube-system/cilium-p8t8l" Jul 6 23:30:12.472389 kubelet[2667]: I0706 23:30:12.472216 2667 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/62afafd3-2724-4535-91e6-ef9e8a04ee04-cilium-cgroup\") pod \"cilium-p8t8l\" (UID: \"62afafd3-2724-4535-91e6-ef9e8a04ee04\") " pod="kube-system/cilium-p8t8l" Jul 6 23:30:12.472389 kubelet[2667]: I0706 23:30:12.472231 2667 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/62afafd3-2724-4535-91e6-ef9e8a04ee04-xtables-lock\") pod \"cilium-p8t8l\" (UID: \"62afafd3-2724-4535-91e6-ef9e8a04ee04\") " 
pod="kube-system/cilium-p8t8l" Jul 6 23:30:12.472389 kubelet[2667]: I0706 23:30:12.472248 2667 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/62afafd3-2724-4535-91e6-ef9e8a04ee04-cilium-run\") pod \"cilium-p8t8l\" (UID: \"62afafd3-2724-4535-91e6-ef9e8a04ee04\") " pod="kube-system/cilium-p8t8l" Jul 6 23:30:12.472389 kubelet[2667]: I0706 23:30:12.472261 2667 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/62afafd3-2724-4535-91e6-ef9e8a04ee04-clustermesh-secrets\") pod \"cilium-p8t8l\" (UID: \"62afafd3-2724-4535-91e6-ef9e8a04ee04\") " pod="kube-system/cilium-p8t8l" Jul 6 23:30:12.472551 kubelet[2667]: I0706 23:30:12.472277 2667 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/62afafd3-2724-4535-91e6-ef9e8a04ee04-hostproc\") pod \"cilium-p8t8l\" (UID: \"62afafd3-2724-4535-91e6-ef9e8a04ee04\") " pod="kube-system/cilium-p8t8l" Jul 6 23:30:12.472551 kubelet[2667]: I0706 23:30:12.472291 2667 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/62afafd3-2724-4535-91e6-ef9e8a04ee04-cilium-config-path\") pod \"cilium-p8t8l\" (UID: \"62afafd3-2724-4535-91e6-ef9e8a04ee04\") " pod="kube-system/cilium-p8t8l" Jul 6 23:30:12.472551 kubelet[2667]: I0706 23:30:12.472309 2667 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/62afafd3-2724-4535-91e6-ef9e8a04ee04-bpf-maps\") pod \"cilium-p8t8l\" (UID: \"62afafd3-2724-4535-91e6-ef9e8a04ee04\") " pod="kube-system/cilium-p8t8l" Jul 6 23:30:12.472551 kubelet[2667]: I0706 23:30:12.472323 2667 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/62afafd3-2724-4535-91e6-ef9e8a04ee04-etc-cni-netd\") pod \"cilium-p8t8l\" (UID: \"62afafd3-2724-4535-91e6-ef9e8a04ee04\") " pod="kube-system/cilium-p8t8l" Jul 6 23:30:12.478508 sshd[4434]: Accepted publickey for core from 10.0.0.1 port 47180 ssh2: RSA SHA256:CSJlI8/o3cgAW3JnP3N/e8VY57OeBgyk25K3mGio6wo Jul 6 23:30:12.480110 sshd-session[4434]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:30:12.485686 systemd-logind[1514]: New session 25 of user core. Jul 6 23:30:12.494356 systemd[1]: Started session-25.scope - Session 25 of User core. Jul 6 23:30:12.544158 sshd[4436]: Connection closed by 10.0.0.1 port 47180 Jul 6 23:30:12.544307 sshd-session[4434]: pam_unix(sshd:session): session closed for user core Jul 6 23:30:12.554681 systemd[1]: sshd@24-10.0.0.47:22-10.0.0.1:47180.service: Deactivated successfully. Jul 6 23:30:12.556738 systemd[1]: session-25.scope: Deactivated successfully. Jul 6 23:30:12.558076 systemd-logind[1514]: Session 25 logged out. Waiting for processes to exit. Jul 6 23:30:12.561534 systemd[1]: Started sshd@25-10.0.0.47:22-10.0.0.1:45684.service - OpenSSH per-connection server daemon (10.0.0.1:45684). Jul 6 23:30:12.562324 systemd-logind[1514]: Removed session 25. 
Jul 6 23:30:12.616026 sshd[4443]: Accepted publickey for core from 10.0.0.1 port 45684 ssh2: RSA SHA256:CSJlI8/o3cgAW3JnP3N/e8VY57OeBgyk25K3mGio6wo Jul 6 23:30:12.617360 sshd-session[4443]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:30:12.623237 systemd-logind[1514]: New session 26 of user core. Jul 6 23:30:12.632838 systemd[1]: Started session-26.scope - Session 26 of User core. Jul 6 23:30:12.727533 containerd[1529]: time="2025-07-06T23:30:12.727424825Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-p8t8l,Uid:62afafd3-2724-4535-91e6-ef9e8a04ee04,Namespace:kube-system,Attempt:0,}" Jul 6 23:30:12.749739 containerd[1529]: time="2025-07-06T23:30:12.749693866Z" level=info msg="connecting to shim 506d39739ebf4698d0d1589f1f576b04b799cdbe682d0f347081d3ee1ef0b9d8" address="unix:///run/containerd/s/1f738692ad1b06d34aec53e725f8f73bf6294bc9be97867193d605895228cf90" namespace=k8s.io protocol=ttrpc version=3 Jul 6 23:30:12.770325 systemd[1]: Started cri-containerd-506d39739ebf4698d0d1589f1f576b04b799cdbe682d0f347081d3ee1ef0b9d8.scope - libcontainer container 506d39739ebf4698d0d1589f1f576b04b799cdbe682d0f347081d3ee1ef0b9d8. Jul 6 23:30:12.799234 containerd[1529]: time="2025-07-06T23:30:12.799194076Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-p8t8l,Uid:62afafd3-2724-4535-91e6-ef9e8a04ee04,Namespace:kube-system,Attempt:0,} returns sandbox id \"506d39739ebf4698d0d1589f1f576b04b799cdbe682d0f347081d3ee1ef0b9d8\"" Jul 6 23:30:12.804348 containerd[1529]: time="2025-07-06T23:30:12.804306805Z" level=info msg="CreateContainer within sandbox \"506d39739ebf4698d0d1589f1f576b04b799cdbe682d0f347081d3ee1ef0b9d8\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jul 6 23:30:12.810248 containerd[1529]: time="2025-07-06T23:30:12.810098776Z" level=info msg="Container b7da75c1d0c88881362e8165453cc35f7d0d00d9702e725434b9d3b7018f3043: CDI devices from CRI Config.CDIDevices: []" Jul 6 23:30:12.815321 containerd[1529]: time="2025-07-06T23:30:12.815277785Z" level=info msg="CreateContainer within sandbox \"506d39739ebf4698d0d1589f1f576b04b799cdbe682d0f347081d3ee1ef0b9d8\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"b7da75c1d0c88881362e8165453cc35f7d0d00d9702e725434b9d3b7018f3043\"" Jul 6 23:30:12.816845 containerd[1529]: time="2025-07-06T23:30:12.816803228Z" level=info msg="StartContainer for \"b7da75c1d0c88881362e8165453cc35f7d0d00d9702e725434b9d3b7018f3043\"" Jul 6 23:30:12.817750 containerd[1529]: time="2025-07-06T23:30:12.817723830Z" level=info msg="connecting to shim b7da75c1d0c88881362e8165453cc35f7d0d00d9702e725434b9d3b7018f3043" address="unix:///run/containerd/s/1f738692ad1b06d34aec53e725f8f73bf6294bc9be97867193d605895228cf90" protocol=ttrpc version=3 Jul 6 23:30:12.844354 systemd[1]: Started cri-containerd-b7da75c1d0c88881362e8165453cc35f7d0d00d9702e725434b9d3b7018f3043.scope - libcontainer container b7da75c1d0c88881362e8165453cc35f7d0d00d9702e725434b9d3b7018f3043. Jul 6 23:30:12.872741 containerd[1529]: time="2025-07-06T23:30:12.872696530Z" level=info msg="StartContainer for \"b7da75c1d0c88881362e8165453cc35f7d0d00d9702e725434b9d3b7018f3043\" returns successfully" Jul 6 23:30:12.889467 systemd[1]: cri-containerd-b7da75c1d0c88881362e8165453cc35f7d0d00d9702e725434b9d3b7018f3043.scope: Deactivated successfully. 
Jul 6 23:30:12.891978 containerd[1529]: time="2025-07-06T23:30:12.891946205Z" level=info msg="received exit event container_id:\"b7da75c1d0c88881362e8165453cc35f7d0d00d9702e725434b9d3b7018f3043\" id:\"b7da75c1d0c88881362e8165453cc35f7d0d00d9702e725434b9d3b7018f3043\" pid:4514 exited_at:{seconds:1751844612 nanos:891635645}" Jul 6 23:30:12.892179 containerd[1529]: time="2025-07-06T23:30:12.892020406Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b7da75c1d0c88881362e8165453cc35f7d0d00d9702e725434b9d3b7018f3043\" id:\"b7da75c1d0c88881362e8165453cc35f7d0d00d9702e725434b9d3b7018f3043\" pid:4514 exited_at:{seconds:1751844612 nanos:891635645}" Jul 6 23:30:13.564944 containerd[1529]: time="2025-07-06T23:30:13.564841500Z" level=info msg="CreateContainer within sandbox \"506d39739ebf4698d0d1589f1f576b04b799cdbe682d0f347081d3ee1ef0b9d8\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jul 6 23:30:13.586009 containerd[1529]: time="2025-07-06T23:30:13.585856788Z" level=info msg="Container 0005c196eaca330d329116b2954527920724cd3628773cd456a67de984e2d607: CDI devices from CRI Config.CDIDevices: []" Jul 6 23:30:13.588868 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1638132567.mount: Deactivated successfully. Jul 6 23:30:13.601665 containerd[1529]: time="2025-07-06T23:30:13.601528424Z" level=info msg="CreateContainer within sandbox \"506d39739ebf4698d0d1589f1f576b04b799cdbe682d0f347081d3ee1ef0b9d8\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"0005c196eaca330d329116b2954527920724cd3628773cd456a67de984e2d607\"" Jul 6 23:30:13.602033 containerd[1529]: time="2025-07-06T23:30:13.601980705Z" level=info msg="StartContainer for \"0005c196eaca330d329116b2954527920724cd3628773cd456a67de984e2d607\"" Jul 6 23:30:13.603013 containerd[1529]: time="2025-07-06T23:30:13.602809387Z" level=info msg="connecting to shim 0005c196eaca330d329116b2954527920724cd3628773cd456a67de984e2d607" address="unix:///run/containerd/s/1f738692ad1b06d34aec53e725f8f73bf6294bc9be97867193d605895228cf90" protocol=ttrpc version=3 Jul 6 23:30:13.628324 systemd[1]: Started cri-containerd-0005c196eaca330d329116b2954527920724cd3628773cd456a67de984e2d607.scope - libcontainer container 0005c196eaca330d329116b2954527920724cd3628773cd456a67de984e2d607. Jul 6 23:30:13.657868 containerd[1529]: time="2025-07-06T23:30:13.657824073Z" level=info msg="StartContainer for \"0005c196eaca330d329116b2954527920724cd3628773cd456a67de984e2d607\" returns successfully" Jul 6 23:30:13.669716 systemd[1]: cri-containerd-0005c196eaca330d329116b2954527920724cd3628773cd456a67de984e2d607.scope: Deactivated successfully. Jul 6 23:30:13.671250 containerd[1529]: time="2025-07-06T23:30:13.671214024Z" level=info msg="TaskExit event in podsandbox handler container_id:\"0005c196eaca330d329116b2954527920724cd3628773cd456a67de984e2d607\" id:\"0005c196eaca330d329116b2954527920724cd3628773cd456a67de984e2d607\" pid:4560 exited_at:{seconds:1751844613 nanos:670812143}" Jul 6 23:30:13.671338 containerd[1529]: time="2025-07-06T23:30:13.671237384Z" level=info msg="received exit event container_id:\"0005c196eaca330d329116b2954527920724cd3628773cd456a67de984e2d607\" id:\"0005c196eaca330d329116b2954527920724cd3628773cd456a67de984e2d607\" pid:4560 exited_at:{seconds:1751844613 nanos:670812143}" Jul 6 23:30:13.695819 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0005c196eaca330d329116b2954527920724cd3628773cd456a67de984e2d607-rootfs.mount: Deactivated successfully. 
Jul 6 23:30:13.718930 kubelet[2667]: I0706 23:30:13.718860 2667 setters.go:618] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-07-06T23:30:13Z","lastTransitionTime":"2025-07-06T23:30:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Jul 6 23:30:14.563001 containerd[1529]: time="2025-07-06T23:30:14.562877887Z" level=info msg="CreateContainer within sandbox \"506d39739ebf4698d0d1589f1f576b04b799cdbe682d0f347081d3ee1ef0b9d8\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jul 6 23:30:14.571280 containerd[1529]: time="2025-07-06T23:30:14.571231190Z" level=info msg="Container 2e56f9271ddcef1121f3b966d1b9e00512372e036404d47ce4ef034bed4c7ba4: CDI devices from CRI Config.CDIDevices: []" Jul 6 23:30:14.581645 containerd[1529]: time="2025-07-06T23:30:14.581579579Z" level=info msg="CreateContainer within sandbox \"506d39739ebf4698d0d1589f1f576b04b799cdbe682d0f347081d3ee1ef0b9d8\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"2e56f9271ddcef1121f3b966d1b9e00512372e036404d47ce4ef034bed4c7ba4\"" Jul 6 23:30:14.582629 containerd[1529]: time="2025-07-06T23:30:14.582475381Z" level=info msg="StartContainer for \"2e56f9271ddcef1121f3b966d1b9e00512372e036404d47ce4ef034bed4c7ba4\"" Jul 6 23:30:14.584403 containerd[1529]: time="2025-07-06T23:30:14.584376146Z" level=info msg="connecting to shim 2e56f9271ddcef1121f3b966d1b9e00512372e036404d47ce4ef034bed4c7ba4" address="unix:///run/containerd/s/1f738692ad1b06d34aec53e725f8f73bf6294bc9be97867193d605895228cf90" protocol=ttrpc version=3 Jul 6 23:30:14.612380 systemd[1]: Started cri-containerd-2e56f9271ddcef1121f3b966d1b9e00512372e036404d47ce4ef034bed4c7ba4.scope - libcontainer container 2e56f9271ddcef1121f3b966d1b9e00512372e036404d47ce4ef034bed4c7ba4. Jul 6 23:30:14.653260 systemd[1]: cri-containerd-2e56f9271ddcef1121f3b966d1b9e00512372e036404d47ce4ef034bed4c7ba4.scope: Deactivated successfully. Jul 6 23:30:14.654603 containerd[1529]: time="2025-07-06T23:30:14.654504259Z" level=info msg="received exit event container_id:\"2e56f9271ddcef1121f3b966d1b9e00512372e036404d47ce4ef034bed4c7ba4\" id:\"2e56f9271ddcef1121f3b966d1b9e00512372e036404d47ce4ef034bed4c7ba4\" pid:4605 exited_at:{seconds:1751844614 nanos:654070418}" Jul 6 23:30:14.654889 containerd[1529]: time="2025-07-06T23:30:14.654837340Z" level=info msg="TaskExit event in podsandbox handler container_id:\"2e56f9271ddcef1121f3b966d1b9e00512372e036404d47ce4ef034bed4c7ba4\" id:\"2e56f9271ddcef1121f3b966d1b9e00512372e036404d47ce4ef034bed4c7ba4\" pid:4605 exited_at:{seconds:1751844614 nanos:654070418}" Jul 6 23:30:14.663635 containerd[1529]: time="2025-07-06T23:30:14.663600204Z" level=info msg="StartContainer for \"2e56f9271ddcef1121f3b966d1b9e00512372e036404d47ce4ef034bed4c7ba4\" returns successfully" Jul 6 23:30:14.676844 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2e56f9271ddcef1121f3b966d1b9e00512372e036404d47ce4ef034bed4c7ba4-rootfs.mount: Deactivated successfully. 
Jul 6 23:30:15.573316 containerd[1529]: time="2025-07-06T23:30:15.573275640Z" level=info msg="CreateContainer within sandbox \"506d39739ebf4698d0d1589f1f576b04b799cdbe682d0f347081d3ee1ef0b9d8\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jul 6 23:30:15.591780 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2201352699.mount: Deactivated successfully. Jul 6 23:30:15.592851 containerd[1529]: time="2025-07-06T23:30:15.592807622Z" level=info msg="Container d6e65abaf49903056ff92f8c050f4406022b2a7a3757430c816062f2b14a9141: CDI devices from CRI Config.CDIDevices: []" Jul 6 23:30:15.613436 containerd[1529]: time="2025-07-06T23:30:15.613313367Z" level=info msg="CreateContainer within sandbox \"506d39739ebf4698d0d1589f1f576b04b799cdbe682d0f347081d3ee1ef0b9d8\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"d6e65abaf49903056ff92f8c050f4406022b2a7a3757430c816062f2b14a9141\"" Jul 6 23:30:15.614113 containerd[1529]: time="2025-07-06T23:30:15.614070570Z" level=info msg="StartContainer for \"d6e65abaf49903056ff92f8c050f4406022b2a7a3757430c816062f2b14a9141\"" Jul 6 23:30:15.615656 containerd[1529]: time="2025-07-06T23:30:15.615527735Z" level=info msg="connecting to shim d6e65abaf49903056ff92f8c050f4406022b2a7a3757430c816062f2b14a9141" address="unix:///run/containerd/s/1f738692ad1b06d34aec53e725f8f73bf6294bc9be97867193d605895228cf90" protocol=ttrpc version=3 Jul 6 23:30:15.652357 systemd[1]: Started cri-containerd-d6e65abaf49903056ff92f8c050f4406022b2a7a3757430c816062f2b14a9141.scope - libcontainer container d6e65abaf49903056ff92f8c050f4406022b2a7a3757430c816062f2b14a9141. Jul 6 23:30:15.677022 systemd[1]: cri-containerd-d6e65abaf49903056ff92f8c050f4406022b2a7a3757430c816062f2b14a9141.scope: Deactivated successfully. Jul 6 23:30:15.679415 containerd[1529]: time="2025-07-06T23:30:15.679371178Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d6e65abaf49903056ff92f8c050f4406022b2a7a3757430c816062f2b14a9141\" id:\"d6e65abaf49903056ff92f8c050f4406022b2a7a3757430c816062f2b14a9141\" pid:4647 exited_at:{seconds:1751844615 nanos:677824893}" Jul 6 23:30:15.680292 containerd[1529]: time="2025-07-06T23:30:15.680256261Z" level=info msg="received exit event container_id:\"d6e65abaf49903056ff92f8c050f4406022b2a7a3757430c816062f2b14a9141\" id:\"d6e65abaf49903056ff92f8c050f4406022b2a7a3757430c816062f2b14a9141\" pid:4647 exited_at:{seconds:1751844615 nanos:677824893}" Jul 6 23:30:15.688835 containerd[1529]: time="2025-07-06T23:30:15.688006686Z" level=info msg="StartContainer for \"d6e65abaf49903056ff92f8c050f4406022b2a7a3757430c816062f2b14a9141\" returns successfully" Jul 6 23:30:15.700495 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d6e65abaf49903056ff92f8c050f4406022b2a7a3757430c816062f2b14a9141-rootfs.mount: Deactivated successfully. 
Jul 6 23:30:16.579099 containerd[1529]: time="2025-07-06T23:30:16.579052138Z" level=info msg="CreateContainer within sandbox \"506d39739ebf4698d0d1589f1f576b04b799cdbe682d0f347081d3ee1ef0b9d8\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jul 6 23:30:16.593696 containerd[1529]: time="2025-07-06T23:30:16.592765828Z" level=info msg="Container 6fc2a3e1e7f532202944a2bce15e8ecd965de44a7e9b023111349c673c7bdfe8: CDI devices from CRI Config.CDIDevices: []" Jul 6 23:30:16.607014 containerd[1529]: time="2025-07-06T23:30:16.606960919Z" level=info msg="CreateContainer within sandbox \"506d39739ebf4698d0d1589f1f576b04b799cdbe682d0f347081d3ee1ef0b9d8\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"6fc2a3e1e7f532202944a2bce15e8ecd965de44a7e9b023111349c673c7bdfe8\"" Jul 6 23:30:16.607655 containerd[1529]: time="2025-07-06T23:30:16.607543721Z" level=info msg="StartContainer for \"6fc2a3e1e7f532202944a2bce15e8ecd965de44a7e9b023111349c673c7bdfe8\"" Jul 6 23:30:16.608638 containerd[1529]: time="2025-07-06T23:30:16.608611845Z" level=info msg="connecting to shim 6fc2a3e1e7f532202944a2bce15e8ecd965de44a7e9b023111349c673c7bdfe8" address="unix:///run/containerd/s/1f738692ad1b06d34aec53e725f8f73bf6294bc9be97867193d605895228cf90" protocol=ttrpc version=3 Jul 6 23:30:16.631425 systemd[1]: Started cri-containerd-6fc2a3e1e7f532202944a2bce15e8ecd965de44a7e9b023111349c673c7bdfe8.scope - libcontainer container 6fc2a3e1e7f532202944a2bce15e8ecd965de44a7e9b023111349c673c7bdfe8. Jul 6 23:30:16.663868 containerd[1529]: time="2025-07-06T23:30:16.663823125Z" level=info msg="StartContainer for \"6fc2a3e1e7f532202944a2bce15e8ecd965de44a7e9b023111349c673c7bdfe8\" returns successfully" Jul 6 23:30:16.722951 containerd[1529]: time="2025-07-06T23:30:16.722896499Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6fc2a3e1e7f532202944a2bce15e8ecd965de44a7e9b023111349c673c7bdfe8\" id:\"f91fb17379433c5d090d2ed818eb59a0cc487c606e15ee3cab203e25fa0edd7d\" pid:4714 exited_at:{seconds:1751844616 nanos:722487617}" Jul 6 23:30:16.961144 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce)) Jul 6 23:30:17.600489 kubelet[2667]: I0706 23:30:17.600416 2667 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-p8t8l" podStartSLOduration=5.600383963 podStartE2EDuration="5.600383963s" podCreationTimestamp="2025-07-06 23:30:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-06 23:30:17.600115602 +0000 UTC m=+85.475763656" watchObservedRunningTime="2025-07-06 23:30:17.600383963 +0000 UTC m=+85.476032057" Jul 6 23:30:19.021713 containerd[1529]: time="2025-07-06T23:30:19.021478353Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6fc2a3e1e7f532202944a2bce15e8ecd965de44a7e9b023111349c673c7bdfe8\" id:\"4d4ecb5582033b8e540ec6b5a1790467f03f66d33d939c231abf73d771ea2e59\" pid:4919 exit_status:1 exited_at:{seconds:1751844619 nanos:20833710}" Jul 6 23:30:19.939193 systemd-networkd[1434]: lxc_health: Link UP Jul 6 23:30:19.947503 systemd-networkd[1434]: lxc_health: Gained carrier Jul 6 23:30:21.210624 containerd[1529]: time="2025-07-06T23:30:21.210580524Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6fc2a3e1e7f532202944a2bce15e8ecd965de44a7e9b023111349c673c7bdfe8\" id:\"94c2e508ef79021b17f1098d42b6f4d9b4f0ccda9036818f94f53923dc2b74e7\" pid:5252 exited_at:{seconds:1751844621 nanos:210030681}" Jul 6 
23:30:22.000354 systemd-networkd[1434]: lxc_health: Gained IPv6LL Jul 6 23:30:23.334866 containerd[1529]: time="2025-07-06T23:30:23.334411526Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6fc2a3e1e7f532202944a2bce15e8ecd965de44a7e9b023111349c673c7bdfe8\" id:\"df073cfb561e056fe64fea98aa74215b1463164f661fd1d9008099ab5a0b7b4a\" pid:5281 exited_at:{seconds:1751844623 nanos:334078404}" Jul 6 23:30:25.440062 containerd[1529]: time="2025-07-06T23:30:25.440003804Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6fc2a3e1e7f532202944a2bce15e8ecd965de44a7e9b023111349c673c7bdfe8\" id:\"85e19848b6416e8b05c8b909cb1157d24601fed194b1fa5436ec9f77d18bbeb0\" pid:5311 exited_at:{seconds:1751844625 nanos:439344759}" Jul 6 23:30:25.446143 sshd[4449]: Connection closed by 10.0.0.1 port 45684 Jul 6 23:30:25.446996 sshd-session[4443]: pam_unix(sshd:session): session closed for user core Jul 6 23:30:25.450566 systemd[1]: sshd@25-10.0.0.47:22-10.0.0.1:45684.service: Deactivated successfully. Jul 6 23:30:25.452607 systemd[1]: session-26.scope: Deactivated successfully. Jul 6 23:30:25.453387 systemd-logind[1514]: Session 26 logged out. Waiting for processes to exit. Jul 6 23:30:25.454633 systemd-logind[1514]: Removed session 26.