Mar 20 18:04:56.928878 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Mar 20 18:04:56.928901 kernel: Linux version 6.6.83-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT Thu Mar 20 13:18:46 -00 2025
Mar 20 18:04:56.928910 kernel: KASLR enabled
Mar 20 18:04:56.928916 kernel: efi: EFI v2.7 by EDK II
Mar 20 18:04:56.928923 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdbbae018 ACPI 2.0=0xd9b43018 RNG=0xd9b43a18 MEMRESERVE=0xd9b40218
Mar 20 18:04:56.928928 kernel: random: crng init done
Mar 20 18:04:56.928935 kernel: secureboot: Secure boot disabled
Mar 20 18:04:56.928942 kernel: ACPI: Early table checksum verification disabled
Mar 20 18:04:56.928948 kernel: ACPI: RSDP 0x00000000D9B43018 000024 (v02 BOCHS )
Mar 20 18:04:56.928956 kernel: ACPI: XSDT 0x00000000D9B43F18 000064 (v01 BOCHS BXPC 00000001 01000013)
Mar 20 18:04:56.928962 kernel: ACPI: FACP 0x00000000D9B43B18 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Mar 20 18:04:56.928968 kernel: ACPI: DSDT 0x00000000D9B41018 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Mar 20 18:04:56.928974 kernel: ACPI: APIC 0x00000000D9B43C98 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Mar 20 18:04:56.928981 kernel: ACPI: PPTT 0x00000000D9B43098 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
Mar 20 18:04:56.928988 kernel: ACPI: GTDT 0x00000000D9B43818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Mar 20 18:04:56.928996 kernel: ACPI: MCFG 0x00000000D9B43A98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 20 18:04:56.929002 kernel: ACPI: SPCR 0x00000000D9B43918 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Mar 20 18:04:56.929009 kernel: ACPI: DBG2 0x00000000D9B43998 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Mar 20 18:04:56.929015 kernel: ACPI: IORT 0x00000000D9B43198 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Mar 20 18:04:56.929022 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
Mar 20 18:04:56.929028 kernel: NUMA: Failed to initialise from firmware
Mar 20 18:04:56.929035 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
Mar 20 18:04:56.929041 kernel: NUMA: NODE_DATA [mem 0xdc958800-0xdc95dfff]
Mar 20 18:04:56.929048 kernel: Zone ranges:
Mar 20 18:04:56.929054 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff]
Mar 20 18:04:56.929062 kernel: DMA32 empty
Mar 20 18:04:56.929068 kernel: Normal empty
Mar 20 18:04:56.929074 kernel: Movable zone start for each node
Mar 20 18:04:56.929080 kernel: Early memory node ranges
Mar 20 18:04:56.929087 kernel: node 0: [mem 0x0000000040000000-0x00000000d967ffff]
Mar 20 18:04:56.929093 kernel: node 0: [mem 0x00000000d9680000-0x00000000d968ffff]
Mar 20 18:04:56.929100 kernel: node 0: [mem 0x00000000d9690000-0x00000000d976ffff]
Mar 20 18:04:56.929106 kernel: node 0: [mem 0x00000000d9770000-0x00000000d9b3ffff]
Mar 20 18:04:56.929112 kernel: node 0: [mem 0x00000000d9b40000-0x00000000dce1ffff]
Mar 20 18:04:56.929118 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff]
Mar 20 18:04:56.929125 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff]
Mar 20 18:04:56.929131 kernel: node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff]
Mar 20 18:04:56.929139 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff]
Mar 20 18:04:56.929145 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
Mar 20 18:04:56.929152 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
Mar 20 18:04:56.929161 kernel: psci: probing for conduit method from ACPI.
Mar 20 18:04:56.929168 kernel: psci: PSCIv1.1 detected in firmware.
Mar 20 18:04:56.929175 kernel: psci: Using standard PSCI v0.2 function IDs
Mar 20 18:04:56.929183 kernel: psci: Trusted OS migration not required
Mar 20 18:04:56.929190 kernel: psci: SMC Calling Convention v1.1
Mar 20 18:04:56.929197 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Mar 20 18:04:56.929204 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976
Mar 20 18:04:56.929211 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096
Mar 20 18:04:56.929230 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
Mar 20 18:04:56.929237 kernel: Detected PIPT I-cache on CPU0
Mar 20 18:04:56.929245 kernel: CPU features: detected: GIC system register CPU interface
Mar 20 18:04:56.929252 kernel: CPU features: detected: Hardware dirty bit management
Mar 20 18:04:56.929258 kernel: CPU features: detected: Spectre-v4
Mar 20 18:04:56.929266 kernel: CPU features: detected: Spectre-BHB
Mar 20 18:04:56.929274 kernel: CPU features: kernel page table isolation forced ON by KASLR
Mar 20 18:04:56.929281 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Mar 20 18:04:56.929288 kernel: CPU features: detected: ARM erratum 1418040
Mar 20 18:04:56.929294 kernel: CPU features: detected: SSBS not fully self-synchronizing
Mar 20 18:04:56.929301 kernel: alternatives: applying boot alternatives
Mar 20 18:04:56.929309 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=7e8d7de7ff8626488e956fa44b1348d7cdfde9b4a90f4fdae2fb2fe94dbb7bff
Mar 20 18:04:56.929316 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Mar 20 18:04:56.929323 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Mar 20 18:04:56.929330 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Mar 20 18:04:56.929337 kernel: Fallback order for Node 0: 0
Mar 20 18:04:56.929346 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024
Mar 20 18:04:56.929353 kernel: Policy zone: DMA
Mar 20 18:04:56.929359 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Mar 20 18:04:56.929366 kernel: software IO TLB: area num 4.
Mar 20 18:04:56.929373 kernel: software IO TLB: mapped [mem 0x00000000d2e00000-0x00000000d6e00000] (64MB)
Mar 20 18:04:56.929380 kernel: Memory: 2387412K/2572288K available (10304K kernel code, 2186K rwdata, 8096K rodata, 38464K init, 897K bss, 184876K reserved, 0K cma-reserved)
Mar 20 18:04:56.929387 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Mar 20 18:04:56.929394 kernel: rcu: Preemptible hierarchical RCU implementation.
Mar 20 18:04:56.929401 kernel: rcu: RCU event tracing is enabled.
Mar 20 18:04:56.929408 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Mar 20 18:04:56.929415 kernel: Trampoline variant of Tasks RCU enabled.
Mar 20 18:04:56.929422 kernel: Tracing variant of Tasks RCU enabled.
Mar 20 18:04:56.929430 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Mar 20 18:04:56.929437 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Mar 20 18:04:56.929444 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Mar 20 18:04:56.929451 kernel: GICv3: 256 SPIs implemented
Mar 20 18:04:56.929457 kernel: GICv3: 0 Extended SPIs implemented
Mar 20 18:04:56.929464 kernel: Root IRQ handler: gic_handle_irq
Mar 20 18:04:56.929471 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Mar 20 18:04:56.929478 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Mar 20 18:04:56.929484 kernel: ITS [mem 0x08080000-0x0809ffff]
Mar 20 18:04:56.929491 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400c0000 (indirect, esz 8, psz 64K, shr 1)
Mar 20 18:04:56.929498 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400d0000 (flat, esz 8, psz 64K, shr 1)
Mar 20 18:04:56.929507 kernel: GICv3: using LPI property table @0x00000000400f0000
Mar 20 18:04:56.929513 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040100000
Mar 20 18:04:56.929520 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Mar 20 18:04:56.929528 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Mar 20 18:04:56.929545 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Mar 20 18:04:56.929558 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Mar 20 18:04:56.929566 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Mar 20 18:04:56.929573 kernel: arm-pv: using stolen time PV
Mar 20 18:04:56.929580 kernel: Console: colour dummy device 80x25
Mar 20 18:04:56.929587 kernel: ACPI: Core revision 20230628
Mar 20 18:04:56.929594 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Mar 20 18:04:56.929604 kernel: pid_max: default: 32768 minimum: 301
Mar 20 18:04:56.929611 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Mar 20 18:04:56.929618 kernel: landlock: Up and running.
Mar 20 18:04:56.929625 kernel: SELinux: Initializing.
Mar 20 18:04:56.929631 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Mar 20 18:04:56.929639 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Mar 20 18:04:56.929646 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Mar 20 18:04:56.929657 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Mar 20 18:04:56.929670 kernel: rcu: Hierarchical SRCU implementation.
Mar 20 18:04:56.929681 kernel: rcu: Max phase no-delay instances is 400.
Mar 20 18:04:56.929692 kernel: Platform MSI: ITS@0x8080000 domain created
Mar 20 18:04:56.929701 kernel: PCI/MSI: ITS@0x8080000 domain created
Mar 20 18:04:56.929710 kernel: Remapping and enabling EFI services.
Mar 20 18:04:56.929718 kernel: smp: Bringing up secondary CPUs ...
Mar 20 18:04:56.929725 kernel: Detected PIPT I-cache on CPU1
Mar 20 18:04:56.929732 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Mar 20 18:04:56.929739 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040110000
Mar 20 18:04:56.929746 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Mar 20 18:04:56.929755 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Mar 20 18:04:56.929762 kernel: Detected PIPT I-cache on CPU2
Mar 20 18:04:56.929775 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
Mar 20 18:04:56.929784 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040120000
Mar 20 18:04:56.929791 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Mar 20 18:04:56.929798 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
Mar 20 18:04:56.929805 kernel: Detected PIPT I-cache on CPU3
Mar 20 18:04:56.929813 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
Mar 20 18:04:56.929822 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040130000
Mar 20 18:04:56.929832 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Mar 20 18:04:56.929841 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
Mar 20 18:04:56.929848 kernel: smp: Brought up 1 node, 4 CPUs
Mar 20 18:04:56.929855 kernel: SMP: Total of 4 processors activated.
Mar 20 18:04:56.929863 kernel: CPU features: detected: 32-bit EL0 Support
Mar 20 18:04:56.929870 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Mar 20 18:04:56.929877 kernel: CPU features: detected: Common not Private translations
Mar 20 18:04:56.929885 kernel: CPU features: detected: CRC32 instructions
Mar 20 18:04:56.929893 kernel: CPU features: detected: Enhanced Virtualization Traps
Mar 20 18:04:56.929900 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Mar 20 18:04:56.929908 kernel: CPU features: detected: LSE atomic instructions
Mar 20 18:04:56.929915 kernel: CPU features: detected: Privileged Access Never
Mar 20 18:04:56.929922 kernel: CPU features: detected: RAS Extension Support
Mar 20 18:04:56.929930 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Mar 20 18:04:56.929937 kernel: CPU: All CPU(s) started at EL1
Mar 20 18:04:56.929944 kernel: alternatives: applying system-wide alternatives
Mar 20 18:04:56.929951 kernel: devtmpfs: initialized
Mar 20 18:04:56.929959 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Mar 20 18:04:56.929972 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Mar 20 18:04:56.929980 kernel: pinctrl core: initialized pinctrl subsystem
Mar 20 18:04:56.929987 kernel: SMBIOS 3.0.0 present.
Mar 20 18:04:56.929994 kernel: DMI: QEMU KVM Virtual Machine, BIOS unknown 02/02/2022
Mar 20 18:04:56.930001 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Mar 20 18:04:56.930009 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Mar 20 18:04:56.930016 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Mar 20 18:04:56.930024 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Mar 20 18:04:56.930031 kernel: audit: initializing netlink subsys (disabled)
Mar 20 18:04:56.930040 kernel: audit: type=2000 audit(0.018:1): state=initialized audit_enabled=0 res=1
Mar 20 18:04:56.930047 kernel: thermal_sys: Registered thermal governor 'step_wise'
Mar 20 18:04:56.930054 kernel: cpuidle: using governor menu
Mar 20 18:04:56.930061 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Mar 20 18:04:56.930069 kernel: ASID allocator initialised with 32768 entries
Mar 20 18:04:56.930076 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Mar 20 18:04:56.930083 kernel: Serial: AMBA PL011 UART driver
Mar 20 18:04:56.930090 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Mar 20 18:04:56.930098 kernel: Modules: 0 pages in range for non-PLT usage
Mar 20 18:04:56.930106 kernel: Modules: 509248 pages in range for PLT usage
Mar 20 18:04:56.930114 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Mar 20 18:04:56.930121 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Mar 20 18:04:56.930128 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Mar 20 18:04:56.930136 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Mar 20 18:04:56.930143 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Mar 20 18:04:56.930150 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Mar 20 18:04:56.930157 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Mar 20 18:04:56.930165 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Mar 20 18:04:56.930173 kernel: ACPI: Added _OSI(Module Device)
Mar 20 18:04:56.930180 kernel: ACPI: Added _OSI(Processor Device)
Mar 20 18:04:56.930188 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Mar 20 18:04:56.930195 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Mar 20 18:04:56.930202 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Mar 20 18:04:56.930209 kernel: ACPI: Interpreter enabled
Mar 20 18:04:56.930217 kernel: ACPI: Using GIC for interrupt routing
Mar 20 18:04:56.930224 kernel: ACPI: MCFG table detected, 1 entries
Mar 20 18:04:56.930231 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Mar 20 18:04:56.930240 kernel: printk: console [ttyAMA0] enabled
Mar 20 18:04:56.930247 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Mar 20 18:04:56.930385 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Mar 20 18:04:56.930463 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Mar 20 18:04:56.930531 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Mar 20 18:04:56.930615 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Mar 20 18:04:56.930692 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Mar 20 18:04:56.930706 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Mar 20 18:04:56.930714 kernel: PCI host bridge to bus 0000:00
Mar 20 18:04:56.930793 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Mar 20 18:04:56.930859 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Mar 20 18:04:56.930923 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Mar 20 18:04:56.930985 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Mar 20 18:04:56.931070 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000
Mar 20 18:04:56.931154 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00
Mar 20 18:04:56.931228 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f]
Mar 20 18:04:56.931300 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff]
Mar 20 18:04:56.931370 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref]
Mar 20 18:04:56.931440 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref]
Mar 20 18:04:56.931509 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff]
Mar 20 18:04:56.931599 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f]
Mar 20 18:04:56.931674 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Mar 20 18:04:56.931740 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Mar 20 18:04:56.931803 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Mar 20 18:04:56.931813 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Mar 20 18:04:56.931821 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Mar 20 18:04:56.931829 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Mar 20 18:04:56.931836 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Mar 20 18:04:56.931846 kernel: iommu: Default domain type: Translated
Mar 20 18:04:56.931854 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Mar 20 18:04:56.931861 kernel: efivars: Registered efivars operations
Mar 20 18:04:56.931869 kernel: vgaarb: loaded
Mar 20 18:04:56.931876 kernel: clocksource: Switched to clocksource arch_sys_counter
Mar 20 18:04:56.931884 kernel: VFS: Disk quotas dquot_6.6.0
Mar 20 18:04:56.931892 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Mar 20 18:04:56.931899 kernel: pnp: PnP ACPI init
Mar 20 18:04:56.931981 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Mar 20 18:04:56.931994 kernel: pnp: PnP ACPI: found 1 devices
Mar 20 18:04:56.932002 kernel: NET: Registered PF_INET protocol family
Mar 20 18:04:56.932009 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Mar 20 18:04:56.932017 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Mar 20 18:04:56.932025 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Mar 20 18:04:56.932032 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Mar 20 18:04:56.932040 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Mar 20 18:04:56.932048 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Mar 20 18:04:56.932055 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Mar 20 18:04:56.932064 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Mar 20 18:04:56.932072 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Mar 20 18:04:56.932079 kernel: PCI: CLS 0 bytes, default 64
Mar 20 18:04:56.932087 kernel: kvm [1]: HYP mode not available
Mar 20 18:04:56.932094 kernel: Initialise system trusted keyrings
Mar 20 18:04:56.932101 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Mar 20 18:04:56.932109 kernel: Key type asymmetric registered
Mar 20 18:04:56.932116 kernel: Asymmetric key parser 'x509' registered
Mar 20 18:04:56.932124 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Mar 20 18:04:56.932133 kernel: io scheduler mq-deadline registered
Mar 20 18:04:56.932141 kernel: io scheduler kyber registered
Mar 20 18:04:56.932148 kernel: io scheduler bfq registered
Mar 20 18:04:56.932156 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Mar 20 18:04:56.932163 kernel: ACPI: button: Power Button [PWRB]
Mar 20 18:04:56.932171 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Mar 20 18:04:56.932241 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
Mar 20 18:04:56.932252 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Mar 20 18:04:56.932259 kernel: thunder_xcv, ver 1.0
Mar 20 18:04:56.932269 kernel: thunder_bgx, ver 1.0
Mar 20 18:04:56.932276 kernel: nicpf, ver 1.0
Mar 20 18:04:56.932284 kernel: nicvf, ver 1.0
Mar 20 18:04:56.932361 kernel: rtc-efi rtc-efi.0: registered as rtc0
Mar 20 18:04:56.932427 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-03-20T18:04:56 UTC (1742493896)
Mar 20 18:04:56.932437 kernel: hid: raw HID events driver (C) Jiri Kosina
Mar 20 18:04:56.932445 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available
Mar 20 18:04:56.932453 kernel: watchdog: Delayed init of the lockup detector failed: -19
Mar 20 18:04:56.932463 kernel: watchdog: Hard watchdog permanently disabled
Mar 20 18:04:56.932470 kernel: NET: Registered PF_INET6 protocol family
Mar 20 18:04:56.932478 kernel: Segment Routing with IPv6
Mar 20 18:04:56.932485 kernel: In-situ OAM (IOAM) with IPv6
Mar 20 18:04:56.932493 kernel: NET: Registered PF_PACKET protocol family
Mar 20 18:04:56.932501 kernel: Key type dns_resolver registered
Mar 20 18:04:56.932508 kernel: registered taskstats version 1
Mar 20 18:04:56.932516 kernel: Loading compiled-in X.509 certificates
Mar 20 18:04:56.932524 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.83-flatcar: 60ca5105dc3f344265f11c7b4aeda632cce92b3c'
Mar 20 18:04:56.932533 kernel: Key type .fscrypt registered
Mar 20 18:04:56.932557 kernel: Key type fscrypt-provisioning registered
Mar 20 18:04:56.932565 kernel: ima: No TPM chip found, activating TPM-bypass!
Mar 20 18:04:56.932573 kernel: ima: Allocated hash algorithm: sha1
Mar 20 18:04:56.932580 kernel: ima: No architecture policies found
Mar 20 18:04:56.932587 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Mar 20 18:04:56.932595 kernel: clk: Disabling unused clocks
Mar 20 18:04:56.932602 kernel: Freeing unused kernel memory: 38464K
Mar 20 18:04:56.932609 kernel: Run /init as init process
Mar 20 18:04:56.932618 kernel: with arguments:
Mar 20 18:04:56.932625 kernel: /init
Mar 20 18:04:56.932632 kernel: with environment:
Mar 20 18:04:56.932639 kernel: HOME=/
Mar 20 18:04:56.932647 kernel: TERM=linux
Mar 20 18:04:56.932654 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Mar 20 18:04:56.932662 systemd[1]: Successfully made /usr/ read-only.
Mar 20 18:04:56.932681 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Mar 20 18:04:56.932692 systemd[1]: Detected virtualization kvm.
Mar 20 18:04:56.932699 systemd[1]: Detected architecture arm64.
Mar 20 18:04:56.932707 systemd[1]: Running in initrd.
Mar 20 18:04:56.932715 systemd[1]: No hostname configured, using default hostname.
Mar 20 18:04:56.932723 systemd[1]: Hostname set to .
Mar 20 18:04:56.932731 systemd[1]: Initializing machine ID from VM UUID.
Mar 20 18:04:56.932739 systemd[1]: Queued start job for default target initrd.target.
Mar 20 18:04:56.932747 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 20 18:04:56.932757 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 20 18:04:56.932766 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Mar 20 18:04:56.932774 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Mar 20 18:04:56.932782 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Mar 20 18:04:56.932791 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Mar 20 18:04:56.932800 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Mar 20 18:04:56.932810 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Mar 20 18:04:56.932818 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 20 18:04:56.932827 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Mar 20 18:04:56.932835 systemd[1]: Reached target paths.target - Path Units.
Mar 20 18:04:56.932843 systemd[1]: Reached target slices.target - Slice Units.
Mar 20 18:04:56.932851 systemd[1]: Reached target swap.target - Swaps.
Mar 20 18:04:56.932859 systemd[1]: Reached target timers.target - Timer Units.
Mar 20 18:04:56.932867 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Mar 20 18:04:56.932875 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Mar 20 18:04:56.932884 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Mar 20 18:04:56.932892 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Mar 20 18:04:56.932900 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Mar 20 18:04:56.932908 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Mar 20 18:04:56.932917 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 20 18:04:56.932925 systemd[1]: Reached target sockets.target - Socket Units.
Mar 20 18:04:56.932933 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Mar 20 18:04:56.932941 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Mar 20 18:04:56.932951 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Mar 20 18:04:56.932959 systemd[1]: Starting systemd-fsck-usr.service...
Mar 20 18:04:56.932967 systemd[1]: Starting systemd-journald.service - Journal Service...
Mar 20 18:04:56.932975 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Mar 20 18:04:56.932983 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 20 18:04:56.932991 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 20 18:04:56.932999 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Mar 20 18:04:56.933009 systemd[1]: Finished systemd-fsck-usr.service.
Mar 20 18:04:56.933017 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 20 18:04:56.933026 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 20 18:04:56.933034 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Mar 20 18:04:56.933061 systemd-journald[237]: Collecting audit messages is disabled.
Mar 20 18:04:56.933082 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Mar 20 18:04:56.933089 kernel: Bridge firewalling registered
Mar 20 18:04:56.933097 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Mar 20 18:04:56.933106 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Mar 20 18:04:56.933115 systemd-journald[237]: Journal started
Mar 20 18:04:56.933135 systemd-journald[237]: Runtime Journal (/run/log/journal/4dcd200ea6c94bafafa02bd061938ab4) is 5.9M, max 47.3M, 41.4M free.
Mar 20 18:04:56.911812 systemd-modules-load[238]: Inserted module 'overlay'
Mar 20 18:04:56.929798 systemd-modules-load[238]: Inserted module 'br_netfilter'
Mar 20 18:04:56.938565 systemd[1]: Started systemd-journald.service - Journal Service.
Mar 20 18:04:56.941257 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Mar 20 18:04:56.942787 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Mar 20 18:04:56.946517 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Mar 20 18:04:56.954134 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 20 18:04:56.958657 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Mar 20 18:04:56.961843 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Mar 20 18:04:56.962988 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 20 18:04:56.964867 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 20 18:04:56.968945 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Mar 20 18:04:56.971725 dracut-cmdline[271]: dracut-dracut-053
Mar 20 18:04:56.975015 dracut-cmdline[271]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=7e8d7de7ff8626488e956fa44b1348d7cdfde9b4a90f4fdae2fb2fe94dbb7bff
Mar 20 18:04:57.015077 systemd-resolved[281]: Positive Trust Anchors:
Mar 20 18:04:57.015094 systemd-resolved[281]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Mar 20 18:04:57.015125 systemd-resolved[281]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Mar 20 18:04:57.021908 systemd-resolved[281]: Defaulting to hostname 'linux'.
Mar 20 18:04:57.026532 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Mar 20 18:04:57.027630 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Mar 20 18:04:57.049562 kernel: SCSI subsystem initialized
Mar 20 18:04:57.053558 kernel: Loading iSCSI transport class v2.0-870.
Mar 20 18:04:57.061555 kernel: iscsi: registered transport (tcp)
Mar 20 18:04:57.074559 kernel: iscsi: registered transport (qla4xxx)
Mar 20 18:04:57.074574 kernel: QLogic iSCSI HBA Driver
Mar 20 18:04:57.115072 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Mar 20 18:04:57.117369 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Mar 20 18:04:57.147959 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Mar 20 18:04:57.147999 kernel: device-mapper: uevent: version 1.0.3
Mar 20 18:04:57.148966 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Mar 20 18:04:57.195593 kernel: raid6: neonx8 gen() 15767 MB/s
Mar 20 18:04:57.212569 kernel: raid6: neonx4 gen() 15728 MB/s
Mar 20 18:04:57.229561 kernel: raid6: neonx2 gen() 13209 MB/s
Mar 20 18:04:57.246567 kernel: raid6: neonx1 gen() 10467 MB/s
Mar 20 18:04:57.263570 kernel: raid6: int64x8 gen() 6788 MB/s
Mar 20 18:04:57.280571 kernel: raid6: int64x4 gen() 7335 MB/s
Mar 20 18:04:57.297567 kernel: raid6: int64x2 gen() 6111 MB/s
Mar 20 18:04:57.314595 kernel: raid6: int64x1 gen() 5053 MB/s
Mar 20 18:04:57.314618 kernel: raid6: using algorithm neonx8 gen() 15767 MB/s
Mar 20 18:04:57.332581 kernel: raid6: .... xor() 11958 MB/s, rmw enabled
Mar 20 18:04:57.332611 kernel: raid6: using neon recovery algorithm
Mar 20 18:04:57.337756 kernel: xor: measuring software checksum speed
Mar 20 18:04:57.337789 kernel: 8regs : 21618 MB/sec
Mar 20 18:04:57.339002 kernel: 32regs : 21681 MB/sec
Mar 20 18:04:57.339015 kernel: arm64_neon : 27794 MB/sec
Mar 20 18:04:57.339029 kernel: xor: using function: arm64_neon (27794 MB/sec)
Mar 20 18:04:57.388569 kernel: Btrfs loaded, zoned=no, fsverity=no
Mar 20 18:04:57.398256 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Mar 20 18:04:57.400671 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 20 18:04:57.427617 systemd-udevd[462]: Using default interface naming scheme 'v255'.
Mar 20 18:04:57.431233 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 20 18:04:57.435331 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Mar 20 18:04:57.460473 dracut-pre-trigger[471]: rd.md=0: removing MD RAID activation Mar 20 18:04:57.484124 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Mar 20 18:04:57.486193 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Mar 20 18:04:57.536284 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Mar 20 18:04:57.539066 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Mar 20 18:04:57.561617 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Mar 20 18:04:57.562979 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Mar 20 18:04:57.564900 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Mar 20 18:04:57.567234 systemd[1]: Reached target remote-fs.target - Remote File Systems. Mar 20 18:04:57.570567 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Mar 20 18:04:57.587396 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Mar 20 18:04:57.591698 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues Mar 20 18:04:57.600180 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Mar 20 18:04:57.600288 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Mar 20 18:04:57.600300 kernel: GPT:9289727 != 19775487 Mar 20 18:04:57.600309 kernel: GPT:Alternate GPT header not at the end of the disk. Mar 20 18:04:57.600320 kernel: GPT:9289727 != 19775487 Mar 20 18:04:57.600333 kernel: GPT: Use GNU Parted to correct GPT errors. Mar 20 18:04:57.600343 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Mar 20 18:04:57.600026 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. 
Mar 20 18:04:57.600168 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Mar 20 18:04:57.605206 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Mar 20 18:04:57.607000 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Mar 20 18:04:57.607135 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Mar 20 18:04:57.610984 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Mar 20 18:04:57.612715 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Mar 20 18:04:57.624205 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 scanned by (udev-worker) (524) Mar 20 18:04:57.630562 kernel: BTRFS: device fsid 7c452270-b08f-4ab0-84d1-fe3217dab188 devid 1 transid 40 /dev/vda3 scanned by (udev-worker) (513) Mar 20 18:04:57.630941 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Mar 20 18:04:57.638724 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Mar 20 18:04:57.646206 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Mar 20 18:04:57.662194 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Mar 20 18:04:57.668355 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Mar 20 18:04:57.669500 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Mar 20 18:04:57.672291 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Mar 20 18:04:57.675021 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Mar 20 18:04:57.697362 disk-uuid[551]: Primary Header is updated. 
Mar 20 18:04:57.697362 disk-uuid[551]: Secondary Entries is updated. Mar 20 18:04:57.697362 disk-uuid[551]: Secondary Header is updated. Mar 20 18:04:57.702578 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Mar 20 18:04:57.706082 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Mar 20 18:04:58.712736 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Mar 20 18:04:58.712797 disk-uuid[556]: The operation has completed successfully. Mar 20 18:04:58.735357 systemd[1]: disk-uuid.service: Deactivated successfully. Mar 20 18:04:58.735447 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Mar 20 18:04:58.761198 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Mar 20 18:04:58.774169 sh[571]: Success Mar 20 18:04:58.792564 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Mar 20 18:04:58.819833 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Mar 20 18:04:58.822000 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Mar 20 18:04:58.836515 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Mar 20 18:04:58.844281 kernel: BTRFS info (device dm-0): first mount of filesystem 7c452270-b08f-4ab0-84d1-fe3217dab188 Mar 20 18:04:58.844321 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Mar 20 18:04:58.844333 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Mar 20 18:04:58.846128 kernel: BTRFS info (device dm-0): disabling log replay at mount time Mar 20 18:04:58.846145 kernel: BTRFS info (device dm-0): using free space tree Mar 20 18:04:58.850564 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Mar 20 18:04:58.851792 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. 
Mar 20 18:04:58.852427 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Mar 20 18:04:58.855075 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Mar 20 18:04:58.874019 kernel: BTRFS info (device vda6): first mount of filesystem 487c8301-a281-43a7-bff1-8a4858590ad6 Mar 20 18:04:58.874060 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Mar 20 18:04:58.874071 kernel: BTRFS info (device vda6): using free space tree Mar 20 18:04:58.876563 kernel: BTRFS info (device vda6): auto enabling async discard Mar 20 18:04:58.880558 kernel: BTRFS info (device vda6): last unmount of filesystem 487c8301-a281-43a7-bff1-8a4858590ad6 Mar 20 18:04:58.882696 systemd[1]: Finished ignition-setup.service - Ignition (setup). Mar 20 18:04:58.885679 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Mar 20 18:04:58.950224 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Mar 20 18:04:58.953289 systemd[1]: Starting systemd-networkd.service - Network Configuration... 
Mar 20 18:04:58.978860 ignition[662]: Ignition 2.20.0 Mar 20 18:04:58.978869 ignition[662]: Stage: fetch-offline Mar 20 18:04:58.978897 ignition[662]: no configs at "/usr/lib/ignition/base.d" Mar 20 18:04:58.978904 ignition[662]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Mar 20 18:04:58.979054 ignition[662]: parsed url from cmdline: "" Mar 20 18:04:58.979057 ignition[662]: no config URL provided Mar 20 18:04:58.979061 ignition[662]: reading system config file "/usr/lib/ignition/user.ign" Mar 20 18:04:58.979068 ignition[662]: no config at "/usr/lib/ignition/user.ign" Mar 20 18:04:58.979089 ignition[662]: op(1): [started] loading QEMU firmware config module Mar 20 18:04:58.979093 ignition[662]: op(1): executing: "modprobe" "qemu_fw_cfg" Mar 20 18:04:58.986979 ignition[662]: op(1): [finished] loading QEMU firmware config module Mar 20 18:04:58.994893 systemd-networkd[759]: lo: Link UP Mar 20 18:04:58.994904 systemd-networkd[759]: lo: Gained carrier Mar 20 18:04:58.995649 systemd-networkd[759]: Enumeration completed Mar 20 18:04:58.995753 systemd[1]: Started systemd-networkd.service - Network Configuration. Mar 20 18:04:58.996207 systemd-networkd[759]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 20 18:04:58.996211 systemd-networkd[759]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Mar 20 18:04:58.997594 systemd[1]: Reached target network.target - Network. Mar 20 18:04:58.999267 systemd-networkd[759]: eth0: Link UP Mar 20 18:04:58.999270 systemd-networkd[759]: eth0: Gained carrier Mar 20 18:04:58.999277 systemd-networkd[759]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Mar 20 18:04:59.024588 systemd-networkd[759]: eth0: DHCPv4 address 10.0.0.103/16, gateway 10.0.0.1 acquired from 10.0.0.1 Mar 20 18:04:59.034923 ignition[662]: parsing config with SHA512: 6bb1a6374902095353ff4af9a1e6e08b15eacf89487b3a301b34066135ca32bd99c7fab0bc53cd47fe435b32da26982780392ef90bf74a6d0707b655c342cfd6 Mar 20 18:04:59.039467 unknown[662]: fetched base config from "system" Mar 20 18:04:59.039475 unknown[662]: fetched user config from "qemu" Mar 20 18:04:59.039889 ignition[662]: fetch-offline: fetch-offline passed Mar 20 18:04:59.041610 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Mar 20 18:04:59.039954 ignition[662]: Ignition finished successfully Mar 20 18:04:59.043041 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Mar 20 18:04:59.043769 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Mar 20 18:04:59.063900 ignition[768]: Ignition 2.20.0 Mar 20 18:04:59.063908 ignition[768]: Stage: kargs Mar 20 18:04:59.064049 ignition[768]: no configs at "/usr/lib/ignition/base.d" Mar 20 18:04:59.064057 ignition[768]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Mar 20 18:04:59.066475 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Mar 20 18:04:59.064894 ignition[768]: kargs: kargs passed Mar 20 18:04:59.064935 ignition[768]: Ignition finished successfully Mar 20 18:04:59.068723 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Mar 20 18:04:59.086901 ignition[776]: Ignition 2.20.0 Mar 20 18:04:59.086908 ignition[776]: Stage: disks Mar 20 18:04:59.087047 ignition[776]: no configs at "/usr/lib/ignition/base.d" Mar 20 18:04:59.089355 systemd[1]: Finished ignition-disks.service - Ignition (disks). 
Mar 20 18:04:59.087055 ignition[776]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Mar 20 18:04:59.090875 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Mar 20 18:04:59.087933 ignition[776]: disks: disks passed Mar 20 18:04:59.092444 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Mar 20 18:04:59.087973 ignition[776]: Ignition finished successfully Mar 20 18:04:59.094419 systemd[1]: Reached target local-fs.target - Local File Systems. Mar 20 18:04:59.096260 systemd[1]: Reached target sysinit.target - System Initialization. Mar 20 18:04:59.097629 systemd[1]: Reached target basic.target - Basic System. Mar 20 18:04:59.100098 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Mar 20 18:04:59.118583 systemd-fsck[788]: ROOT: clean, 14/553520 files, 52654/553472 blocks Mar 20 18:04:59.122655 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Mar 20 18:04:59.126098 systemd[1]: Mounting sysroot.mount - /sysroot... Mar 20 18:04:59.177558 kernel: EXT4-fs (vda9): mounted filesystem b7437caf-1938-4bc6-8e3f-9394bb7ad561 r/w with ordered data mode. Quota mode: none. Mar 20 18:04:59.177761 systemd[1]: Mounted sysroot.mount - /sysroot. Mar 20 18:04:59.178830 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Mar 20 18:04:59.181806 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Mar 20 18:04:59.184124 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Mar 20 18:04:59.185880 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Mar 20 18:04:59.185926 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). 
Mar 20 18:04:59.185946 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Mar 20 18:04:59.194730 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Mar 20 18:04:59.196966 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Mar 20 18:04:59.201264 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/vda6 scanned by mount (797) Mar 20 18:04:59.201289 kernel: BTRFS info (device vda6): first mount of filesystem 487c8301-a281-43a7-bff1-8a4858590ad6 Mar 20 18:04:59.201301 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Mar 20 18:04:59.202891 kernel: BTRFS info (device vda6): using free space tree Mar 20 18:04:59.209559 kernel: BTRFS info (device vda6): auto enabling async discard Mar 20 18:04:59.209927 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Mar 20 18:04:59.237729 initrd-setup-root[822]: cut: /sysroot/etc/passwd: No such file or directory Mar 20 18:04:59.241654 initrd-setup-root[829]: cut: /sysroot/etc/group: No such file or directory Mar 20 18:04:59.245210 initrd-setup-root[836]: cut: /sysroot/etc/shadow: No such file or directory Mar 20 18:04:59.249017 initrd-setup-root[843]: cut: /sysroot/etc/gshadow: No such file or directory Mar 20 18:04:59.319530 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Mar 20 18:04:59.321622 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Mar 20 18:04:59.323071 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Mar 20 18:04:59.344554 kernel: BTRFS info (device vda6): last unmount of filesystem 487c8301-a281-43a7-bff1-8a4858590ad6 Mar 20 18:04:59.360713 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. 
Mar 20 18:04:59.369409 ignition[911]: INFO : Ignition 2.20.0 Mar 20 18:04:59.369409 ignition[911]: INFO : Stage: mount Mar 20 18:04:59.370944 ignition[911]: INFO : no configs at "/usr/lib/ignition/base.d" Mar 20 18:04:59.370944 ignition[911]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Mar 20 18:04:59.370944 ignition[911]: INFO : mount: mount passed Mar 20 18:04:59.370944 ignition[911]: INFO : Ignition finished successfully Mar 20 18:04:59.372561 systemd[1]: Finished ignition-mount.service - Ignition (mount). Mar 20 18:04:59.376800 systemd[1]: Starting ignition-files.service - Ignition (files)... Mar 20 18:04:59.973711 systemd[1]: sysroot-oem.mount: Deactivated successfully. Mar 20 18:04:59.975337 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Mar 20 18:04:59.993190 kernel: BTRFS: device label OEM devid 1 transid 17 /dev/vda6 scanned by mount (924) Mar 20 18:04:59.993221 kernel: BTRFS info (device vda6): first mount of filesystem 487c8301-a281-43a7-bff1-8a4858590ad6 Mar 20 18:04:59.993232 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Mar 20 18:04:59.994733 kernel: BTRFS info (device vda6): using free space tree Mar 20 18:04:59.996557 kernel: BTRFS info (device vda6): auto enabling async discard Mar 20 18:04:59.997909 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Mar 20 18:05:00.020090 ignition[941]: INFO : Ignition 2.20.0 Mar 20 18:05:00.020090 ignition[941]: INFO : Stage: files Mar 20 18:05:00.021499 ignition[941]: INFO : no configs at "/usr/lib/ignition/base.d" Mar 20 18:05:00.021499 ignition[941]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Mar 20 18:05:00.021499 ignition[941]: DEBUG : files: compiled without relabeling support, skipping Mar 20 18:05:00.024823 ignition[941]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Mar 20 18:05:00.024823 ignition[941]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Mar 20 18:05:00.024823 ignition[941]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Mar 20 18:05:00.024823 ignition[941]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Mar 20 18:05:00.024823 ignition[941]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Mar 20 18:05:00.024690 unknown[941]: wrote ssh authorized keys file for user: core Mar 20 18:05:00.031991 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz" Mar 20 18:05:00.031991 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-arm64.tar.gz: attempt #1 Mar 20 18:05:00.110177 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Mar 20 18:05:00.367998 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz" Mar 20 18:05:00.367998 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Mar 20 18:05:00.367998 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET 
https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1 Mar 20 18:05:00.714252 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Mar 20 18:05:00.864165 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Mar 20 18:05:00.866020 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Mar 20 18:05:00.866020 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Mar 20 18:05:00.866020 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Mar 20 18:05:00.866020 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Mar 20 18:05:00.866020 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Mar 20 18:05:00.866020 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Mar 20 18:05:00.866020 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Mar 20 18:05:00.866020 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Mar 20 18:05:00.866020 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Mar 20 18:05:00.866020 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Mar 20 18:05:00.866020 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(a): 
[started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-arm64.raw" Mar 20 18:05:00.866020 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-arm64.raw" Mar 20 18:05:00.866020 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-arm64.raw" Mar 20 18:05:00.866020 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.32.0-arm64.raw: attempt #1 Mar 20 18:05:01.013755 systemd-networkd[759]: eth0: Gained IPv6LL Mar 20 18:05:01.158685 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Mar 20 18:05:01.946768 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-arm64.raw" Mar 20 18:05:01.948959 ignition[941]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Mar 20 18:05:01.948959 ignition[941]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Mar 20 18:05:01.948959 ignition[941]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Mar 20 18:05:01.948959 ignition[941]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" Mar 20 18:05:01.948959 ignition[941]: INFO : files: op(e): [started] processing unit "coreos-metadata.service" Mar 20 18:05:01.948959 ignition[941]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Mar 20 18:05:01.948959 ignition[941]: INFO : files: op(e): 
op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Mar 20 18:05:01.948959 ignition[941]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service" Mar 20 18:05:01.948959 ignition[941]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service" Mar 20 18:05:01.965247 ignition[941]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service" Mar 20 18:05:01.968354 ignition[941]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service" Mar 20 18:05:01.969839 ignition[941]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service" Mar 20 18:05:01.969839 ignition[941]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service" Mar 20 18:05:01.969839 ignition[941]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service" Mar 20 18:05:01.969839 ignition[941]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json" Mar 20 18:05:01.969839 ignition[941]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json" Mar 20 18:05:01.969839 ignition[941]: INFO : files: files passed Mar 20 18:05:01.969839 ignition[941]: INFO : Ignition finished successfully Mar 20 18:05:01.971144 systemd[1]: Finished ignition-files.service - Ignition (files). Mar 20 18:05:01.973671 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Mar 20 18:05:01.977478 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Mar 20 18:05:01.984217 systemd[1]: ignition-quench.service: Deactivated successfully. Mar 20 18:05:01.984314 systemd[1]: Finished ignition-quench.service - Ignition (record completion). 
Mar 20 18:05:01.987978 initrd-setup-root-after-ignition[971]: grep: /sysroot/oem/oem-release: No such file or directory Mar 20 18:05:01.989223 initrd-setup-root-after-ignition[974]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Mar 20 18:05:01.989223 initrd-setup-root-after-ignition[974]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Mar 20 18:05:01.993052 initrd-setup-root-after-ignition[978]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Mar 20 18:05:01.989329 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Mar 20 18:05:01.992106 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Mar 20 18:05:01.994653 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Mar 20 18:05:02.020845 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Mar 20 18:05:02.020939 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Mar 20 18:05:02.022993 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Mar 20 18:05:02.024754 systemd[1]: Reached target initrd.target - Initrd Default Target. Mar 20 18:05:02.026482 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Mar 20 18:05:02.027141 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Mar 20 18:05:02.041432 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Mar 20 18:05:02.043659 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Mar 20 18:05:02.061555 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Mar 20 18:05:02.062729 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Mar 20 18:05:02.064687 systemd[1]: Stopped target timers.target - Timer Units. 
Mar 20 18:05:02.066318 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Mar 20 18:05:02.066422 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Mar 20 18:05:02.068850 systemd[1]: Stopped target initrd.target - Initrd Default Target. Mar 20 18:05:02.070733 systemd[1]: Stopped target basic.target - Basic System. Mar 20 18:05:02.072313 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Mar 20 18:05:02.073948 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Mar 20 18:05:02.075752 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Mar 20 18:05:02.077677 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Mar 20 18:05:02.079522 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Mar 20 18:05:02.081456 systemd[1]: Stopped target sysinit.target - System Initialization. Mar 20 18:05:02.083387 systemd[1]: Stopped target local-fs.target - Local File Systems. Mar 20 18:05:02.085087 systemd[1]: Stopped target swap.target - Swaps. Mar 20 18:05:02.086554 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Mar 20 18:05:02.086677 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Mar 20 18:05:02.088870 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Mar 20 18:05:02.090684 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Mar 20 18:05:02.092510 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Mar 20 18:05:02.092614 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Mar 20 18:05:02.094593 systemd[1]: dracut-initqueue.service: Deactivated successfully. Mar 20 18:05:02.094710 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Mar 20 18:05:02.097461 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. 
Mar 20 18:05:02.097642 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Mar 20 18:05:02.099455 systemd[1]: Stopped target paths.target - Path Units. Mar 20 18:05:02.101020 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Mar 20 18:05:02.104621 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Mar 20 18:05:02.107084 systemd[1]: Stopped target slices.target - Slice Units. Mar 20 18:05:02.108026 systemd[1]: Stopped target sockets.target - Socket Units. Mar 20 18:05:02.109463 systemd[1]: iscsid.socket: Deactivated successfully. Mar 20 18:05:02.109567 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Mar 20 18:05:02.111061 systemd[1]: iscsiuio.socket: Deactivated successfully. Mar 20 18:05:02.111134 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Mar 20 18:05:02.112597 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Mar 20 18:05:02.112726 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Mar 20 18:05:02.114428 systemd[1]: ignition-files.service: Deactivated successfully. Mar 20 18:05:02.114526 systemd[1]: Stopped ignition-files.service - Ignition (files). Mar 20 18:05:02.116679 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Mar 20 18:05:02.118154 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Mar 20 18:05:02.119268 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Mar 20 18:05:02.119380 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Mar 20 18:05:02.121454 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Mar 20 18:05:02.121564 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Mar 20 18:05:02.135741 systemd[1]: initrd-cleanup.service: Deactivated successfully. 
Mar 20 18:05:02.136681 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Mar 20 18:05:02.144836 systemd[1]: sysroot-boot.mount: Deactivated successfully. Mar 20 18:05:02.145816 ignition[998]: INFO : Ignition 2.20.0 Mar 20 18:05:02.145816 ignition[998]: INFO : Stage: umount Mar 20 18:05:02.145816 ignition[998]: INFO : no configs at "/usr/lib/ignition/base.d" Mar 20 18:05:02.145816 ignition[998]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Mar 20 18:05:02.149420 ignition[998]: INFO : umount: umount passed Mar 20 18:05:02.149420 ignition[998]: INFO : Ignition finished successfully Mar 20 18:05:02.147746 systemd[1]: ignition-mount.service: Deactivated successfully. Mar 20 18:05:02.147836 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Mar 20 18:05:02.150615 systemd[1]: Stopped target network.target - Network. Mar 20 18:05:02.151933 systemd[1]: ignition-disks.service: Deactivated successfully. Mar 20 18:05:02.151989 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Mar 20 18:05:02.153621 systemd[1]: ignition-kargs.service: Deactivated successfully. Mar 20 18:05:02.153673 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Mar 20 18:05:02.155572 systemd[1]: ignition-setup.service: Deactivated successfully. Mar 20 18:05:02.155615 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Mar 20 18:05:02.157343 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Mar 20 18:05:02.157382 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Mar 20 18:05:02.159152 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Mar 20 18:05:02.160776 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Mar 20 18:05:02.163991 systemd[1]: systemd-resolved.service: Deactivated successfully. Mar 20 18:05:02.164093 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. 
Mar 20 18:05:02.167235 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Mar 20 18:05:02.167447 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Mar 20 18:05:02.167481 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Mar 20 18:05:02.170971 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Mar 20 18:05:02.171183 systemd[1]: systemd-networkd.service: Deactivated successfully. Mar 20 18:05:02.171275 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Mar 20 18:05:02.173951 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Mar 20 18:05:02.174354 systemd[1]: systemd-networkd.socket: Deactivated successfully. Mar 20 18:05:02.174404 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Mar 20 18:05:02.176997 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Mar 20 18:05:02.178124 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Mar 20 18:05:02.178178 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Mar 20 18:05:02.180064 systemd[1]: systemd-sysctl.service: Deactivated successfully. Mar 20 18:05:02.180106 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Mar 20 18:05:02.183208 systemd[1]: systemd-modules-load.service: Deactivated successfully. Mar 20 18:05:02.183248 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Mar 20 18:05:02.185111 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Mar 20 18:05:02.189184 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Mar 20 18:05:02.202770 systemd[1]: systemd-udevd.service: Deactivated successfully. 
Mar 20 18:05:02.202922 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 20 18:05:02.205591 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Mar 20 18:05:02.205666 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Mar 20 18:05:02.207084 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Mar 20 18:05:02.207117 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 20 18:05:02.208895 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Mar 20 18:05:02.208948 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Mar 20 18:05:02.211696 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Mar 20 18:05:02.211749 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Mar 20 18:05:02.214352 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Mar 20 18:05:02.214402 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 20 18:05:02.217349 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Mar 20 18:05:02.218476 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Mar 20 18:05:02.218547 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 20 18:05:02.221435 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Mar 20 18:05:02.221479 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Mar 20 18:05:02.223879 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Mar 20 18:05:02.223923 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 20 18:05:02.225962 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 20 18:05:02.226004 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 20 18:05:02.234785 systemd[1]: sysroot-boot.service: Deactivated successfully.
Mar 20 18:05:02.234895 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Mar 20 18:05:02.236571 systemd[1]: network-cleanup.service: Deactivated successfully.
Mar 20 18:05:02.236679 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Mar 20 18:05:02.238964 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Mar 20 18:05:02.240596 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Mar 20 18:05:02.242203 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Mar 20 18:05:02.243586 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Mar 20 18:05:02.243666 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Mar 20 18:05:02.246187 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Mar 20 18:05:02.259889 systemd[1]: Switching root.
Mar 20 18:05:02.278448 systemd-journald[237]: Journal stopped
Mar 20 18:05:03.011747 systemd-journald[237]: Received SIGTERM from PID 1 (systemd).
Mar 20 18:05:03.011804 kernel: SELinux: policy capability network_peer_controls=1
Mar 20 18:05:03.011816 kernel: SELinux: policy capability open_perms=1
Mar 20 18:05:03.011825 kernel: SELinux: policy capability extended_socket_class=1
Mar 20 18:05:03.011835 kernel: SELinux: policy capability always_check_network=0
Mar 20 18:05:03.011844 kernel: SELinux: policy capability cgroup_seclabel=1
Mar 20 18:05:03.011858 kernel: SELinux: policy capability nnp_nosuid_transition=1
Mar 20 18:05:03.011869 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Mar 20 18:05:03.011878 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Mar 20 18:05:03.011890 kernel: audit: type=1403 audit(1742493902.436:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Mar 20 18:05:03.011900 systemd[1]: Successfully loaded SELinux policy in 30.504ms.
Mar 20 18:05:03.011916 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 9.341ms.
Mar 20 18:05:03.011926 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Mar 20 18:05:03.011937 systemd[1]: Detected virtualization kvm.
Mar 20 18:05:03.011947 systemd[1]: Detected architecture arm64.
Mar 20 18:05:03.011957 systemd[1]: Detected first boot.
Mar 20 18:05:03.011966 systemd[1]: Initializing machine ID from VM UUID.
Mar 20 18:05:03.011978 kernel: NET: Registered PF_VSOCK protocol family
Mar 20 18:05:03.011989 zram_generator::config[1048]: No configuration found.
Mar 20 18:05:03.012000 systemd[1]: Populated /etc with preset unit settings.
Mar 20 18:05:03.012011 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Mar 20 18:05:03.012021 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Mar 20 18:05:03.012032 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Mar 20 18:05:03.012042 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Mar 20 18:05:03.012052 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Mar 20 18:05:03.012063 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Mar 20 18:05:03.012075 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Mar 20 18:05:03.012086 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Mar 20 18:05:03.012101 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Mar 20 18:05:03.012111 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Mar 20 18:05:03.012122 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Mar 20 18:05:03.012132 systemd[1]: Created slice user.slice - User and Session Slice.
Mar 20 18:05:03.012143 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 20 18:05:03.012154 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 20 18:05:03.012166 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Mar 20 18:05:03.012177 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Mar 20 18:05:03.012187 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Mar 20 18:05:03.012198 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Mar 20 18:05:03.012208 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0...
Mar 20 18:05:03.012219 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 20 18:05:03.012230 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Mar 20 18:05:03.012240 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Mar 20 18:05:03.012252 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Mar 20 18:05:03.012263 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Mar 20 18:05:03.012273 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 20 18:05:03.012283 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Mar 20 18:05:03.012294 systemd[1]: Reached target slices.target - Slice Units.
Mar 20 18:05:03.012304 systemd[1]: Reached target swap.target - Swaps.
Mar 20 18:05:03.012315 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Mar 20 18:05:03.012327 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Mar 20 18:05:03.012337 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Mar 20 18:05:03.012349 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Mar 20 18:05:03.012360 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Mar 20 18:05:03.012371 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 20 18:05:03.012381 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Mar 20 18:05:03.012391 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Mar 20 18:05:03.012400 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Mar 20 18:05:03.012410 systemd[1]: Mounting media.mount - External Media Directory...
Mar 20 18:05:03.012421 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Mar 20 18:05:03.012431 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Mar 20 18:05:03.012442 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Mar 20 18:05:03.012453 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Mar 20 18:05:03.012463 systemd[1]: Reached target machines.target - Containers.
Mar 20 18:05:03.012476 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Mar 20 18:05:03.012486 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 20 18:05:03.012496 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Mar 20 18:05:03.012506 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Mar 20 18:05:03.012515 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 20 18:05:03.012527 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Mar 20 18:05:03.012549 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 20 18:05:03.012564 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Mar 20 18:05:03.012574 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Mar 20 18:05:03.012584 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Mar 20 18:05:03.012594 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Mar 20 18:05:03.012604 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Mar 20 18:05:03.012614 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Mar 20 18:05:03.012624 systemd[1]: Stopped systemd-fsck-usr.service.
Mar 20 18:05:03.012636 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Mar 20 18:05:03.012652 kernel: fuse: init (API version 7.39)
Mar 20 18:05:03.012662 systemd[1]: Starting systemd-journald.service - Journal Service...
Mar 20 18:05:03.012672 kernel: loop: module loaded
Mar 20 18:05:03.012681 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Mar 20 18:05:03.012691 kernel: ACPI: bus type drm_connector registered
Mar 20 18:05:03.012701 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Mar 20 18:05:03.012711 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Mar 20 18:05:03.012721 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Mar 20 18:05:03.012733 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Mar 20 18:05:03.012743 systemd[1]: verity-setup.service: Deactivated successfully.
Mar 20 18:05:03.012755 systemd[1]: Stopped verity-setup.service.
Mar 20 18:05:03.012784 systemd-journald[1116]: Collecting audit messages is disabled.
Mar 20 18:05:03.012810 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Mar 20 18:05:03.012827 systemd-journald[1116]: Journal started
Mar 20 18:05:03.012847 systemd-journald[1116]: Runtime Journal (/run/log/journal/4dcd200ea6c94bafafa02bd061938ab4) is 5.9M, max 47.3M, 41.4M free.
Mar 20 18:05:02.812228 systemd[1]: Queued start job for default target multi-user.target.
Mar 20 18:05:02.829334 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Mar 20 18:05:02.829731 systemd[1]: systemd-journald.service: Deactivated successfully.
Mar 20 18:05:03.015318 systemd[1]: Started systemd-journald.service - Journal Service.
Mar 20 18:05:03.015951 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Mar 20 18:05:03.017124 systemd[1]: Mounted media.mount - External Media Directory.
Mar 20 18:05:03.018153 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Mar 20 18:05:03.019389 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Mar 20 18:05:03.020624 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Mar 20 18:05:03.023568 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Mar 20 18:05:03.024903 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 20 18:05:03.026388 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Mar 20 18:05:03.026583 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Mar 20 18:05:03.027901 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 20 18:05:03.028063 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 20 18:05:03.029390 systemd[1]: modprobe@drm.service: Deactivated successfully.
Mar 20 18:05:03.029590 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Mar 20 18:05:03.030910 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 20 18:05:03.031070 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Mar 20 18:05:03.032517 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Mar 20 18:05:03.032707 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Mar 20 18:05:03.033907 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 20 18:05:03.034069 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Mar 20 18:05:03.035343 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Mar 20 18:05:03.036899 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Mar 20 18:05:03.038311 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Mar 20 18:05:03.039794 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Mar 20 18:05:03.051955 systemd[1]: Reached target network-pre.target - Preparation for Network.
Mar 20 18:05:03.054278 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Mar 20 18:05:03.056260 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Mar 20 18:05:03.057430 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Mar 20 18:05:03.057459 systemd[1]: Reached target local-fs.target - Local File Systems.
Mar 20 18:05:03.059300 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Mar 20 18:05:03.069366 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Mar 20 18:05:03.071324 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Mar 20 18:05:03.072384 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 20 18:05:03.073594 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Mar 20 18:05:03.075348 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Mar 20 18:05:03.076465 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Mar 20 18:05:03.077274 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Mar 20 18:05:03.078364 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Mar 20 18:05:03.085265 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Mar 20 18:05:03.087302 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Mar 20 18:05:03.089452 systemd-journald[1116]: Time spent on flushing to /var/log/journal/4dcd200ea6c94bafafa02bd061938ab4 is 16.140ms for 871 entries.
Mar 20 18:05:03.089452 systemd-journald[1116]: System Journal (/var/log/journal/4dcd200ea6c94bafafa02bd061938ab4) is 8M, max 195.6M, 187.6M free.
Mar 20 18:05:03.116678 systemd-journald[1116]: Received client request to flush runtime journal.
Mar 20 18:05:03.116716 kernel: loop0: detected capacity change from 0 to 201592
Mar 20 18:05:03.090990 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Mar 20 18:05:03.095584 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 20 18:05:03.097238 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Mar 20 18:05:03.099800 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Mar 20 18:05:03.102025 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Mar 20 18:05:03.103782 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Mar 20 18:05:03.107108 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Mar 20 18:05:03.109526 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Mar 20 18:05:03.114669 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Mar 20 18:05:03.118291 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Mar 20 18:05:03.119907 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Mar 20 18:05:03.124948 systemd-tmpfiles[1168]: ACLs are not supported, ignoring.
Mar 20 18:05:03.124965 systemd-tmpfiles[1168]: ACLs are not supported, ignoring.
Mar 20 18:05:03.129280 udevadm[1177]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Mar 20 18:05:03.131872 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Mar 20 18:05:03.134967 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Mar 20 18:05:03.135562 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Mar 20 18:05:03.148501 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Mar 20 18:05:03.169580 kernel: loop1: detected capacity change from 0 to 126448
Mar 20 18:05:03.176613 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Mar 20 18:05:03.179145 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Mar 20 18:05:03.194674 kernel: loop2: detected capacity change from 0 to 103832
Mar 20 18:05:03.200469 systemd-tmpfiles[1190]: ACLs are not supported, ignoring.
Mar 20 18:05:03.200487 systemd-tmpfiles[1190]: ACLs are not supported, ignoring.
Mar 20 18:05:03.205021 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 20 18:05:03.230553 kernel: loop3: detected capacity change from 0 to 201592
Mar 20 18:05:03.236561 kernel: loop4: detected capacity change from 0 to 126448
Mar 20 18:05:03.242572 kernel: loop5: detected capacity change from 0 to 103832
Mar 20 18:05:03.246697 (sd-merge)[1194]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Mar 20 18:05:03.247074 (sd-merge)[1194]: Merged extensions into '/usr'.
Mar 20 18:05:03.250031 systemd[1]: Reload requested from client PID 1167 ('systemd-sysext') (unit systemd-sysext.service)...
Mar 20 18:05:03.250140 systemd[1]: Reloading...
Mar 20 18:05:03.302567 zram_generator::config[1225]: No configuration found.
Mar 20 18:05:03.356560 ldconfig[1161]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Mar 20 18:05:03.387580 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Mar 20 18:05:03.435795 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Mar 20 18:05:03.435963 systemd[1]: Reloading finished in 185 ms.
Mar 20 18:05:03.458221 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Mar 20 18:05:03.459690 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Mar 20 18:05:03.476863 systemd[1]: Starting ensure-sysext.service...
Mar 20 18:05:03.478656 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Mar 20 18:05:03.493901 systemd-tmpfiles[1257]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Mar 20 18:05:03.494105 systemd-tmpfiles[1257]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Mar 20 18:05:03.494744 systemd-tmpfiles[1257]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Mar 20 18:05:03.494959 systemd-tmpfiles[1257]: ACLs are not supported, ignoring.
Mar 20 18:05:03.495014 systemd-tmpfiles[1257]: ACLs are not supported, ignoring.
Mar 20 18:05:03.497659 systemd-tmpfiles[1257]: Detected autofs mount point /boot during canonicalization of boot.
Mar 20 18:05:03.497669 systemd-tmpfiles[1257]: Skipping /boot
Mar 20 18:05:03.497738 systemd[1]: Reload requested from client PID 1256 ('systemctl') (unit ensure-sysext.service)...
Mar 20 18:05:03.497749 systemd[1]: Reloading...
Mar 20 18:05:03.506255 systemd-tmpfiles[1257]: Detected autofs mount point /boot during canonicalization of boot.
Mar 20 18:05:03.506272 systemd-tmpfiles[1257]: Skipping /boot
Mar 20 18:05:03.541570 zram_generator::config[1289]: No configuration found.
Mar 20 18:05:03.616649 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Mar 20 18:05:03.665079 systemd[1]: Reloading finished in 167 ms.
Mar 20 18:05:03.677964 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Mar 20 18:05:03.690202 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 20 18:05:03.697732 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Mar 20 18:05:03.700137 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Mar 20 18:05:03.707664 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Mar 20 18:05:03.710698 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Mar 20 18:05:03.716835 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 20 18:05:03.722239 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Mar 20 18:05:03.733466 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Mar 20 18:05:03.737711 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 20 18:05:03.739340 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 20 18:05:03.741530 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 20 18:05:03.743845 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Mar 20 18:05:03.744951 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 20 18:05:03.745062 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Mar 20 18:05:03.754194 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Mar 20 18:05:03.760791 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Mar 20 18:05:03.763151 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 20 18:05:03.763346 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 20 18:05:03.764927 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 20 18:05:03.765084 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Mar 20 18:05:03.766847 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 20 18:05:03.766985 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Mar 20 18:05:03.769163 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Mar 20 18:05:03.771908 systemd-udevd[1329]: Using default interface naming scheme 'v255'.
Mar 20 18:05:03.777778 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Mar 20 18:05:03.779424 augenrules[1357]: No rules
Mar 20 18:05:03.779896 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 20 18:05:03.781161 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 20 18:05:03.783831 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 20 18:05:03.797523 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Mar 20 18:05:03.798608 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 20 18:05:03.798740 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Mar 20 18:05:03.798847 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Mar 20 18:05:03.799732 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 20 18:05:03.803313 systemd[1]: audit-rules.service: Deactivated successfully.
Mar 20 18:05:03.803500 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Mar 20 18:05:03.806593 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Mar 20 18:05:03.808500 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Mar 20 18:05:03.811191 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 20 18:05:03.811799 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 20 18:05:03.814243 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 20 18:05:03.815131 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Mar 20 18:05:03.831699 systemd[1]: Finished ensure-sysext.service.
Mar 20 18:05:03.832802 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 20 18:05:03.836822 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Mar 20 18:05:03.844514 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped.
Mar 20 18:05:03.846038 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Mar 20 18:05:03.848827 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 20 18:05:03.852734 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 20 18:05:03.853550 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (1373)
Mar 20 18:05:03.857711 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Mar 20 18:05:03.863964 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 20 18:05:03.865186 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 20 18:05:03.865229 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Mar 20 18:05:03.875444 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Mar 20 18:05:03.881031 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Mar 20 18:05:03.882118 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Mar 20 18:05:03.884322 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 20 18:05:03.884500 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 20 18:05:03.886091 systemd[1]: modprobe@drm.service: Deactivated successfully.
Mar 20 18:05:03.886245 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Mar 20 18:05:03.887500 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 20 18:05:03.887785 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Mar 20 18:05:03.894411 augenrules[1397]: /sbin/augenrules: No change
Mar 20 18:05:03.910966 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Mar 20 18:05:03.913763 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Mar 20 18:05:03.915529 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Mar 20 18:05:03.915606 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Mar 20 18:05:03.919718 augenrules[1430]: No rules
Mar 20 18:05:03.921418 systemd-resolved[1326]: Positive Trust Anchors:
Mar 20 18:05:03.923142 systemd-resolved[1326]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Mar 20 18:05:03.923174 systemd-resolved[1326]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Mar 20 18:05:03.924965 systemd[1]: audit-rules.service: Deactivated successfully.
Mar 20 18:05:03.925179 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Mar 20 18:05:03.929382 systemd-resolved[1326]: Defaulting to hostname 'linux'.
Mar 20 18:05:03.938874 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Mar 20 18:05:03.940106 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Mar 20 18:05:03.947174 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Mar 20 18:05:03.975897 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Mar 20 18:05:03.977472 systemd[1]: Reached target time-set.target - System Time Set.
Mar 20 18:05:03.985763 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 20 18:05:03.985926 systemd-networkd[1404]: lo: Link UP
Mar 20 18:05:03.985929 systemd-networkd[1404]: lo: Gained carrier
Mar 20 18:05:03.990388 systemd-networkd[1404]: Enumeration completed
Mar 20 18:05:03.999114 systemd-networkd[1404]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 20 18:05:03.999132 systemd-networkd[1404]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Mar 20 18:05:03.999362 systemd[1]: Started systemd-networkd.service - Network Configuration.
Mar 20 18:05:03.999661 systemd-networkd[1404]: eth0: Link UP
Mar 20 18:05:03.999671 systemd-networkd[1404]: eth0: Gained carrier
Mar 20 18:05:03.999684 systemd-networkd[1404]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 20 18:05:04.000981 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Mar 20 18:05:04.004007 systemd[1]: Reached target network.target - Network.
Mar 20 18:05:04.006230 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Mar 20 18:05:04.008317 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Mar 20 18:05:04.010603 systemd-networkd[1404]: eth0: DHCPv4 address 10.0.0.103/16, gateway 10.0.0.1 acquired from 10.0.0.1
Mar 20 18:05:04.011085 systemd-timesyncd[1408]: Network configuration changed, trying to establish connection.
Mar 20 18:05:04.482453 systemd-timesyncd[1408]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Mar 20 18:05:04.482498 systemd-timesyncd[1408]: Initial clock synchronization to Thu 2025-03-20 18:05:04.482391 UTC.
Mar 20 18:05:04.482669 systemd-resolved[1326]: Clock change detected. Flushing caches.
Mar 20 18:05:04.488114 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Mar 20 18:05:04.497681 lvm[1445]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Mar 20 18:05:04.497950 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
Mar 20 18:05:04.507816 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 20 18:05:04.534536 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Mar 20 18:05:04.535899 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Mar 20 18:05:04.537020 systemd[1]: Reached target sysinit.target - System Initialization.
Mar 20 18:05:04.538142 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Mar 20 18:05:04.539375 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Mar 20 18:05:04.540712 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Mar 20 18:05:04.541846 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Mar 20 18:05:04.543162 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Mar 20 18:05:04.544362 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Mar 20 18:05:04.544408 systemd[1]: Reached target paths.target - Path Units.
Mar 20 18:05:04.545258 systemd[1]: Reached target timers.target - Timer Units.
Mar 20 18:05:04.546931 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Mar 20 18:05:04.549193 systemd[1]: Starting docker.socket - Docker Socket for the API...
Mar 20 18:05:04.552307 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
Mar 20 18:05:04.553701 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
Mar 20 18:05:04.554920 systemd[1]: Reached target ssh-access.target - SSH Access Available.
Mar 20 18:05:04.557968 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Mar 20 18:05:04.559335 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
Mar 20 18:05:04.561493 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Mar 20 18:05:04.563014 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Mar 20 18:05:04.564164 systemd[1]: Reached target sockets.target - Socket Units.
Mar 20 18:05:04.565091 systemd[1]: Reached target basic.target - Basic System.
Mar 20 18:05:04.566049 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Mar 20 18:05:04.566082 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Mar 20 18:05:04.566931 systemd[1]: Starting containerd.service - containerd container runtime...
Mar 20 18:05:04.568332 lvm[1456]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Mar 20 18:05:04.568857 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Mar 20 18:05:04.571480 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Mar 20 18:05:04.573554 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Mar 20 18:05:04.574752 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Mar 20 18:05:04.579541 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Mar 20 18:05:04.581715 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Mar 20 18:05:04.585279 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Mar 20 18:05:04.586681 jq[1459]: false
Mar 20 18:05:04.587214 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Mar 20 18:05:04.592245 systemd[1]: Starting systemd-logind.service - User Login Management...
Mar 20 18:05:04.594322 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Mar 20 18:05:04.594734 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Mar 20 18:05:04.598127 systemd[1]: Starting update-engine.service - Update Engine...
Mar 20 18:05:04.600070 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Mar 20 18:05:04.601998 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Mar 20 18:05:04.606473 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Mar 20 18:05:04.606645 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Mar 20 18:05:04.606884 dbus-daemon[1458]: [system] SELinux support is enabled
Mar 20 18:05:04.606884 systemd[1]: motdgen.service: Deactivated successfully.
Mar 20 18:05:04.607022 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Mar 20 18:05:04.608213 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Mar 20 18:05:04.613315 extend-filesystems[1460]: Found loop3
Mar 20 18:05:04.613315 extend-filesystems[1460]: Found loop4
Mar 20 18:05:04.613315 extend-filesystems[1460]: Found loop5
Mar 20 18:05:04.613315 extend-filesystems[1460]: Found vda
Mar 20 18:05:04.613315 extend-filesystems[1460]: Found vda1
Mar 20 18:05:04.613315 extend-filesystems[1460]: Found vda2
Mar 20 18:05:04.613315 extend-filesystems[1460]: Found vda3
Mar 20 18:05:04.613315 extend-filesystems[1460]: Found usr
Mar 20 18:05:04.613315 extend-filesystems[1460]: Found vda4
Mar 20 18:05:04.613315 extend-filesystems[1460]: Found vda6
Mar 20 18:05:04.613315 extend-filesystems[1460]: Found vda7
Mar 20 18:05:04.613315 extend-filesystems[1460]: Found vda9
Mar 20 18:05:04.613315 extend-filesystems[1460]: Checking size of /dev/vda9
Mar 20 18:05:04.647199 extend-filesystems[1460]: Resized partition /dev/vda9
Mar 20 18:05:04.613817 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Mar 20 18:05:04.648239 jq[1474]: true
Mar 20 18:05:04.615521 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Mar 20 18:05:04.648503 tar[1478]: linux-arm64/LICENSE
Mar 20 18:05:04.648503 tar[1478]: linux-arm64/helm
Mar 20 18:05:04.627075 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Mar 20 18:05:04.627122 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Mar 20 18:05:04.648877 jq[1480]: true
Mar 20 18:05:04.629452 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Mar 20 18:05:04.629467 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Mar 20 18:05:04.635458 (ntainerd)[1481]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Mar 20 18:05:04.653571 extend-filesystems[1492]: resize2fs 1.47.2 (1-Jan-2025)
Mar 20 18:05:04.658447 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
Mar 20 18:05:04.667394 update_engine[1471]: I20250320 18:05:04.667065 1471 main.cc:92] Flatcar Update Engine starting
Mar 20 18:05:04.672788 systemd[1]: Started update-engine.service - Update Engine.
Mar 20 18:05:04.675421 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Mar 20 18:05:04.676459 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (1366)
Mar 20 18:05:04.676589 update_engine[1471]: I20250320 18:05:04.675244 1471 update_check_scheduler.cc:74] Next update check in 2m33s
Mar 20 18:05:04.693402 kernel: EXT4-fs (vda9): resized filesystem to 1864699
Mar 20 18:05:04.710769 extend-filesystems[1492]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Mar 20 18:05:04.710769 extend-filesystems[1492]: old_desc_blocks = 1, new_desc_blocks = 1
Mar 20 18:05:04.710769 extend-filesystems[1492]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
Mar 20 18:05:04.726732 extend-filesystems[1460]: Resized filesystem in /dev/vda9
Mar 20 18:05:04.728337 bash[1510]: Updated "/home/core/.ssh/authorized_keys"
Mar 20 18:05:04.712196 systemd-logind[1469]: Watching system buttons on /dev/input/event0 (Power Button)
Mar 20 18:05:04.715738 systemd[1]: extend-filesystems.service: Deactivated successfully.
Mar 20 18:05:04.715931 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Mar 20 18:05:04.716041 systemd-logind[1469]: New seat seat0.
Mar 20 18:05:04.719978 systemd[1]: Started systemd-logind.service - User Login Management.
Mar 20 18:05:04.722868 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Mar 20 18:05:04.728705 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Mar 20 18:05:04.769229 locksmithd[1512]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Mar 20 18:05:04.857445 containerd[1481]: time="2025-03-20T18:05:04Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8
Mar 20 18:05:04.859755 containerd[1481]: time="2025-03-20T18:05:04.859714548Z" level=info msg="starting containerd" revision=88aa2f531d6c2922003cc7929e51daf1c14caa0a version=v2.0.1
Mar 20 18:05:04.872233 containerd[1481]: time="2025-03-20T18:05:04.872062148Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="5.88µs"
Mar 20 18:05:04.872233 containerd[1481]: time="2025-03-20T18:05:04.872094108Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1
Mar 20 18:05:04.872233 containerd[1481]: time="2025-03-20T18:05:04.872111788Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1
Mar 20 18:05:04.872571 containerd[1481]: time="2025-03-20T18:05:04.872545188Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1
Mar 20 18:05:04.872644 containerd[1481]: time="2025-03-20T18:05:04.872630068Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1
Mar 20 18:05:04.872982 containerd[1481]: time="2025-03-20T18:05:04.872758948Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Mar 20 18:05:04.872982 containerd[1481]: time="2025-03-20T18:05:04.872836788Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Mar 20 18:05:04.872982 containerd[1481]: time="2025-03-20T18:05:04.872851228Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Mar 20 18:05:04.873370 containerd[1481]: time="2025-03-20T18:05:04.873345148Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Mar 20 18:05:04.873549 containerd[1481]: time="2025-03-20T18:05:04.873527948Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Mar 20 18:05:04.873626 containerd[1481]: time="2025-03-20T18:05:04.873610748Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Mar 20 18:05:04.873682 containerd[1481]: time="2025-03-20T18:05:04.873668548Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1
Mar 20 18:05:04.873877 containerd[1481]: time="2025-03-20T18:05:04.873854788Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1
Mar 20 18:05:04.874275 containerd[1481]: time="2025-03-20T18:05:04.874252508Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Mar 20 18:05:04.874372 containerd[1481]: time="2025-03-20T18:05:04.874355748Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Mar 20 18:05:04.874701 containerd[1481]: time="2025-03-20T18:05:04.874505948Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1
Mar 20 18:05:04.874701 containerd[1481]: time="2025-03-20T18:05:04.874562108Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1
Mar 20 18:05:04.875000 containerd[1481]: time="2025-03-20T18:05:04.874978708Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1
Mar 20 18:05:04.875182 containerd[1481]: time="2025-03-20T18:05:04.875103948Z" level=info msg="metadata content store policy set" policy=shared
Mar 20 18:05:04.878520 containerd[1481]: time="2025-03-20T18:05:04.878491228Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1
Mar 20 18:05:04.878696 containerd[1481]: time="2025-03-20T18:05:04.878676388Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1
Mar 20 18:05:04.878824 containerd[1481]: time="2025-03-20T18:05:04.878806788Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1
Mar 20 18:05:04.880625 containerd[1481]: time="2025-03-20T18:05:04.878870428Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1
Mar 20 18:05:04.880625 containerd[1481]: time="2025-03-20T18:05:04.878887228Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1
Mar 20 18:05:04.880625 containerd[1481]: time="2025-03-20T18:05:04.878900788Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1
Mar 20 18:05:04.880625 containerd[1481]: time="2025-03-20T18:05:04.878920788Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1
Mar 20 18:05:04.880625 containerd[1481]: time="2025-03-20T18:05:04.878936388Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1
Mar 20 18:05:04.880625 containerd[1481]: time="2025-03-20T18:05:04.878948828Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1
Mar 20 18:05:04.880625 containerd[1481]: time="2025-03-20T18:05:04.878960788Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1
Mar 20 18:05:04.880625 containerd[1481]: time="2025-03-20T18:05:04.878971628Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1
Mar 20 18:05:04.880625 containerd[1481]: time="2025-03-20T18:05:04.878990228Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2
Mar 20 18:05:04.880625 containerd[1481]: time="2025-03-20T18:05:04.879095828Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1
Mar 20 18:05:04.880625 containerd[1481]: time="2025-03-20T18:05:04.879122828Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1
Mar 20 18:05:04.880625 containerd[1481]: time="2025-03-20T18:05:04.879134748Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1
Mar 20 18:05:04.880625 containerd[1481]: time="2025-03-20T18:05:04.879146188Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1
Mar 20 18:05:04.880625 containerd[1481]: time="2025-03-20T18:05:04.879157548Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1
Mar 20 18:05:04.880885 containerd[1481]: time="2025-03-20T18:05:04.879167548Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1
Mar 20 18:05:04.880885 containerd[1481]: time="2025-03-20T18:05:04.879178548Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1
Mar 20 18:05:04.880885 containerd[1481]: time="2025-03-20T18:05:04.879188748Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1
Mar 20 18:05:04.880885 containerd[1481]: time="2025-03-20T18:05:04.879206708Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1
Mar 20 18:05:04.880885 containerd[1481]: time="2025-03-20T18:05:04.879217308Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1
Mar 20 18:05:04.880885 containerd[1481]: time="2025-03-20T18:05:04.879228148Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1
Mar 20 18:05:04.880885 containerd[1481]: time="2025-03-20T18:05:04.879523428Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\""
Mar 20 18:05:04.880885 containerd[1481]: time="2025-03-20T18:05:04.879542668Z" level=info msg="Start snapshots syncer"
Mar 20 18:05:04.880885 containerd[1481]: time="2025-03-20T18:05:04.879570948Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1
Mar 20 18:05:04.881026 containerd[1481]: time="2025-03-20T18:05:04.879783308Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}"
Mar 20 18:05:04.881026 containerd[1481]: time="2025-03-20T18:05:04.879827988Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1
Mar 20 18:05:04.881120 containerd[1481]: time="2025-03-20T18:05:04.879899708Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1
Mar 20 18:05:04.881120 containerd[1481]: time="2025-03-20T18:05:04.880000468Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1
Mar 20 18:05:04.881120 containerd[1481]: time="2025-03-20T18:05:04.880023628Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1
Mar 20 18:05:04.881120 containerd[1481]: time="2025-03-20T18:05:04.880034988Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1
Mar 20 18:05:04.881120 containerd[1481]: time="2025-03-20T18:05:04.880044268Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1
Mar 20 18:05:04.881120 containerd[1481]: time="2025-03-20T18:05:04.880056068Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1
Mar 20 18:05:04.881120 containerd[1481]: time="2025-03-20T18:05:04.880065828Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1
Mar 20 18:05:04.881120 containerd[1481]: time="2025-03-20T18:05:04.880076148Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1
Mar 20 18:05:04.881120 containerd[1481]: time="2025-03-20T18:05:04.880099668Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1
Mar 20 18:05:04.881120 containerd[1481]: time="2025-03-20T18:05:04.880113628Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
Mar 20 18:05:04.881120 containerd[1481]: time="2025-03-20T18:05:04.880122508Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
Mar 20 18:05:04.881120 containerd[1481]: time="2025-03-20T18:05:04.880168788Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Mar 20 18:05:04.881120 containerd[1481]: time="2025-03-20T18:05:04.880184428Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Mar 20 18:05:04.881120 containerd[1481]: time="2025-03-20T18:05:04.880193308Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Mar 20 18:05:04.881336 containerd[1481]: time="2025-03-20T18:05:04.880202268Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Mar 20 18:05:04.881336 containerd[1481]: time="2025-03-20T18:05:04.880209748Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
Mar 20 18:05:04.881336 containerd[1481]: time="2025-03-20T18:05:04.880223828Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
Mar 20 18:05:04.881336 containerd[1481]: time="2025-03-20T18:05:04.880235028Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1
Mar 20 18:05:04.881336 containerd[1481]: time="2025-03-20T18:05:04.880313228Z" level=info msg="runtime interface created"
Mar 20 18:05:04.881336 containerd[1481]: time="2025-03-20T18:05:04.880318268Z" level=info msg="created NRI interface"
Mar 20 18:05:04.881336 containerd[1481]: time="2025-03-20T18:05:04.880327508Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
Mar 20 18:05:04.881336 containerd[1481]: time="2025-03-20T18:05:04.880338948Z" level=info msg="Connect containerd service"
Mar 20 18:05:04.881336 containerd[1481]: time="2025-03-20T18:05:04.880364428Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Mar 20 18:05:04.883120 containerd[1481]: time="2025-03-20T18:05:04.883094148Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Mar 20 18:05:04.924642 sshd_keygen[1477]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Mar 20 18:05:04.945776 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Mar 20 18:05:04.949563 systemd[1]: Starting issuegen.service - Generate /run/issue...
Mar 20 18:05:04.961491 systemd[1]: issuegen.service: Deactivated successfully.
Mar 20 18:05:04.961903 systemd[1]: Finished issuegen.service - Generate /run/issue.
Mar 20 18:05:04.965679 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Mar 20 18:05:04.984570 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Mar 20 18:05:04.987910 systemd[1]: Started getty@tty1.service - Getty on tty1.
Mar 20 18:05:04.992163 containerd[1481]: time="2025-03-20T18:05:04.989037948Z" level=info msg="Start subscribing containerd event"
Mar 20 18:05:04.992163 containerd[1481]: time="2025-03-20T18:05:04.989152468Z" level=info msg="Start recovering state"
Mar 20 18:05:04.992163 containerd[1481]: time="2025-03-20T18:05:04.989246988Z" level=info msg="Start event monitor"
Mar 20 18:05:04.992163 containerd[1481]: time="2025-03-20T18:05:04.989262348Z" level=info msg="Start cni network conf syncer for default"
Mar 20 18:05:04.992163 containerd[1481]: time="2025-03-20T18:05:04.989270948Z" level=info msg="Start streaming server"
Mar 20 18:05:04.992163 containerd[1481]: time="2025-03-20T18:05:04.989280428Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
Mar 20 18:05:04.992163 containerd[1481]: time="2025-03-20T18:05:04.989287908Z" level=info msg="runtime interface starting up..."
Mar 20 18:05:04.992163 containerd[1481]: time="2025-03-20T18:05:04.989294748Z" level=info msg="starting plugins..."
Mar 20 18:05:04.992163 containerd[1481]: time="2025-03-20T18:05:04.989307268Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
Mar 20 18:05:04.992163 containerd[1481]: time="2025-03-20T18:05:04.989478948Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Mar 20 18:05:04.992163 containerd[1481]: time="2025-03-20T18:05:04.989614668Z" level=info msg=serving... address=/run/containerd/containerd.sock
Mar 20 18:05:04.992163 containerd[1481]: time="2025-03-20T18:05:04.989688108Z" level=info msg="containerd successfully booted in 0.132623s"
Mar 20 18:05:04.992368 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0.
Mar 20 18:05:04.994108 systemd[1]: Reached target getty.target - Login Prompts.
Mar 20 18:05:04.995894 systemd[1]: Started containerd.service - containerd container runtime.
Mar 20 18:05:05.081401 tar[1478]: linux-arm64/README.md
Mar 20 18:05:05.097444 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Mar 20 18:05:05.516609 systemd-networkd[1404]: eth0: Gained IPv6LL
Mar 20 18:05:05.518861 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Mar 20 18:05:05.521686 systemd[1]: Reached target network-online.target - Network is Online.
Mar 20 18:05:05.524814 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent...
Mar 20 18:05:05.527332 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 20 18:05:05.543309 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Mar 20 18:05:05.555926 systemd[1]: coreos-metadata.service: Deactivated successfully.
Mar 20 18:05:05.557236 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent.
Mar 20 18:05:05.559416 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Mar 20 18:05:05.567798 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Mar 20 18:05:06.048604 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 20 18:05:06.050204 systemd[1]: Reached target multi-user.target - Multi-User System.
Mar 20 18:05:06.051980 (kubelet)[1586]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 20 18:05:06.053526 systemd[1]: Startup finished in 546ms (kernel) + 5.736s (initrd) + 3.182s (userspace) = 9.465s.
Mar 20 18:05:06.437040 kubelet[1586]: E0320 18:05:06.436937 1586 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 20 18:05:06.439424 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 20 18:05:06.439576 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 20 18:05:06.441464 systemd[1]: kubelet.service: Consumed 776ms CPU time, 250.5M memory peak.
Mar 20 18:05:10.107815 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Mar 20 18:05:10.108932 systemd[1]: Started sshd@0-10.0.0.103:22-10.0.0.1:43258.service - OpenSSH per-connection server daemon (10.0.0.1:43258).
Mar 20 18:05:10.178916 sshd[1600]: Accepted publickey for core from 10.0.0.1 port 43258 ssh2: RSA SHA256:5INvQ+AMoxUEAMpsPBJHVEmzjRKBHHiGaLqk69aAF2o
Mar 20 18:05:10.180554 sshd-session[1600]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 20 18:05:10.195552 systemd-logind[1469]: New session 1 of user core.
Mar 20 18:05:10.196472 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Mar 20 18:05:10.197431 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Mar 20 18:05:10.218915 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Mar 20 18:05:10.221921 systemd[1]: Starting user@500.service - User Manager for UID 500...
Mar 20 18:05:10.242242 (systemd)[1604]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Mar 20 18:05:10.244078 systemd-logind[1469]: New session c1 of user core.
Mar 20 18:05:10.347690 systemd[1604]: Queued start job for default target default.target.
Mar 20 18:05:10.357180 systemd[1604]: Created slice app.slice - User Application Slice.
Mar 20 18:05:10.357204 systemd[1604]: Reached target paths.target - Paths.
Mar 20 18:05:10.357234 systemd[1604]: Reached target timers.target - Timers.
Mar 20 18:05:10.358303 systemd[1604]: Starting dbus.socket - D-Bus User Message Bus Socket...
Mar 20 18:05:10.365995 systemd[1604]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Mar 20 18:05:10.366050 systemd[1604]: Reached target sockets.target - Sockets.
Mar 20 18:05:10.366084 systemd[1604]: Reached target basic.target - Basic System.
Mar 20 18:05:10.366110 systemd[1604]: Reached target default.target - Main User Target.
Mar 20 18:05:10.366132 systemd[1604]: Startup finished in 117ms.
Mar 20 18:05:10.366259 systemd[1]: Started user@500.service - User Manager for UID 500.
Mar 20 18:05:10.367528 systemd[1]: Started session-1.scope - Session 1 of User core.
Mar 20 18:05:10.432624 systemd[1]: Started sshd@1-10.0.0.103:22-10.0.0.1:43268.service - OpenSSH per-connection server daemon (10.0.0.1:43268).
Mar 20 18:05:10.478871 sshd[1615]: Accepted publickey for core from 10.0.0.1 port 43268 ssh2: RSA SHA256:5INvQ+AMoxUEAMpsPBJHVEmzjRKBHHiGaLqk69aAF2o
Mar 20 18:05:10.480087 sshd-session[1615]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 20 18:05:10.483760 systemd-logind[1469]: New session 2 of user core.
Mar 20 18:05:10.493586 systemd[1]: Started session-2.scope - Session 2 of User core.
Mar 20 18:05:10.544222 sshd[1617]: Connection closed by 10.0.0.1 port 43268
Mar 20 18:05:10.544118 sshd-session[1615]: pam_unix(sshd:session): session closed for user core
Mar 20 18:05:10.554227 systemd[1]: sshd@1-10.0.0.103:22-10.0.0.1:43268.service: Deactivated successfully.
Mar 20 18:05:10.556455 systemd[1]: session-2.scope: Deactivated successfully.
Mar 20 18:05:10.557092 systemd-logind[1469]: Session 2 logged out. Waiting for processes to exit.
Mar 20 18:05:10.558686 systemd[1]: Started sshd@2-10.0.0.103:22-10.0.0.1:43280.service - OpenSSH per-connection server daemon (10.0.0.1:43280).
Mar 20 18:05:10.559279 systemd-logind[1469]: Removed session 2.
Mar 20 18:05:10.617850 sshd[1622]: Accepted publickey for core from 10.0.0.1 port 43280 ssh2: RSA SHA256:5INvQ+AMoxUEAMpsPBJHVEmzjRKBHHiGaLqk69aAF2o
Mar 20 18:05:10.619005 sshd-session[1622]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 20 18:05:10.623277 systemd-logind[1469]: New session 3 of user core.
Mar 20 18:05:10.630567 systemd[1]: Started session-3.scope - Session 3 of User core.
Mar 20 18:05:10.679420 sshd[1625]: Connection closed by 10.0.0.1 port 43280
Mar 20 18:05:10.679206 sshd-session[1622]: pam_unix(sshd:session): session closed for user core
Mar 20 18:05:10.690411 systemd[1]: sshd@2-10.0.0.103:22-10.0.0.1:43280.service: Deactivated successfully.
Mar 20 18:05:10.691989 systemd[1]: session-3.scope: Deactivated successfully.
Mar 20 18:05:10.693278 systemd-logind[1469]: Session 3 logged out. Waiting for processes to exit.
Mar 20 18:05:10.694400 systemd[1]: Started sshd@3-10.0.0.103:22-10.0.0.1:43294.service - OpenSSH per-connection server daemon (10.0.0.1:43294).
Mar 20 18:05:10.695099 systemd-logind[1469]: Removed session 3.
Mar 20 18:05:10.749094 sshd[1630]: Accepted publickey for core from 10.0.0.1 port 43294 ssh2: RSA SHA256:5INvQ+AMoxUEAMpsPBJHVEmzjRKBHHiGaLqk69aAF2o
Mar 20 18:05:10.750192 sshd-session[1630]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 20 18:05:10.754297 systemd-logind[1469]: New session 4 of user core.
Mar 20 18:05:10.762539 systemd[1]: Started session-4.scope - Session 4 of User core.
Mar 20 18:05:10.813422 sshd[1633]: Connection closed by 10.0.0.1 port 43294
Mar 20 18:05:10.813514 sshd-session[1630]: pam_unix(sshd:session): session closed for user core
Mar 20 18:05:10.824534 systemd[1]: sshd@3-10.0.0.103:22-10.0.0.1:43294.service: Deactivated successfully.
Mar 20 18:05:10.825945 systemd[1]: session-4.scope: Deactivated successfully.
Mar 20 18:05:10.826627 systemd-logind[1469]: Session 4 logged out. Waiting for processes to exit.
Mar 20 18:05:10.828346 systemd[1]: Started sshd@4-10.0.0.103:22-10.0.0.1:43300.service - OpenSSH per-connection server daemon (10.0.0.1:43300).
Mar 20 18:05:10.829162 systemd-logind[1469]: Removed session 4.
Mar 20 18:05:10.878068 sshd[1638]: Accepted publickey for core from 10.0.0.1 port 43300 ssh2: RSA SHA256:5INvQ+AMoxUEAMpsPBJHVEmzjRKBHHiGaLqk69aAF2o
Mar 20 18:05:10.879497 sshd-session[1638]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 20 18:05:10.883271 systemd-logind[1469]: New session 5 of user core.
Mar 20 18:05:10.892510 systemd[1]: Started session-5.scope - Session 5 of User core.
Mar 20 18:05:10.950176 sudo[1642]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Mar 20 18:05:10.950498 sudo[1642]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Mar 20 18:05:10.970268 sudo[1642]: pam_unix(sudo:session): session closed for user root
Mar 20 18:05:10.971552 sshd[1641]: Connection closed by 10.0.0.1 port 43300
Mar 20 18:05:10.971892 sshd-session[1638]: pam_unix(sshd:session): session closed for user core
Mar 20 18:05:10.991050 systemd[1]: sshd@4-10.0.0.103:22-10.0.0.1:43300.service: Deactivated successfully.
Mar 20 18:05:10.992598 systemd[1]: session-5.scope: Deactivated successfully.
Mar 20 18:05:10.993920 systemd-logind[1469]: Session 5 logged out. Waiting for processes to exit.
Mar 20 18:05:10.995250 systemd[1]: Started sshd@5-10.0.0.103:22-10.0.0.1:43312.service - OpenSSH per-connection server daemon (10.0.0.1:43312).
Mar 20 18:05:10.996737 systemd-logind[1469]: Removed session 5.
Mar 20 18:05:11.045600 sshd[1647]: Accepted publickey for core from 10.0.0.1 port 43312 ssh2: RSA SHA256:5INvQ+AMoxUEAMpsPBJHVEmzjRKBHHiGaLqk69aAF2o
Mar 20 18:05:11.046703 sshd-session[1647]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 20 18:05:11.050579 systemd-logind[1469]: New session 6 of user core.
Mar 20 18:05:11.058513 systemd[1]: Started session-6.scope - Session 6 of User core.
Mar 20 18:05:11.108205 sudo[1652]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Mar 20 18:05:11.108494 sudo[1652]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Mar 20 18:05:11.111209 sudo[1652]: pam_unix(sudo:session): session closed for user root
Mar 20 18:05:11.115365 sudo[1651]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
Mar 20 18:05:11.115658 sudo[1651]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Mar 20 18:05:11.123151 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Mar 20 18:05:11.161786 augenrules[1674]: No rules
Mar 20 18:05:11.162311 systemd[1]: audit-rules.service: Deactivated successfully.
Mar 20 18:05:11.162700 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Mar 20 18:05:11.163509 sudo[1651]: pam_unix(sudo:session): session closed for user root
Mar 20 18:05:11.164553 sshd[1650]: Connection closed by 10.0.0.1 port 43312
Mar 20 18:05:11.165072 sshd-session[1647]: pam_unix(sshd:session): session closed for user core
Mar 20 18:05:11.174276 systemd[1]: sshd@5-10.0.0.103:22-10.0.0.1:43312.service: Deactivated successfully.
Mar 20 18:05:11.175824 systemd[1]: session-6.scope: Deactivated successfully.
Mar 20 18:05:11.178593 systemd-logind[1469]: Session 6 logged out. Waiting for processes to exit.
Mar 20 18:05:11.179652 systemd[1]: Started sshd@6-10.0.0.103:22-10.0.0.1:43314.service - OpenSSH per-connection server daemon (10.0.0.1:43314).
Mar 20 18:05:11.180349 systemd-logind[1469]: Removed session 6.
Mar 20 18:05:11.230927 sshd[1682]: Accepted publickey for core from 10.0.0.1 port 43314 ssh2: RSA SHA256:5INvQ+AMoxUEAMpsPBJHVEmzjRKBHHiGaLqk69aAF2o
Mar 20 18:05:11.231972 sshd-session[1682]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 20 18:05:11.236042 systemd-logind[1469]: New session 7 of user core.
Mar 20 18:05:11.245516 systemd[1]: Started session-7.scope - Session 7 of User core.
Mar 20 18:05:11.294937 sudo[1686]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Mar 20 18:05:11.295202 sudo[1686]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Mar 20 18:05:11.633129 systemd[1]: Starting docker.service - Docker Application Container Engine...
Mar 20 18:05:11.650745 (dockerd)[1706]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Mar 20 18:05:11.898136 dockerd[1706]: time="2025-03-20T18:05:11.897646948Z" level=info msg="Starting up"
Mar 20 18:05:11.899605 dockerd[1706]: time="2025-03-20T18:05:11.899577668Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider"
Mar 20 18:05:12.006911 dockerd[1706]: time="2025-03-20T18:05:12.006854948Z" level=info msg="Loading containers: start."
Mar 20 18:05:12.139418 kernel: Initializing XFRM netlink socket
Mar 20 18:05:12.193475 systemd-networkd[1404]: docker0: Link UP
Mar 20 18:05:12.265535 dockerd[1706]: time="2025-03-20T18:05:12.265483668Z" level=info msg="Loading containers: done."
Mar 20 18:05:12.279205 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2910685054-merged.mount: Deactivated successfully.
Mar 20 18:05:12.280335 dockerd[1706]: time="2025-03-20T18:05:12.280291388Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Mar 20 18:05:12.280436 dockerd[1706]: time="2025-03-20T18:05:12.280414468Z" level=info msg="Docker daemon" commit=c710b88579fcb5e0d53f96dcae976d79323b9166 containerd-snapshotter=false storage-driver=overlay2 version=27.4.1
Mar 20 18:05:12.280625 dockerd[1706]: time="2025-03-20T18:05:12.280592988Z" level=info msg="Daemon has completed initialization"
Mar 20 18:05:12.307293 dockerd[1706]: time="2025-03-20T18:05:12.307183588Z" level=info msg="API listen on /run/docker.sock"
Mar 20 18:05:12.307369 systemd[1]: Started docker.service - Docker Application Container Engine.
Mar 20 18:05:12.865071 containerd[1481]: time="2025-03-20T18:05:12.865031148Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.3\""
Mar 20 18:05:13.505318 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2261553870.mount: Deactivated successfully.
Mar 20 18:05:14.929755 containerd[1481]: time="2025-03-20T18:05:14.929695988Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 20 18:05:14.930659 containerd[1481]: time="2025-03-20T18:05:14.930266028Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.3: active requests=0, bytes read=26231952"
Mar 20 18:05:14.930828 containerd[1481]: time="2025-03-20T18:05:14.930802468Z" level=info msg="ImageCreate event name:\"sha256:25dd33975ea35cef2fa9b105778dbe3369de267e9ddf81427b7b82e98ff374e5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 20 18:05:14.933217 containerd[1481]: time="2025-03-20T18:05:14.933181548Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:279e45cf07e4f56925c3c5237179eb63616788426a96e94df5fedf728b18926e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 20 18:05:14.934394 containerd[1481]: time="2025-03-20T18:05:14.934218668Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.3\" with image id \"sha256:25dd33975ea35cef2fa9b105778dbe3369de267e9ddf81427b7b82e98ff374e5\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.3\", repo digest \"registry.k8s.io/kube-apiserver@sha256:279e45cf07e4f56925c3c5237179eb63616788426a96e94df5fedf728b18926e\", size \"26228750\" in 2.06914792s"
Mar 20 18:05:14.934394 containerd[1481]: time="2025-03-20T18:05:14.934254868Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.3\" returns image reference \"sha256:25dd33975ea35cef2fa9b105778dbe3369de267e9ddf81427b7b82e98ff374e5\""
Mar 20 18:05:14.935056 containerd[1481]: time="2025-03-20T18:05:14.934843348Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.3\""
Mar 20 18:05:16.561112 containerd[1481]: time="2025-03-20T18:05:16.561068868Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 20 18:05:16.562014 containerd[1481]: time="2025-03-20T18:05:16.561536668Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.3: active requests=0, bytes read=22530034"
Mar 20 18:05:16.562465 containerd[1481]: time="2025-03-20T18:05:16.562426148Z" level=info msg="ImageCreate event name:\"sha256:9e29b4db8c5cdf9970961ed3a47137ea71ad067643b8e5cccb58085f22a9b315\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 20 18:05:16.564902 containerd[1481]: time="2025-03-20T18:05:16.564836228Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:54456a96a1bbdc35dcc2e70fcc1355bf655af67694e40b650ac12e83521f6411\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 20 18:05:16.565836 containerd[1481]: time="2025-03-20T18:05:16.565810628Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.3\" with image id \"sha256:9e29b4db8c5cdf9970961ed3a47137ea71ad067643b8e5cccb58085f22a9b315\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.3\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:54456a96a1bbdc35dcc2e70fcc1355bf655af67694e40b650ac12e83521f6411\", size \"23970828\" in 1.63093604s"
Mar 20 18:05:16.565885 containerd[1481]: time="2025-03-20T18:05:16.565841508Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.3\" returns image reference \"sha256:9e29b4db8c5cdf9970961ed3a47137ea71ad067643b8e5cccb58085f22a9b315\""
Mar 20 18:05:16.566287 containerd[1481]: time="2025-03-20T18:05:16.566260988Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.3\""
Mar 20 18:05:16.689979 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Mar 20 18:05:16.691319 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 20 18:05:16.801756 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 20 18:05:16.805000 (kubelet)[1977]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 20 18:05:16.844168 kubelet[1977]: E0320 18:05:16.843794 1977 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 20 18:05:16.846968 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 20 18:05:16.847130 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 20 18:05:16.847779 systemd[1]: kubelet.service: Consumed 139ms CPU time, 105.5M memory peak.
Mar 20 18:05:18.195642 containerd[1481]: time="2025-03-20T18:05:18.195442628Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 20 18:05:18.196522 containerd[1481]: time="2025-03-20T18:05:18.196302668Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.3: active requests=0, bytes read=17482563"
Mar 20 18:05:18.197301 containerd[1481]: time="2025-03-20T18:05:18.197243628Z" level=info msg="ImageCreate event name:\"sha256:6b8dfebcc65dc9d4765a91d2923c304e13beca7111c57dfc99f1c3267a6e9f30\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 20 18:05:18.199718 containerd[1481]: time="2025-03-20T18:05:18.199675428Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:aafae2e3a8d65bc6dc3a0c6095c24bc72b1ff608e1417f0f5e860ce4a61c27df\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 20 18:05:18.200883 containerd[1481]: time="2025-03-20T18:05:18.200769708Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.3\" with image id \"sha256:6b8dfebcc65dc9d4765a91d2923c304e13beca7111c57dfc99f1c3267a6e9f30\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.3\", repo digest \"registry.k8s.io/kube-scheduler@sha256:aafae2e3a8d65bc6dc3a0c6095c24bc72b1ff608e1417f0f5e860ce4a61c27df\", size \"18923375\" in 1.63447376s"
Mar 20 18:05:18.200883 containerd[1481]: time="2025-03-20T18:05:18.200801788Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.3\" returns image reference \"sha256:6b8dfebcc65dc9d4765a91d2923c304e13beca7111c57dfc99f1c3267a6e9f30\""
Mar 20 18:05:18.201266 containerd[1481]: time="2025-03-20T18:05:18.201162468Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.3\""
Mar 20 18:05:19.345500 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3345168962.mount: Deactivated successfully.
Mar 20 18:05:19.556318 containerd[1481]: time="2025-03-20T18:05:19.556270668Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 20 18:05:19.557080 containerd[1481]: time="2025-03-20T18:05:19.556872428Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.3: active requests=0, bytes read=27370097"
Mar 20 18:05:19.557974 containerd[1481]: time="2025-03-20T18:05:19.557927548Z" level=info msg="ImageCreate event name:\"sha256:2a637602f3e88e76046aa1a75bccdb37b25b2fcba99a380412e2c27ccd55c547\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 20 18:05:19.559709 containerd[1481]: time="2025-03-20T18:05:19.559681868Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:5015269547a0b7dd2c062758e9a64467b58978ff2502cad4c3f5cdf4aa554ad3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 20 18:05:19.560239 containerd[1481]: time="2025-03-20T18:05:19.560207148Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.3\" with image id \"sha256:2a637602f3e88e76046aa1a75bccdb37b25b2fcba99a380412e2c27ccd55c547\", repo tag \"registry.k8s.io/kube-proxy:v1.32.3\", repo digest \"registry.k8s.io/kube-proxy@sha256:5015269547a0b7dd2c062758e9a64467b58978ff2502cad4c3f5cdf4aa554ad3\", size \"27369114\" in 1.35900796s"
Mar 20 18:05:19.560304 containerd[1481]: time="2025-03-20T18:05:19.560245308Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.3\" returns image reference \"sha256:2a637602f3e88e76046aa1a75bccdb37b25b2fcba99a380412e2c27ccd55c547\""
Mar 20 18:05:19.560944 containerd[1481]: time="2025-03-20T18:05:19.560759148Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\""
Mar 20 18:05:20.119524 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1641429729.mount: Deactivated successfully.
Mar 20 18:05:21.335704 containerd[1481]: time="2025-03-20T18:05:21.335442348Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 20 18:05:21.336604 containerd[1481]: time="2025-03-20T18:05:21.336484388Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=16951624"
Mar 20 18:05:21.337310 containerd[1481]: time="2025-03-20T18:05:21.337249468Z" level=info msg="ImageCreate event name:\"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 20 18:05:21.339838 containerd[1481]: time="2025-03-20T18:05:21.339757268Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 20 18:05:21.341018 containerd[1481]: time="2025-03-20T18:05:21.340985068Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"16948420\" in 1.78019596s"
Mar 20 18:05:21.341341 containerd[1481]: time="2025-03-20T18:05:21.341132708Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\""
Mar 20 18:05:21.341640 containerd[1481]: time="2025-03-20T18:05:21.341617628Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Mar 20 18:05:21.733239 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1671784999.mount: Deactivated successfully.
Mar 20 18:05:21.737871 containerd[1481]: time="2025-03-20T18:05:21.737831068Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Mar 20 18:05:21.738575 containerd[1481]: time="2025-03-20T18:05:21.738530388Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268705"
Mar 20 18:05:21.739260 containerd[1481]: time="2025-03-20T18:05:21.739138548Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Mar 20 18:05:21.740858 containerd[1481]: time="2025-03-20T18:05:21.740809748Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Mar 20 18:05:21.741832 containerd[1481]: time="2025-03-20T18:05:21.741791788Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 400.12936ms"
Mar 20 18:05:21.741888 containerd[1481]: time="2025-03-20T18:05:21.741831828Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\""
Mar 20 18:05:21.742452 containerd[1481]: time="2025-03-20T18:05:21.742418028Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\""
Mar 20 18:05:22.308293 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2461714291.mount: Deactivated successfully.
Mar 20 18:05:24.903746 containerd[1481]: time="2025-03-20T18:05:24.903690028Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 20 18:05:24.904404 containerd[1481]: time="2025-03-20T18:05:24.904333068Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=67812431"
Mar 20 18:05:24.904919 containerd[1481]: time="2025-03-20T18:05:24.904882268Z" level=info msg="ImageCreate event name:\"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 20 18:05:24.907704 containerd[1481]: time="2025-03-20T18:05:24.907673948Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 20 18:05:24.908901 containerd[1481]: time="2025-03-20T18:05:24.908845228Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"67941650\" in 3.1663862s"
Mar 20 18:05:24.908901 containerd[1481]: time="2025-03-20T18:05:24.908879268Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\""
Mar 20 18:05:27.097539 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Mar 20 18:05:27.098962 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 20 18:05:27.242484 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 20 18:05:27.251690 (kubelet)[2140]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 20 18:05:27.286135 kubelet[2140]: E0320 18:05:27.286060 2140 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 20 18:05:27.288630 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 20 18:05:27.288767 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 20 18:05:27.290449 systemd[1]: kubelet.service: Consumed 128ms CPU time, 104.3M memory peak.
Mar 20 18:05:29.757607 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 20 18:05:29.757742 systemd[1]: kubelet.service: Consumed 128ms CPU time, 104.3M memory peak.
Mar 20 18:05:29.759591 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 20 18:05:29.776933 systemd[1]: Reload requested from client PID 2156 ('systemctl') (unit session-7.scope)...
Mar 20 18:05:29.776949 systemd[1]: Reloading...
Mar 20 18:05:29.852422 zram_generator::config[2201]: No configuration found.
Mar 20 18:05:30.035746 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Mar 20 18:05:30.106806 systemd[1]: Reloading finished in 329 ms.
Mar 20 18:05:30.151613 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 20 18:05:30.153571 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 20 18:05:30.154900 systemd[1]: kubelet.service: Deactivated successfully.
Mar 20 18:05:30.155168 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 20 18:05:30.155203 systemd[1]: kubelet.service: Consumed 83ms CPU time, 90.3M memory peak.
Mar 20 18:05:30.156597 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 20 18:05:30.275114 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 20 18:05:30.278714 (kubelet)[2247]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Mar 20 18:05:30.313186 kubelet[2247]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 20 18:05:30.313186 kubelet[2247]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Mar 20 18:05:30.313186 kubelet[2247]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 20 18:05:30.313473 kubelet[2247]: I0320 18:05:30.313183 2247 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Mar 20 18:05:31.093375 kubelet[2247]: I0320 18:05:31.093335 2247 server.go:520] "Kubelet version" kubeletVersion="v1.32.0"
Mar 20 18:05:31.093375 kubelet[2247]: I0320 18:05:31.093425 2247 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Mar 20 18:05:31.093375 kubelet[2247]: I0320 18:05:31.093696 2247 server.go:954] "Client rotation is on, will bootstrap in background"
Mar 20 18:05:31.135311 kubelet[2247]: E0320 18:05:31.135275 2247 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.103:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.103:6443: connect: connection refused" logger="UnhandledError"
Mar 20 18:05:31.136470 kubelet[2247]: I0320 18:05:31.136439 2247 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Mar 20 18:05:31.147713 kubelet[2247]: I0320 18:05:31.147687 2247 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Mar 20 18:05:31.150447 kubelet[2247]: I0320 18:05:31.150432 2247 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Mar 20 18:05:31.150648 kubelet[2247]: I0320 18:05:31.150625 2247 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Mar 20 18:05:31.150796 kubelet[2247]: I0320 18:05:31.150649 2247 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Mar 20 18:05:31.150884 kubelet[2247]: I0320 18:05:31.150868 2247 topology_manager.go:138] "Creating topology manager with none policy"
Mar 20 18:05:31.150884 kubelet[2247]: I0320 18:05:31.150876 2247 container_manager_linux.go:304] "Creating device plugin manager" Mar 20 18:05:31.151062 kubelet[2247]: I0320 18:05:31.151048 2247 state_mem.go:36] "Initialized new in-memory state store" Mar 20 18:05:31.153433 kubelet[2247]: I0320 18:05:31.153402 2247 kubelet.go:446] "Attempting to sync node with API server" Mar 20 18:05:31.153433 kubelet[2247]: I0320 18:05:31.153427 2247 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Mar 20 18:05:31.153488 kubelet[2247]: I0320 18:05:31.153449 2247 kubelet.go:352] "Adding apiserver pod source" Mar 20 18:05:31.153488 kubelet[2247]: I0320 18:05:31.153463 2247 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Mar 20 18:05:31.156227 kubelet[2247]: I0320 18:05:31.156114 2247 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.1" apiVersion="v1" Mar 20 18:05:31.156380 kubelet[2247]: W0320 18:05:31.156316 2247 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.103:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.103:6443: connect: connection refused Mar 20 18:05:31.156451 kubelet[2247]: E0320 18:05:31.156376 2247 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.103:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.103:6443: connect: connection refused" logger="UnhandledError" Mar 20 18:05:31.157170 kubelet[2247]: W0320 18:05:31.156638 2247 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.103:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.103:6443: connect: connection refused Mar 20 
18:05:31.157170 kubelet[2247]: E0320 18:05:31.156689 2247 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.103:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.103:6443: connect: connection refused" logger="UnhandledError" Mar 20 18:05:31.157170 kubelet[2247]: I0320 18:05:31.156866 2247 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Mar 20 18:05:31.157170 kubelet[2247]: W0320 18:05:31.156979 2247 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Mar 20 18:05:31.157884 kubelet[2247]: I0320 18:05:31.157855 2247 watchdog_linux.go:99] "Systemd watchdog is not enabled" Mar 20 18:05:31.157935 kubelet[2247]: I0320 18:05:31.157893 2247 server.go:1287] "Started kubelet" Mar 20 18:05:31.158417 kubelet[2247]: I0320 18:05:31.157988 2247 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Mar 20 18:05:31.159021 kubelet[2247]: I0320 18:05:31.158999 2247 server.go:490] "Adding debug handlers to kubelet server" Mar 20 18:05:31.159136 kubelet[2247]: I0320 18:05:31.159071 2247 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Mar 20 18:05:31.159365 kubelet[2247]: I0320 18:05:31.159337 2247 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Mar 20 18:05:31.161595 kubelet[2247]: I0320 18:05:31.161370 2247 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Mar 20 18:05:31.162928 kubelet[2247]: I0320 18:05:31.162894 2247 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Mar 20 18:05:31.163620 kubelet[2247]: E0320 18:05:31.163311 2247 event.go:368] "Unable to 
write event (may retry after sleeping)" err="Post \"https://10.0.0.103:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.103:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.182e94ff8df05d24 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-03-20 18:05:31.157871908 +0000 UTC m=+0.876087161,LastTimestamp:2025-03-20 18:05:31.157871908 +0000 UTC m=+0.876087161,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Mar 20 18:05:31.163944 kubelet[2247]: E0320 18:05:31.163903 2247 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 20 18:05:31.164009 kubelet[2247]: I0320 18:05:31.163950 2247 volume_manager.go:297] "Starting Kubelet Volume Manager" Mar 20 18:05:31.164156 kubelet[2247]: I0320 18:05:31.164129 2247 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Mar 20 18:05:31.164156 kubelet[2247]: E0320 18:05:31.164138 2247 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.103:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.103:6443: connect: connection refused" interval="200ms" Mar 20 18:05:31.164245 kubelet[2247]: I0320 18:05:31.164188 2247 reconciler.go:26] "Reconciler: start to sync state" Mar 20 18:05:31.164650 kubelet[2247]: E0320 18:05:31.164618 2247 kubelet.go:1561] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Mar 20 18:05:31.164650 kubelet[2247]: W0320 18:05:31.164583 2247 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.103:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.103:6443: connect: connection refused Mar 20 18:05:31.164730 kubelet[2247]: E0320 18:05:31.164666 2247 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.103:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.103:6443: connect: connection refused" logger="UnhandledError" Mar 20 18:05:31.166285 kubelet[2247]: I0320 18:05:31.166254 2247 factory.go:221] Registration of the containerd container factory successfully Mar 20 18:05:31.166285 kubelet[2247]: I0320 18:05:31.166283 2247 factory.go:221] Registration of the systemd container factory successfully Mar 20 18:05:31.166427 kubelet[2247]: I0320 18:05:31.166407 2247 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Mar 20 18:05:31.176057 kubelet[2247]: I0320 18:05:31.176034 2247 cpu_manager.go:221] "Starting CPU manager" policy="none" Mar 20 18:05:31.176057 kubelet[2247]: I0320 18:05:31.176053 2247 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Mar 20 18:05:31.176169 kubelet[2247]: I0320 18:05:31.176071 2247 state_mem.go:36] "Initialized new in-memory state store" Mar 20 18:05:31.178957 kubelet[2247]: I0320 18:05:31.178838 2247 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Mar 20 18:05:31.179832 kubelet[2247]: I0320 18:05:31.179788 2247 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Mar 20 18:05:31.179832 kubelet[2247]: I0320 18:05:31.179829 2247 status_manager.go:227] "Starting to sync pod status with apiserver" Mar 20 18:05:31.179906 kubelet[2247]: I0320 18:05:31.179850 2247 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Mar 20 18:05:31.180120 kubelet[2247]: I0320 18:05:31.180030 2247 kubelet.go:2388] "Starting kubelet main sync loop" Mar 20 18:05:31.180120 kubelet[2247]: E0320 18:05:31.180076 2247 kubelet.go:2412] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Mar 20 18:05:31.180770 kubelet[2247]: W0320 18:05:31.180464 2247 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.103:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.103:6443: connect: connection refused Mar 20 18:05:31.180770 kubelet[2247]: E0320 18:05:31.180508 2247 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.103:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.103:6443: connect: connection refused" logger="UnhandledError" Mar 20 18:05:31.244122 kubelet[2247]: I0320 18:05:31.244018 2247 policy_none.go:49] "None policy: Start" Mar 20 18:05:31.244122 kubelet[2247]: I0320 18:05:31.244052 2247 memory_manager.go:186] "Starting memorymanager" policy="None" Mar 20 18:05:31.244122 kubelet[2247]: I0320 18:05:31.244064 2247 state_mem.go:35] "Initializing new in-memory state store" Mar 20 18:05:31.250011 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Mar 20 18:05:31.263609 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. 
Mar 20 18:05:31.264102 kubelet[2247]: E0320 18:05:31.264072 2247 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found"
Mar 20 18:05:31.267603 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Mar 20 18:05:31.280395 kubelet[2247]: E0320 18:05:31.280330 2247 kubelet.go:2412] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Mar 20 18:05:31.284262 kubelet[2247]: I0320 18:05:31.284206 2247 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Mar 20 18:05:31.284740 kubelet[2247]: I0320 18:05:31.284412 2247 eviction_manager.go:189] "Eviction manager: starting control loop"
Mar 20 18:05:31.284740 kubelet[2247]: I0320 18:05:31.284432 2247 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Mar 20 18:05:31.284740 kubelet[2247]: I0320 18:05:31.284657 2247 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Mar 20 18:05:31.285504 kubelet[2247]: E0320 18:05:31.285478 2247 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Mar 20 18:05:31.285558 kubelet[2247]: E0320 18:05:31.285519 2247 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Mar 20 18:05:31.364779 kubelet[2247]: E0320 18:05:31.364679 2247 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.103:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.103:6443: connect: connection refused" interval="400ms"
Mar 20 18:05:31.385885 kubelet[2247]: I0320 18:05:31.385836 2247 kubelet_node_status.go:76] "Attempting to register node" node="localhost"
Mar 20 18:05:31.386364 kubelet[2247]: E0320 18:05:31.386267 2247 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://10.0.0.103:6443/api/v1/nodes\": dial tcp 10.0.0.103:6443: connect: connection refused" node="localhost"
Mar 20 18:05:31.488084 systemd[1]: Created slice kubepods-burstable-podcbbb394ff48414687df77e1bc213eeb5.slice - libcontainer container kubepods-burstable-podcbbb394ff48414687df77e1bc213eeb5.slice.
Mar 20 18:05:31.508824 kubelet[2247]: E0320 18:05:31.508619 2247 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Mar 20 18:05:31.511404 systemd[1]: Created slice kubepods-burstable-pod3700e556aa2777679a324159272023f1.slice - libcontainer container kubepods-burstable-pod3700e556aa2777679a324159272023f1.slice.
Mar 20 18:05:31.522430 kubelet[2247]: E0320 18:05:31.522408 2247 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Mar 20 18:05:31.525024 systemd[1]: Created slice kubepods-burstable-pod452527ed0e6820066705aca8bcc21a74.slice - libcontainer container kubepods-burstable-pod452527ed0e6820066705aca8bcc21a74.slice.
Mar 20 18:05:31.526367 kubelet[2247]: E0320 18:05:31.526342 2247 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Mar 20 18:05:31.565683 kubelet[2247]: I0320 18:05:31.565662 2247 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/452527ed0e6820066705aca8bcc21a74-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"452527ed0e6820066705aca8bcc21a74\") " pod="kube-system/kube-apiserver-localhost"
Mar 20 18:05:31.565762 kubelet[2247]: I0320 18:05:31.565693 2247 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/cbbb394ff48414687df77e1bc213eeb5-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"cbbb394ff48414687df77e1bc213eeb5\") " pod="kube-system/kube-controller-manager-localhost"
Mar 20 18:05:31.565762 kubelet[2247]: I0320 18:05:31.565712 2247 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/cbbb394ff48414687df77e1bc213eeb5-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"cbbb394ff48414687df77e1bc213eeb5\") " pod="kube-system/kube-controller-manager-localhost"
Mar 20 18:05:31.565762 kubelet[2247]: I0320 18:05:31.565727 2247 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/cbbb394ff48414687df77e1bc213eeb5-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"cbbb394ff48414687df77e1bc213eeb5\") " pod="kube-system/kube-controller-manager-localhost"
Mar 20 18:05:31.565762 kubelet[2247]: I0320 18:05:31.565744 2247 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/452527ed0e6820066705aca8bcc21a74-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"452527ed0e6820066705aca8bcc21a74\") " pod="kube-system/kube-apiserver-localhost"
Mar 20 18:05:31.565762 kubelet[2247]: I0320 18:05:31.565757 2247 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/cbbb394ff48414687df77e1bc213eeb5-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"cbbb394ff48414687df77e1bc213eeb5\") " pod="kube-system/kube-controller-manager-localhost"
Mar 20 18:05:31.565867 kubelet[2247]: I0320 18:05:31.565771 2247 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/cbbb394ff48414687df77e1bc213eeb5-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"cbbb394ff48414687df77e1bc213eeb5\") " pod="kube-system/kube-controller-manager-localhost"
Mar 20 18:05:31.565867 kubelet[2247]: I0320 18:05:31.565788 2247 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/3700e556aa2777679a324159272023f1-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"3700e556aa2777679a324159272023f1\") " pod="kube-system/kube-scheduler-localhost"
Mar 20 18:05:31.565867 kubelet[2247]: I0320 18:05:31.565803 2247 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/452527ed0e6820066705aca8bcc21a74-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"452527ed0e6820066705aca8bcc21a74\") " pod="kube-system/kube-apiserver-localhost"
Mar 20 18:05:31.587352 kubelet[2247]: I0320 18:05:31.587314 2247 kubelet_node_status.go:76] "Attempting to register node" node="localhost"
Mar 20 18:05:31.587739 kubelet[2247]: E0320 18:05:31.587712 2247 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://10.0.0.103:6443/api/v1/nodes\": dial tcp 10.0.0.103:6443: connect: connection refused" node="localhost"
Mar 20 18:05:31.766181 kubelet[2247]: E0320 18:05:31.766063 2247 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.103:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.103:6443: connect: connection refused" interval="800ms"
Mar 20 18:05:31.810004 containerd[1481]: time="2025-03-20T18:05:31.809948908Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:cbbb394ff48414687df77e1bc213eeb5,Namespace:kube-system,Attempt:0,}"
Mar 20 18:05:31.823878 containerd[1481]: time="2025-03-20T18:05:31.823736428Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:3700e556aa2777679a324159272023f1,Namespace:kube-system,Attempt:0,}"
Mar 20 18:05:31.827371 containerd[1481]: time="2025-03-20T18:05:31.827337348Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:452527ed0e6820066705aca8bcc21a74,Namespace:kube-system,Attempt:0,}"
Mar 20 18:05:31.942286 containerd[1481]: time="2025-03-20T18:05:31.942203308Z" level=info msg="connecting to shim 8865ce2bddf0930f43b9f4200054e9dae72c96f2ac2e2671daaef32548c36055" address="unix:///run/containerd/s/671ffd88eb7a544213ae6b7ebb42599239ba32607132affdd1e39c8eb1e334f9" namespace=k8s.io protocol=ttrpc version=3
Mar 20 18:05:31.945427 containerd[1481]: time="2025-03-20T18:05:31.945376028Z" level=info msg="connecting to shim e0327aa8ca9a0005a95517a8d66c9726159404a12b5a759e92bb9af917f7517c" address="unix:///run/containerd/s/dd1b4107a30a15e753a6765f94476be967db2577b9bb1a19148e35529b3f371d" namespace=k8s.io protocol=ttrpc version=3
Mar 20 18:05:31.946583 containerd[1481]: time="2025-03-20T18:05:31.946556828Z" level=info msg="connecting to shim 210f5989b2ff642fc6f0780844c92a0465ddcd1164c698852e3609775bba370a" address="unix:///run/containerd/s/711c828c4494f2139ccad460dbb61c1cdff96d0d2f2514b8441a3ea9bb8d4379" namespace=k8s.io protocol=ttrpc version=3
Mar 20 18:05:31.973613 systemd[1]: Started cri-containerd-e0327aa8ca9a0005a95517a8d66c9726159404a12b5a759e92bb9af917f7517c.scope - libcontainer container e0327aa8ca9a0005a95517a8d66c9726159404a12b5a759e92bb9af917f7517c.
Mar 20 18:05:31.976955 systemd[1]: Started cri-containerd-210f5989b2ff642fc6f0780844c92a0465ddcd1164c698852e3609775bba370a.scope - libcontainer container 210f5989b2ff642fc6f0780844c92a0465ddcd1164c698852e3609775bba370a.
Mar 20 18:05:31.978647 systemd[1]: Started cri-containerd-8865ce2bddf0930f43b9f4200054e9dae72c96f2ac2e2671daaef32548c36055.scope - libcontainer container 8865ce2bddf0930f43b9f4200054e9dae72c96f2ac2e2671daaef32548c36055.
Mar 20 18:05:31.989271 kubelet[2247]: I0320 18:05:31.989216 2247 kubelet_node_status.go:76] "Attempting to register node" node="localhost"
Mar 20 18:05:31.989676 kubelet[2247]: E0320 18:05:31.989633 2247 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://10.0.0.103:6443/api/v1/nodes\": dial tcp 10.0.0.103:6443: connect: connection refused" node="localhost"
Mar 20 18:05:32.012049 containerd[1481]: time="2025-03-20T18:05:32.011911908Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:cbbb394ff48414687df77e1bc213eeb5,Namespace:kube-system,Attempt:0,} returns sandbox id \"e0327aa8ca9a0005a95517a8d66c9726159404a12b5a759e92bb9af917f7517c\""
Mar 20 18:05:32.012918 containerd[1481]: time="2025-03-20T18:05:32.012878228Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:452527ed0e6820066705aca8bcc21a74,Namespace:kube-system,Attempt:0,} returns sandbox id \"210f5989b2ff642fc6f0780844c92a0465ddcd1164c698852e3609775bba370a\""
Mar 20 18:05:32.018708 containerd[1481]: time="2025-03-20T18:05:32.018629628Z" level=info msg="CreateContainer within sandbox \"210f5989b2ff642fc6f0780844c92a0465ddcd1164c698852e3609775bba370a\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Mar 20 18:05:32.019415 containerd[1481]: time="2025-03-20T18:05:32.019068188Z" level=info msg="CreateContainer within sandbox \"e0327aa8ca9a0005a95517a8d66c9726159404a12b5a759e92bb9af917f7517c\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Mar 20 18:05:32.021012 containerd[1481]: time="2025-03-20T18:05:32.020979148Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:3700e556aa2777679a324159272023f1,Namespace:kube-system,Attempt:0,} returns sandbox id \"8865ce2bddf0930f43b9f4200054e9dae72c96f2ac2e2671daaef32548c36055\""
Mar 20 18:05:32.023049 containerd[1481]: time="2025-03-20T18:05:32.022833028Z" level=info msg="CreateContainer within sandbox \"8865ce2bddf0930f43b9f4200054e9dae72c96f2ac2e2671daaef32548c36055\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Mar 20 18:05:32.028449 containerd[1481]: time="2025-03-20T18:05:32.028419628Z" level=info msg="Container f9e73f66d43f4d518c85dad7acd8991fe6bfaf170e36118bd244d7ddade2103b: CDI devices from CRI Config.CDIDevices: []"
Mar 20 18:05:32.030446 containerd[1481]: time="2025-03-20T18:05:32.030197748Z" level=info msg="Container a84fec3c19f6258873f46fbb1c8e06c1245af5f946265e8874c691aa076e4d86: CDI devices from CRI Config.CDIDevices: []"
Mar 20 18:05:32.034526 containerd[1481]: time="2025-03-20T18:05:32.034493548Z" level=info msg="Container 89365caed43a0fd2aaee0301e9a5bd53b0246b06ad88f1ed915d4e0666602f51: CDI devices from CRI Config.CDIDevices: []"
Mar 20 18:05:32.036578 containerd[1481]: time="2025-03-20T18:05:32.036487348Z" level=info msg="CreateContainer within sandbox \"e0327aa8ca9a0005a95517a8d66c9726159404a12b5a759e92bb9af917f7517c\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"f9e73f66d43f4d518c85dad7acd8991fe6bfaf170e36118bd244d7ddade2103b\""
Mar 20 18:05:32.037083 containerd[1481]: time="2025-03-20T18:05:32.037049628Z" level=info msg="StartContainer for \"f9e73f66d43f4d518c85dad7acd8991fe6bfaf170e36118bd244d7ddade2103b\""
Mar 20 18:05:32.038086 containerd[1481]: time="2025-03-20T18:05:32.038055348Z" level=info msg="connecting to shim f9e73f66d43f4d518c85dad7acd8991fe6bfaf170e36118bd244d7ddade2103b" address="unix:///run/containerd/s/dd1b4107a30a15e753a6765f94476be967db2577b9bb1a19148e35529b3f371d" protocol=ttrpc version=3
Mar 20 18:05:32.042720 containerd[1481]: time="2025-03-20T18:05:32.042660668Z" level=info msg="CreateContainer within sandbox \"210f5989b2ff642fc6f0780844c92a0465ddcd1164c698852e3609775bba370a\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"a84fec3c19f6258873f46fbb1c8e06c1245af5f946265e8874c691aa076e4d86\""
Mar 20 18:05:32.044050 containerd[1481]: time="2025-03-20T18:05:32.043222588Z" level=info msg="StartContainer for \"a84fec3c19f6258873f46fbb1c8e06c1245af5f946265e8874c691aa076e4d86\""
Mar 20 18:05:32.044276 containerd[1481]: time="2025-03-20T18:05:32.044247188Z" level=info msg="connecting to shim a84fec3c19f6258873f46fbb1c8e06c1245af5f946265e8874c691aa076e4d86" address="unix:///run/containerd/s/711c828c4494f2139ccad460dbb61c1cdff96d0d2f2514b8441a3ea9bb8d4379" protocol=ttrpc version=3
Mar 20 18:05:32.046298 containerd[1481]: time="2025-03-20T18:05:32.046252948Z" level=info msg="CreateContainer within sandbox \"8865ce2bddf0930f43b9f4200054e9dae72c96f2ac2e2671daaef32548c36055\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"89365caed43a0fd2aaee0301e9a5bd53b0246b06ad88f1ed915d4e0666602f51\""
Mar 20 18:05:32.046709 containerd[1481]: time="2025-03-20T18:05:32.046673548Z" level=info msg="StartContainer for \"89365caed43a0fd2aaee0301e9a5bd53b0246b06ad88f1ed915d4e0666602f51\""
Mar 20 18:05:32.047688 containerd[1481]: time="2025-03-20T18:05:32.047653548Z" level=info msg="connecting to shim 89365caed43a0fd2aaee0301e9a5bd53b0246b06ad88f1ed915d4e0666602f51" address="unix:///run/containerd/s/671ffd88eb7a544213ae6b7ebb42599239ba32607132affdd1e39c8eb1e334f9" protocol=ttrpc version=3
Mar 20 18:05:32.060538 systemd[1]: Started cri-containerd-f9e73f66d43f4d518c85dad7acd8991fe6bfaf170e36118bd244d7ddade2103b.scope - libcontainer container f9e73f66d43f4d518c85dad7acd8991fe6bfaf170e36118bd244d7ddade2103b.
Mar 20 18:05:32.064508 systemd[1]: Started cri-containerd-89365caed43a0fd2aaee0301e9a5bd53b0246b06ad88f1ed915d4e0666602f51.scope - libcontainer container 89365caed43a0fd2aaee0301e9a5bd53b0246b06ad88f1ed915d4e0666602f51.
Mar 20 18:05:32.065742 systemd[1]: Started cri-containerd-a84fec3c19f6258873f46fbb1c8e06c1245af5f946265e8874c691aa076e4d86.scope - libcontainer container a84fec3c19f6258873f46fbb1c8e06c1245af5f946265e8874c691aa076e4d86.
Mar 20 18:05:32.067619 kubelet[2247]: W0320 18:05:32.067486 2247 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.103:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.103:6443: connect: connection refused
Mar 20 18:05:32.067619 kubelet[2247]: E0320 18:05:32.067576 2247 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.103:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.103:6443: connect: connection refused" logger="UnhandledError"
Mar 20 18:05:32.106798 containerd[1481]: time="2025-03-20T18:05:32.106764668Z" level=info msg="StartContainer for \"f9e73f66d43f4d518c85dad7acd8991fe6bfaf170e36118bd244d7ddade2103b\" returns successfully"
Mar 20 18:05:32.111647 containerd[1481]: time="2025-03-20T18:05:32.111613668Z" level=info msg="StartContainer for \"89365caed43a0fd2aaee0301e9a5bd53b0246b06ad88f1ed915d4e0666602f51\" returns successfully"
Mar 20 18:05:32.123338 containerd[1481]: time="2025-03-20T18:05:32.123313508Z" level=info msg="StartContainer for \"a84fec3c19f6258873f46fbb1c8e06c1245af5f946265e8874c691aa076e4d86\" returns successfully"
Mar 20 18:05:32.190563 kubelet[2247]: E0320 18:05:32.190519 2247 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Mar 20 18:05:32.193969 kubelet[2247]: E0320 18:05:32.193920 2247 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Mar 20 18:05:32.202906 kubelet[2247]: E0320 18:05:32.200187 2247 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Mar 20 18:05:32.252876 kubelet[2247]: W0320 18:05:32.251091 2247 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.103:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.103:6443: connect: connection refused
Mar 20 18:05:32.252876 kubelet[2247]: E0320 18:05:32.251154 2247 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.103:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.103:6443: connect: connection refused" logger="UnhandledError"
Mar 20 18:05:32.295511 kubelet[2247]: W0320 18:05:32.295350 2247 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.103:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.103:6443: connect: connection refused
Mar 20 18:05:32.295511 kubelet[2247]: E0320 18:05:32.295446 2247 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.103:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.103:6443: connect: connection refused" logger="UnhandledError"
Mar 20 18:05:32.791504 kubelet[2247]: I0320 18:05:32.791373 2247 kubelet_node_status.go:76] "Attempting to register node" node="localhost"
Mar 20 18:05:33.197681 kubelet[2247]: E0320 18:05:33.197586 2247 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Mar 20 18:05:33.198124 kubelet[2247]: E0320 18:05:33.198104 2247 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Mar 20 18:05:34.479295 kubelet[2247]: E0320 18:05:34.479256 2247 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost"
Mar 20 18:05:34.517649 kubelet[2247]: E0320 18:05:34.517423 2247 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{localhost.182e94ff8df05d24 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-03-20 18:05:31.157871908 +0000 UTC m=+0.876087161,LastTimestamp:2025-03-20 18:05:31.157871908 +0000 UTC m=+0.876087161,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Mar 20 18:05:34.527703 kubelet[2247]: I0320 18:05:34.527633 2247 kubelet_node_status.go:79] "Successfully registered node" node="localhost"
Mar 20 18:05:34.527703 kubelet[2247]: E0320 18:05:34.527675 2247 kubelet_node_status.go:549] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found"
Mar 20 18:05:34.530594 kubelet[2247]: E0320 18:05:34.530554 2247 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found"
Mar 20 18:05:34.570794 kubelet[2247]: E0320 18:05:34.570689 2247 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{localhost.182e94ff8e572464 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:InvalidDiskCapacity,Message:invalid capacity 0 on image filesystem,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-03-20 18:05:31.164607588 +0000 UTC m=+0.882822841,LastTimestamp:2025-03-20 18:05:31.164607588 +0000 UTC m=+0.882822841,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Mar 20 18:05:34.630653 kubelet[2247]: E0320 18:05:34.630618 2247 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found"
Mar 20 18:05:34.729751 kubelet[2247]: E0320 18:05:34.729663 2247 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Mar 20 18:05:34.730796 kubelet[2247]: E0320 18:05:34.730763 2247 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found"
Mar 20 18:05:34.831690 kubelet[2247]: E0320 18:05:34.831640 2247 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found"
Mar 20 18:05:34.964074 kubelet[2247]: I0320 18:05:34.964028 2247 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
Mar 20 18:05:34.971212 kubelet[2247]: E0320 18:05:34.971172 2247 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost"
Mar 20 18:05:34.971212 kubelet[2247]: I0320 18:05:34.971199 2247 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Mar 20 18:05:34.972721 kubelet[2247]: E0320 18:05:34.972692 2247 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost"
Mar 20 18:05:34.972721 kubelet[2247]: I0320 18:05:34.972718 2247 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost"
Mar 20 18:05:34.974243 kubelet[2247]: E0320 18:05:34.974196 2247 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost"
Mar 20 18:05:35.159234 kubelet[2247]: I0320 18:05:35.158509 2247 apiserver.go:52] "Watching apiserver"
Mar 20 18:05:35.164779 kubelet[2247]: I0320 18:05:35.164730 2247 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
Mar 20 18:05:35.478612 kubelet[2247]: I0320 18:05:35.478504 2247 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Mar 20 18:05:36.229837 systemd[1]: Reload requested from client PID 2519 ('systemctl') (unit session-7.scope)...
Mar 20 18:05:36.229853 systemd[1]: Reloading...
Mar 20 18:05:36.298489 zram_generator::config[2563]: No configuration found.
Mar 20 18:05:36.378809 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 20 18:05:36.461137 systemd[1]: Reloading finished in 231 ms. Mar 20 18:05:36.480176 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Mar 20 18:05:36.489708 systemd[1]: kubelet.service: Deactivated successfully. Mar 20 18:05:36.489954 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 20 18:05:36.489998 systemd[1]: kubelet.service: Consumed 1.265s CPU time, 127.9M memory peak. Mar 20 18:05:36.492192 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 20 18:05:36.605222 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 20 18:05:36.609747 (kubelet)[2605]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Mar 20 18:05:36.649265 kubelet[2605]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 20 18:05:36.649265 kubelet[2605]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Mar 20 18:05:36.649265 kubelet[2605]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Mar 20 18:05:36.649609 kubelet[2605]: I0320 18:05:36.649322 2605 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Mar 20 18:05:36.654841 kubelet[2605]: I0320 18:05:36.654793 2605 server.go:520] "Kubelet version" kubeletVersion="v1.32.0" Mar 20 18:05:36.654841 kubelet[2605]: I0320 18:05:36.654824 2605 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Mar 20 18:05:36.655064 kubelet[2605]: I0320 18:05:36.655036 2605 server.go:954] "Client rotation is on, will bootstrap in background" Mar 20 18:05:36.656192 kubelet[2605]: I0320 18:05:36.656163 2605 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Mar 20 18:05:36.659146 kubelet[2605]: I0320 18:05:36.659114 2605 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Mar 20 18:05:36.663252 kubelet[2605]: I0320 18:05:36.663233 2605 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Mar 20 18:05:36.665369 kubelet[2605]: I0320 18:05:36.665341 2605 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Mar 20 18:05:36.665586 kubelet[2605]: I0320 18:05:36.665553 2605 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Mar 20 18:05:36.665740 kubelet[2605]: I0320 18:05:36.665582 2605 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Mar 20 18:05:36.665826 kubelet[2605]: I0320 18:05:36.665743 2605 topology_manager.go:138] "Creating topology manager with none policy" 
Mar 20 18:05:36.665826 kubelet[2605]: I0320 18:05:36.665751 2605 container_manager_linux.go:304] "Creating device plugin manager" Mar 20 18:05:36.665826 kubelet[2605]: I0320 18:05:36.665790 2605 state_mem.go:36] "Initialized new in-memory state store" Mar 20 18:05:36.666120 kubelet[2605]: I0320 18:05:36.665914 2605 kubelet.go:446] "Attempting to sync node with API server" Mar 20 18:05:36.666120 kubelet[2605]: I0320 18:05:36.665929 2605 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Mar 20 18:05:36.666120 kubelet[2605]: I0320 18:05:36.665952 2605 kubelet.go:352] "Adding apiserver pod source" Mar 20 18:05:36.666120 kubelet[2605]: I0320 18:05:36.665962 2605 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Mar 20 18:05:36.666767 kubelet[2605]: I0320 18:05:36.666747 2605 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.1" apiVersion="v1" Mar 20 18:05:36.667929 kubelet[2605]: I0320 18:05:36.667841 2605 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Mar 20 18:05:36.668328 kubelet[2605]: I0320 18:05:36.668210 2605 watchdog_linux.go:99] "Systemd watchdog is not enabled" Mar 20 18:05:36.668328 kubelet[2605]: I0320 18:05:36.668241 2605 server.go:1287] "Started kubelet" Mar 20 18:05:36.669302 kubelet[2605]: I0320 18:05:36.669190 2605 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Mar 20 18:05:36.670245 kubelet[2605]: I0320 18:05:36.670228 2605 volume_manager.go:297] "Starting Kubelet Volume Manager" Mar 20 18:05:36.670583 kubelet[2605]: I0320 18:05:36.670515 2605 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Mar 20 18:05:36.670824 kubelet[2605]: I0320 18:05:36.670765 2605 reconciler.go:26] "Reconciler: start to sync state" Mar 20 18:05:36.670869 kubelet[2605]: I0320 18:05:36.670827 2605 dynamic_serving_content.go:135] "Starting controller" 
name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Mar 20 18:05:36.671026 kubelet[2605]: I0320 18:05:36.670008 2605 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Mar 20 18:05:36.671747 kubelet[2605]: E0320 18:05:36.671619 2605 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 20 18:05:36.673305 kubelet[2605]: I0320 18:05:36.673286 2605 server.go:490] "Adding debug handlers to kubelet server" Mar 20 18:05:36.675192 kubelet[2605]: I0320 18:05:36.672602 2605 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Mar 20 18:05:36.676140 kubelet[2605]: I0320 18:05:36.676120 2605 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Mar 20 18:05:36.676263 kubelet[2605]: I0320 18:05:36.673637 2605 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Mar 20 18:05:36.677056 kubelet[2605]: E0320 18:05:36.677038 2605 kubelet.go:1561] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Mar 20 18:05:36.677199 kubelet[2605]: I0320 18:05:36.677125 2605 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Mar 20 18:05:36.678008 kubelet[2605]: I0320 18:05:36.677989 2605 factory.go:221] Registration of the containerd container factory successfully Mar 20 18:05:36.678238 kubelet[2605]: I0320 18:05:36.678226 2605 factory.go:221] Registration of the systemd container factory successfully Mar 20 18:05:36.686347 kubelet[2605]: I0320 18:05:36.678018 2605 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Mar 20 18:05:36.687493 kubelet[2605]: I0320 18:05:36.687469 2605 status_manager.go:227] "Starting to sync pod status with apiserver" Mar 20 18:05:36.687595 kubelet[2605]: I0320 18:05:36.687584 2605 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Mar 20 18:05:36.687735 kubelet[2605]: I0320 18:05:36.687678 2605 kubelet.go:2388] "Starting kubelet main sync loop" Mar 20 18:05:36.688168 kubelet[2605]: E0320 18:05:36.688146 2605 kubelet.go:2412] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Mar 20 18:05:36.719747 kubelet[2605]: I0320 18:05:36.719725 2605 cpu_manager.go:221] "Starting CPU manager" policy="none" Mar 20 18:05:36.719747 kubelet[2605]: I0320 18:05:36.719742 2605 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Mar 20 18:05:36.719875 kubelet[2605]: I0320 18:05:36.719759 2605 state_mem.go:36] "Initialized new in-memory state store" Mar 20 18:05:36.719938 kubelet[2605]: I0320 18:05:36.719901 2605 state_mem.go:88] "Updated default CPUSet" cpuSet="" Mar 20 18:05:36.719938 kubelet[2605]: I0320 18:05:36.719917 2605 state_mem.go:96] "Updated CPUSet assignments" assignments={} Mar 20 18:05:36.719996 kubelet[2605]: I0320 18:05:36.719942 2605 policy_none.go:49] "None policy: Start" Mar 20 18:05:36.719996 kubelet[2605]: I0320 18:05:36.719952 2605 memory_manager.go:186] "Starting memorymanager" policy="None" Mar 20 18:05:36.719996 kubelet[2605]: I0320 18:05:36.719961 2605 state_mem.go:35] "Initializing new in-memory state store" Mar 20 18:05:36.720064 kubelet[2605]: I0320 18:05:36.720050 2605 state_mem.go:75] "Updated machine memory state" Mar 20 18:05:36.723328 kubelet[2605]: I0320 18:05:36.723291 2605 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Mar 20 18:05:36.723502 kubelet[2605]: I0320 
18:05:36.723475 2605 eviction_manager.go:189] "Eviction manager: starting control loop" Mar 20 18:05:36.723546 kubelet[2605]: I0320 18:05:36.723495 2605 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 20 18:05:36.723750 kubelet[2605]: I0320 18:05:36.723656 2605 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Mar 20 18:05:36.725152 kubelet[2605]: E0320 18:05:36.725125 2605 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Mar 20 18:05:36.790715 kubelet[2605]: I0320 18:05:36.789660 2605 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Mar 20 18:05:36.790715 kubelet[2605]: I0320 18:05:36.789740 2605 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Mar 20 18:05:36.790715 kubelet[2605]: I0320 18:05:36.790494 2605 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Mar 20 18:05:36.795439 kubelet[2605]: E0320 18:05:36.795369 2605 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Mar 20 18:05:36.825614 kubelet[2605]: I0320 18:05:36.825594 2605 kubelet_node_status.go:76] "Attempting to register node" node="localhost" Mar 20 18:05:36.830902 kubelet[2605]: I0320 18:05:36.830875 2605 kubelet_node_status.go:125] "Node was previously registered" node="localhost" Mar 20 18:05:36.830959 kubelet[2605]: I0320 18:05:36.830942 2605 kubelet_node_status.go:79] "Successfully registered node" node="localhost" Mar 20 18:05:36.872830 kubelet[2605]: I0320 18:05:36.872796 2605 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/cbbb394ff48414687df77e1bc213eeb5-k8s-certs\") pod 
\"kube-controller-manager-localhost\" (UID: \"cbbb394ff48414687df77e1bc213eeb5\") " pod="kube-system/kube-controller-manager-localhost" Mar 20 18:05:36.872830 kubelet[2605]: I0320 18:05:36.872828 2605 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/452527ed0e6820066705aca8bcc21a74-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"452527ed0e6820066705aca8bcc21a74\") " pod="kube-system/kube-apiserver-localhost" Mar 20 18:05:36.872933 kubelet[2605]: I0320 18:05:36.872846 2605 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/cbbb394ff48414687df77e1bc213eeb5-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"cbbb394ff48414687df77e1bc213eeb5\") " pod="kube-system/kube-controller-manager-localhost" Mar 20 18:05:36.872933 kubelet[2605]: I0320 18:05:36.872863 2605 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/cbbb394ff48414687df77e1bc213eeb5-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"cbbb394ff48414687df77e1bc213eeb5\") " pod="kube-system/kube-controller-manager-localhost" Mar 20 18:05:36.872933 kubelet[2605]: I0320 18:05:36.872879 2605 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/cbbb394ff48414687df77e1bc213eeb5-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"cbbb394ff48414687df77e1bc213eeb5\") " pod="kube-system/kube-controller-manager-localhost" Mar 20 18:05:36.872933 kubelet[2605]: I0320 18:05:36.872902 2605 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: 
\"kubernetes.io/host-path/3700e556aa2777679a324159272023f1-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"3700e556aa2777679a324159272023f1\") " pod="kube-system/kube-scheduler-localhost" Mar 20 18:05:36.872933 kubelet[2605]: I0320 18:05:36.872917 2605 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/452527ed0e6820066705aca8bcc21a74-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"452527ed0e6820066705aca8bcc21a74\") " pod="kube-system/kube-apiserver-localhost" Mar 20 18:05:36.873040 kubelet[2605]: I0320 18:05:36.872933 2605 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/452527ed0e6820066705aca8bcc21a74-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"452527ed0e6820066705aca8bcc21a74\") " pod="kube-system/kube-apiserver-localhost" Mar 20 18:05:36.873040 kubelet[2605]: I0320 18:05:36.872951 2605 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/cbbb394ff48414687df77e1bc213eeb5-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"cbbb394ff48414687df77e1bc213eeb5\") " pod="kube-system/kube-controller-manager-localhost" Mar 20 18:05:37.301522 sudo[2638]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Mar 20 18:05:37.301794 sudo[2638]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Mar 20 18:05:37.666708 kubelet[2605]: I0320 18:05:37.666599 2605 apiserver.go:52] "Watching apiserver" Mar 20 18:05:37.670883 kubelet[2605]: I0320 18:05:37.670855 2605 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Mar 20 18:05:37.745082 kubelet[2605]: I0320 18:05:37.745019 2605 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=2.744999896 podStartE2EDuration="2.744999896s" podCreationTimestamp="2025-03-20 18:05:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-20 18:05:37.736455489 +0000 UTC m=+1.123803582" watchObservedRunningTime="2025-03-20 18:05:37.744999896 +0000 UTC m=+1.132347989" Mar 20 18:05:37.747921 sudo[2638]: pam_unix(sudo:session): session closed for user root Mar 20 18:05:37.757291 kubelet[2605]: I0320 18:05:37.753708 2605 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.753611984 podStartE2EDuration="1.753611984s" podCreationTimestamp="2025-03-20 18:05:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-20 18:05:37.745918461 +0000 UTC m=+1.133266554" watchObservedRunningTime="2025-03-20 18:05:37.753611984 +0000 UTC m=+1.140960077" Mar 20 18:05:37.764883 kubelet[2605]: I0320 18:05:37.764797 2605 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.764785325 podStartE2EDuration="1.764785325s" podCreationTimestamp="2025-03-20 18:05:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-20 18:05:37.753978866 +0000 UTC m=+1.141326959" watchObservedRunningTime="2025-03-20 18:05:37.764785325 +0000 UTC m=+1.152133418" Mar 20 18:05:40.232750 sudo[1686]: pam_unix(sudo:session): session closed for user root Mar 20 18:05:40.236030 sshd[1685]: Connection closed by 10.0.0.1 port 43314 Mar 20 18:05:40.236548 sshd-session[1682]: pam_unix(sshd:session): session closed for user core Mar 20 18:05:40.240211 
systemd-logind[1469]: Session 7 logged out. Waiting for processes to exit. Mar 20 18:05:40.240862 systemd[1]: sshd@6-10.0.0.103:22-10.0.0.1:43314.service: Deactivated successfully. Mar 20 18:05:40.243124 systemd[1]: session-7.scope: Deactivated successfully. Mar 20 18:05:40.243302 systemd[1]: session-7.scope: Consumed 7.689s CPU time, 261M memory peak. Mar 20 18:05:40.244847 systemd-logind[1469]: Removed session 7. Mar 20 18:05:42.152257 kubelet[2605]: I0320 18:05:42.152204 2605 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Mar 20 18:05:42.152625 containerd[1481]: time="2025-03-20T18:05:42.152559897Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Mar 20 18:05:42.152818 kubelet[2605]: I0320 18:05:42.152768 2605 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Mar 20 18:05:43.173928 systemd[1]: Created slice kubepods-besteffort-pod1d4d0585_0980_4eb5_863f_a7b0565f3efa.slice - libcontainer container kubepods-besteffort-pod1d4d0585_0980_4eb5_863f_a7b0565f3efa.slice. Mar 20 18:05:43.185201 systemd[1]: Created slice kubepods-burstable-podfd32ef83_3bf4_44b3_b48e_09e0441573ed.slice - libcontainer container kubepods-burstable-podfd32ef83_3bf4_44b3_b48e_09e0441573ed.slice. 
Mar 20 18:05:43.216035 kubelet[2605]: I0320 18:05:43.215994 2605 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/fd32ef83-3bf4-44b3-b48e-09e0441573ed-cilium-config-path\") pod \"cilium-rv9j8\" (UID: \"fd32ef83-3bf4-44b3-b48e-09e0441573ed\") " pod="kube-system/cilium-rv9j8" Mar 20 18:05:43.216858 kubelet[2605]: I0320 18:05:43.216468 2605 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/1d4d0585-0980-4eb5-863f-a7b0565f3efa-kube-proxy\") pod \"kube-proxy-m6f5j\" (UID: \"1d4d0585-0980-4eb5-863f-a7b0565f3efa\") " pod="kube-system/kube-proxy-m6f5j" Mar 20 18:05:43.216858 kubelet[2605]: I0320 18:05:43.216512 2605 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/fd32ef83-3bf4-44b3-b48e-09e0441573ed-cilium-run\") pod \"cilium-rv9j8\" (UID: \"fd32ef83-3bf4-44b3-b48e-09e0441573ed\") " pod="kube-system/cilium-rv9j8" Mar 20 18:05:43.216858 kubelet[2605]: I0320 18:05:43.216530 2605 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/fd32ef83-3bf4-44b3-b48e-09e0441573ed-hostproc\") pod \"cilium-rv9j8\" (UID: \"fd32ef83-3bf4-44b3-b48e-09e0441573ed\") " pod="kube-system/cilium-rv9j8" Mar 20 18:05:43.216858 kubelet[2605]: I0320 18:05:43.216548 2605 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/fd32ef83-3bf4-44b3-b48e-09e0441573ed-cni-path\") pod \"cilium-rv9j8\" (UID: \"fd32ef83-3bf4-44b3-b48e-09e0441573ed\") " pod="kube-system/cilium-rv9j8" Mar 20 18:05:43.216858 kubelet[2605]: I0320 18:05:43.216565 2605 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/fd32ef83-3bf4-44b3-b48e-09e0441573ed-clustermesh-secrets\") pod \"cilium-rv9j8\" (UID: \"fd32ef83-3bf4-44b3-b48e-09e0441573ed\") " pod="kube-system/cilium-rv9j8" Mar 20 18:05:43.216858 kubelet[2605]: I0320 18:05:43.216581 2605 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/fd32ef83-3bf4-44b3-b48e-09e0441573ed-bpf-maps\") pod \"cilium-rv9j8\" (UID: \"fd32ef83-3bf4-44b3-b48e-09e0441573ed\") " pod="kube-system/cilium-rv9j8" Mar 20 18:05:43.217050 kubelet[2605]: I0320 18:05:43.216598 2605 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/fd32ef83-3bf4-44b3-b48e-09e0441573ed-host-proc-sys-net\") pod \"cilium-rv9j8\" (UID: \"fd32ef83-3bf4-44b3-b48e-09e0441573ed\") " pod="kube-system/cilium-rv9j8" Mar 20 18:05:43.217050 kubelet[2605]: I0320 18:05:43.216616 2605 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/fd32ef83-3bf4-44b3-b48e-09e0441573ed-hubble-tls\") pod \"cilium-rv9j8\" (UID: \"fd32ef83-3bf4-44b3-b48e-09e0441573ed\") " pod="kube-system/cilium-rv9j8" Mar 20 18:05:43.217050 kubelet[2605]: I0320 18:05:43.216632 2605 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-642f2\" (UniqueName: \"kubernetes.io/projected/fd32ef83-3bf4-44b3-b48e-09e0441573ed-kube-api-access-642f2\") pod \"cilium-rv9j8\" (UID: \"fd32ef83-3bf4-44b3-b48e-09e0441573ed\") " pod="kube-system/cilium-rv9j8" Mar 20 18:05:43.217050 kubelet[2605]: I0320 18:05:43.216657 2605 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: 
\"kubernetes.io/host-path/1d4d0585-0980-4eb5-863f-a7b0565f3efa-xtables-lock\") pod \"kube-proxy-m6f5j\" (UID: \"1d4d0585-0980-4eb5-863f-a7b0565f3efa\") " pod="kube-system/kube-proxy-m6f5j" Mar 20 18:05:43.217050 kubelet[2605]: I0320 18:05:43.216675 2605 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1d4d0585-0980-4eb5-863f-a7b0565f3efa-lib-modules\") pod \"kube-proxy-m6f5j\" (UID: \"1d4d0585-0980-4eb5-863f-a7b0565f3efa\") " pod="kube-system/kube-proxy-m6f5j" Mar 20 18:05:43.217155 kubelet[2605]: I0320 18:05:43.216694 2605 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/fd32ef83-3bf4-44b3-b48e-09e0441573ed-cilium-cgroup\") pod \"cilium-rv9j8\" (UID: \"fd32ef83-3bf4-44b3-b48e-09e0441573ed\") " pod="kube-system/cilium-rv9j8" Mar 20 18:05:43.217155 kubelet[2605]: I0320 18:05:43.216710 2605 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/fd32ef83-3bf4-44b3-b48e-09e0441573ed-lib-modules\") pod \"cilium-rv9j8\" (UID: \"fd32ef83-3bf4-44b3-b48e-09e0441573ed\") " pod="kube-system/cilium-rv9j8" Mar 20 18:05:43.217155 kubelet[2605]: I0320 18:05:43.216728 2605 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/fd32ef83-3bf4-44b3-b48e-09e0441573ed-xtables-lock\") pod \"cilium-rv9j8\" (UID: \"fd32ef83-3bf4-44b3-b48e-09e0441573ed\") " pod="kube-system/cilium-rv9j8" Mar 20 18:05:43.217155 kubelet[2605]: I0320 18:05:43.216743 2605 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/fd32ef83-3bf4-44b3-b48e-09e0441573ed-host-proc-sys-kernel\") pod \"cilium-rv9j8\" (UID: 
\"fd32ef83-3bf4-44b3-b48e-09e0441573ed\") " pod="kube-system/cilium-rv9j8" Mar 20 18:05:43.217155 kubelet[2605]: I0320 18:05:43.216791 2605 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/fd32ef83-3bf4-44b3-b48e-09e0441573ed-etc-cni-netd\") pod \"cilium-rv9j8\" (UID: \"fd32ef83-3bf4-44b3-b48e-09e0441573ed\") " pod="kube-system/cilium-rv9j8" Mar 20 18:05:43.217249 kubelet[2605]: I0320 18:05:43.216808 2605 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4hkjj\" (UniqueName: \"kubernetes.io/projected/1d4d0585-0980-4eb5-863f-a7b0565f3efa-kube-api-access-4hkjj\") pod \"kube-proxy-m6f5j\" (UID: \"1d4d0585-0980-4eb5-863f-a7b0565f3efa\") " pod="kube-system/kube-proxy-m6f5j" Mar 20 18:05:43.281985 systemd[1]: Created slice kubepods-besteffort-pod593cca11_0202_4ac9_b9dc_636b96607a81.slice - libcontainer container kubepods-besteffort-pod593cca11_0202_4ac9_b9dc_636b96607a81.slice. 
Mar 20 18:05:43.318469 kubelet[2605]: I0320 18:05:43.317893 2605 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/593cca11-0202-4ac9-b9dc-636b96607a81-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-5g2ns\" (UID: \"593cca11-0202-4ac9-b9dc-636b96607a81\") " pod="kube-system/cilium-operator-6c4d7847fc-5g2ns"
Mar 20 18:05:43.318469 kubelet[2605]: I0320 18:05:43.318039 2605 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hpbfv\" (UniqueName: \"kubernetes.io/projected/593cca11-0202-4ac9-b9dc-636b96607a81-kube-api-access-hpbfv\") pod \"cilium-operator-6c4d7847fc-5g2ns\" (UID: \"593cca11-0202-4ac9-b9dc-636b96607a81\") " pod="kube-system/cilium-operator-6c4d7847fc-5g2ns"
Mar 20 18:05:43.483720 containerd[1481]: time="2025-03-20T18:05:43.483593544Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-m6f5j,Uid:1d4d0585-0980-4eb5-863f-a7b0565f3efa,Namespace:kube-system,Attempt:0,}"
Mar 20 18:05:43.488328 containerd[1481]: time="2025-03-20T18:05:43.488294081Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-rv9j8,Uid:fd32ef83-3bf4-44b3-b48e-09e0441573ed,Namespace:kube-system,Attempt:0,}"
Mar 20 18:05:43.500964 containerd[1481]: time="2025-03-20T18:05:43.500919248Z" level=info msg="connecting to shim a932df1de7e40c0d02930c7359c9d231533e9343f3879212d9ecaca3d883faa2" address="unix:///run/containerd/s/73469423c38c510556b54aedfc1443b37ce40af31c298ccc2ba06c2e81356105" namespace=k8s.io protocol=ttrpc version=3
Mar 20 18:05:43.505609 containerd[1481]: time="2025-03-20T18:05:43.505568465Z" level=info msg="connecting to shim 86c378cafc1a75d4d6bcac27d9a30e8c1d7086b31fb2d2987b804352c1243a1a" address="unix:///run/containerd/s/b487bb739176b35af846e76d0b016f24d5f635b6280ed0fd9f864d33d5457c3e" namespace=k8s.io protocol=ttrpc version=3
Mar 20 18:05:43.525542 systemd[1]: Started cri-containerd-a932df1de7e40c0d02930c7359c9d231533e9343f3879212d9ecaca3d883faa2.scope - libcontainer container a932df1de7e40c0d02930c7359c9d231533e9343f3879212d9ecaca3d883faa2.
Mar 20 18:05:43.527920 systemd[1]: Started cri-containerd-86c378cafc1a75d4d6bcac27d9a30e8c1d7086b31fb2d2987b804352c1243a1a.scope - libcontainer container 86c378cafc1a75d4d6bcac27d9a30e8c1d7086b31fb2d2987b804352c1243a1a.
Mar 20 18:05:43.553162 containerd[1481]: time="2025-03-20T18:05:43.553067082Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-m6f5j,Uid:1d4d0585-0980-4eb5-863f-a7b0565f3efa,Namespace:kube-system,Attempt:0,} returns sandbox id \"a932df1de7e40c0d02930c7359c9d231533e9343f3879212d9ecaca3d883faa2\""
Mar 20 18:05:43.553582 containerd[1481]: time="2025-03-20T18:05:43.553530124Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-rv9j8,Uid:fd32ef83-3bf4-44b3-b48e-09e0441573ed,Namespace:kube-system,Attempt:0,} returns sandbox id \"86c378cafc1a75d4d6bcac27d9a30e8c1d7086b31fb2d2987b804352c1243a1a\""
Mar 20 18:05:43.557257 containerd[1481]: time="2025-03-20T18:05:43.557231658Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\""
Mar 20 18:05:43.557444 containerd[1481]: time="2025-03-20T18:05:43.557289618Z" level=info msg="CreateContainer within sandbox \"a932df1de7e40c0d02930c7359c9d231533e9343f3879212d9ecaca3d883faa2\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Mar 20 18:05:43.566400 containerd[1481]: time="2025-03-20T18:05:43.566300491Z" level=info msg="Container f189db17033aa0747b4650a53340126c9f6adf1c07410ec338b19c81d31aa9a9: CDI devices from CRI Config.CDIDevices: []"
Mar 20 18:05:43.581234 containerd[1481]: time="2025-03-20T18:05:43.581180507Z" level=info msg="CreateContainer within sandbox \"a932df1de7e40c0d02930c7359c9d231533e9343f3879212d9ecaca3d883faa2\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"f189db17033aa0747b4650a53340126c9f6adf1c07410ec338b19c81d31aa9a9\""
Mar 20 18:05:43.581953 containerd[1481]: time="2025-03-20T18:05:43.581680069Z" level=info msg="StartContainer for \"f189db17033aa0747b4650a53340126c9f6adf1c07410ec338b19c81d31aa9a9\""
Mar 20 18:05:43.583052 containerd[1481]: time="2025-03-20T18:05:43.583024034Z" level=info msg="connecting to shim f189db17033aa0747b4650a53340126c9f6adf1c07410ec338b19c81d31aa9a9" address="unix:///run/containerd/s/73469423c38c510556b54aedfc1443b37ce40af31c298ccc2ba06c2e81356105" protocol=ttrpc version=3
Mar 20 18:05:43.586507 containerd[1481]: time="2025-03-20T18:05:43.586458487Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-5g2ns,Uid:593cca11-0202-4ac9-b9dc-636b96607a81,Namespace:kube-system,Attempt:0,}"
Mar 20 18:05:43.600548 systemd[1]: Started cri-containerd-f189db17033aa0747b4650a53340126c9f6adf1c07410ec338b19c81d31aa9a9.scope - libcontainer container f189db17033aa0747b4650a53340126c9f6adf1c07410ec338b19c81d31aa9a9.
Mar 20 18:05:43.601255 containerd[1481]: time="2025-03-20T18:05:43.601144581Z" level=info msg="connecting to shim dbdde60a0be5a026088ee52d3a006767b0a2017feb710437794a89d798a5c01a" address="unix:///run/containerd/s/871a0d9017badc498802a8f32217c6d5e0c56b8c1fb16779d99b438022bf686e" namespace=k8s.io protocol=ttrpc version=3
Mar 20 18:05:43.628626 systemd[1]: Started cri-containerd-dbdde60a0be5a026088ee52d3a006767b0a2017feb710437794a89d798a5c01a.scope - libcontainer container dbdde60a0be5a026088ee52d3a006767b0a2017feb710437794a89d798a5c01a.
Mar 20 18:05:43.647692 containerd[1481]: time="2025-03-20T18:05:43.647186633Z" level=info msg="StartContainer for \"f189db17033aa0747b4650a53340126c9f6adf1c07410ec338b19c81d31aa9a9\" returns successfully"
Mar 20 18:05:43.666423 containerd[1481]: time="2025-03-20T18:05:43.666354544Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-5g2ns,Uid:593cca11-0202-4ac9-b9dc-636b96607a81,Namespace:kube-system,Attempt:0,} returns sandbox id \"dbdde60a0be5a026088ee52d3a006767b0a2017feb710437794a89d798a5c01a\""
Mar 20 18:05:43.730635 kubelet[2605]: I0320 18:05:43.730510 2605 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-m6f5j" podStartSLOduration=0.730434503 podStartE2EDuration="730.434503ms" podCreationTimestamp="2025-03-20 18:05:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-20 18:05:43.730285742 +0000 UTC m=+7.117633835" watchObservedRunningTime="2025-03-20 18:05:43.730434503 +0000 UTC m=+7.117782676"
Mar 20 18:05:47.648406 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2323402839.mount: Deactivated successfully.
Mar 20 18:05:49.322448 containerd[1481]: time="2025-03-20T18:05:49.322392685Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 20 18:05:49.323268 containerd[1481]: time="2025-03-20T18:05:49.323229087Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157646710"
Mar 20 18:05:49.324220 containerd[1481]: time="2025-03-20T18:05:49.324192770Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 20 18:05:49.325586 containerd[1481]: time="2025-03-20T18:05:49.325502973Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 5.768158275s"
Mar 20 18:05:49.325586 containerd[1481]: time="2025-03-20T18:05:49.325539053Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\""
Mar 20 18:05:49.327771 containerd[1481]: time="2025-03-20T18:05:49.327672658Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\""
Mar 20 18:05:49.331405 containerd[1481]: time="2025-03-20T18:05:49.330607666Z" level=info msg="CreateContainer within sandbox \"86c378cafc1a75d4d6bcac27d9a30e8c1d7086b31fb2d2987b804352c1243a1a\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Mar 20 18:05:49.342445 containerd[1481]: time="2025-03-20T18:05:49.342069735Z" level=info msg="Container 028d0b834d75dcdf53f76a2c5e9e91caf4e294886c6de3023c4cd0a3f3d99cce: CDI devices from CRI Config.CDIDevices: []"
Mar 20 18:05:49.347352 containerd[1481]: time="2025-03-20T18:05:49.347305268Z" level=info msg="CreateContainer within sandbox \"86c378cafc1a75d4d6bcac27d9a30e8c1d7086b31fb2d2987b804352c1243a1a\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"028d0b834d75dcdf53f76a2c5e9e91caf4e294886c6de3023c4cd0a3f3d99cce\""
Mar 20 18:05:49.348559 containerd[1481]: time="2025-03-20T18:05:49.348504551Z" level=info msg="StartContainer for \"028d0b834d75dcdf53f76a2c5e9e91caf4e294886c6de3023c4cd0a3f3d99cce\""
Mar 20 18:05:49.349889 containerd[1481]: time="2025-03-20T18:05:49.349852594Z" level=info msg="connecting to shim 028d0b834d75dcdf53f76a2c5e9e91caf4e294886c6de3023c4cd0a3f3d99cce" address="unix:///run/containerd/s/b487bb739176b35af846e76d0b016f24d5f635b6280ed0fd9f864d33d5457c3e" protocol=ttrpc version=3
Mar 20 18:05:49.391606 systemd[1]: Started cri-containerd-028d0b834d75dcdf53f76a2c5e9e91caf4e294886c6de3023c4cd0a3f3d99cce.scope - libcontainer container 028d0b834d75dcdf53f76a2c5e9e91caf4e294886c6de3023c4cd0a3f3d99cce.
Mar 20 18:05:49.414484 containerd[1481]: time="2025-03-20T18:05:49.414364277Z" level=info msg="StartContainer for \"028d0b834d75dcdf53f76a2c5e9e91caf4e294886c6de3023c4cd0a3f3d99cce\" returns successfully"
Mar 20 18:05:49.469989 systemd[1]: cri-containerd-028d0b834d75dcdf53f76a2c5e9e91caf4e294886c6de3023c4cd0a3f3d99cce.scope: Deactivated successfully.
Mar 20 18:05:49.496012 containerd[1481]: time="2025-03-20T18:05:49.495921724Z" level=info msg="TaskExit event in podsandbox handler container_id:\"028d0b834d75dcdf53f76a2c5e9e91caf4e294886c6de3023c4cd0a3f3d99cce\" id:\"028d0b834d75dcdf53f76a2c5e9e91caf4e294886c6de3023c4cd0a3f3d99cce\" pid:3021 exited_at:{seconds:1742493949 nanos:489100106}"
Mar 20 18:05:49.499928 containerd[1481]: time="2025-03-20T18:05:49.499894534Z" level=info msg="received exit event container_id:\"028d0b834d75dcdf53f76a2c5e9e91caf4e294886c6de3023c4cd0a3f3d99cce\" id:\"028d0b834d75dcdf53f76a2c5e9e91caf4e294886c6de3023c4cd0a3f3d99cce\" pid:3021 exited_at:{seconds:1742493949 nanos:489100106}"
Mar 20 18:05:49.539612 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-028d0b834d75dcdf53f76a2c5e9e91caf4e294886c6de3023c4cd0a3f3d99cce-rootfs.mount: Deactivated successfully.
Mar 20 18:05:49.589136 update_engine[1471]: I20250320 18:05:49.588471 1471 update_attempter.cc:509] Updating boot flags...
Mar 20 18:05:49.685507 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (3059)
Mar 20 18:05:49.731456 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (3057)
Mar 20 18:05:49.753882 containerd[1481]: time="2025-03-20T18:05:49.752657613Z" level=info msg="CreateContainer within sandbox \"86c378cafc1a75d4d6bcac27d9a30e8c1d7086b31fb2d2987b804352c1243a1a\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Mar 20 18:05:49.776529 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (3057)
Mar 20 18:05:49.802003 containerd[1481]: time="2025-03-20T18:05:49.801952937Z" level=info msg="Container 4e0db1fc35db18310d902552a028b11793bddf73861f7db25a241c195813d106: CDI devices from CRI Config.CDIDevices: []"
Mar 20 18:05:49.810001 containerd[1481]: time="2025-03-20T18:05:49.809955357Z" level=info msg="CreateContainer within sandbox \"86c378cafc1a75d4d6bcac27d9a30e8c1d7086b31fb2d2987b804352c1243a1a\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"4e0db1fc35db18310d902552a028b11793bddf73861f7db25a241c195813d106\""
Mar 20 18:05:49.810552 containerd[1481]: time="2025-03-20T18:05:49.810500719Z" level=info msg="StartContainer for \"4e0db1fc35db18310d902552a028b11793bddf73861f7db25a241c195813d106\""
Mar 20 18:05:49.811521 containerd[1481]: time="2025-03-20T18:05:49.811487361Z" level=info msg="connecting to shim 4e0db1fc35db18310d902552a028b11793bddf73861f7db25a241c195813d106" address="unix:///run/containerd/s/b487bb739176b35af846e76d0b016f24d5f635b6280ed0fd9f864d33d5457c3e" protocol=ttrpc version=3
Mar 20 18:05:49.829570 systemd[1]: Started cri-containerd-4e0db1fc35db18310d902552a028b11793bddf73861f7db25a241c195813d106.scope - libcontainer container 4e0db1fc35db18310d902552a028b11793bddf73861f7db25a241c195813d106.
Mar 20 18:05:49.851709 containerd[1481]: time="2025-03-20T18:05:49.851584703Z" level=info msg="StartContainer for \"4e0db1fc35db18310d902552a028b11793bddf73861f7db25a241c195813d106\" returns successfully"
Mar 20 18:05:49.879686 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Mar 20 18:05:49.879990 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Mar 20 18:05:49.880365 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables...
Mar 20 18:05:49.881950 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Mar 20 18:05:49.882288 systemd[1]: cri-containerd-4e0db1fc35db18310d902552a028b11793bddf73861f7db25a241c195813d106.scope: Deactivated successfully.
Mar 20 18:05:49.882813 containerd[1481]: time="2025-03-20T18:05:49.882764501Z" level=info msg="TaskExit event in podsandbox handler container_id:\"4e0db1fc35db18310d902552a028b11793bddf73861f7db25a241c195813d106\" id:\"4e0db1fc35db18310d902552a028b11793bddf73861f7db25a241c195813d106\" pid:3082 exited_at:{seconds:1742493949 nanos:882483821}"
Mar 20 18:05:49.882813 containerd[1481]: time="2025-03-20T18:05:49.882870942Z" level=info msg="received exit event container_id:\"4e0db1fc35db18310d902552a028b11793bddf73861f7db25a241c195813d106\" id:\"4e0db1fc35db18310d902552a028b11793bddf73861f7db25a241c195813d106\" pid:3082 exited_at:{seconds:1742493949 nanos:882483821}"
Mar 20 18:05:49.902802 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Mar 20 18:05:50.604197 containerd[1481]: time="2025-03-20T18:05:50.604138710Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 20 18:05:50.604946 containerd[1481]: time="2025-03-20T18:05:50.604841792Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17135306"
Mar 20 18:05:50.605483 containerd[1481]: time="2025-03-20T18:05:50.605451993Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 20 18:05:50.606845 containerd[1481]: time="2025-03-20T18:05:50.606813196Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 1.279108778s"
Mar 20 18:05:50.606887 containerd[1481]: time="2025-03-20T18:05:50.606857636Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\""
Mar 20 18:05:50.609154 containerd[1481]: time="2025-03-20T18:05:50.609122362Z" level=info msg="CreateContainer within sandbox \"dbdde60a0be5a026088ee52d3a006767b0a2017feb710437794a89d798a5c01a\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
Mar 20 18:05:50.615187 containerd[1481]: time="2025-03-20T18:05:50.614536014Z" level=info msg="Container 3d86f0c176b1e01deadbd0048b3bee272eb68f5b7998c22285b4fcce9e97f8e0: CDI devices from CRI Config.CDIDevices: []"
Mar 20 18:05:50.620793 containerd[1481]: time="2025-03-20T18:05:50.620672509Z" level=info msg="CreateContainer within sandbox \"dbdde60a0be5a026088ee52d3a006767b0a2017feb710437794a89d798a5c01a\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"3d86f0c176b1e01deadbd0048b3bee272eb68f5b7998c22285b4fcce9e97f8e0\""
Mar 20 18:05:50.621531 containerd[1481]: time="2025-03-20T18:05:50.621494151Z" level=info msg="StartContainer for \"3d86f0c176b1e01deadbd0048b3bee272eb68f5b7998c22285b4fcce9e97f8e0\""
Mar 20 18:05:50.624783 containerd[1481]: time="2025-03-20T18:05:50.622601634Z" level=info msg="connecting to shim 3d86f0c176b1e01deadbd0048b3bee272eb68f5b7998c22285b4fcce9e97f8e0" address="unix:///run/containerd/s/871a0d9017badc498802a8f32217c6d5e0c56b8c1fb16779d99b438022bf686e" protocol=ttrpc version=3
Mar 20 18:05:50.643528 systemd[1]: Started cri-containerd-3d86f0c176b1e01deadbd0048b3bee272eb68f5b7998c22285b4fcce9e97f8e0.scope - libcontainer container 3d86f0c176b1e01deadbd0048b3bee272eb68f5b7998c22285b4fcce9e97f8e0.
Mar 20 18:05:50.669050 containerd[1481]: time="2025-03-20T18:05:50.669014784Z" level=info msg="StartContainer for \"3d86f0c176b1e01deadbd0048b3bee272eb68f5b7998c22285b4fcce9e97f8e0\" returns successfully"
Mar 20 18:05:50.747027 containerd[1481]: time="2025-03-20T18:05:50.746979568Z" level=info msg="CreateContainer within sandbox \"86c378cafc1a75d4d6bcac27d9a30e8c1d7086b31fb2d2987b804352c1243a1a\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Mar 20 18:05:50.751767 kubelet[2605]: I0320 18:05:50.751577 2605 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-5g2ns" podStartSLOduration=0.811297609 podStartE2EDuration="7.751561419s" podCreationTimestamp="2025-03-20 18:05:43 +0000 UTC" firstStartedPulling="2025-03-20 18:05:43.667321828 +0000 UTC m=+7.054669921" lastFinishedPulling="2025-03-20 18:05:50.607585638 +0000 UTC m=+13.994933731" observedRunningTime="2025-03-20 18:05:50.751443579 +0000 UTC m=+14.138791672" watchObservedRunningTime="2025-03-20 18:05:50.751561419 +0000 UTC m=+14.138909512"
Mar 20 18:05:50.768168 containerd[1481]: time="2025-03-20T18:05:50.768136218Z" level=info msg="Container 2c5d8c8c677309316d1cdc4beff3f4db920939a6b605a981402c1cfe88960b19: CDI devices from CRI Config.CDIDevices: []"
Mar 20 18:05:50.781184 containerd[1481]: time="2025-03-20T18:05:50.781126489Z" level=info msg="CreateContainer within sandbox \"86c378cafc1a75d4d6bcac27d9a30e8c1d7086b31fb2d2987b804352c1243a1a\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"2c5d8c8c677309316d1cdc4beff3f4db920939a6b605a981402c1cfe88960b19\""
Mar 20 18:05:50.783115 containerd[1481]: time="2025-03-20T18:05:50.781601890Z" level=info msg="StartContainer for \"2c5d8c8c677309316d1cdc4beff3f4db920939a6b605a981402c1cfe88960b19\""
Mar 20 18:05:50.783115 containerd[1481]: time="2025-03-20T18:05:50.782937574Z" level=info msg="connecting to shim 2c5d8c8c677309316d1cdc4beff3f4db920939a6b605a981402c1cfe88960b19" address="unix:///run/containerd/s/b487bb739176b35af846e76d0b016f24d5f635b6280ed0fd9f864d33d5457c3e" protocol=ttrpc version=3
Mar 20 18:05:50.840524 systemd[1]: Started cri-containerd-2c5d8c8c677309316d1cdc4beff3f4db920939a6b605a981402c1cfe88960b19.scope - libcontainer container 2c5d8c8c677309316d1cdc4beff3f4db920939a6b605a981402c1cfe88960b19.
Mar 20 18:05:50.922120 systemd[1]: cri-containerd-2c5d8c8c677309316d1cdc4beff3f4db920939a6b605a981402c1cfe88960b19.scope: Deactivated successfully.
Mar 20 18:05:50.922647 systemd[1]: cri-containerd-2c5d8c8c677309316d1cdc4beff3f4db920939a6b605a981402c1cfe88960b19.scope: Consumed 41ms CPU time, 4.3M memory peak, 1.2M read from disk.
Mar 20 18:05:50.923980 containerd[1481]: time="2025-03-20T18:05:50.923933628Z" level=info msg="TaskExit event in podsandbox handler container_id:\"2c5d8c8c677309316d1cdc4beff3f4db920939a6b605a981402c1cfe88960b19\" id:\"2c5d8c8c677309316d1cdc4beff3f4db920939a6b605a981402c1cfe88960b19\" pid:3184 exited_at:{seconds:1742493950 nanos:923339026}"
Mar 20 18:05:50.939757 containerd[1481]: time="2025-03-20T18:05:50.939645625Z" level=info msg="received exit event container_id:\"2c5d8c8c677309316d1cdc4beff3f4db920939a6b605a981402c1cfe88960b19\" id:\"2c5d8c8c677309316d1cdc4beff3f4db920939a6b605a981402c1cfe88960b19\" pid:3184 exited_at:{seconds:1742493950 nanos:923339026}"
Mar 20 18:05:50.941795 containerd[1481]: time="2025-03-20T18:05:50.941758630Z" level=info msg="StartContainer for \"2c5d8c8c677309316d1cdc4beff3f4db920939a6b605a981402c1cfe88960b19\" returns successfully"
Mar 20 18:05:51.751051 containerd[1481]: time="2025-03-20T18:05:51.750983517Z" level=info msg="CreateContainer within sandbox \"86c378cafc1a75d4d6bcac27d9a30e8c1d7086b31fb2d2987b804352c1243a1a\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Mar 20 18:05:51.767029 containerd[1481]: time="2025-03-20T18:05:51.766977072Z" level=info msg="Container 0087b6320893288c9df595f26886961a2959bda00c4da48257e1f1d47787b86c: CDI devices from CRI Config.CDIDevices: []"
Mar 20 18:05:51.774561 containerd[1481]: time="2025-03-20T18:05:51.774515649Z" level=info msg="CreateContainer within sandbox \"86c378cafc1a75d4d6bcac27d9a30e8c1d7086b31fb2d2987b804352c1243a1a\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"0087b6320893288c9df595f26886961a2959bda00c4da48257e1f1d47787b86c\""
Mar 20 18:05:51.775283 containerd[1481]: time="2025-03-20T18:05:51.775028850Z" level=info msg="StartContainer for \"0087b6320893288c9df595f26886961a2959bda00c4da48257e1f1d47787b86c\""
Mar 20 18:05:51.775975 containerd[1481]: time="2025-03-20T18:05:51.775950172Z" level=info msg="connecting to shim 0087b6320893288c9df595f26886961a2959bda00c4da48257e1f1d47787b86c" address="unix:///run/containerd/s/b487bb739176b35af846e76d0b016f24d5f635b6280ed0fd9f864d33d5457c3e" protocol=ttrpc version=3
Mar 20 18:05:51.789530 systemd[1]: Started cri-containerd-0087b6320893288c9df595f26886961a2959bda00c4da48257e1f1d47787b86c.scope - libcontainer container 0087b6320893288c9df595f26886961a2959bda00c4da48257e1f1d47787b86c.
Mar 20 18:05:51.814778 systemd[1]: cri-containerd-0087b6320893288c9df595f26886961a2959bda00c4da48257e1f1d47787b86c.scope: Deactivated successfully.
Mar 20 18:05:51.815313 containerd[1481]: time="2025-03-20T18:05:51.815256739Z" level=info msg="TaskExit event in podsandbox handler container_id:\"0087b6320893288c9df595f26886961a2959bda00c4da48257e1f1d47787b86c\" id:\"0087b6320893288c9df595f26886961a2959bda00c4da48257e1f1d47787b86c\" pid:3224 exited_at:{seconds:1742493951 nanos:815002179}"
Mar 20 18:05:51.816135 containerd[1481]: time="2025-03-20T18:05:51.816012741Z" level=info msg="received exit event container_id:\"0087b6320893288c9df595f26886961a2959bda00c4da48257e1f1d47787b86c\" id:\"0087b6320893288c9df595f26886961a2959bda00c4da48257e1f1d47787b86c\" pid:3224 exited_at:{seconds:1742493951 nanos:815002179}"
Mar 20 18:05:51.822667 containerd[1481]: time="2025-03-20T18:05:51.822625436Z" level=info msg="StartContainer for \"0087b6320893288c9df595f26886961a2959bda00c4da48257e1f1d47787b86c\" returns successfully"
Mar 20 18:05:51.834516 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0087b6320893288c9df595f26886961a2959bda00c4da48257e1f1d47787b86c-rootfs.mount: Deactivated successfully.
Mar 20 18:05:52.756527 containerd[1481]: time="2025-03-20T18:05:52.756472326Z" level=info msg="CreateContainer within sandbox \"86c378cafc1a75d4d6bcac27d9a30e8c1d7086b31fb2d2987b804352c1243a1a\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Mar 20 18:05:52.765610 containerd[1481]: time="2025-03-20T18:05:52.764827503Z" level=info msg="Container ab5e40b6ceecd2104ae51e2d4cadd041c98b45be7667adfcd5762a9ba3113783: CDI devices from CRI Config.CDIDevices: []"
Mar 20 18:05:52.769349 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2803749020.mount: Deactivated successfully.
Mar 20 18:05:52.772471 containerd[1481]: time="2025-03-20T18:05:52.772431879Z" level=info msg="CreateContainer within sandbox \"86c378cafc1a75d4d6bcac27d9a30e8c1d7086b31fb2d2987b804352c1243a1a\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"ab5e40b6ceecd2104ae51e2d4cadd041c98b45be7667adfcd5762a9ba3113783\""
Mar 20 18:05:52.772850 containerd[1481]: time="2025-03-20T18:05:52.772828160Z" level=info msg="StartContainer for \"ab5e40b6ceecd2104ae51e2d4cadd041c98b45be7667adfcd5762a9ba3113783\""
Mar 20 18:05:52.773752 containerd[1481]: time="2025-03-20T18:05:52.773727282Z" level=info msg="connecting to shim ab5e40b6ceecd2104ae51e2d4cadd041c98b45be7667adfcd5762a9ba3113783" address="unix:///run/containerd/s/b487bb739176b35af846e76d0b016f24d5f635b6280ed0fd9f864d33d5457c3e" protocol=ttrpc version=3
Mar 20 18:05:52.795759 systemd[1]: Started cri-containerd-ab5e40b6ceecd2104ae51e2d4cadd041c98b45be7667adfcd5762a9ba3113783.scope - libcontainer container ab5e40b6ceecd2104ae51e2d4cadd041c98b45be7667adfcd5762a9ba3113783.
Mar 20 18:05:52.824870 containerd[1481]: time="2025-03-20T18:05:52.824457507Z" level=info msg="StartContainer for \"ab5e40b6ceecd2104ae51e2d4cadd041c98b45be7667adfcd5762a9ba3113783\" returns successfully"
Mar 20 18:05:52.931125 containerd[1481]: time="2025-03-20T18:05:52.930560648Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ab5e40b6ceecd2104ae51e2d4cadd041c98b45be7667adfcd5762a9ba3113783\" id:\"5bcf078288ab6e64cf190acf72cf46164583fd1ad25217b44f617221037656f0\" pid:3291 exited_at:{seconds:1742493952 nanos:930100127}"
Mar 20 18:05:52.948157 kubelet[2605]: I0320 18:05:52.948120 2605 kubelet_node_status.go:502] "Fast updating node status as it just became ready"
Mar 20 18:05:52.980612 systemd[1]: Created slice kubepods-burstable-pod0424d678_01a1_45c9_bc22_82dbd0c73854.slice - libcontainer container kubepods-burstable-pod0424d678_01a1_45c9_bc22_82dbd0c73854.slice.
Mar 20 18:05:52.986366 systemd[1]: Created slice kubepods-burstable-pod798b5744_deb5_425a_9f10_9d442e210cb5.slice - libcontainer container kubepods-burstable-pod798b5744_deb5_425a_9f10_9d442e210cb5.slice.
Mar 20 18:05:53.104488 kubelet[2605]: I0320 18:05:53.104362 2605 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8l2xd\" (UniqueName: \"kubernetes.io/projected/798b5744-deb5-425a-9f10-9d442e210cb5-kube-api-access-8l2xd\") pod \"coredns-668d6bf9bc-brbrp\" (UID: \"798b5744-deb5-425a-9f10-9d442e210cb5\") " pod="kube-system/coredns-668d6bf9bc-brbrp"
Mar 20 18:05:53.104488 kubelet[2605]: I0320 18:05:53.104425 2605 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0424d678-01a1-45c9-bc22-82dbd0c73854-config-volume\") pod \"coredns-668d6bf9bc-m2z9g\" (UID: \"0424d678-01a1-45c9-bc22-82dbd0c73854\") " pod="kube-system/coredns-668d6bf9bc-m2z9g"
Mar 20 18:05:53.104488 kubelet[2605]: I0320 18:05:53.104447 2605 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-whwh4\" (UniqueName: \"kubernetes.io/projected/0424d678-01a1-45c9-bc22-82dbd0c73854-kube-api-access-whwh4\") pod \"coredns-668d6bf9bc-m2z9g\" (UID: \"0424d678-01a1-45c9-bc22-82dbd0c73854\") " pod="kube-system/coredns-668d6bf9bc-m2z9g"
Mar 20 18:05:53.104488 kubelet[2605]: I0320 18:05:53.104465 2605 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/798b5744-deb5-425a-9f10-9d442e210cb5-config-volume\") pod \"coredns-668d6bf9bc-brbrp\" (UID: \"798b5744-deb5-425a-9f10-9d442e210cb5\") " pod="kube-system/coredns-668d6bf9bc-brbrp"
Mar 20 18:05:53.285315 containerd[1481]: time="2025-03-20T18:05:53.285274870Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-m2z9g,Uid:0424d678-01a1-45c9-bc22-82dbd0c73854,Namespace:kube-system,Attempt:0,}"
Mar 20 18:05:53.291918 containerd[1481]: time="2025-03-20T18:05:53.291883283Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-brbrp,Uid:798b5744-deb5-425a-9f10-9d442e210cb5,Namespace:kube-system,Attempt:0,}"
Mar 20 18:05:53.783826 kubelet[2605]: I0320 18:05:53.783677 2605 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-rv9j8" podStartSLOduration=5.01254636 podStartE2EDuration="10.783658443s" podCreationTimestamp="2025-03-20 18:05:43 +0000 UTC" firstStartedPulling="2025-03-20 18:05:43.556441455 +0000 UTC m=+6.943789508" lastFinishedPulling="2025-03-20 18:05:49.327553498 +0000 UTC m=+12.714901591" observedRunningTime="2025-03-20 18:05:53.782419721 +0000 UTC m=+17.169767814" watchObservedRunningTime="2025-03-20 18:05:53.783658443 +0000 UTC m=+17.171006536"
Mar 20 18:05:55.000518 systemd-networkd[1404]: cilium_host: Link UP
Mar 20 18:05:55.000637 systemd-networkd[1404]: cilium_net: Link UP
Mar 20 18:05:55.001571 systemd-networkd[1404]: cilium_net: Gained carrier
Mar 20 18:05:55.002086 systemd-networkd[1404]: cilium_host: Gained carrier
Mar 20 18:05:55.072113 systemd-networkd[1404]: cilium_vxlan: Link UP
Mar 20 18:05:55.072126 systemd-networkd[1404]: cilium_vxlan: Gained carrier
Mar 20 18:05:55.302230 systemd-networkd[1404]: cilium_net: Gained IPv6LL
Mar 20 18:05:55.373434 kernel: NET: Registered PF_ALG protocol family
Mar 20 18:05:55.428602 systemd-networkd[1404]: cilium_host: Gained IPv6LL
Mar 20 18:05:55.924357 systemd-networkd[1404]: lxc_health: Link UP
Mar 20 18:05:55.926154 systemd-networkd[1404]: lxc_health: Gained carrier
Mar 20 18:05:56.400440 kernel: eth0: renamed from tmp3146f
Mar 20 18:05:56.419419 kernel: eth0: renamed from tmp07bf8
Mar 20 18:05:56.425254 systemd-networkd[1404]: lxc74a335a6c3f1: Link UP
Mar 20 18:05:56.426486 systemd-networkd[1404]: lxcb00dfb8803cc: Link UP
Mar 20 18:05:56.426747 systemd-networkd[1404]: lxc74a335a6c3f1: Gained carrier
Mar 20 18:05:56.426880 systemd-networkd[1404]: lxcb00dfb8803cc: Gained carrier
Mar 20 18:05:56.652615 systemd-networkd[1404]: cilium_vxlan: Gained IPv6LL
Mar 20 18:05:57.036726 systemd-networkd[1404]: lxc_health: Gained IPv6LL
Mar 20 18:05:58.380631 systemd-networkd[1404]: lxc74a335a6c3f1: Gained IPv6LL
Mar 20 18:05:58.380917 systemd-networkd[1404]: lxcb00dfb8803cc: Gained IPv6LL
Mar 20 18:05:59.876954 containerd[1481]: time="2025-03-20T18:05:59.876905267Z" level=info msg="connecting to shim 07bf8336f9db4c3799cd4ccf741bcdbbf392f8fd0bfd40ac83e6d867964153fc" address="unix:///run/containerd/s/9cb3def4a8104402de7648ce7b4ce68560f1ac97b450b44414662dea5a0e8df7" namespace=k8s.io protocol=ttrpc version=3
Mar 20 18:05:59.879137 containerd[1481]: time="2025-03-20T18:05:59.878751949Z" level=info msg="connecting to shim 3146f87a3d2cdea4a19028f736350abd74d2f0606e9256faae804af8adb32ccc" address="unix:///run/containerd/s/0a2f3c55179599ce0d189c99ce172efc34267ebfbcf2b49627c69ad5a59a6fa5" namespace=k8s.io protocol=ttrpc version=3
Mar 20 18:05:59.905603 systemd[1]: Started cri-containerd-07bf8336f9db4c3799cd4ccf741bcdbbf392f8fd0bfd40ac83e6d867964153fc.scope - libcontainer container 07bf8336f9db4c3799cd4ccf741bcdbbf392f8fd0bfd40ac83e6d867964153fc.
Mar 20 18:05:59.907002 systemd[1]: Started cri-containerd-3146f87a3d2cdea4a19028f736350abd74d2f0606e9256faae804af8adb32ccc.scope - libcontainer container 3146f87a3d2cdea4a19028f736350abd74d2f0606e9256faae804af8adb32ccc.
Mar 20 18:05:59.916374 systemd-resolved[1326]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Mar 20 18:05:59.918041 systemd-resolved[1326]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Mar 20 18:05:59.937098 containerd[1481]: time="2025-03-20T18:05:59.936511146Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-brbrp,Uid:798b5744-deb5-425a-9f10-9d442e210cb5,Namespace:kube-system,Attempt:0,} returns sandbox id \"07bf8336f9db4c3799cd4ccf741bcdbbf392f8fd0bfd40ac83e6d867964153fc\""
Mar 20 18:05:59.939278 containerd[1481]: time="2025-03-20T18:05:59.939253430Z" level=info msg="CreateContainer within sandbox \"07bf8336f9db4c3799cd4ccf741bcdbbf392f8fd0bfd40ac83e6d867964153fc\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Mar 20 18:05:59.943224 containerd[1481]: time="2025-03-20T18:05:59.943140835Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-m2z9g,Uid:0424d678-01a1-45c9-bc22-82dbd0c73854,Namespace:kube-system,Attempt:0,} returns sandbox id \"3146f87a3d2cdea4a19028f736350abd74d2f0606e9256faae804af8adb32ccc\""
Mar 20 18:05:59.946256 containerd[1481]: time="2025-03-20T18:05:59.946229319Z" level=info msg="CreateContainer within sandbox \"3146f87a3d2cdea4a19028f736350abd74d2f0606e9256faae804af8adb32ccc\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Mar 20 18:05:59.952424 containerd[1481]: time="2025-03-20T18:05:59.951305446Z" level=info msg="Container c0ec79bd6e77f3bb2479b91967d8c3af314b29f479376d452ae1e9ad643efeab: CDI devices from CRI Config.CDIDevices: []"
Mar 20 18:05:59.958057 containerd[1481]: time="2025-03-20T18:05:59.957956774Z" level=info msg="CreateContainer within sandbox \"07bf8336f9db4c3799cd4ccf741bcdbbf392f8fd0bfd40ac83e6d867964153fc\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"c0ec79bd6e77f3bb2479b91967d8c3af314b29f479376d452ae1e9ad643efeab\""
Mar 20 18:05:59.958448 containerd[1481]: time="2025-03-20T18:05:59.958420855Z" level=info msg="StartContainer for \"c0ec79bd6e77f3bb2479b91967d8c3af314b29f479376d452ae1e9ad643efeab\""
Mar 20 18:05:59.959327 containerd[1481]: time="2025-03-20T18:05:59.959290936Z" level=info msg="Container eff5d042e2553358f2b2423758da881d5c92fa6f7d29d4a4c0f4b8e992105267: CDI devices from CRI Config.CDIDevices: []"
Mar 20 18:05:59.959491 containerd[1481]: time="2025-03-20T18:05:59.959394376Z" level=info msg="connecting to shim c0ec79bd6e77f3bb2479b91967d8c3af314b29f479376d452ae1e9ad643efeab" address="unix:///run/containerd/s/9cb3def4a8104402de7648ce7b4ce68560f1ac97b450b44414662dea5a0e8df7" protocol=ttrpc version=3
Mar 20 18:05:59.964195 containerd[1481]: time="2025-03-20T18:05:59.964107143Z" level=info msg="CreateContainer within sandbox \"3146f87a3d2cdea4a19028f736350abd74d2f0606e9256faae804af8adb32ccc\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"eff5d042e2553358f2b2423758da881d5c92fa6f7d29d4a4c0f4b8e992105267\""
Mar 20 18:05:59.964525 containerd[1481]: time="2025-03-20T18:05:59.964502663Z" level=info msg="StartContainer for \"eff5d042e2553358f2b2423758da881d5c92fa6f7d29d4a4c0f4b8e992105267\""
Mar 20 18:05:59.965246 containerd[1481]: time="2025-03-20T18:05:59.965221384Z" level=info msg="connecting to shim eff5d042e2553358f2b2423758da881d5c92fa6f7d29d4a4c0f4b8e992105267" address="unix:///run/containerd/s/0a2f3c55179599ce0d189c99ce172efc34267ebfbcf2b49627c69ad5a59a6fa5" protocol=ttrpc version=3
Mar 20 18:05:59.975529 systemd[1]: Started cri-containerd-c0ec79bd6e77f3bb2479b91967d8c3af314b29f479376d452ae1e9ad643efeab.scope - libcontainer container c0ec79bd6e77f3bb2479b91967d8c3af314b29f479376d452ae1e9ad643efeab.
Mar 20 18:05:59.977729 systemd[1]: Started cri-containerd-eff5d042e2553358f2b2423758da881d5c92fa6f7d29d4a4c0f4b8e992105267.scope - libcontainer container eff5d042e2553358f2b2423758da881d5c92fa6f7d29d4a4c0f4b8e992105267.
Mar 20 18:06:00.007575 containerd[1481]: time="2025-03-20T18:06:00.007503760Z" level=info msg="StartContainer for \"eff5d042e2553358f2b2423758da881d5c92fa6f7d29d4a4c0f4b8e992105267\" returns successfully"
Mar 20 18:06:00.025107 containerd[1481]: time="2025-03-20T18:06:00.025065221Z" level=info msg="StartContainer for \"c0ec79bd6e77f3bb2479b91967d8c3af314b29f479376d452ae1e9ad643efeab\" returns successfully"
Mar 20 18:06:00.792977 kubelet[2605]: I0320 18:06:00.792915 2605 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-brbrp" podStartSLOduration=17.792901576 podStartE2EDuration="17.792901576s" podCreationTimestamp="2025-03-20 18:05:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-20 18:06:00.792660495 +0000 UTC m=+24.180008588" watchObservedRunningTime="2025-03-20 18:06:00.792901576 +0000 UTC m=+24.180249629"
Mar 20 18:06:00.814541 kubelet[2605]: I0320 18:06:00.814483 2605 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-m2z9g" podStartSLOduration=17.814467322 podStartE2EDuration="17.814467322s" podCreationTimestamp="2025-03-20 18:05:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-20 18:06:00.804025589 +0000 UTC m=+24.191373682" watchObservedRunningTime="2025-03-20 18:06:00.814467322 +0000 UTC m=+24.201815415"
Mar 20 18:06:03.711858 systemd[1]: Started sshd@7-10.0.0.103:22-10.0.0.1:40328.service - OpenSSH per-connection server daemon (10.0.0.1:40328).
Mar 20 18:06:03.764117 sshd[3957]: Accepted publickey for core from 10.0.0.1 port 40328 ssh2: RSA SHA256:5INvQ+AMoxUEAMpsPBJHVEmzjRKBHHiGaLqk69aAF2o
Mar 20 18:06:03.765618 sshd-session[3957]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 20 18:06:03.770043 systemd-logind[1469]: New session 8 of user core.
Mar 20 18:06:03.779534 systemd[1]: Started session-8.scope - Session 8 of User core.
Mar 20 18:06:03.907736 sshd[3959]: Connection closed by 10.0.0.1 port 40328
Mar 20 18:06:03.908039 sshd-session[3957]: pam_unix(sshd:session): session closed for user core
Mar 20 18:06:03.911143 systemd[1]: sshd@7-10.0.0.103:22-10.0.0.1:40328.service: Deactivated successfully.
Mar 20 18:06:03.913123 systemd[1]: session-8.scope: Deactivated successfully.
Mar 20 18:06:03.913937 systemd-logind[1469]: Session 8 logged out. Waiting for processes to exit.
Mar 20 18:06:03.914848 systemd-logind[1469]: Removed session 8.
Mar 20 18:06:08.920120 systemd[1]: Started sshd@8-10.0.0.103:22-10.0.0.1:40342.service - OpenSSH per-connection server daemon (10.0.0.1:40342).
Mar 20 18:06:08.977053 sshd[3976]: Accepted publickey for core from 10.0.0.1 port 40342 ssh2: RSA SHA256:5INvQ+AMoxUEAMpsPBJHVEmzjRKBHHiGaLqk69aAF2o
Mar 20 18:06:08.978298 sshd-session[3976]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 20 18:06:08.982357 systemd-logind[1469]: New session 9 of user core.
Mar 20 18:06:08.989543 systemd[1]: Started session-9.scope - Session 9 of User core.
Mar 20 18:06:09.102096 sshd[3978]: Connection closed by 10.0.0.1 port 40342
Mar 20 18:06:09.102560 sshd-session[3976]: pam_unix(sshd:session): session closed for user core
Mar 20 18:06:09.105862 systemd[1]: sshd@8-10.0.0.103:22-10.0.0.1:40342.service: Deactivated successfully.
Mar 20 18:06:09.107596 systemd[1]: session-9.scope: Deactivated successfully.
Mar 20 18:06:09.108431 systemd-logind[1469]: Session 9 logged out. Waiting for processes to exit.
Mar 20 18:06:09.109283 systemd-logind[1469]: Removed session 9.
Mar 20 18:06:14.113525 systemd[1]: Started sshd@9-10.0.0.103:22-10.0.0.1:50042.service - OpenSSH per-connection server daemon (10.0.0.1:50042).
Mar 20 18:06:14.166440 sshd[3996]: Accepted publickey for core from 10.0.0.1 port 50042 ssh2: RSA SHA256:5INvQ+AMoxUEAMpsPBJHVEmzjRKBHHiGaLqk69aAF2o
Mar 20 18:06:14.167502 sshd-session[3996]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 20 18:06:14.171012 systemd-logind[1469]: New session 10 of user core.
Mar 20 18:06:14.182570 systemd[1]: Started session-10.scope - Session 10 of User core.
Mar 20 18:06:14.290751 sshd[3998]: Connection closed by 10.0.0.1 port 50042
Mar 20 18:06:14.291281 sshd-session[3996]: pam_unix(sshd:session): session closed for user core
Mar 20 18:06:14.294705 systemd[1]: sshd@9-10.0.0.103:22-10.0.0.1:50042.service: Deactivated successfully.
Mar 20 18:06:14.296628 systemd[1]: session-10.scope: Deactivated successfully.
Mar 20 18:06:14.297228 systemd-logind[1469]: Session 10 logged out. Waiting for processes to exit.
Mar 20 18:06:14.298359 systemd-logind[1469]: Removed session 10.
Mar 20 18:06:19.305395 systemd[1]: Started sshd@10-10.0.0.103:22-10.0.0.1:50048.service - OpenSSH per-connection server daemon (10.0.0.1:50048).
Mar 20 18:06:19.360420 sshd[4012]: Accepted publickey for core from 10.0.0.1 port 50048 ssh2: RSA SHA256:5INvQ+AMoxUEAMpsPBJHVEmzjRKBHHiGaLqk69aAF2o
Mar 20 18:06:19.361595 sshd-session[4012]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 20 18:06:19.365631 systemd-logind[1469]: New session 11 of user core.
Mar 20 18:06:19.373528 systemd[1]: Started session-11.scope - Session 11 of User core.
Mar 20 18:06:19.483139 sshd[4014]: Connection closed by 10.0.0.1 port 50048
Mar 20 18:06:19.483485 sshd-session[4012]: pam_unix(sshd:session): session closed for user core
Mar 20 18:06:19.503789 systemd[1]: sshd@10-10.0.0.103:22-10.0.0.1:50048.service: Deactivated successfully.
Mar 20 18:06:19.505560 systemd[1]: session-11.scope: Deactivated successfully.
Mar 20 18:06:19.506289 systemd-logind[1469]: Session 11 logged out. Waiting for processes to exit.
Mar 20 18:06:19.508104 systemd[1]: Started sshd@11-10.0.0.103:22-10.0.0.1:50056.service - OpenSSH per-connection server daemon (10.0.0.1:50056).
Mar 20 18:06:19.509084 systemd-logind[1469]: Removed session 11.
Mar 20 18:06:19.559011 sshd[4027]: Accepted publickey for core from 10.0.0.1 port 50056 ssh2: RSA SHA256:5INvQ+AMoxUEAMpsPBJHVEmzjRKBHHiGaLqk69aAF2o
Mar 20 18:06:19.560052 sshd-session[4027]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 20 18:06:19.563878 systemd-logind[1469]: New session 12 of user core.
Mar 20 18:06:19.579592 systemd[1]: Started session-12.scope - Session 12 of User core.
Mar 20 18:06:19.727008 sshd[4030]: Connection closed by 10.0.0.1 port 50056
Mar 20 18:06:19.727651 sshd-session[4027]: pam_unix(sshd:session): session closed for user core
Mar 20 18:06:19.737505 systemd[1]: sshd@11-10.0.0.103:22-10.0.0.1:50056.service: Deactivated successfully.
Mar 20 18:06:19.739033 systemd[1]: session-12.scope: Deactivated successfully.
Mar 20 18:06:19.741270 systemd-logind[1469]: Session 12 logged out. Waiting for processes to exit.
Mar 20 18:06:19.742603 systemd[1]: Started sshd@12-10.0.0.103:22-10.0.0.1:50070.service - OpenSSH per-connection server daemon (10.0.0.1:50070).
Mar 20 18:06:19.744063 systemd-logind[1469]: Removed session 12.
Mar 20 18:06:19.795639 sshd[4041]: Accepted publickey for core from 10.0.0.1 port 50070 ssh2: RSA SHA256:5INvQ+AMoxUEAMpsPBJHVEmzjRKBHHiGaLqk69aAF2o
Mar 20 18:06:19.796911 sshd-session[4041]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 20 18:06:19.801169 systemd-logind[1469]: New session 13 of user core.
Mar 20 18:06:19.813610 systemd[1]: Started session-13.scope - Session 13 of User core.
Mar 20 18:06:19.925397 sshd[4044]: Connection closed by 10.0.0.1 port 50070
Mar 20 18:06:19.924843 sshd-session[4041]: pam_unix(sshd:session): session closed for user core
Mar 20 18:06:19.928324 systemd[1]: sshd@12-10.0.0.103:22-10.0.0.1:50070.service: Deactivated successfully.
Mar 20 18:06:19.930122 systemd[1]: session-13.scope: Deactivated successfully.
Mar 20 18:06:19.930770 systemd-logind[1469]: Session 13 logged out. Waiting for processes to exit.
Mar 20 18:06:19.931582 systemd-logind[1469]: Removed session 13.
Mar 20 18:06:24.936769 systemd[1]: Started sshd@13-10.0.0.103:22-10.0.0.1:38340.service - OpenSSH per-connection server daemon (10.0.0.1:38340).
Mar 20 18:06:24.986153 sshd[4057]: Accepted publickey for core from 10.0.0.1 port 38340 ssh2: RSA SHA256:5INvQ+AMoxUEAMpsPBJHVEmzjRKBHHiGaLqk69aAF2o
Mar 20 18:06:24.987343 sshd-session[4057]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 20 18:06:24.991316 systemd-logind[1469]: New session 14 of user core.
Mar 20 18:06:25.000512 systemd[1]: Started session-14.scope - Session 14 of User core.
Mar 20 18:06:25.113304 sshd[4059]: Connection closed by 10.0.0.1 port 38340
Mar 20 18:06:25.113622 sshd-session[4057]: pam_unix(sshd:session): session closed for user core
Mar 20 18:06:25.116338 systemd[1]: sshd@13-10.0.0.103:22-10.0.0.1:38340.service: Deactivated successfully.
Mar 20 18:06:25.117955 systemd[1]: session-14.scope: Deactivated successfully.
Mar 20 18:06:25.119185 systemd-logind[1469]: Session 14 logged out. Waiting for processes to exit.
Mar 20 18:06:25.120009 systemd-logind[1469]: Removed session 14.
Mar 20 18:06:30.124643 systemd[1]: Started sshd@14-10.0.0.103:22-10.0.0.1:38350.service - OpenSSH per-connection server daemon (10.0.0.1:38350).
Mar 20 18:06:30.178729 sshd[4076]: Accepted publickey for core from 10.0.0.1 port 38350 ssh2: RSA SHA256:5INvQ+AMoxUEAMpsPBJHVEmzjRKBHHiGaLqk69aAF2o
Mar 20 18:06:30.179943 sshd-session[4076]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 20 18:06:30.183682 systemd-logind[1469]: New session 15 of user core.
Mar 20 18:06:30.191511 systemd[1]: Started session-15.scope - Session 15 of User core.
Mar 20 18:06:30.301110 sshd[4078]: Connection closed by 10.0.0.1 port 38350
Mar 20 18:06:30.301260 sshd-session[4076]: pam_unix(sshd:session): session closed for user core
Mar 20 18:06:30.319589 systemd[1]: sshd@14-10.0.0.103:22-10.0.0.1:38350.service: Deactivated successfully.
Mar 20 18:06:30.322075 systemd[1]: session-15.scope: Deactivated successfully.
Mar 20 18:06:30.323837 systemd-logind[1469]: Session 15 logged out. Waiting for processes to exit.
Mar 20 18:06:30.326737 systemd[1]: Started sshd@15-10.0.0.103:22-10.0.0.1:38358.service - OpenSSH per-connection server daemon (10.0.0.1:38358).
Mar 20 18:06:30.327989 systemd-logind[1469]: Removed session 15.
Mar 20 18:06:30.376484 sshd[4091]: Accepted publickey for core from 10.0.0.1 port 38358 ssh2: RSA SHA256:5INvQ+AMoxUEAMpsPBJHVEmzjRKBHHiGaLqk69aAF2o
Mar 20 18:06:30.377523 sshd-session[4091]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 20 18:06:30.381897 systemd-logind[1469]: New session 16 of user core.
Mar 20 18:06:30.389535 systemd[1]: Started session-16.scope - Session 16 of User core.
Mar 20 18:06:30.603253 sshd[4094]: Connection closed by 10.0.0.1 port 38358
Mar 20 18:06:30.604545 sshd-session[4091]: pam_unix(sshd:session): session closed for user core
Mar 20 18:06:30.616855 systemd[1]: sshd@15-10.0.0.103:22-10.0.0.1:38358.service: Deactivated successfully.
Mar 20 18:06:30.618776 systemd[1]: session-16.scope: Deactivated successfully.
Mar 20 18:06:30.619633 systemd-logind[1469]: Session 16 logged out. Waiting for processes to exit.
Mar 20 18:06:30.621811 systemd[1]: Started sshd@16-10.0.0.103:22-10.0.0.1:38372.service - OpenSSH per-connection server daemon (10.0.0.1:38372).
Mar 20 18:06:30.622661 systemd-logind[1469]: Removed session 16.
Mar 20 18:06:30.671842 sshd[4105]: Accepted publickey for core from 10.0.0.1 port 38372 ssh2: RSA SHA256:5INvQ+AMoxUEAMpsPBJHVEmzjRKBHHiGaLqk69aAF2o
Mar 20 18:06:30.673043 sshd-session[4105]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 20 18:06:30.677459 systemd-logind[1469]: New session 17 of user core.
Mar 20 18:06:30.691540 systemd[1]: Started session-17.scope - Session 17 of User core.
Mar 20 18:06:31.390985 sshd[4108]: Connection closed by 10.0.0.1 port 38372
Mar 20 18:06:31.391614 sshd-session[4105]: pam_unix(sshd:session): session closed for user core
Mar 20 18:06:31.400607 systemd[1]: sshd@16-10.0.0.103:22-10.0.0.1:38372.service: Deactivated successfully.
Mar 20 18:06:31.402122 systemd[1]: session-17.scope: Deactivated successfully.
Mar 20 18:06:31.403669 systemd-logind[1469]: Session 17 logged out. Waiting for processes to exit.
Mar 20 18:06:31.407688 systemd[1]: Started sshd@17-10.0.0.103:22-10.0.0.1:38378.service - OpenSSH per-connection server daemon (10.0.0.1:38378).
Mar 20 18:06:31.410200 systemd-logind[1469]: Removed session 17.
Mar 20 18:06:31.461581 sshd[4130]: Accepted publickey for core from 10.0.0.1 port 38378 ssh2: RSA SHA256:5INvQ+AMoxUEAMpsPBJHVEmzjRKBHHiGaLqk69aAF2o
Mar 20 18:06:31.462771 sshd-session[4130]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 20 18:06:31.467064 systemd-logind[1469]: New session 18 of user core.
Mar 20 18:06:31.476606 systemd[1]: Started session-18.scope - Session 18 of User core.
Mar 20 18:06:31.692546 sshd[4133]: Connection closed by 10.0.0.1 port 38378
Mar 20 18:06:31.693212 sshd-session[4130]: pam_unix(sshd:session): session closed for user core
Mar 20 18:06:31.704575 systemd[1]: sshd@17-10.0.0.103:22-10.0.0.1:38378.service: Deactivated successfully.
Mar 20 18:06:31.706074 systemd[1]: session-18.scope: Deactivated successfully.
Mar 20 18:06:31.706771 systemd-logind[1469]: Session 18 logged out. Waiting for processes to exit.
Mar 20 18:06:31.708573 systemd[1]: Started sshd@18-10.0.0.103:22-10.0.0.1:38394.service - OpenSSH per-connection server daemon (10.0.0.1:38394).
Mar 20 18:06:31.709351 systemd-logind[1469]: Removed session 18.
Mar 20 18:06:31.761498 sshd[4143]: Accepted publickey for core from 10.0.0.1 port 38394 ssh2: RSA SHA256:5INvQ+AMoxUEAMpsPBJHVEmzjRKBHHiGaLqk69aAF2o
Mar 20 18:06:31.762858 sshd-session[4143]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 20 18:06:31.767465 systemd-logind[1469]: New session 19 of user core.
Mar 20 18:06:31.780727 systemd[1]: Started session-19.scope - Session 19 of User core.
Mar 20 18:06:31.893279 sshd[4146]: Connection closed by 10.0.0.1 port 38394
Mar 20 18:06:31.893618 sshd-session[4143]: pam_unix(sshd:session): session closed for user core
Mar 20 18:06:31.896248 systemd[1]: sshd@18-10.0.0.103:22-10.0.0.1:38394.service: Deactivated successfully.
Mar 20 18:06:31.897918 systemd[1]: session-19.scope: Deactivated successfully.
Mar 20 18:06:31.899213 systemd-logind[1469]: Session 19 logged out. Waiting for processes to exit.
Mar 20 18:06:31.900743 systemd-logind[1469]: Removed session 19.
Mar 20 18:06:36.904858 systemd[1]: Started sshd@19-10.0.0.103:22-10.0.0.1:35902.service - OpenSSH per-connection server daemon (10.0.0.1:35902).
Mar 20 18:06:36.959467 sshd[4165]: Accepted publickey for core from 10.0.0.1 port 35902 ssh2: RSA SHA256:5INvQ+AMoxUEAMpsPBJHVEmzjRKBHHiGaLqk69aAF2o
Mar 20 18:06:36.960751 sshd-session[4165]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 20 18:06:36.964662 systemd-logind[1469]: New session 20 of user core.
Mar 20 18:06:36.974544 systemd[1]: Started session-20.scope - Session 20 of User core.
Mar 20 18:06:37.080425 sshd[4167]: Connection closed by 10.0.0.1 port 35902
Mar 20 18:06:37.080935 sshd-session[4165]: pam_unix(sshd:session): session closed for user core
Mar 20 18:06:37.084143 systemd[1]: sshd@19-10.0.0.103:22-10.0.0.1:35902.service: Deactivated successfully.
Mar 20 18:06:37.085790 systemd[1]: session-20.scope: Deactivated successfully.
Mar 20 18:06:37.086447 systemd-logind[1469]: Session 20 logged out. Waiting for processes to exit.
Mar 20 18:06:37.087155 systemd-logind[1469]: Removed session 20.
Mar 20 18:06:42.091659 systemd[1]: Started sshd@20-10.0.0.103:22-10.0.0.1:35906.service - OpenSSH per-connection server daemon (10.0.0.1:35906).
Mar 20 18:06:42.141306 sshd[4180]: Accepted publickey for core from 10.0.0.1 port 35906 ssh2: RSA SHA256:5INvQ+AMoxUEAMpsPBJHVEmzjRKBHHiGaLqk69aAF2o
Mar 20 18:06:42.142456 sshd-session[4180]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 20 18:06:42.145932 systemd-logind[1469]: New session 21 of user core.
Mar 20 18:06:42.156701 systemd[1]: Started session-21.scope - Session 21 of User core.
Mar 20 18:06:42.260892 sshd[4182]: Connection closed by 10.0.0.1 port 35906
Mar 20 18:06:42.261377 sshd-session[4180]: pam_unix(sshd:session): session closed for user core
Mar 20 18:06:42.264560 systemd[1]: sshd@20-10.0.0.103:22-10.0.0.1:35906.service: Deactivated successfully.
Mar 20 18:06:42.266169 systemd[1]: session-21.scope: Deactivated successfully.
Mar 20 18:06:42.267083 systemd-logind[1469]: Session 21 logged out. Waiting for processes to exit.
Mar 20 18:06:42.267839 systemd-logind[1469]: Removed session 21.
Mar 20 18:06:47.276962 systemd[1]: Started sshd@21-10.0.0.103:22-10.0.0.1:41966.service - OpenSSH per-connection server daemon (10.0.0.1:41966).
Mar 20 18:06:47.328564 sshd[4197]: Accepted publickey for core from 10.0.0.1 port 41966 ssh2: RSA SHA256:5INvQ+AMoxUEAMpsPBJHVEmzjRKBHHiGaLqk69aAF2o
Mar 20 18:06:47.329629 sshd-session[4197]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 20 18:06:47.333214 systemd-logind[1469]: New session 22 of user core.
Mar 20 18:06:47.340525 systemd[1]: Started session-22.scope - Session 22 of User core.
Mar 20 18:06:47.445997 sshd[4199]: Connection closed by 10.0.0.1 port 41966
Mar 20 18:06:47.446549 sshd-session[4197]: pam_unix(sshd:session): session closed for user core
Mar 20 18:06:47.449725 systemd[1]: sshd@21-10.0.0.103:22-10.0.0.1:41966.service: Deactivated successfully.
Mar 20 18:06:47.451770 systemd[1]: session-22.scope: Deactivated successfully.
Mar 20 18:06:47.452370 systemd-logind[1469]: Session 22 logged out. Waiting for processes to exit.
Mar 20 18:06:47.453306 systemd-logind[1469]: Removed session 22.
Mar 20 18:06:52.457578 systemd[1]: Started sshd@22-10.0.0.103:22-10.0.0.1:41968.service - OpenSSH per-connection server daemon (10.0.0.1:41968).
Mar 20 18:06:52.511452 sshd[4213]: Accepted publickey for core from 10.0.0.1 port 41968 ssh2: RSA SHA256:5INvQ+AMoxUEAMpsPBJHVEmzjRKBHHiGaLqk69aAF2o
Mar 20 18:06:52.512625 sshd-session[4213]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 20 18:06:52.519443 systemd-logind[1469]: New session 23 of user core.
Mar 20 18:06:52.528527 systemd[1]: Started session-23.scope - Session 23 of User core.
Mar 20 18:06:52.638959 sshd[4215]: Connection closed by 10.0.0.1 port 41968
Mar 20 18:06:52.639564 sshd-session[4213]: pam_unix(sshd:session): session closed for user core
Mar 20 18:06:52.652676 systemd[1]: sshd@22-10.0.0.103:22-10.0.0.1:41968.service: Deactivated successfully.
Mar 20 18:06:52.654771 systemd[1]: session-23.scope: Deactivated successfully.
Mar 20 18:06:52.655903 systemd-logind[1469]: Session 23 logged out. Waiting for processes to exit.
Mar 20 18:06:52.658034 systemd[1]: Started sshd@23-10.0.0.103:22-10.0.0.1:58620.service - OpenSSH per-connection server daemon (10.0.0.1:58620).
Mar 20 18:06:52.658964 systemd-logind[1469]: Removed session 23.
Mar 20 18:06:52.713871 sshd[4228]: Accepted publickey for core from 10.0.0.1 port 58620 ssh2: RSA SHA256:5INvQ+AMoxUEAMpsPBJHVEmzjRKBHHiGaLqk69aAF2o
Mar 20 18:06:52.715024 sshd-session[4228]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 20 18:06:52.719137 systemd-logind[1469]: New session 24 of user core.
Mar 20 18:06:52.727326 systemd[1]: Started session-24.scope - Session 24 of User core.
Mar 20 18:06:54.974763 containerd[1481]: time="2025-03-20T18:06:54.974131663Z" level=info msg="StopContainer for \"3d86f0c176b1e01deadbd0048b3bee272eb68f5b7998c22285b4fcce9e97f8e0\" with timeout 30 (s)"
Mar 20 18:06:54.984304 containerd[1481]: time="2025-03-20T18:06:54.984239681Z" level=info msg="Stop container \"3d86f0c176b1e01deadbd0048b3bee272eb68f5b7998c22285b4fcce9e97f8e0\" with signal terminated"
Mar 20 18:06:54.991709 systemd[1]: cri-containerd-3d86f0c176b1e01deadbd0048b3bee272eb68f5b7998c22285b4fcce9e97f8e0.scope: Deactivated successfully.
Mar 20 18:06:54.996886 containerd[1481]: time="2025-03-20T18:06:54.993207173Z" level=info msg="received exit event container_id:\"3d86f0c176b1e01deadbd0048b3bee272eb68f5b7998c22285b4fcce9e97f8e0\" id:\"3d86f0c176b1e01deadbd0048b3bee272eb68f5b7998c22285b4fcce9e97f8e0\" pid:3148 exited_at:{seconds:1742494014 nanos:992912651}"
Mar 20 18:06:54.996886 containerd[1481]: time="2025-03-20T18:06:54.993248773Z" level=info msg="TaskExit event in podsandbox handler container_id:\"3d86f0c176b1e01deadbd0048b3bee272eb68f5b7998c22285b4fcce9e97f8e0\" id:\"3d86f0c176b1e01deadbd0048b3bee272eb68f5b7998c22285b4fcce9e97f8e0\" pid:3148 exited_at:{seconds:1742494014 nanos:992912651}"
Mar 20 18:06:55.008022 containerd[1481]: time="2025-03-20T18:06:55.007961977Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ab5e40b6ceecd2104ae51e2d4cadd041c98b45be7667adfcd5762a9ba3113783\" id:\"8b947ddb168fd92fb9cb83fc5f522b3de1a7d80597e8575acb0f1ab72d1c7d18\" pid:4252 exited_at:{seconds:1742494015 nanos:7659456}"
Mar 20 18:06:55.010807 containerd[1481]: time="2025-03-20T18:06:55.010504192Z" level=info msg="StopContainer for \"ab5e40b6ceecd2104ae51e2d4cadd041c98b45be7667adfcd5762a9ba3113783\" with timeout 2 (s)"
Mar 20 18:06:55.011597 containerd[1481]: time="2025-03-20T18:06:55.011517957Z" level=info msg="Stop container \"ab5e40b6ceecd2104ae51e2d4cadd041c98b45be7667adfcd5762a9ba3113783\" with signal terminated"
Mar 20 18:06:55.012397 containerd[1481]: time="2025-03-20T18:06:55.012337122Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Mar 20 18:06:55.012596 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3d86f0c176b1e01deadbd0048b3bee272eb68f5b7998c22285b4fcce9e97f8e0-rootfs.mount: Deactivated successfully.
Mar 20 18:06:55.020723 systemd-networkd[1404]: lxc_health: Link DOWN
Mar 20 18:06:55.020729 systemd-networkd[1404]: lxc_health: Lost carrier
Mar 20 18:06:55.025547 containerd[1481]: time="2025-03-20T18:06:55.025454116Z" level=info msg="StopContainer for \"3d86f0c176b1e01deadbd0048b3bee272eb68f5b7998c22285b4fcce9e97f8e0\" returns successfully"
Mar 20 18:06:55.026239 containerd[1481]: time="2025-03-20T18:06:55.026015719Z" level=info msg="StopPodSandbox for \"dbdde60a0be5a026088ee52d3a006767b0a2017feb710437794a89d798a5c01a\""
Mar 20 18:06:55.026239 containerd[1481]: time="2025-03-20T18:06:55.026090960Z" level=info msg="Container to stop \"3d86f0c176b1e01deadbd0048b3bee272eb68f5b7998c22285b4fcce9e97f8e0\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 20 18:06:55.035213 systemd[1]: cri-containerd-dbdde60a0be5a026088ee52d3a006767b0a2017feb710437794a89d798a5c01a.scope: Deactivated successfully.
Mar 20 18:06:55.036876 systemd[1]: cri-containerd-ab5e40b6ceecd2104ae51e2d4cadd041c98b45be7667adfcd5762a9ba3113783.scope: Deactivated successfully.
Mar 20 18:06:55.037155 systemd[1]: cri-containerd-ab5e40b6ceecd2104ae51e2d4cadd041c98b45be7667adfcd5762a9ba3113783.scope: Consumed 6.346s CPU time, 121.3M memory peak, 152K read from disk, 12.9M written to disk.
Mar 20 18:06:55.039568 containerd[1481]: time="2025-03-20T18:06:55.039521075Z" level=info msg="received exit event container_id:\"ab5e40b6ceecd2104ae51e2d4cadd041c98b45be7667adfcd5762a9ba3113783\" id:\"ab5e40b6ceecd2104ae51e2d4cadd041c98b45be7667adfcd5762a9ba3113783\" pid:3261 exited_at:{seconds:1742494015 nanos:37085661}"
Mar 20 18:06:55.040215 containerd[1481]: time="2025-03-20T18:06:55.039805077Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ab5e40b6ceecd2104ae51e2d4cadd041c98b45be7667adfcd5762a9ba3113783\" id:\"ab5e40b6ceecd2104ae51e2d4cadd041c98b45be7667adfcd5762a9ba3113783\" pid:3261 exited_at:{seconds:1742494015 nanos:37085661}"
Mar 20 18:06:55.041072 containerd[1481]: time="2025-03-20T18:06:55.041041324Z" level=info msg="TaskExit event in podsandbox handler container_id:\"dbdde60a0be5a026088ee52d3a006767b0a2017feb710437794a89d798a5c01a\" id:\"dbdde60a0be5a026088ee52d3a006767b0a2017feb710437794a89d798a5c01a\" pid:2824 exit_status:137 exited_at:{seconds:1742494015 nanos:40492881}"
Mar 20 18:06:55.057863 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ab5e40b6ceecd2104ae51e2d4cadd041c98b45be7667adfcd5762a9ba3113783-rootfs.mount: Deactivated successfully.
Mar 20 18:06:55.066399 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-dbdde60a0be5a026088ee52d3a006767b0a2017feb710437794a89d798a5c01a-rootfs.mount: Deactivated successfully.
Mar 20 18:06:55.067279 containerd[1481]: time="2025-03-20T18:06:55.067247031Z" level=info msg="shim disconnected" id=dbdde60a0be5a026088ee52d3a006767b0a2017feb710437794a89d798a5c01a namespace=k8s.io
Mar 20 18:06:55.067476 containerd[1481]: time="2025-03-20T18:06:55.067431352Z" level=warning msg="cleaning up after shim disconnected" id=dbdde60a0be5a026088ee52d3a006767b0a2017feb710437794a89d798a5c01a namespace=k8s.io
Mar 20 18:06:55.067665 containerd[1481]: time="2025-03-20T18:06:55.067529073Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 20 18:06:55.070880 containerd[1481]: time="2025-03-20T18:06:55.070847932Z" level=info msg="StopContainer for \"ab5e40b6ceecd2104ae51e2d4cadd041c98b45be7667adfcd5762a9ba3113783\" returns successfully"
Mar 20 18:06:55.071544 containerd[1481]: time="2025-03-20T18:06:55.071271494Z" level=info msg="StopPodSandbox for \"86c378cafc1a75d4d6bcac27d9a30e8c1d7086b31fb2d2987b804352c1243a1a\""
Mar 20 18:06:55.071544 containerd[1481]: time="2025-03-20T18:06:55.071347134Z" level=info msg="Container to stop \"2c5d8c8c677309316d1cdc4beff3f4db920939a6b605a981402c1cfe88960b19\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 20 18:06:55.071544 containerd[1481]: time="2025-03-20T18:06:55.071359535Z" level=info msg="Container to stop \"ab5e40b6ceecd2104ae51e2d4cadd041c98b45be7667adfcd5762a9ba3113783\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 20 18:06:55.071544 containerd[1481]: time="2025-03-20T18:06:55.071368295Z" level=info msg="Container to stop \"028d0b834d75dcdf53f76a2c5e9e91caf4e294886c6de3023c4cd0a3f3d99cce\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 20 18:06:55.071544 containerd[1481]: time="2025-03-20T18:06:55.071378375Z" level=info msg="Container to stop \"4e0db1fc35db18310d902552a028b11793bddf73861f7db25a241c195813d106\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 20 18:06:55.071544 containerd[1481]: time="2025-03-20T18:06:55.071398815Z" level=info msg="Container to stop \"0087b6320893288c9df595f26886961a2959bda00c4da48257e1f1d47787b86c\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 20 18:06:55.077048 systemd[1]: cri-containerd-86c378cafc1a75d4d6bcac27d9a30e8c1d7086b31fb2d2987b804352c1243a1a.scope: Deactivated successfully.
Mar 20 18:06:55.088037 containerd[1481]: time="2025-03-20T18:06:55.087996228Z" level=info msg="TaskExit event in podsandbox handler container_id:\"86c378cafc1a75d4d6bcac27d9a30e8c1d7086b31fb2d2987b804352c1243a1a\" id:\"86c378cafc1a75d4d6bcac27d9a30e8c1d7086b31fb2d2987b804352c1243a1a\" pid:2752 exit_status:137 exited_at:{seconds:1742494015 nanos:83565563}"
Mar 20 18:06:55.090750 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-dbdde60a0be5a026088ee52d3a006767b0a2017feb710437794a89d798a5c01a-shm.mount: Deactivated successfully.
Mar 20 18:06:55.091043 containerd[1481]: time="2025-03-20T18:06:55.091008685Z" level=info msg="received exit event sandbox_id:\"dbdde60a0be5a026088ee52d3a006767b0a2017feb710437794a89d798a5c01a\" exit_status:137 exited_at:{seconds:1742494015 nanos:40492881}"
Mar 20 18:06:55.093583 containerd[1481]: time="2025-03-20T18:06:55.093541060Z" level=info msg="TearDown network for sandbox \"dbdde60a0be5a026088ee52d3a006767b0a2017feb710437794a89d798a5c01a\" successfully"
Mar 20 18:06:55.093670 containerd[1481]: time="2025-03-20T18:06:55.093573660Z" level=info msg="StopPodSandbox for \"dbdde60a0be5a026088ee52d3a006767b0a2017feb710437794a89d798a5c01a\" returns successfully"
Mar 20 18:06:55.106246 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-86c378cafc1a75d4d6bcac27d9a30e8c1d7086b31fb2d2987b804352c1243a1a-rootfs.mount: Deactivated successfully.
Mar 20 18:06:55.111739 containerd[1481]: time="2025-03-20T18:06:55.111277319Z" level=info msg="received exit event sandbox_id:\"86c378cafc1a75d4d6bcac27d9a30e8c1d7086b31fb2d2987b804352c1243a1a\" exit_status:137 exited_at:{seconds:1742494015 nanos:83565563}"
Mar 20 18:06:55.111739 containerd[1481]: time="2025-03-20T18:06:55.111506001Z" level=info msg="TearDown network for sandbox \"86c378cafc1a75d4d6bcac27d9a30e8c1d7086b31fb2d2987b804352c1243a1a\" successfully"
Mar 20 18:06:55.111739 containerd[1481]: time="2025-03-20T18:06:55.111538721Z" level=info msg="StopPodSandbox for \"86c378cafc1a75d4d6bcac27d9a30e8c1d7086b31fb2d2987b804352c1243a1a\" returns successfully"
Mar 20 18:06:55.111739 containerd[1481]: time="2025-03-20T18:06:55.111615801Z" level=info msg="shim disconnected" id=86c378cafc1a75d4d6bcac27d9a30e8c1d7086b31fb2d2987b804352c1243a1a namespace=k8s.io
Mar 20 18:06:55.111739 containerd[1481]: time="2025-03-20T18:06:55.111629241Z" level=warning msg="cleaning up after shim disconnected" id=86c378cafc1a75d4d6bcac27d9a30e8c1d7086b31fb2d2987b804352c1243a1a namespace=k8s.io
Mar 20 18:06:55.111739 containerd[1481]: time="2025-03-20T18:06:55.111655962Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 20 18:06:55.250517 kubelet[2605]: I0320 18:06:55.250358 2605 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hpbfv\" (UniqueName: \"kubernetes.io/projected/593cca11-0202-4ac9-b9dc-636b96607a81-kube-api-access-hpbfv\") pod \"593cca11-0202-4ac9-b9dc-636b96607a81\" (UID: \"593cca11-0202-4ac9-b9dc-636b96607a81\") "
Mar 20 18:06:55.250517 kubelet[2605]: I0320 18:06:55.250435 2605 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/fd32ef83-3bf4-44b3-b48e-09e0441573ed-hubble-tls\") pod \"fd32ef83-3bf4-44b3-b48e-09e0441573ed\" (UID: \"fd32ef83-3bf4-44b3-b48e-09e0441573ed\") "
Mar 20 18:06:55.250517 kubelet[2605]: I0320 18:06:55.250463 2605 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/fd32ef83-3bf4-44b3-b48e-09e0441573ed-lib-modules\") pod \"fd32ef83-3bf4-44b3-b48e-09e0441573ed\" (UID: \"fd32ef83-3bf4-44b3-b48e-09e0441573ed\") "
Mar 20 18:06:55.250517 kubelet[2605]: I0320 18:06:55.250480 2605 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-642f2\" (UniqueName: \"kubernetes.io/projected/fd32ef83-3bf4-44b3-b48e-09e0441573ed-kube-api-access-642f2\") pod \"fd32ef83-3bf4-44b3-b48e-09e0441573ed\" (UID: \"fd32ef83-3bf4-44b3-b48e-09e0441573ed\") "
Mar 20 18:06:55.250517 kubelet[2605]: I0320 18:06:55.250498 2605 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/fd32ef83-3bf4-44b3-b48e-09e0441573ed-cilium-cgroup\") pod \"fd32ef83-3bf4-44b3-b48e-09e0441573ed\" (UID: \"fd32ef83-3bf4-44b3-b48e-09e0441573ed\") "
Mar 20 18:06:55.250978 kubelet[2605]: I0320 18:06:55.250536 2605 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/fd32ef83-3bf4-44b3-b48e-09e0441573ed-cilium-run\") pod \"fd32ef83-3bf4-44b3-b48e-09e0441573ed\" (UID: \"fd32ef83-3bf4-44b3-b48e-09e0441573ed\") "
Mar 20 18:06:55.250978 kubelet[2605]: I0320 18:06:55.250551 2605 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/fd32ef83-3bf4-44b3-b48e-09e0441573ed-cni-path\") pod \"fd32ef83-3bf4-44b3-b48e-09e0441573ed\" (UID: \"fd32ef83-3bf4-44b3-b48e-09e0441573ed\") "
Mar 20 18:06:55.250978 kubelet[2605]: I0320 18:06:55.250568 2605 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/fd32ef83-3bf4-44b3-b48e-09e0441573ed-clustermesh-secrets\") pod \"fd32ef83-3bf4-44b3-b48e-09e0441573ed\" (UID: \"fd32ef83-3bf4-44b3-b48e-09e0441573ed\") "
Mar 20 18:06:55.250978 kubelet[2605]: I0320 18:06:55.250582 2605 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/fd32ef83-3bf4-44b3-b48e-09e0441573ed-bpf-maps\") pod \"fd32ef83-3bf4-44b3-b48e-09e0441573ed\" (UID: \"fd32ef83-3bf4-44b3-b48e-09e0441573ed\") "
Mar 20 18:06:55.250978 kubelet[2605]: I0320 18:06:55.250596 2605 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/fd32ef83-3bf4-44b3-b48e-09e0441573ed-host-proc-sys-kernel\") pod \"fd32ef83-3bf4-44b3-b48e-09e0441573ed\" (UID: \"fd32ef83-3bf4-44b3-b48e-09e0441573ed\") "
Mar 20 18:06:55.250978 kubelet[2605]: I0320 18:06:55.250650 2605 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/fd32ef83-3bf4-44b3-b48e-09e0441573ed-hostproc\") pod \"fd32ef83-3bf4-44b3-b48e-09e0441573ed\" (UID: \"fd32ef83-3bf4-44b3-b48e-09e0441573ed\") "
Mar 20 18:06:55.251124 kubelet[2605]: I0320 18:06:55.250673 2605 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/fd32ef83-3bf4-44b3-b48e-09e0441573ed-xtables-lock\") pod \"fd32ef83-3bf4-44b3-b48e-09e0441573ed\" (UID: \"fd32ef83-3bf4-44b3-b48e-09e0441573ed\") "
Mar 20 18:06:55.251124 kubelet[2605]: I0320 18:06:55.250690 2605 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/fd32ef83-3bf4-44b3-b48e-09e0441573ed-host-proc-sys-net\") pod \"fd32ef83-3bf4-44b3-b48e-09e0441573ed\" (UID: \"fd32ef83-3bf4-44b3-b48e-09e0441573ed\") "
Mar 20 18:06:55.251945 kubelet[2605]: I0320 18:06:55.251692 2605 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/fd32ef83-3bf4-44b3-b48e-09e0441573ed-etc-cni-netd\") pod \"fd32ef83-3bf4-44b3-b48e-09e0441573ed\" (UID: \"fd32ef83-3bf4-44b3-b48e-09e0441573ed\") "
Mar 20 18:06:55.251945 kubelet[2605]: I0320 18:06:55.251830 2605 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/593cca11-0202-4ac9-b9dc-636b96607a81-cilium-config-path\") pod \"593cca11-0202-4ac9-b9dc-636b96607a81\" (UID: \"593cca11-0202-4ac9-b9dc-636b96607a81\") "
Mar 20 18:06:55.251945 kubelet[2605]: I0320 18:06:55.251855 2605 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/fd32ef83-3bf4-44b3-b48e-09e0441573ed-cilium-config-path\") pod \"fd32ef83-3bf4-44b3-b48e-09e0441573ed\" (UID: \"fd32ef83-3bf4-44b3-b48e-09e0441573ed\") "
Mar 20 18:06:55.256030 kubelet[2605]: I0320 18:06:55.255724 2605 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fd32ef83-3bf4-44b3-b48e-09e0441573ed-cni-path" (OuterVolumeSpecName: "cni-path") pod "fd32ef83-3bf4-44b3-b48e-09e0441573ed" (UID: "fd32ef83-3bf4-44b3-b48e-09e0441573ed"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Mar 20 18:06:55.256030 kubelet[2605]: I0320 18:06:55.255986 2605 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fd32ef83-3bf4-44b3-b48e-09e0441573ed-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "fd32ef83-3bf4-44b3-b48e-09e0441573ed" (UID: "fd32ef83-3bf4-44b3-b48e-09e0441573ed"). InnerVolumeSpecName "host-proc-sys-kernel".
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 20 18:06:55.256282 kubelet[2605]: I0320 18:06:55.256232 2605 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fd32ef83-3bf4-44b3-b48e-09e0441573ed-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "fd32ef83-3bf4-44b3-b48e-09e0441573ed" (UID: "fd32ef83-3bf4-44b3-b48e-09e0441573ed"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Mar 20 18:06:55.256395 kubelet[2605]: I0320 18:06:55.256312 2605 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fd32ef83-3bf4-44b3-b48e-09e0441573ed-hostproc" (OuterVolumeSpecName: "hostproc") pod "fd32ef83-3bf4-44b3-b48e-09e0441573ed" (UID: "fd32ef83-3bf4-44b3-b48e-09e0441573ed"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 20 18:06:55.256395 kubelet[2605]: I0320 18:06:55.256330 2605 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fd32ef83-3bf4-44b3-b48e-09e0441573ed-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "fd32ef83-3bf4-44b3-b48e-09e0441573ed" (UID: "fd32ef83-3bf4-44b3-b48e-09e0441573ed"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 20 18:06:55.256395 kubelet[2605]: I0320 18:06:55.256366 2605 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fd32ef83-3bf4-44b3-b48e-09e0441573ed-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "fd32ef83-3bf4-44b3-b48e-09e0441573ed" (UID: "fd32ef83-3bf4-44b3-b48e-09e0441573ed"). InnerVolumeSpecName "host-proc-sys-net". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 20 18:06:55.256395 kubelet[2605]: I0320 18:06:55.256393 2605 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fd32ef83-3bf4-44b3-b48e-09e0441573ed-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "fd32ef83-3bf4-44b3-b48e-09e0441573ed" (UID: "fd32ef83-3bf4-44b3-b48e-09e0441573ed"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 20 18:06:55.256754 kubelet[2605]: I0320 18:06:55.256713 2605 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/593cca11-0202-4ac9-b9dc-636b96607a81-kube-api-access-hpbfv" (OuterVolumeSpecName: "kube-api-access-hpbfv") pod "593cca11-0202-4ac9-b9dc-636b96607a81" (UID: "593cca11-0202-4ac9-b9dc-636b96607a81"). InnerVolumeSpecName "kube-api-access-hpbfv". PluginName "kubernetes.io/projected", VolumeGIDValue "" Mar 20 18:06:55.256812 kubelet[2605]: I0320 18:06:55.256767 2605 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fd32ef83-3bf4-44b3-b48e-09e0441573ed-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "fd32ef83-3bf4-44b3-b48e-09e0441573ed" (UID: "fd32ef83-3bf4-44b3-b48e-09e0441573ed"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 20 18:06:55.258525 kubelet[2605]: I0320 18:06:55.258258 2605 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/593cca11-0202-4ac9-b9dc-636b96607a81-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "593cca11-0202-4ac9-b9dc-636b96607a81" (UID: "593cca11-0202-4ac9-b9dc-636b96607a81"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Mar 20 18:06:55.258525 kubelet[2605]: I0320 18:06:55.258313 2605 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fd32ef83-3bf4-44b3-b48e-09e0441573ed-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "fd32ef83-3bf4-44b3-b48e-09e0441573ed" (UID: "fd32ef83-3bf4-44b3-b48e-09e0441573ed"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 20 18:06:55.258525 kubelet[2605]: I0320 18:06:55.258329 2605 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fd32ef83-3bf4-44b3-b48e-09e0441573ed-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "fd32ef83-3bf4-44b3-b48e-09e0441573ed" (UID: "fd32ef83-3bf4-44b3-b48e-09e0441573ed"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 20 18:06:55.258525 kubelet[2605]: I0320 18:06:55.258347 2605 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fd32ef83-3bf4-44b3-b48e-09e0441573ed-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "fd32ef83-3bf4-44b3-b48e-09e0441573ed" (UID: "fd32ef83-3bf4-44b3-b48e-09e0441573ed"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 20 18:06:55.258736 kubelet[2605]: I0320 18:06:55.258702 2605 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fd32ef83-3bf4-44b3-b48e-09e0441573ed-kube-api-access-642f2" (OuterVolumeSpecName: "kube-api-access-642f2") pod "fd32ef83-3bf4-44b3-b48e-09e0441573ed" (UID: "fd32ef83-3bf4-44b3-b48e-09e0441573ed"). InnerVolumeSpecName "kube-api-access-642f2". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Mar 20 18:06:55.258827 kubelet[2605]: I0320 18:06:55.258804 2605 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fd32ef83-3bf4-44b3-b48e-09e0441573ed-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "fd32ef83-3bf4-44b3-b48e-09e0441573ed" (UID: "fd32ef83-3bf4-44b3-b48e-09e0441573ed"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Mar 20 18:06:55.259913 kubelet[2605]: I0320 18:06:55.259885 2605 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fd32ef83-3bf4-44b3-b48e-09e0441573ed-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "fd32ef83-3bf4-44b3-b48e-09e0441573ed" (UID: "fd32ef83-3bf4-44b3-b48e-09e0441573ed"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Mar 20 18:06:55.352181 kubelet[2605]: I0320 18:06:55.352144 2605 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/fd32ef83-3bf4-44b3-b48e-09e0441573ed-cilium-run\") on node \"localhost\" DevicePath \"\"" Mar 20 18:06:55.352463 kubelet[2605]: I0320 18:06:55.352357 2605 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/fd32ef83-3bf4-44b3-b48e-09e0441573ed-cni-path\") on node \"localhost\" DevicePath \"\"" Mar 20 18:06:55.352463 kubelet[2605]: I0320 18:06:55.352373 2605 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/fd32ef83-3bf4-44b3-b48e-09e0441573ed-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Mar 20 18:06:55.352463 kubelet[2605]: I0320 18:06:55.352413 2605 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/fd32ef83-3bf4-44b3-b48e-09e0441573ed-hostproc\") on node \"localhost\" DevicePath \"\"" Mar 20 
18:06:55.352463 kubelet[2605]: I0320 18:06:55.352423 2605 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/fd32ef83-3bf4-44b3-b48e-09e0441573ed-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Mar 20 18:06:55.352463 kubelet[2605]: I0320 18:06:55.352434 2605 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/fd32ef83-3bf4-44b3-b48e-09e0441573ed-bpf-maps\") on node \"localhost\" DevicePath \"\"" Mar 20 18:06:55.352696 kubelet[2605]: I0320 18:06:55.352442 2605 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/fd32ef83-3bf4-44b3-b48e-09e0441573ed-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Mar 20 18:06:55.352696 kubelet[2605]: I0320 18:06:55.352613 2605 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/fd32ef83-3bf4-44b3-b48e-09e0441573ed-xtables-lock\") on node \"localhost\" DevicePath \"\"" Mar 20 18:06:55.352696 kubelet[2605]: I0320 18:06:55.352622 2605 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/fd32ef83-3bf4-44b3-b48e-09e0441573ed-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Mar 20 18:06:55.352696 kubelet[2605]: I0320 18:06:55.352630 2605 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/fd32ef83-3bf4-44b3-b48e-09e0441573ed-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Mar 20 18:06:55.352696 kubelet[2605]: I0320 18:06:55.352642 2605 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/593cca11-0202-4ac9-b9dc-636b96607a81-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Mar 20 18:06:55.352696 kubelet[2605]: I0320 18:06:55.352650 2605 reconciler_common.go:299] 
"Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/fd32ef83-3bf4-44b3-b48e-09e0441573ed-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Mar 20 18:06:55.352696 kubelet[2605]: I0320 18:06:55.352658 2605 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/fd32ef83-3bf4-44b3-b48e-09e0441573ed-hubble-tls\") on node \"localhost\" DevicePath \"\"" Mar 20 18:06:55.352696 kubelet[2605]: I0320 18:06:55.352665 2605 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/fd32ef83-3bf4-44b3-b48e-09e0441573ed-lib-modules\") on node \"localhost\" DevicePath \"\"" Mar 20 18:06:55.352903 kubelet[2605]: I0320 18:06:55.352672 2605 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-hpbfv\" (UniqueName: \"kubernetes.io/projected/593cca11-0202-4ac9-b9dc-636b96607a81-kube-api-access-hpbfv\") on node \"localhost\" DevicePath \"\"" Mar 20 18:06:55.352903 kubelet[2605]: I0320 18:06:55.352681 2605 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-642f2\" (UniqueName: \"kubernetes.io/projected/fd32ef83-3bf4-44b3-b48e-09e0441573ed-kube-api-access-642f2\") on node \"localhost\" DevicePath \"\"" Mar 20 18:06:55.901995 kubelet[2605]: I0320 18:06:55.901579 2605 scope.go:117] "RemoveContainer" containerID="3d86f0c176b1e01deadbd0048b3bee272eb68f5b7998c22285b4fcce9e97f8e0" Mar 20 18:06:55.905225 containerd[1481]: time="2025-03-20T18:06:55.905190792Z" level=info msg="RemoveContainer for \"3d86f0c176b1e01deadbd0048b3bee272eb68f5b7998c22285b4fcce9e97f8e0\"" Mar 20 18:06:55.907396 systemd[1]: Removed slice kubepods-besteffort-pod593cca11_0202_4ac9_b9dc_636b96607a81.slice - libcontainer container kubepods-besteffort-pod593cca11_0202_4ac9_b9dc_636b96607a81.slice. 
Mar 20 18:06:55.914244 containerd[1481]: time="2025-03-20T18:06:55.914140323Z" level=info msg="RemoveContainer for \"3d86f0c176b1e01deadbd0048b3bee272eb68f5b7998c22285b4fcce9e97f8e0\" returns successfully" Mar 20 18:06:55.914366 kubelet[2605]: I0320 18:06:55.914335 2605 scope.go:117] "RemoveContainer" containerID="3d86f0c176b1e01deadbd0048b3bee272eb68f5b7998c22285b4fcce9e97f8e0" Mar 20 18:06:55.914703 containerd[1481]: time="2025-03-20T18:06:55.914668326Z" level=error msg="ContainerStatus for \"3d86f0c176b1e01deadbd0048b3bee272eb68f5b7998c22285b4fcce9e97f8e0\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"3d86f0c176b1e01deadbd0048b3bee272eb68f5b7998c22285b4fcce9e97f8e0\": not found" Mar 20 18:06:55.915024 kubelet[2605]: E0320 18:06:55.914992 2605 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"3d86f0c176b1e01deadbd0048b3bee272eb68f5b7998c22285b4fcce9e97f8e0\": not found" containerID="3d86f0c176b1e01deadbd0048b3bee272eb68f5b7998c22285b4fcce9e97f8e0" Mar 20 18:06:55.915094 kubelet[2605]: I0320 18:06:55.915023 2605 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"3d86f0c176b1e01deadbd0048b3bee272eb68f5b7998c22285b4fcce9e97f8e0"} err="failed to get container status \"3d86f0c176b1e01deadbd0048b3bee272eb68f5b7998c22285b4fcce9e97f8e0\": rpc error: code = NotFound desc = an error occurred when try to find container \"3d86f0c176b1e01deadbd0048b3bee272eb68f5b7998c22285b4fcce9e97f8e0\": not found" Mar 20 18:06:55.915124 kubelet[2605]: I0320 18:06:55.915096 2605 scope.go:117] "RemoveContainer" containerID="ab5e40b6ceecd2104ae51e2d4cadd041c98b45be7667adfcd5762a9ba3113783" Mar 20 18:06:55.916488 systemd[1]: Removed slice kubepods-burstable-podfd32ef83_3bf4_44b3_b48e_09e0441573ed.slice - libcontainer container kubepods-burstable-podfd32ef83_3bf4_44b3_b48e_09e0441573ed.slice. 
Mar 20 18:06:55.916593 containerd[1481]: time="2025-03-20T18:06:55.916543496Z" level=info msg="RemoveContainer for \"ab5e40b6ceecd2104ae51e2d4cadd041c98b45be7667adfcd5762a9ba3113783\"" Mar 20 18:06:55.916790 systemd[1]: kubepods-burstable-podfd32ef83_3bf4_44b3_b48e_09e0441573ed.slice: Consumed 6.512s CPU time, 121.6M memory peak, 1.3M read from disk, 12.9M written to disk. Mar 20 18:06:55.920131 containerd[1481]: time="2025-03-20T18:06:55.920101196Z" level=info msg="RemoveContainer for \"ab5e40b6ceecd2104ae51e2d4cadd041c98b45be7667adfcd5762a9ba3113783\" returns successfully" Mar 20 18:06:55.920426 kubelet[2605]: I0320 18:06:55.920319 2605 scope.go:117] "RemoveContainer" containerID="0087b6320893288c9df595f26886961a2959bda00c4da48257e1f1d47787b86c" Mar 20 18:06:55.922027 containerd[1481]: time="2025-03-20T18:06:55.922002247Z" level=info msg="RemoveContainer for \"0087b6320893288c9df595f26886961a2959bda00c4da48257e1f1d47787b86c\"" Mar 20 18:06:55.926299 containerd[1481]: time="2025-03-20T18:06:55.925783868Z" level=info msg="RemoveContainer for \"0087b6320893288c9df595f26886961a2959bda00c4da48257e1f1d47787b86c\" returns successfully" Mar 20 18:06:55.926423 kubelet[2605]: I0320 18:06:55.925902 2605 scope.go:117] "RemoveContainer" containerID="2c5d8c8c677309316d1cdc4beff3f4db920939a6b605a981402c1cfe88960b19" Mar 20 18:06:55.929077 containerd[1481]: time="2025-03-20T18:06:55.928985766Z" level=info msg="RemoveContainer for \"2c5d8c8c677309316d1cdc4beff3f4db920939a6b605a981402c1cfe88960b19\"" Mar 20 18:06:55.932500 containerd[1481]: time="2025-03-20T18:06:55.932425026Z" level=info msg="RemoveContainer for \"2c5d8c8c677309316d1cdc4beff3f4db920939a6b605a981402c1cfe88960b19\" returns successfully" Mar 20 18:06:55.932686 kubelet[2605]: I0320 18:06:55.932661 2605 scope.go:117] "RemoveContainer" containerID="4e0db1fc35db18310d902552a028b11793bddf73861f7db25a241c195813d106" Mar 20 18:06:55.934400 containerd[1481]: time="2025-03-20T18:06:55.934365637Z" level=info 
msg="RemoveContainer for \"4e0db1fc35db18310d902552a028b11793bddf73861f7db25a241c195813d106\"" Mar 20 18:06:55.937725 containerd[1481]: time="2025-03-20T18:06:55.937655655Z" level=info msg="RemoveContainer for \"4e0db1fc35db18310d902552a028b11793bddf73861f7db25a241c195813d106\" returns successfully" Mar 20 18:06:55.937898 kubelet[2605]: I0320 18:06:55.937840 2605 scope.go:117] "RemoveContainer" containerID="028d0b834d75dcdf53f76a2c5e9e91caf4e294886c6de3023c4cd0a3f3d99cce" Mar 20 18:06:55.939738 containerd[1481]: time="2025-03-20T18:06:55.939296985Z" level=info msg="RemoveContainer for \"028d0b834d75dcdf53f76a2c5e9e91caf4e294886c6de3023c4cd0a3f3d99cce\"" Mar 20 18:06:55.941853 containerd[1481]: time="2025-03-20T18:06:55.941831479Z" level=info msg="RemoveContainer for \"028d0b834d75dcdf53f76a2c5e9e91caf4e294886c6de3023c4cd0a3f3d99cce\" returns successfully" Mar 20 18:06:55.942103 kubelet[2605]: I0320 18:06:55.942084 2605 scope.go:117] "RemoveContainer" containerID="ab5e40b6ceecd2104ae51e2d4cadd041c98b45be7667adfcd5762a9ba3113783" Mar 20 18:06:55.942450 containerd[1481]: time="2025-03-20T18:06:55.942361002Z" level=error msg="ContainerStatus for \"ab5e40b6ceecd2104ae51e2d4cadd041c98b45be7667adfcd5762a9ba3113783\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"ab5e40b6ceecd2104ae51e2d4cadd041c98b45be7667adfcd5762a9ba3113783\": not found" Mar 20 18:06:55.942601 kubelet[2605]: E0320 18:06:55.942579 2605 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"ab5e40b6ceecd2104ae51e2d4cadd041c98b45be7667adfcd5762a9ba3113783\": not found" containerID="ab5e40b6ceecd2104ae51e2d4cadd041c98b45be7667adfcd5762a9ba3113783" Mar 20 18:06:55.942669 kubelet[2605]: I0320 18:06:55.942607 2605 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"containerd","ID":"ab5e40b6ceecd2104ae51e2d4cadd041c98b45be7667adfcd5762a9ba3113783"} err="failed to get container status \"ab5e40b6ceecd2104ae51e2d4cadd041c98b45be7667adfcd5762a9ba3113783\": rpc error: code = NotFound desc = an error occurred when try to find container \"ab5e40b6ceecd2104ae51e2d4cadd041c98b45be7667adfcd5762a9ba3113783\": not found" Mar 20 18:06:55.942669 kubelet[2605]: I0320 18:06:55.942634 2605 scope.go:117] "RemoveContainer" containerID="0087b6320893288c9df595f26886961a2959bda00c4da48257e1f1d47787b86c" Mar 20 18:06:55.942841 containerd[1481]: time="2025-03-20T18:06:55.942780324Z" level=error msg="ContainerStatus for \"0087b6320893288c9df595f26886961a2959bda00c4da48257e1f1d47787b86c\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"0087b6320893288c9df595f26886961a2959bda00c4da48257e1f1d47787b86c\": not found" Mar 20 18:06:55.942981 kubelet[2605]: E0320 18:06:55.942964 2605 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"0087b6320893288c9df595f26886961a2959bda00c4da48257e1f1d47787b86c\": not found" containerID="0087b6320893288c9df595f26886961a2959bda00c4da48257e1f1d47787b86c" Mar 20 18:06:55.942981 kubelet[2605]: I0320 18:06:55.942983 2605 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"0087b6320893288c9df595f26886961a2959bda00c4da48257e1f1d47787b86c"} err="failed to get container status \"0087b6320893288c9df595f26886961a2959bda00c4da48257e1f1d47787b86c\": rpc error: code = NotFound desc = an error occurred when try to find container \"0087b6320893288c9df595f26886961a2959bda00c4da48257e1f1d47787b86c\": not found" Mar 20 18:06:55.943139 kubelet[2605]: I0320 18:06:55.942994 2605 scope.go:117] "RemoveContainer" containerID="2c5d8c8c677309316d1cdc4beff3f4db920939a6b605a981402c1cfe88960b19" Mar 20 18:06:55.943179 containerd[1481]: 
time="2025-03-20T18:06:55.943101526Z" level=error msg="ContainerStatus for \"2c5d8c8c677309316d1cdc4beff3f4db920939a6b605a981402c1cfe88960b19\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"2c5d8c8c677309316d1cdc4beff3f4db920939a6b605a981402c1cfe88960b19\": not found" Mar 20 18:06:55.943204 kubelet[2605]: E0320 18:06:55.943187 2605 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"2c5d8c8c677309316d1cdc4beff3f4db920939a6b605a981402c1cfe88960b19\": not found" containerID="2c5d8c8c677309316d1cdc4beff3f4db920939a6b605a981402c1cfe88960b19" Mar 20 18:06:55.943229 kubelet[2605]: I0320 18:06:55.943207 2605 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"2c5d8c8c677309316d1cdc4beff3f4db920939a6b605a981402c1cfe88960b19"} err="failed to get container status \"2c5d8c8c677309316d1cdc4beff3f4db920939a6b605a981402c1cfe88960b19\": rpc error: code = NotFound desc = an error occurred when try to find container \"2c5d8c8c677309316d1cdc4beff3f4db920939a6b605a981402c1cfe88960b19\": not found" Mar 20 18:06:55.943229 kubelet[2605]: I0320 18:06:55.943221 2605 scope.go:117] "RemoveContainer" containerID="4e0db1fc35db18310d902552a028b11793bddf73861f7db25a241c195813d106" Mar 20 18:06:55.943500 containerd[1481]: time="2025-03-20T18:06:55.943407208Z" level=error msg="ContainerStatus for \"4e0db1fc35db18310d902552a028b11793bddf73861f7db25a241c195813d106\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"4e0db1fc35db18310d902552a028b11793bddf73861f7db25a241c195813d106\": not found" Mar 20 18:06:55.943745 kubelet[2605]: E0320 18:06:55.943509 2605 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"4e0db1fc35db18310d902552a028b11793bddf73861f7db25a241c195813d106\": not 
found" containerID="4e0db1fc35db18310d902552a028b11793bddf73861f7db25a241c195813d106" Mar 20 18:06:55.943745 kubelet[2605]: I0320 18:06:55.943530 2605 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"4e0db1fc35db18310d902552a028b11793bddf73861f7db25a241c195813d106"} err="failed to get container status \"4e0db1fc35db18310d902552a028b11793bddf73861f7db25a241c195813d106\": rpc error: code = NotFound desc = an error occurred when try to find container \"4e0db1fc35db18310d902552a028b11793bddf73861f7db25a241c195813d106\": not found" Mar 20 18:06:55.943745 kubelet[2605]: I0320 18:06:55.943545 2605 scope.go:117] "RemoveContainer" containerID="028d0b834d75dcdf53f76a2c5e9e91caf4e294886c6de3023c4cd0a3f3d99cce" Mar 20 18:06:55.944014 kubelet[2605]: E0320 18:06:55.943890 2605 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"028d0b834d75dcdf53f76a2c5e9e91caf4e294886c6de3023c4cd0a3f3d99cce\": not found" containerID="028d0b834d75dcdf53f76a2c5e9e91caf4e294886c6de3023c4cd0a3f3d99cce" Mar 20 18:06:55.944014 kubelet[2605]: I0320 18:06:55.943905 2605 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"028d0b834d75dcdf53f76a2c5e9e91caf4e294886c6de3023c4cd0a3f3d99cce"} err="failed to get container status \"028d0b834d75dcdf53f76a2c5e9e91caf4e294886c6de3023c4cd0a3f3d99cce\": rpc error: code = NotFound desc = an error occurred when try to find container \"028d0b834d75dcdf53f76a2c5e9e91caf4e294886c6de3023c4cd0a3f3d99cce\": not found" Mar 20 18:06:55.944065 containerd[1481]: time="2025-03-20T18:06:55.943797730Z" level=error msg="ContainerStatus for \"028d0b834d75dcdf53f76a2c5e9e91caf4e294886c6de3023c4cd0a3f3d99cce\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"028d0b834d75dcdf53f76a2c5e9e91caf4e294886c6de3023c4cd0a3f3d99cce\": not found" Mar 20 
18:06:56.012338 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-86c378cafc1a75d4d6bcac27d9a30e8c1d7086b31fb2d2987b804352c1243a1a-shm.mount: Deactivated successfully. Mar 20 18:06:56.012477 systemd[1]: var-lib-kubelet-pods-593cca11\x2d0202\x2d4ac9\x2db9dc\x2d636b96607a81-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dhpbfv.mount: Deactivated successfully. Mar 20 18:06:56.012535 systemd[1]: var-lib-kubelet-pods-fd32ef83\x2d3bf4\x2d44b3\x2db48e\x2d09e0441573ed-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d642f2.mount: Deactivated successfully. Mar 20 18:06:56.012585 systemd[1]: var-lib-kubelet-pods-fd32ef83\x2d3bf4\x2d44b3\x2db48e\x2d09e0441573ed-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Mar 20 18:06:56.012647 systemd[1]: var-lib-kubelet-pods-fd32ef83\x2d3bf4\x2d44b3\x2db48e\x2d09e0441573ed-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Mar 20 18:06:56.692877 kubelet[2605]: I0320 18:06:56.692832 2605 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="593cca11-0202-4ac9-b9dc-636b96607a81" path="/var/lib/kubelet/pods/593cca11-0202-4ac9-b9dc-636b96607a81/volumes" Mar 20 18:06:56.693502 kubelet[2605]: I0320 18:06:56.693202 2605 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fd32ef83-3bf4-44b3-b48e-09e0441573ed" path="/var/lib/kubelet/pods/fd32ef83-3bf4-44b3-b48e-09e0441573ed/volumes" Mar 20 18:06:56.742663 kubelet[2605]: E0320 18:06:56.742620 2605 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Mar 20 18:06:56.933745 sshd[4231]: Connection closed by 10.0.0.1 port 58620 Mar 20 18:06:56.934434 sshd-session[4228]: pam_unix(sshd:session): session closed for user core Mar 20 18:06:56.947645 systemd[1]: sshd@23-10.0.0.103:22-10.0.0.1:58620.service: Deactivated successfully. 
Mar 20 18:06:56.949192 systemd[1]: session-24.scope: Deactivated successfully. Mar 20 18:06:56.949373 systemd[1]: session-24.scope: Consumed 1.588s CPU time, 29.2M memory peak. Mar 20 18:06:56.949865 systemd-logind[1469]: Session 24 logged out. Waiting for processes to exit. Mar 20 18:06:56.951555 systemd[1]: Started sshd@24-10.0.0.103:22-10.0.0.1:58634.service - OpenSSH per-connection server daemon (10.0.0.1:58634). Mar 20 18:06:56.952488 systemd-logind[1469]: Removed session 24. Mar 20 18:06:57.008024 sshd[4387]: Accepted publickey for core from 10.0.0.1 port 58634 ssh2: RSA SHA256:5INvQ+AMoxUEAMpsPBJHVEmzjRKBHHiGaLqk69aAF2o Mar 20 18:06:57.009344 sshd-session[4387]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 20 18:06:57.013550 systemd-logind[1469]: New session 25 of user core. Mar 20 18:06:57.019554 systemd[1]: Started session-25.scope - Session 25 of User core. Mar 20 18:06:57.649201 sshd[4390]: Connection closed by 10.0.0.1 port 58634 Mar 20 18:06:57.648469 sshd-session[4387]: pam_unix(sshd:session): session closed for user core Mar 20 18:06:57.661471 kubelet[2605]: I0320 18:06:57.660127 2605 memory_manager.go:355] "RemoveStaleState removing state" podUID="593cca11-0202-4ac9-b9dc-636b96607a81" containerName="cilium-operator" Mar 20 18:06:57.661471 kubelet[2605]: I0320 18:06:57.660151 2605 memory_manager.go:355] "RemoveStaleState removing state" podUID="fd32ef83-3bf4-44b3-b48e-09e0441573ed" containerName="cilium-agent" Mar 20 18:06:57.662704 systemd[1]: sshd@24-10.0.0.103:22-10.0.0.1:58634.service: Deactivated successfully. 
Mar 20 18:06:57.665040 kubelet[2605]: I0320 18:06:57.665007 2605 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b6b22de2-fbc8-427a-b49d-451a94544536-lib-modules\") pod \"cilium-jc9qn\" (UID: \"b6b22de2-fbc8-427a-b49d-451a94544536\") " pod="kube-system/cilium-jc9qn"
Mar 20 18:06:57.665195 kubelet[2605]: I0320 18:06:57.665045 2605 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/b6b22de2-fbc8-427a-b49d-451a94544536-clustermesh-secrets\") pod \"cilium-jc9qn\" (UID: \"b6b22de2-fbc8-427a-b49d-451a94544536\") " pod="kube-system/cilium-jc9qn"
Mar 20 18:06:57.665195 kubelet[2605]: I0320 18:06:57.665064 2605 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/b6b22de2-fbc8-427a-b49d-451a94544536-cilium-cgroup\") pod \"cilium-jc9qn\" (UID: \"b6b22de2-fbc8-427a-b49d-451a94544536\") " pod="kube-system/cilium-jc9qn"
Mar 20 18:06:57.665195 kubelet[2605]: I0320 18:06:57.665080 2605 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/b6b22de2-fbc8-427a-b49d-451a94544536-cilium-run\") pod \"cilium-jc9qn\" (UID: \"b6b22de2-fbc8-427a-b49d-451a94544536\") " pod="kube-system/cilium-jc9qn"
Mar 20 18:06:57.665195 kubelet[2605]: I0320 18:06:57.665096 2605 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/b6b22de2-fbc8-427a-b49d-451a94544536-hostproc\") pod \"cilium-jc9qn\" (UID: \"b6b22de2-fbc8-427a-b49d-451a94544536\") " pod="kube-system/cilium-jc9qn"
Mar 20 18:06:57.665195 kubelet[2605]: I0320 18:06:57.665110 2605 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b6b22de2-fbc8-427a-b49d-451a94544536-cilium-config-path\") pod \"cilium-jc9qn\" (UID: \"b6b22de2-fbc8-427a-b49d-451a94544536\") " pod="kube-system/cilium-jc9qn"
Mar 20 18:06:57.665195 kubelet[2605]: I0320 18:06:57.665136 2605 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/b6b22de2-fbc8-427a-b49d-451a94544536-host-proc-sys-net\") pod \"cilium-jc9qn\" (UID: \"b6b22de2-fbc8-427a-b49d-451a94544536\") " pod="kube-system/cilium-jc9qn"
Mar 20 18:06:57.665356 kubelet[2605]: I0320 18:06:57.665149 2605 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/b6b22de2-fbc8-427a-b49d-451a94544536-hubble-tls\") pod \"cilium-jc9qn\" (UID: \"b6b22de2-fbc8-427a-b49d-451a94544536\") " pod="kube-system/cilium-jc9qn"
Mar 20 18:06:57.665356 kubelet[2605]: I0320 18:06:57.665164 2605 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/b6b22de2-fbc8-427a-b49d-451a94544536-cni-path\") pod \"cilium-jc9qn\" (UID: \"b6b22de2-fbc8-427a-b49d-451a94544536\") " pod="kube-system/cilium-jc9qn"
Mar 20 18:06:57.665356 kubelet[2605]: I0320 18:06:57.665178 2605 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/b6b22de2-fbc8-427a-b49d-451a94544536-host-proc-sys-kernel\") pod \"cilium-jc9qn\" (UID: \"b6b22de2-fbc8-427a-b49d-451a94544536\") " pod="kube-system/cilium-jc9qn"
Mar 20 18:06:57.665356 kubelet[2605]: I0320 18:06:57.665193 2605 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r7b86\" (UniqueName: \"kubernetes.io/projected/b6b22de2-fbc8-427a-b49d-451a94544536-kube-api-access-r7b86\") pod \"cilium-jc9qn\" (UID: \"b6b22de2-fbc8-427a-b49d-451a94544536\") " pod="kube-system/cilium-jc9qn"
Mar 20 18:06:57.665356 kubelet[2605]: I0320 18:06:57.665210 2605 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b6b22de2-fbc8-427a-b49d-451a94544536-etc-cni-netd\") pod \"cilium-jc9qn\" (UID: \"b6b22de2-fbc8-427a-b49d-451a94544536\") " pod="kube-system/cilium-jc9qn"
Mar 20 18:06:57.665356 kubelet[2605]: I0320 18:06:57.665239 2605 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/b6b22de2-fbc8-427a-b49d-451a94544536-bpf-maps\") pod \"cilium-jc9qn\" (UID: \"b6b22de2-fbc8-427a-b49d-451a94544536\") " pod="kube-system/cilium-jc9qn"
Mar 20 18:06:57.665583 kubelet[2605]: I0320 18:06:57.665253 2605 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b6b22de2-fbc8-427a-b49d-451a94544536-xtables-lock\") pod \"cilium-jc9qn\" (UID: \"b6b22de2-fbc8-427a-b49d-451a94544536\") " pod="kube-system/cilium-jc9qn"
Mar 20 18:06:57.665583 kubelet[2605]: I0320 18:06:57.665268 2605 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/b6b22de2-fbc8-427a-b49d-451a94544536-cilium-ipsec-secrets\") pod \"cilium-jc9qn\" (UID: \"b6b22de2-fbc8-427a-b49d-451a94544536\") " pod="kube-system/cilium-jc9qn"
Mar 20 18:06:57.665985 systemd[1]: session-25.scope: Deactivated successfully.
Mar 20 18:06:57.668119 systemd-logind[1469]: Session 25 logged out. Waiting for processes to exit.
Mar 20 18:06:57.673921 systemd[1]: Started sshd@25-10.0.0.103:22-10.0.0.1:58642.service - OpenSSH per-connection server daemon (10.0.0.1:58642).
Mar 20 18:06:57.685449 systemd-logind[1469]: Removed session 25.
Mar 20 18:06:57.699615 systemd[1]: Created slice kubepods-burstable-podb6b22de2_fbc8_427a_b49d_451a94544536.slice - libcontainer container kubepods-burstable-podb6b22de2_fbc8_427a_b49d_451a94544536.slice.
Mar 20 18:06:57.737493 sshd[4401]: Accepted publickey for core from 10.0.0.1 port 58642 ssh2: RSA SHA256:5INvQ+AMoxUEAMpsPBJHVEmzjRKBHHiGaLqk69aAF2o
Mar 20 18:06:57.738668 sshd-session[4401]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 20 18:06:57.742552 systemd-logind[1469]: New session 26 of user core.
Mar 20 18:06:57.749534 systemd[1]: Started session-26.scope - Session 26 of User core.
Mar 20 18:06:57.797589 sshd[4404]: Connection closed by 10.0.0.1 port 58642
Mar 20 18:06:57.797436 sshd-session[4401]: pam_unix(sshd:session): session closed for user core
Mar 20 18:06:57.815804 systemd[1]: sshd@25-10.0.0.103:22-10.0.0.1:58642.service: Deactivated successfully.
Mar 20 18:06:57.817584 systemd[1]: session-26.scope: Deactivated successfully.
Mar 20 18:06:57.819064 systemd-logind[1469]: Session 26 logged out. Waiting for processes to exit.
Mar 20 18:06:57.820471 systemd[1]: Started sshd@26-10.0.0.103:22-10.0.0.1:58646.service - OpenSSH per-connection server daemon (10.0.0.1:58646).
Mar 20 18:06:57.821376 systemd-logind[1469]: Removed session 26.
Mar 20 18:06:57.874809 sshd[4414]: Accepted publickey for core from 10.0.0.1 port 58646 ssh2: RSA SHA256:5INvQ+AMoxUEAMpsPBJHVEmzjRKBHHiGaLqk69aAF2o
Mar 20 18:06:57.875896 sshd-session[4414]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 20 18:06:57.879987 systemd-logind[1469]: New session 27 of user core.
Mar 20 18:06:57.887534 systemd[1]: Started session-27.scope - Session 27 of User core.
Mar 20 18:06:57.924328 kubelet[2605]: I0320 18:06:57.924174 2605 setters.go:602] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-03-20T18:06:57Z","lastTransitionTime":"2025-03-20T18:06:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Mar 20 18:06:58.003453 containerd[1481]: time="2025-03-20T18:06:58.003374576Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-jc9qn,Uid:b6b22de2-fbc8-427a-b49d-451a94544536,Namespace:kube-system,Attempt:0,}"
Mar 20 18:06:58.018031 containerd[1481]: time="2025-03-20T18:06:58.017991492Z" level=info msg="connecting to shim d85f0b42f1676b2f634c5900c896293e99a8767794e8019b2e3265fd38a5729f" address="unix:///run/containerd/s/3533625b1507de608268d99a15678f7b46dcc63b46a3d8047079f073071d0751" namespace=k8s.io protocol=ttrpc version=3
Mar 20 18:06:58.042738 systemd[1]: Started cri-containerd-d85f0b42f1676b2f634c5900c896293e99a8767794e8019b2e3265fd38a5729f.scope - libcontainer container d85f0b42f1676b2f634c5900c896293e99a8767794e8019b2e3265fd38a5729f.
Mar 20 18:06:58.063190 containerd[1481]: time="2025-03-20T18:06:58.063070886Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-jc9qn,Uid:b6b22de2-fbc8-427a-b49d-451a94544536,Namespace:kube-system,Attempt:0,} returns sandbox id \"d85f0b42f1676b2f634c5900c896293e99a8767794e8019b2e3265fd38a5729f\""
Mar 20 18:06:58.065752 containerd[1481]: time="2025-03-20T18:06:58.065728860Z" level=info msg="CreateContainer within sandbox \"d85f0b42f1676b2f634c5900c896293e99a8767794e8019b2e3265fd38a5729f\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Mar 20 18:06:58.071445 containerd[1481]: time="2025-03-20T18:06:58.071418730Z" level=info msg="Container e8a60f463ae62b538e5c8a20c7b385db1fb29cc0d55957e0abc1eea43422ba1f: CDI devices from CRI Config.CDIDevices: []"
Mar 20 18:06:58.077869 containerd[1481]: time="2025-03-20T18:06:58.077826843Z" level=info msg="CreateContainer within sandbox \"d85f0b42f1676b2f634c5900c896293e99a8767794e8019b2e3265fd38a5729f\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"e8a60f463ae62b538e5c8a20c7b385db1fb29cc0d55957e0abc1eea43422ba1f\""
Mar 20 18:06:58.078311 containerd[1481]: time="2025-03-20T18:06:58.078220285Z" level=info msg="StartContainer for \"e8a60f463ae62b538e5c8a20c7b385db1fb29cc0d55957e0abc1eea43422ba1f\""
Mar 20 18:06:58.079087 containerd[1481]: time="2025-03-20T18:06:58.079056090Z" level=info msg="connecting to shim e8a60f463ae62b538e5c8a20c7b385db1fb29cc0d55957e0abc1eea43422ba1f" address="unix:///run/containerd/s/3533625b1507de608268d99a15678f7b46dcc63b46a3d8047079f073071d0751" protocol=ttrpc version=3
Mar 20 18:06:58.096541 systemd[1]: Started cri-containerd-e8a60f463ae62b538e5c8a20c7b385db1fb29cc0d55957e0abc1eea43422ba1f.scope - libcontainer container e8a60f463ae62b538e5c8a20c7b385db1fb29cc0d55957e0abc1eea43422ba1f.
Mar 20 18:06:58.119609 containerd[1481]: time="2025-03-20T18:06:58.119559021Z" level=info msg="StartContainer for \"e8a60f463ae62b538e5c8a20c7b385db1fb29cc0d55957e0abc1eea43422ba1f\" returns successfully"
Mar 20 18:06:58.135476 systemd[1]: cri-containerd-e8a60f463ae62b538e5c8a20c7b385db1fb29cc0d55957e0abc1eea43422ba1f.scope: Deactivated successfully.
Mar 20 18:06:58.136894 containerd[1481]: time="2025-03-20T18:06:58.136859391Z" level=info msg="received exit event container_id:\"e8a60f463ae62b538e5c8a20c7b385db1fb29cc0d55957e0abc1eea43422ba1f\" id:\"e8a60f463ae62b538e5c8a20c7b385db1fb29cc0d55957e0abc1eea43422ba1f\" pid:4481 exited_at:{seconds:1742494018 nanos:136484989}"
Mar 20 18:06:58.137118 containerd[1481]: time="2025-03-20T18:06:58.137099912Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e8a60f463ae62b538e5c8a20c7b385db1fb29cc0d55957e0abc1eea43422ba1f\" id:\"e8a60f463ae62b538e5c8a20c7b385db1fb29cc0d55957e0abc1eea43422ba1f\" pid:4481 exited_at:{seconds:1742494018 nanos:136484989}"
Mar 20 18:06:58.922346 containerd[1481]: time="2025-03-20T18:06:58.921703556Z" level=info msg="CreateContainer within sandbox \"d85f0b42f1676b2f634c5900c896293e99a8767794e8019b2e3265fd38a5729f\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Mar 20 18:06:58.929203 containerd[1481]: time="2025-03-20T18:06:58.929168235Z" level=info msg="Container 2ad6b0cd564ec5bb7cdd2622ed2700ebbb301751351ac7a6eca41225788abd36: CDI devices from CRI Config.CDIDevices: []"
Mar 20 18:06:58.937100 containerd[1481]: time="2025-03-20T18:06:58.937056636Z" level=info msg="CreateContainer within sandbox \"d85f0b42f1676b2f634c5900c896293e99a8767794e8019b2e3265fd38a5729f\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"2ad6b0cd564ec5bb7cdd2622ed2700ebbb301751351ac7a6eca41225788abd36\""
Mar 20 18:06:58.938013 containerd[1481]: time="2025-03-20T18:06:58.937987601Z" level=info msg="StartContainer for \"2ad6b0cd564ec5bb7cdd2622ed2700ebbb301751351ac7a6eca41225788abd36\""
Mar 20 18:06:58.939007 containerd[1481]: time="2025-03-20T18:06:58.938971526Z" level=info msg="connecting to shim 2ad6b0cd564ec5bb7cdd2622ed2700ebbb301751351ac7a6eca41225788abd36" address="unix:///run/containerd/s/3533625b1507de608268d99a15678f7b46dcc63b46a3d8047079f073071d0751" protocol=ttrpc version=3
Mar 20 18:06:58.960584 systemd[1]: Started cri-containerd-2ad6b0cd564ec5bb7cdd2622ed2700ebbb301751351ac7a6eca41225788abd36.scope - libcontainer container 2ad6b0cd564ec5bb7cdd2622ed2700ebbb301751351ac7a6eca41225788abd36.
Mar 20 18:06:58.986925 containerd[1481]: time="2025-03-20T18:06:58.986832095Z" level=info msg="StartContainer for \"2ad6b0cd564ec5bb7cdd2622ed2700ebbb301751351ac7a6eca41225788abd36\" returns successfully"
Mar 20 18:06:58.995617 systemd[1]: cri-containerd-2ad6b0cd564ec5bb7cdd2622ed2700ebbb301751351ac7a6eca41225788abd36.scope: Deactivated successfully.
Mar 20 18:06:58.996712 containerd[1481]: time="2025-03-20T18:06:58.996540706Z" level=info msg="received exit event container_id:\"2ad6b0cd564ec5bb7cdd2622ed2700ebbb301751351ac7a6eca41225788abd36\" id:\"2ad6b0cd564ec5bb7cdd2622ed2700ebbb301751351ac7a6eca41225788abd36\" pid:4526 exited_at:{seconds:1742494018 nanos:996218464}"
Mar 20 18:06:58.996712 containerd[1481]: time="2025-03-20T18:06:58.996683307Z" level=info msg="TaskExit event in podsandbox handler container_id:\"2ad6b0cd564ec5bb7cdd2622ed2700ebbb301751351ac7a6eca41225788abd36\" id:\"2ad6b0cd564ec5bb7cdd2622ed2700ebbb301751351ac7a6eca41225788abd36\" pid:4526 exited_at:{seconds:1742494018 nanos:996218464}"
Mar 20 18:06:59.770601 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2ad6b0cd564ec5bb7cdd2622ed2700ebbb301751351ac7a6eca41225788abd36-rootfs.mount: Deactivated successfully.
Mar 20 18:06:59.925807 containerd[1481]: time="2025-03-20T18:06:59.925750219Z" level=info msg="CreateContainer within sandbox \"d85f0b42f1676b2f634c5900c896293e99a8767794e8019b2e3265fd38a5729f\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Mar 20 18:06:59.935571 containerd[1481]: time="2025-03-20T18:06:59.935344628Z" level=info msg="Container 8f73e2dcb5dd8f2b027dd46f7a1bb65fdf5f567247754593155b0e9af8982c08: CDI devices from CRI Config.CDIDevices: []"
Mar 20 18:06:59.957816 containerd[1481]: time="2025-03-20T18:06:59.957774542Z" level=info msg="CreateContainer within sandbox \"d85f0b42f1676b2f634c5900c896293e99a8767794e8019b2e3265fd38a5729f\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"8f73e2dcb5dd8f2b027dd46f7a1bb65fdf5f567247754593155b0e9af8982c08\""
Mar 20 18:06:59.958750 containerd[1481]: time="2025-03-20T18:06:59.958719546Z" level=info msg="StartContainer for \"8f73e2dcb5dd8f2b027dd46f7a1bb65fdf5f567247754593155b0e9af8982c08\""
Mar 20 18:06:59.960092 containerd[1481]: time="2025-03-20T18:06:59.960056153Z" level=info msg="connecting to shim 8f73e2dcb5dd8f2b027dd46f7a1bb65fdf5f567247754593155b0e9af8982c08" address="unix:///run/containerd/s/3533625b1507de608268d99a15678f7b46dcc63b46a3d8047079f073071d0751" protocol=ttrpc version=3
Mar 20 18:06:59.985550 systemd[1]: Started cri-containerd-8f73e2dcb5dd8f2b027dd46f7a1bb65fdf5f567247754593155b0e9af8982c08.scope - libcontainer container 8f73e2dcb5dd8f2b027dd46f7a1bb65fdf5f567247754593155b0e9af8982c08.
Mar 20 18:07:00.031868 containerd[1481]: time="2025-03-20T18:07:00.031754753Z" level=info msg="StartContainer for \"8f73e2dcb5dd8f2b027dd46f7a1bb65fdf5f567247754593155b0e9af8982c08\" returns successfully"
Mar 20 18:07:00.032818 systemd[1]: cri-containerd-8f73e2dcb5dd8f2b027dd46f7a1bb65fdf5f567247754593155b0e9af8982c08.scope: Deactivated successfully.
Mar 20 18:07:00.034579 containerd[1481]: time="2025-03-20T18:07:00.034479886Z" level=info msg="received exit event container_id:\"8f73e2dcb5dd8f2b027dd46f7a1bb65fdf5f567247754593155b0e9af8982c08\" id:\"8f73e2dcb5dd8f2b027dd46f7a1bb65fdf5f567247754593155b0e9af8982c08\" pid:4569 exited_at:{seconds:1742494020 nanos:34191925}"
Mar 20 18:07:00.034579 containerd[1481]: time="2025-03-20T18:07:00.034562607Z" level=info msg="TaskExit event in podsandbox handler container_id:\"8f73e2dcb5dd8f2b027dd46f7a1bb65fdf5f567247754593155b0e9af8982c08\" id:\"8f73e2dcb5dd8f2b027dd46f7a1bb65fdf5f567247754593155b0e9af8982c08\" pid:4569 exited_at:{seconds:1742494020 nanos:34191925}"
Mar 20 18:07:00.770728 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8f73e2dcb5dd8f2b027dd46f7a1bb65fdf5f567247754593155b0e9af8982c08-rootfs.mount: Deactivated successfully.
Mar 20 18:07:00.931036 containerd[1481]: time="2025-03-20T18:07:00.930942477Z" level=info msg="CreateContainer within sandbox \"d85f0b42f1676b2f634c5900c896293e99a8767794e8019b2e3265fd38a5729f\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Mar 20 18:07:00.937023 containerd[1481]: time="2025-03-20T18:07:00.936777746Z" level=info msg="Container 679df0262b4cdea47a8caf9ed345db36c5224a326ecbb104e4ef4b1b25ff47f3: CDI devices from CRI Config.CDIDevices: []"
Mar 20 18:07:00.939996 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2165624748.mount: Deactivated successfully.
Mar 20 18:07:00.946269 containerd[1481]: time="2025-03-20T18:07:00.945893191Z" level=info msg="CreateContainer within sandbox \"d85f0b42f1676b2f634c5900c896293e99a8767794e8019b2e3265fd38a5729f\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"679df0262b4cdea47a8caf9ed345db36c5224a326ecbb104e4ef4b1b25ff47f3\""
Mar 20 18:07:00.946532 containerd[1481]: time="2025-03-20T18:07:00.946403953Z" level=info msg="StartContainer for \"679df0262b4cdea47a8caf9ed345db36c5224a326ecbb104e4ef4b1b25ff47f3\""
Mar 20 18:07:00.948023 containerd[1481]: time="2025-03-20T18:07:00.947881080Z" level=info msg="connecting to shim 679df0262b4cdea47a8caf9ed345db36c5224a326ecbb104e4ef4b1b25ff47f3" address="unix:///run/containerd/s/3533625b1507de608268d99a15678f7b46dcc63b46a3d8047079f073071d0751" protocol=ttrpc version=3
Mar 20 18:07:00.969538 systemd[1]: Started cri-containerd-679df0262b4cdea47a8caf9ed345db36c5224a326ecbb104e4ef4b1b25ff47f3.scope - libcontainer container 679df0262b4cdea47a8caf9ed345db36c5224a326ecbb104e4ef4b1b25ff47f3.
Mar 20 18:07:00.991710 systemd[1]: cri-containerd-679df0262b4cdea47a8caf9ed345db36c5224a326ecbb104e4ef4b1b25ff47f3.scope: Deactivated successfully.
Mar 20 18:07:00.992546 containerd[1481]: time="2025-03-20T18:07:00.992496101Z" level=info msg="TaskExit event in podsandbox handler container_id:\"679df0262b4cdea47a8caf9ed345db36c5224a326ecbb104e4ef4b1b25ff47f3\" id:\"679df0262b4cdea47a8caf9ed345db36c5224a326ecbb104e4ef4b1b25ff47f3\" pid:4607 exited_at:{seconds:1742494020 nanos:991555256}"
Mar 20 18:07:00.993021 containerd[1481]: time="2025-03-20T18:07:00.992912823Z" level=info msg="received exit event container_id:\"679df0262b4cdea47a8caf9ed345db36c5224a326ecbb104e4ef4b1b25ff47f3\" id:\"679df0262b4cdea47a8caf9ed345db36c5224a326ecbb104e4ef4b1b25ff47f3\" pid:4607 exited_at:{seconds:1742494020 nanos:991555256}"
Mar 20 18:07:00.995017 containerd[1481]: time="2025-03-20T18:07:00.994992433Z" level=info msg="StartContainer for \"679df0262b4cdea47a8caf9ed345db36c5224a326ecbb104e4ef4b1b25ff47f3\" returns successfully"
Mar 20 18:07:01.008732 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-679df0262b4cdea47a8caf9ed345db36c5224a326ecbb104e4ef4b1b25ff47f3-rootfs.mount: Deactivated successfully.
Mar 20 18:07:01.744164 kubelet[2605]: E0320 18:07:01.744105 2605 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Mar 20 18:07:01.936729 containerd[1481]: time="2025-03-20T18:07:01.936260007Z" level=info msg="CreateContainer within sandbox \"d85f0b42f1676b2f634c5900c896293e99a8767794e8019b2e3265fd38a5729f\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Mar 20 18:07:01.943671 containerd[1481]: time="2025-03-20T18:07:01.943585283Z" level=info msg="Container 2da06a74e20d8b87b5ea76009681977029295590191c52927bee50cb6e0aab4a: CDI devices from CRI Config.CDIDevices: []"
Mar 20 18:07:01.949121 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3243222410.mount: Deactivated successfully.
Mar 20 18:07:01.953298 containerd[1481]: time="2025-03-20T18:07:01.953246929Z" level=info msg="CreateContainer within sandbox \"d85f0b42f1676b2f634c5900c896293e99a8767794e8019b2e3265fd38a5729f\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"2da06a74e20d8b87b5ea76009681977029295590191c52927bee50cb6e0aab4a\""
Mar 20 18:07:01.955935 containerd[1481]: time="2025-03-20T18:07:01.955886462Z" level=info msg="StartContainer for \"2da06a74e20d8b87b5ea76009681977029295590191c52927bee50cb6e0aab4a\""
Mar 20 18:07:01.964677 containerd[1481]: time="2025-03-20T18:07:01.964554424Z" level=info msg="connecting to shim 2da06a74e20d8b87b5ea76009681977029295590191c52927bee50cb6e0aab4a" address="unix:///run/containerd/s/3533625b1507de608268d99a15678f7b46dcc63b46a3d8047079f073071d0751" protocol=ttrpc version=3
Mar 20 18:07:01.983534 systemd[1]: Started cri-containerd-2da06a74e20d8b87b5ea76009681977029295590191c52927bee50cb6e0aab4a.scope - libcontainer container 2da06a74e20d8b87b5ea76009681977029295590191c52927bee50cb6e0aab4a.
Mar 20 18:07:02.012532 containerd[1481]: time="2025-03-20T18:07:02.012002731Z" level=info msg="StartContainer for \"2da06a74e20d8b87b5ea76009681977029295590191c52927bee50cb6e0aab4a\" returns successfully"
Mar 20 18:07:02.072340 containerd[1481]: time="2025-03-20T18:07:02.072031253Z" level=info msg="TaskExit event in podsandbox handler container_id:\"2da06a74e20d8b87b5ea76009681977029295590191c52927bee50cb6e0aab4a\" id:\"3ab8a36326ae84db9769ba678240098345beb870c861f50c0d813cc9cff47e33\" pid:4675 exited_at:{seconds:1742494022 nanos:71413130}"
Mar 20 18:07:02.275435 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce))
Mar 20 18:07:02.952687 kubelet[2605]: I0320 18:07:02.952631 2605 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-jc9qn" podStartSLOduration=5.9526158670000004 podStartE2EDuration="5.952615867s" podCreationTimestamp="2025-03-20 18:06:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-20 18:07:02.951761823 +0000 UTC m=+86.339109916" watchObservedRunningTime="2025-03-20 18:07:02.952615867 +0000 UTC m=+86.339963960"
Mar 20 18:07:04.251366 containerd[1481]: time="2025-03-20T18:07:04.251317507Z" level=info msg="TaskExit event in podsandbox handler container_id:\"2da06a74e20d8b87b5ea76009681977029295590191c52927bee50cb6e0aab4a\" id:\"9a15a31e8f60ee4a97a09157b6b020f9c8ff25b14d608c2f9cfa2987a1d61b30\" pid:4952 exit_status:1 exited_at:{seconds:1742494024 nanos:250792465}"
Mar 20 18:07:05.155185 systemd-networkd[1404]: lxc_health: Link UP
Mar 20 18:07:05.155845 systemd-networkd[1404]: lxc_health: Gained carrier
Mar 20 18:07:06.349090 containerd[1481]: time="2025-03-20T18:07:06.349032801Z" level=info msg="TaskExit event in podsandbox handler container_id:\"2da06a74e20d8b87b5ea76009681977029295590191c52927bee50cb6e0aab4a\" id:\"bd202be2d71a7f51165d048d746ffa63d55aaf13b76d5686616a02aca77e53f0\" pid:5212 exited_at:{seconds:1742494026 nanos:348642480}"
Mar 20 18:07:06.412522 systemd-networkd[1404]: lxc_health: Gained IPv6LL
Mar 20 18:07:08.480209 containerd[1481]: time="2025-03-20T18:07:08.480170605Z" level=info msg="TaskExit event in podsandbox handler container_id:\"2da06a74e20d8b87b5ea76009681977029295590191c52927bee50cb6e0aab4a\" id:\"eac6471f774b077398ac2e6a8853734e842b4f9c3e4cb34305eace138cc68da6\" pid:5239 exited_at:{seconds:1742494028 nanos:479444362}"
Mar 20 18:07:10.573691 containerd[1481]: time="2025-03-20T18:07:10.573638374Z" level=info msg="TaskExit event in podsandbox handler container_id:\"2da06a74e20d8b87b5ea76009681977029295590191c52927bee50cb6e0aab4a\" id:\"911cef8c3d836794ab745f546f876811e2713bd21ed58e13b1c1ed324ac2e434\" pid:5270 exited_at:{seconds:1742494030 nanos:573327493}"
Mar 20 18:07:10.577403 sshd[4417]: Connection closed by 10.0.0.1 port 58646
Mar 20 18:07:10.577888 sshd-session[4414]: pam_unix(sshd:session): session closed for user core
Mar 20 18:07:10.580549 systemd[1]: sshd@26-10.0.0.103:22-10.0.0.1:58646.service: Deactivated successfully.
Mar 20 18:07:10.582246 systemd[1]: session-27.scope: Deactivated successfully.
Mar 20 18:07:10.584038 systemd-logind[1469]: Session 27 logged out. Waiting for processes to exit.
Mar 20 18:07:10.585101 systemd-logind[1469]: Removed session 27.