Mar 19 11:43:30.889098 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Mar 19 11:43:30.889118 kernel: Linux version 6.6.83-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.43 p3) 2.43.1) #1 SMP PREEMPT Wed Mar 19 10:15:40 -00 2025
Mar 19 11:43:30.889128 kernel: KASLR enabled
Mar 19 11:43:30.889133 kernel: efi: EFI v2.7 by EDK II
Mar 19 11:43:30.889139 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdbbae018 ACPI 2.0=0xd9b43018 RNG=0xd9b43a18 MEMRESERVE=0xd9b40218
Mar 19 11:43:30.889144 kernel: random: crng init done
Mar 19 11:43:30.889151 kernel: secureboot: Secure boot disabled
Mar 19 11:43:30.889156 kernel: ACPI: Early table checksum verification disabled
Mar 19 11:43:30.889162 kernel: ACPI: RSDP 0x00000000D9B43018 000024 (v02 BOCHS )
Mar 19 11:43:30.889169 kernel: ACPI: XSDT 0x00000000D9B43F18 000064 (v01 BOCHS BXPC 00000001 01000013)
Mar 19 11:43:30.889175 kernel: ACPI: FACP 0x00000000D9B43B18 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Mar 19 11:43:30.889180 kernel: ACPI: DSDT 0x00000000D9B41018 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Mar 19 11:43:30.889191 kernel: ACPI: APIC 0x00000000D9B43C98 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Mar 19 11:43:30.889197 kernel: ACPI: PPTT 0x00000000D9B43098 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
Mar 19 11:43:30.889204 kernel: ACPI: GTDT 0x00000000D9B43818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Mar 19 11:43:30.889211 kernel: ACPI: MCFG 0x00000000D9B43A98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 19 11:43:30.889217 kernel: ACPI: SPCR 0x00000000D9B43918 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Mar 19 11:43:30.889223 kernel: ACPI: DBG2 0x00000000D9B43998 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Mar 19 11:43:30.889229 kernel: ACPI: IORT 0x00000000D9B43198 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Mar 19 11:43:30.889235 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
Mar 19 11:43:30.889252 kernel: NUMA: Failed to initialise from firmware
Mar 19 11:43:30.889259 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
Mar 19 11:43:30.889265 kernel: NUMA: NODE_DATA [mem 0xdc958800-0xdc95dfff]
Mar 19 11:43:30.889271 kernel: Zone ranges:
Mar 19 11:43:30.889277 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff]
Mar 19 11:43:30.889286 kernel: DMA32 empty
Mar 19 11:43:30.889292 kernel: Normal empty
Mar 19 11:43:30.889298 kernel: Movable zone start for each node
Mar 19 11:43:30.889303 kernel: Early memory node ranges
Mar 19 11:43:30.889321 kernel: node 0: [mem 0x0000000040000000-0x00000000d967ffff]
Mar 19 11:43:30.889331 kernel: node 0: [mem 0x00000000d9680000-0x00000000d968ffff]
Mar 19 11:43:30.889337 kernel: node 0: [mem 0x00000000d9690000-0x00000000d976ffff]
Mar 19 11:43:30.889343 kernel: node 0: [mem 0x00000000d9770000-0x00000000d9b3ffff]
Mar 19 11:43:30.889349 kernel: node 0: [mem 0x00000000d9b40000-0x00000000dce1ffff]
Mar 19 11:43:30.889361 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff]
Mar 19 11:43:30.889368 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff]
Mar 19 11:43:30.889374 kernel: node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff]
Mar 19 11:43:30.889382 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff]
Mar 19 11:43:30.889390 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
Mar 19 11:43:30.889398 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
Mar 19 11:43:30.889411 kernel: psci: probing for conduit method from ACPI.
Mar 19 11:43:30.889418 kernel: psci: PSCIv1.1 detected in firmware.
Mar 19 11:43:30.889425 kernel: psci: Using standard PSCI v0.2 function IDs
Mar 19 11:43:30.889432 kernel: psci: Trusted OS migration not required
Mar 19 11:43:30.889439 kernel: psci: SMC Calling Convention v1.1
Mar 19 11:43:30.889445 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Mar 19 11:43:30.889452 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976
Mar 19 11:43:30.889458 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096
Mar 19 11:43:30.889465 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
Mar 19 11:43:30.889471 kernel: Detected PIPT I-cache on CPU0
Mar 19 11:43:30.889478 kernel: CPU features: detected: GIC system register CPU interface
Mar 19 11:43:30.889484 kernel: CPU features: detected: Hardware dirty bit management
Mar 19 11:43:30.889490 kernel: CPU features: detected: Spectre-v4
Mar 19 11:43:30.889498 kernel: CPU features: detected: Spectre-BHB
Mar 19 11:43:30.889504 kernel: CPU features: kernel page table isolation forced ON by KASLR
Mar 19 11:43:30.889511 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Mar 19 11:43:30.889517 kernel: CPU features: detected: ARM erratum 1418040
Mar 19 11:43:30.889523 kernel: CPU features: detected: SSBS not fully self-synchronizing
Mar 19 11:43:30.889530 kernel: alternatives: applying boot alternatives
Mar 19 11:43:30.889537 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=41cc5bdd62754423bbb4bbec0e0356d6a1ab5d0ac0f2396a30318f9fb189e7eb
Mar 19 11:43:30.889544 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Mar 19 11:43:30.889550 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Mar 19 11:43:30.889557 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Mar 19 11:43:30.889563 kernel: Fallback order for Node 0: 0
Mar 19 11:43:30.889570 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024
Mar 19 11:43:30.889577 kernel: Policy zone: DMA
Mar 19 11:43:30.889583 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Mar 19 11:43:30.889589 kernel: software IO TLB: area num 4.
Mar 19 11:43:30.889598 kernel: software IO TLB: mapped [mem 0x00000000d2e00000-0x00000000d6e00000] (64MB)
Mar 19 11:43:30.889605 kernel: Memory: 2387540K/2572288K available (10304K kernel code, 2186K rwdata, 8096K rodata, 38336K init, 897K bss, 184748K reserved, 0K cma-reserved)
Mar 19 11:43:30.889611 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Mar 19 11:43:30.889618 kernel: rcu: Preemptible hierarchical RCU implementation.
Mar 19 11:43:30.889625 kernel: rcu: RCU event tracing is enabled.
Mar 19 11:43:30.889631 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Mar 19 11:43:30.889638 kernel: Trampoline variant of Tasks RCU enabled.
Mar 19 11:43:30.889644 kernel: Tracing variant of Tasks RCU enabled.
Mar 19 11:43:30.889652 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Mar 19 11:43:30.889658 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Mar 19 11:43:30.889664 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Mar 19 11:43:30.889671 kernel: GICv3: 256 SPIs implemented
Mar 19 11:43:30.889677 kernel: GICv3: 0 Extended SPIs implemented
Mar 19 11:43:30.889683 kernel: Root IRQ handler: gic_handle_irq
Mar 19 11:43:30.889689 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Mar 19 11:43:30.889695 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Mar 19 11:43:30.889702 kernel: ITS [mem 0x08080000-0x0809ffff]
Mar 19 11:43:30.889708 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400c0000 (indirect, esz 8, psz 64K, shr 1)
Mar 19 11:43:30.889715 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400d0000 (flat, esz 8, psz 64K, shr 1)
Mar 19 11:43:30.889722 kernel: GICv3: using LPI property table @0x00000000400f0000
Mar 19 11:43:30.889729 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040100000
Mar 19 11:43:30.889735 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Mar 19 11:43:30.889742 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Mar 19 11:43:30.889748 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Mar 19 11:43:30.889755 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Mar 19 11:43:30.889761 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Mar 19 11:43:30.889767 kernel: arm-pv: using stolen time PV
Mar 19 11:43:30.889774 kernel: Console: colour dummy device 80x25
Mar 19 11:43:30.889781 kernel: ACPI: Core revision 20230628
Mar 19 11:43:30.889787 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Mar 19 11:43:30.889795 kernel: pid_max: default: 32768 minimum: 301
Mar 19 11:43:30.889802 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Mar 19 11:43:30.889808 kernel: landlock: Up and running.
Mar 19 11:43:30.889815 kernel: SELinux: Initializing.
Mar 19 11:43:30.889821 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Mar 19 11:43:30.889828 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Mar 19 11:43:30.889834 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Mar 19 11:43:30.889841 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Mar 19 11:43:30.889848 kernel: rcu: Hierarchical SRCU implementation.
Mar 19 11:43:30.889856 kernel: rcu: Max phase no-delay instances is 400.
Mar 19 11:43:30.889862 kernel: Platform MSI: ITS@0x8080000 domain created
Mar 19 11:43:30.889869 kernel: PCI/MSI: ITS@0x8080000 domain created
Mar 19 11:43:30.889875 kernel: Remapping and enabling EFI services.
Mar 19 11:43:30.889882 kernel: smp: Bringing up secondary CPUs ...
Mar 19 11:43:30.889888 kernel: Detected PIPT I-cache on CPU1
Mar 19 11:43:30.889895 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Mar 19 11:43:30.889902 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040110000
Mar 19 11:43:30.889908 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Mar 19 11:43:30.889916 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Mar 19 11:43:30.889923 kernel: Detected PIPT I-cache on CPU2
Mar 19 11:43:30.889933 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
Mar 19 11:43:30.889944 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040120000
Mar 19 11:43:30.889952 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Mar 19 11:43:30.889958 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
Mar 19 11:43:30.889965 kernel: Detected PIPT I-cache on CPU3
Mar 19 11:43:30.889972 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
Mar 19 11:43:30.889979 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040130000
Mar 19 11:43:30.889988 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Mar 19 11:43:30.889994 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
Mar 19 11:43:30.890001 kernel: smp: Brought up 1 node, 4 CPUs
Mar 19 11:43:30.890008 kernel: SMP: Total of 4 processors activated.
Mar 19 11:43:30.890015 kernel: CPU features: detected: 32-bit EL0 Support
Mar 19 11:43:30.890022 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Mar 19 11:43:30.890031 kernel: CPU features: detected: Common not Private translations
Mar 19 11:43:30.890038 kernel: CPU features: detected: CRC32 instructions
Mar 19 11:43:30.890053 kernel: CPU features: detected: Enhanced Virtualization Traps
Mar 19 11:43:30.890060 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Mar 19 11:43:30.890067 kernel: CPU features: detected: LSE atomic instructions
Mar 19 11:43:30.890074 kernel: CPU features: detected: Privileged Access Never
Mar 19 11:43:30.890080 kernel: CPU features: detected: RAS Extension Support
Mar 19 11:43:30.890087 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Mar 19 11:43:30.890094 kernel: CPU: All CPU(s) started at EL1
Mar 19 11:43:30.890101 kernel: alternatives: applying system-wide alternatives
Mar 19 11:43:30.890108 kernel: devtmpfs: initialized
Mar 19 11:43:30.890115 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Mar 19 11:43:30.890124 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Mar 19 11:43:30.890131 kernel: pinctrl core: initialized pinctrl subsystem
Mar 19 11:43:30.890138 kernel: SMBIOS 3.0.0 present.
Mar 19 11:43:30.890145 kernel: DMI: QEMU KVM Virtual Machine, BIOS unknown 02/02/2022
Mar 19 11:43:30.890153 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Mar 19 11:43:30.890160 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Mar 19 11:43:30.890167 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Mar 19 11:43:30.890174 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Mar 19 11:43:30.890182 kernel: audit: initializing netlink subsys (disabled)
Mar 19 11:43:30.890189 kernel: audit: type=2000 audit(0.018:1): state=initialized audit_enabled=0 res=1
Mar 19 11:43:30.890196 kernel: thermal_sys: Registered thermal governor 'step_wise'
Mar 19 11:43:30.890202 kernel: cpuidle: using governor menu
Mar 19 11:43:30.890209 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Mar 19 11:43:30.890216 kernel: ASID allocator initialised with 32768 entries
Mar 19 11:43:30.890223 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Mar 19 11:43:30.890230 kernel: Serial: AMBA PL011 UART driver
Mar 19 11:43:30.890237 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Mar 19 11:43:30.890249 kernel: Modules: 0 pages in range for non-PLT usage
Mar 19 11:43:30.890258 kernel: Modules: 509280 pages in range for PLT usage
Mar 19 11:43:30.890265 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Mar 19 11:43:30.890271 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Mar 19 11:43:30.890278 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Mar 19 11:43:30.890285 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Mar 19 11:43:30.890292 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Mar 19 11:43:30.890299 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Mar 19 11:43:30.890306 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Mar 19 11:43:30.890313 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Mar 19 11:43:30.890321 kernel: ACPI: Added _OSI(Module Device)
Mar 19 11:43:30.890328 kernel: ACPI: Added _OSI(Processor Device)
Mar 19 11:43:30.890335 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Mar 19 11:43:30.890342 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Mar 19 11:43:30.890349 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Mar 19 11:43:30.890360 kernel: ACPI: Interpreter enabled
Mar 19 11:43:30.890367 kernel: ACPI: Using GIC for interrupt routing
Mar 19 11:43:30.890374 kernel: ACPI: MCFG table detected, 1 entries
Mar 19 11:43:30.890380 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Mar 19 11:43:30.890389 kernel: printk: console [ttyAMA0] enabled
Mar 19 11:43:30.890396 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Mar 19 11:43:30.890528 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Mar 19 11:43:30.890619 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Mar 19 11:43:30.890698 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Mar 19 11:43:30.890766 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Mar 19 11:43:30.890832 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Mar 19 11:43:30.890843 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Mar 19 11:43:30.890850 kernel: PCI host bridge to bus 0000:00
Mar 19 11:43:30.890923 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Mar 19 11:43:30.890984 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Mar 19 11:43:30.891049 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Mar 19 11:43:30.891108 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Mar 19 11:43:30.891187 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000
Mar 19 11:43:30.891287 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00
Mar 19 11:43:30.891368 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f]
Mar 19 11:43:30.891440 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff]
Mar 19 11:43:30.891508 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref]
Mar 19 11:43:30.891574 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref]
Mar 19 11:43:30.891640 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff]
Mar 19 11:43:30.891712 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f]
Mar 19 11:43:30.891783 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Mar 19 11:43:30.891842 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Mar 19 11:43:30.891901 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Mar 19 11:43:30.891910 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Mar 19 11:43:30.891917 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Mar 19 11:43:30.891924 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Mar 19 11:43:30.891931 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Mar 19 11:43:30.891940 kernel: iommu: Default domain type: Translated
Mar 19 11:43:30.891947 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Mar 19 11:43:30.891954 kernel: efivars: Registered efivars operations
Mar 19 11:43:30.891961 kernel: vgaarb: loaded
Mar 19 11:43:30.891968 kernel: clocksource: Switched to clocksource arch_sys_counter
Mar 19 11:43:30.891975 kernel: VFS: Disk quotas dquot_6.6.0
Mar 19 11:43:30.891982 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Mar 19 11:43:30.891989 kernel: pnp: PnP ACPI init
Mar 19 11:43:30.892064 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Mar 19 11:43:30.892076 kernel: pnp: PnP ACPI: found 1 devices
Mar 19 11:43:30.892083 kernel: NET: Registered PF_INET protocol family
Mar 19 11:43:30.892090 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Mar 19 11:43:30.892097 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Mar 19 11:43:30.892108 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Mar 19 11:43:30.892115 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Mar 19 11:43:30.892122 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Mar 19 11:43:30.892129 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Mar 19 11:43:30.892136 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Mar 19 11:43:30.892145 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Mar 19 11:43:30.892152 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Mar 19 11:43:30.892159 kernel: PCI: CLS 0 bytes, default 64
Mar 19 11:43:30.892166 kernel: kvm [1]: HYP mode not available
Mar 19 11:43:30.892173 kernel: Initialise system trusted keyrings
Mar 19 11:43:30.892180 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Mar 19 11:43:30.892187 kernel: Key type asymmetric registered
Mar 19 11:43:30.892193 kernel: Asymmetric key parser 'x509' registered
Mar 19 11:43:30.892200 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Mar 19 11:43:30.892208 kernel: io scheduler mq-deadline registered
Mar 19 11:43:30.892215 kernel: io scheduler kyber registered
Mar 19 11:43:30.892222 kernel: io scheduler bfq registered
Mar 19 11:43:30.892229 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Mar 19 11:43:30.892236 kernel: ACPI: button: Power Button [PWRB]
Mar 19 11:43:30.892304 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Mar 19 11:43:30.892400 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
Mar 19 11:43:30.892411 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Mar 19 11:43:30.892418 kernel: thunder_xcv, ver 1.0
Mar 19 11:43:30.892428 kernel: thunder_bgx, ver 1.0
Mar 19 11:43:30.892436 kernel: nicpf, ver 1.0
Mar 19 11:43:30.892443 kernel: nicvf, ver 1.0
Mar 19 11:43:30.892518 kernel: rtc-efi rtc-efi.0: registered as rtc0
Mar 19 11:43:30.892583 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-03-19T11:43:30 UTC (1742384610)
Mar 19 11:43:30.892592 kernel: hid: raw HID events driver (C) Jiri Kosina
Mar 19 11:43:30.892600 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available
Mar 19 11:43:30.892607 kernel: watchdog: Delayed init of the lockup detector failed: -19
Mar 19 11:43:30.892616 kernel: watchdog: Hard watchdog permanently disabled
Mar 19 11:43:30.892623 kernel: NET: Registered PF_INET6 protocol family
Mar 19 11:43:30.892629 kernel: Segment Routing with IPv6
Mar 19 11:43:30.892636 kernel: In-situ OAM (IOAM) with IPv6
Mar 19 11:43:30.892643 kernel: NET: Registered PF_PACKET protocol family
Mar 19 11:43:30.892650 kernel: Key type dns_resolver registered
Mar 19 11:43:30.892657 kernel: registered taskstats version 1
Mar 19 11:43:30.892664 kernel: Loading compiled-in X.509 certificates
Mar 19 11:43:30.892671 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.83-flatcar: 36392d496708ee63c4af5364493015d5256162ff'
Mar 19 11:43:30.892679 kernel: Key type .fscrypt registered
Mar 19 11:43:30.892686 kernel: Key type fscrypt-provisioning registered
Mar 19 11:43:30.892693 kernel: ima: No TPM chip found, activating TPM-bypass!
Mar 19 11:43:30.892700 kernel: ima: Allocated hash algorithm: sha1
Mar 19 11:43:30.892707 kernel: ima: No architecture policies found
Mar 19 11:43:30.892714 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Mar 19 11:43:30.892720 kernel: clk: Disabling unused clocks
Mar 19 11:43:30.892728 kernel: Freeing unused kernel memory: 38336K
Mar 19 11:43:30.892734 kernel: Run /init as init process
Mar 19 11:43:30.892743 kernel: with arguments:
Mar 19 11:43:30.892749 kernel: /init
Mar 19 11:43:30.892756 kernel: with environment:
Mar 19 11:43:30.892763 kernel: HOME=/
Mar 19 11:43:30.892770 kernel: TERM=linux
Mar 19 11:43:30.892776 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Mar 19 11:43:30.892784 systemd[1]: Successfully made /usr/ read-only.
Mar 19 11:43:30.892794 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Mar 19 11:43:30.892803 systemd[1]: Detected virtualization kvm.
Mar 19 11:43:30.892810 systemd[1]: Detected architecture arm64.
Mar 19 11:43:30.892817 systemd[1]: Running in initrd.
Mar 19 11:43:30.892824 systemd[1]: No hostname configured, using default hostname.
Mar 19 11:43:30.892832 systemd[1]: Hostname set to .
Mar 19 11:43:30.892839 systemd[1]: Initializing machine ID from VM UUID.
Mar 19 11:43:30.892847 systemd[1]: Queued start job for default target initrd.target.
Mar 19 11:43:30.892854 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 19 11:43:30.892863 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 19 11:43:30.892871 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Mar 19 11:43:30.892879 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Mar 19 11:43:30.892887 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Mar 19 11:43:30.892895 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Mar 19 11:43:30.892903 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Mar 19 11:43:30.892912 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Mar 19 11:43:30.892920 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 19 11:43:30.892927 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Mar 19 11:43:30.892935 systemd[1]: Reached target paths.target - Path Units.
Mar 19 11:43:30.892942 systemd[1]: Reached target slices.target - Slice Units.
Mar 19 11:43:30.892949 systemd[1]: Reached target swap.target - Swaps.
Mar 19 11:43:30.892957 systemd[1]: Reached target timers.target - Timer Units.
Mar 19 11:43:30.892964 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Mar 19 11:43:30.892972 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Mar 19 11:43:30.892985 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Mar 19 11:43:30.892994 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Mar 19 11:43:30.893001 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Mar 19 11:43:30.893008 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Mar 19 11:43:30.893016 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 19 11:43:30.893023 systemd[1]: Reached target sockets.target - Socket Units.
Mar 19 11:43:30.893031 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Mar 19 11:43:30.893038 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Mar 19 11:43:30.893047 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Mar 19 11:43:30.893055 systemd[1]: Starting systemd-fsck-usr.service...
Mar 19 11:43:30.893062 systemd[1]: Starting systemd-journald.service - Journal Service...
Mar 19 11:43:30.893070 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Mar 19 11:43:30.893077 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 19 11:43:30.893084 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Mar 19 11:43:30.893092 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 19 11:43:30.893101 systemd[1]: Finished systemd-fsck-usr.service.
Mar 19 11:43:30.893109 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Mar 19 11:43:30.893117 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 19 11:43:30.893125 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Mar 19 11:43:30.893148 systemd-journald[239]: Collecting audit messages is disabled.
Mar 19 11:43:30.893168 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 19 11:43:30.893179 systemd-journald[239]: Journal started
Mar 19 11:43:30.893196 systemd-journald[239]: Runtime Journal (/run/log/journal/087327576a8b482c8c06e77b3efa8b78) is 5.9M, max 47.3M, 41.4M free.
Mar 19 11:43:30.884513 systemd-modules-load[240]: Inserted module 'overlay'
Mar 19 11:43:30.895956 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Mar 19 11:43:30.899022 systemd[1]: Started systemd-journald.service - Journal Service.
Mar 19 11:43:30.899079 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Mar 19 11:43:30.902608 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Mar 19 11:43:30.904939 kernel: Bridge firewalling registered
Mar 19 11:43:30.903812 systemd-modules-load[240]: Inserted module 'br_netfilter'
Mar 19 11:43:30.907276 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Mar 19 11:43:30.908757 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 19 11:43:30.910612 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Mar 19 11:43:30.911467 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 19 11:43:30.914444 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 19 11:43:30.916063 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Mar 19 11:43:30.919279 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Mar 19 11:43:30.921328 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Mar 19 11:43:30.930855 dracut-cmdline[272]: dracut-dracut-053
Mar 19 11:43:30.934154 dracut-cmdline[272]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=41cc5bdd62754423bbb4bbec0e0356d6a1ab5d0ac0f2396a30318f9fb189e7eb
Mar 19 11:43:30.954353 systemd-resolved[277]: Positive Trust Anchors:
Mar 19 11:43:30.954376 systemd-resolved[277]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Mar 19 11:43:30.954406 systemd-resolved[277]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Mar 19 11:43:30.958958 systemd-resolved[277]: Defaulting to hostname 'linux'.
Mar 19 11:43:30.959884 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Mar 19 11:43:30.962511 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Mar 19 11:43:31.006273 kernel: SCSI subsystem initialized
Mar 19 11:43:31.010257 kernel: Loading iSCSI transport class v2.0-870.
Mar 19 11:43:31.020262 kernel: iscsi: registered transport (tcp)
Mar 19 11:43:31.030266 kernel: iscsi: registered transport (qla4xxx)
Mar 19 11:43:31.030299 kernel: QLogic iSCSI HBA Driver
Mar 19 11:43:31.070646 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Mar 19 11:43:31.084457 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Mar 19 11:43:31.102287 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Mar 19 11:43:31.102321 kernel: device-mapper: uevent: version 1.0.3
Mar 19 11:43:31.106273 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Mar 19 11:43:31.150271 kernel: raid6: neonx8 gen() 15723 MB/s
Mar 19 11:43:31.167265 kernel: raid6: neonx4 gen() 15729 MB/s
Mar 19 11:43:31.184256 kernel: raid6: neonx2 gen() 13126 MB/s
Mar 19 11:43:31.201256 kernel: raid6: neonx1 gen() 10435 MB/s
Mar 19 11:43:31.218257 kernel: raid6: int64x8 gen() 6760 MB/s
Mar 19 11:43:31.235255 kernel: raid6: int64x4 gen() 7311 MB/s
Mar 19 11:43:31.252263 kernel: raid6: int64x2 gen() 6080 MB/s
Mar 19 11:43:31.269255 kernel: raid6: int64x1 gen() 5027 MB/s
Mar 19 11:43:31.269268 kernel: raid6: using algorithm neonx4 gen() 15729 MB/s
Mar 19 11:43:31.286266 kernel: raid6: .... xor() 12383 MB/s, rmw enabled
Mar 19 11:43:31.286290 kernel: raid6: using neon recovery algorithm
Mar 19 11:43:31.291481 kernel: xor: measuring software checksum speed
Mar 19 11:43:31.291498 kernel: 8regs : 21567 MB/sec
Mar 19 11:43:31.291512 kernel: 32regs : 21704 MB/sec
Mar 19 11:43:31.292408 kernel: arm64_neon : 27965 MB/sec
Mar 19 11:43:31.292434 kernel: xor: using function: arm64_neon (27965 MB/sec)
Mar 19 11:43:31.343278 kernel: Btrfs loaded, zoned=no, fsverity=no
Mar 19 11:43:31.353272 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Mar 19 11:43:31.362419 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 19 11:43:31.374989 systemd-udevd[460]: Using default interface naming scheme 'v255'.
Mar 19 11:43:31.379696 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 19 11:43:31.382973 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Mar 19 11:43:31.395919 dracut-pre-trigger[467]: rd.md=0: removing MD RAID activation
Mar 19 11:43:31.420235 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Mar 19 11:43:31.426377 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Mar 19 11:43:31.470022 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Mar 19 11:43:31.479390 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Mar 19 11:43:31.489229 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Mar 19 11:43:31.490501 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Mar 19 11:43:31.492012 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Mar 19 11:43:31.493918 systemd[1]: Reached target remote-fs.target - Remote File Systems. Mar 19 11:43:31.501388 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Mar 19 11:43:31.509953 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Mar 19 11:43:31.524545 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues Mar 19 11:43:31.534686 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Mar 19 11:43:31.534799 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Mar 19 11:43:31.534810 kernel: GPT:9289727 != 19775487 Mar 19 11:43:31.534819 kernel: GPT:Alternate GPT header not at the end of the disk. Mar 19 11:43:31.534830 kernel: GPT:9289727 != 19775487 Mar 19 11:43:31.534839 kernel: GPT: Use GNU Parted to correct GPT errors. Mar 19 11:43:31.534848 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Mar 19 11:43:31.530997 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Mar 19 11:43:31.531095 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Mar 19 11:43:31.535046 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Mar 19 11:43:31.536793 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. 
Mar 19 11:43:31.536916 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Mar 19 11:43:31.540872 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Mar 19 11:43:31.552511 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Mar 19 11:43:31.558599 kernel: BTRFS: device fsid 7c80927c-98c3-4e81-a933-b7f5e1234bd2 devid 1 transid 38 /dev/vda3 scanned by (udev-worker) (510) Mar 19 11:43:31.558638 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by (udev-worker) (520) Mar 19 11:43:31.563945 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Mar 19 11:43:31.572854 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Mar 19 11:43:31.580443 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Mar 19 11:43:31.596360 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Mar 19 11:43:31.602284 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Mar 19 11:43:31.603136 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Mar 19 11:43:31.612435 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Mar 19 11:43:31.613929 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Mar 19 11:43:31.619724 disk-uuid[551]: Primary Header is updated. Mar 19 11:43:31.619724 disk-uuid[551]: Secondary Entries is updated. Mar 19 11:43:31.619724 disk-uuid[551]: Secondary Header is updated. Mar 19 11:43:31.628272 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Mar 19 11:43:31.629575 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Mar 19 11:43:32.636275 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Mar 19 11:43:32.636841 disk-uuid[552]: The operation has completed successfully. Mar 19 11:43:32.661709 systemd[1]: disk-uuid.service: Deactivated successfully. Mar 19 11:43:32.661802 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Mar 19 11:43:32.696402 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Mar 19 11:43:32.699049 sh[573]: Success Mar 19 11:43:32.712260 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Mar 19 11:43:32.740727 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Mar 19 11:43:32.753586 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Mar 19 11:43:32.755699 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Mar 19 11:43:32.765002 kernel: BTRFS info (device dm-0): first mount of filesystem 7c80927c-98c3-4e81-a933-b7f5e1234bd2 Mar 19 11:43:32.765039 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Mar 19 11:43:32.765050 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Mar 19 11:43:32.765059 kernel: BTRFS info (device dm-0): disabling log replay at mount time Mar 19 11:43:32.765571 kernel: BTRFS info (device dm-0): using free space tree Mar 19 11:43:32.769143 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Mar 19 11:43:32.770236 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Mar 19 11:43:32.778382 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Mar 19 11:43:32.779676 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... 
Mar 19 11:43:32.788517 kernel: BTRFS info (device vda6): first mount of filesystem eeeb6da4-27b1-474b-8015-c667e85f7b18 Mar 19 11:43:32.788564 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Mar 19 11:43:32.788575 kernel: BTRFS info (device vda6): using free space tree Mar 19 11:43:32.791314 kernel: BTRFS info (device vda6): auto enabling async discard Mar 19 11:43:32.799302 kernel: BTRFS info (device vda6): last unmount of filesystem eeeb6da4-27b1-474b-8015-c667e85f7b18 Mar 19 11:43:32.805330 systemd[1]: Finished ignition-setup.service - Ignition (setup). Mar 19 11:43:32.810409 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Mar 19 11:43:32.870785 systemd[1]: mnt-oem.mount: Deactivated successfully. Mar 19 11:43:32.876403 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Mar 19 11:43:32.886422 systemd[1]: Starting systemd-networkd.service - Network Configuration... Mar 19 11:43:32.911095 ignition[669]: Ignition 2.20.0 Mar 19 11:43:32.911105 ignition[669]: Stage: fetch-offline Mar 19 11:43:32.911138 ignition[669]: no configs at "/usr/lib/ignition/base.d" Mar 19 11:43:32.911146 ignition[669]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Mar 19 11:43:32.911315 ignition[669]: parsed url from cmdline: "" Mar 19 11:43:32.911318 ignition[669]: no config URL provided Mar 19 11:43:32.911323 ignition[669]: reading system config file "/usr/lib/ignition/user.ign" Mar 19 11:43:32.911330 ignition[669]: no config at "/usr/lib/ignition/user.ign" Mar 19 11:43:32.911362 ignition[669]: op(1): [started] loading QEMU firmware config module Mar 19 11:43:32.917061 systemd-networkd[764]: lo: Link UP Mar 19 11:43:32.911369 ignition[669]: op(1): executing: "modprobe" "qemu_fw_cfg" Mar 19 11:43:32.917064 systemd-networkd[764]: lo: Gained carrier Mar 19 11:43:32.917899 systemd-networkd[764]: Enumeration completed
Mar 19 11:43:32.918328 systemd[1]: Started systemd-networkd.service - Network Configuration. Mar 19 11:43:32.923128 ignition[669]: op(1): [finished] loading QEMU firmware config module Mar 19 11:43:32.918452 systemd-networkd[764]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 19 11:43:32.918456 systemd-networkd[764]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Mar 19 11:43:32.919404 systemd-networkd[764]: eth0: Link UP Mar 19 11:43:32.919407 systemd-networkd[764]: eth0: Gained carrier Mar 19 11:43:32.919413 systemd-networkd[764]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 19 11:43:32.920618 systemd[1]: Reached target network.target - Network. Mar 19 11:43:32.953292 systemd-networkd[764]: eth0: DHCPv4 address 10.0.0.94/16, gateway 10.0.0.1 acquired from 10.0.0.1 Mar 19 11:43:32.966339 ignition[669]: parsing config with SHA512: a2f3465b3d99381de8845190605c7c622faa08eccfd07377cc0f5deb2afd5631491b7b018451dfd47e0e2fc1a97c971d486c12985731f57586b48b676c60dc91 Mar 19 11:43:32.972073 unknown[669]: fetched base config from "system" Mar 19 11:43:32.972092 unknown[669]: fetched user config from "qemu" Mar 19 11:43:32.973138 ignition[669]: fetch-offline: fetch-offline passed Mar 19 11:43:32.973229 ignition[669]: Ignition finished successfully Mar 19 11:43:32.975832 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Mar 19 11:43:32.976864 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Mar 19 11:43:32.987419 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Mar 19 11:43:32.999215 ignition[772]: Ignition 2.20.0 Mar 19 11:43:32.999225 ignition[772]: Stage: kargs Mar 19 11:43:32.999402 ignition[772]: no configs at "/usr/lib/ignition/base.d" Mar 19 11:43:32.999413 ignition[772]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Mar 19 11:43:33.000274 ignition[772]: kargs: kargs passed Mar 19 11:43:33.000317 ignition[772]: Ignition finished successfully Mar 19 11:43:33.003674 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Mar 19 11:43:33.013441 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Mar 19 11:43:33.022970 ignition[781]: Ignition 2.20.0 Mar 19 11:43:33.022979 ignition[781]: Stage: disks Mar 19 11:43:33.023124 ignition[781]: no configs at "/usr/lib/ignition/base.d" Mar 19 11:43:33.025301 systemd[1]: Finished ignition-disks.service - Ignition (disks). Mar 19 11:43:33.023133 ignition[781]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Mar 19 11:43:33.026362 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Mar 19 11:43:33.023988 ignition[781]: disks: disks passed Mar 19 11:43:33.027537 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Mar 19 11:43:33.024030 ignition[781]: Ignition finished successfully Mar 19 11:43:33.028988 systemd[1]: Reached target local-fs.target - Local File Systems. Mar 19 11:43:33.030320 systemd[1]: Reached target sysinit.target - System Initialization. Mar 19 11:43:33.031360 systemd[1]: Reached target basic.target - Basic System. Mar 19 11:43:33.039493 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Mar 19 11:43:33.050411 systemd-fsck[792]: ROOT: clean, 14/553520 files, 52654/553472 blocks Mar 19 11:43:33.053499 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Mar 19 11:43:33.055305 systemd[1]: Mounting sysroot.mount - /sysroot... 
Mar 19 11:43:33.099283 kernel: EXT4-fs (vda9): mounted filesystem 45bb9a4a-80dc-4ce4-9ca9-c4944d8ff0e6 r/w with ordered data mode. Quota mode: none. Mar 19 11:43:33.099754 systemd[1]: Mounted sysroot.mount - /sysroot. Mar 19 11:43:33.100730 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Mar 19 11:43:33.111315 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Mar 19 11:43:33.112747 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Mar 19 11:43:33.113827 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Mar 19 11:43:33.113866 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Mar 19 11:43:33.119353 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 scanned by mount (800) Mar 19 11:43:33.119380 kernel: BTRFS info (device vda6): first mount of filesystem eeeb6da4-27b1-474b-8015-c667e85f7b18 Mar 19 11:43:33.113887 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Mar 19 11:43:33.123472 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Mar 19 11:43:33.123490 kernel: BTRFS info (device vda6): using free space tree Mar 19 11:43:33.123500 kernel: BTRFS info (device vda6): auto enabling async discard Mar 19 11:43:33.121204 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Mar 19 11:43:33.122911 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Mar 19 11:43:33.125044 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Mar 19 11:43:33.157863 initrd-setup-root[825]: cut: /sysroot/etc/passwd: No such file or directory Mar 19 11:43:33.160894 initrd-setup-root[832]: cut: /sysroot/etc/group: No such file or directory Mar 19 11:43:33.164049 initrd-setup-root[839]: cut: /sysroot/etc/shadow: No such file or directory Mar 19 11:43:33.167813 initrd-setup-root[846]: cut: /sysroot/etc/gshadow: No such file or directory Mar 19 11:43:33.239547 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Mar 19 11:43:33.246403 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Mar 19 11:43:33.247695 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Mar 19 11:43:33.252283 kernel: BTRFS info (device vda6): last unmount of filesystem eeeb6da4-27b1-474b-8015-c667e85f7b18 Mar 19 11:43:33.269298 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Mar 19 11:43:33.270715 ignition[914]: INFO : Ignition 2.20.0 Mar 19 11:43:33.270715 ignition[914]: INFO : Stage: mount Mar 19 11:43:33.270715 ignition[914]: INFO : no configs at "/usr/lib/ignition/base.d" Mar 19 11:43:33.270715 ignition[914]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Mar 19 11:43:33.270715 ignition[914]: INFO : mount: mount passed Mar 19 11:43:33.270715 ignition[914]: INFO : Ignition finished successfully Mar 19 11:43:33.271657 systemd[1]: Finished ignition-mount.service - Ignition (mount). Mar 19 11:43:33.281346 systemd[1]: Starting ignition-files.service - Ignition (files)... Mar 19 11:43:33.871800 systemd[1]: sysroot-oem.mount: Deactivated successfully. Mar 19 11:43:33.879504 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... 
Mar 19 11:43:33.885567 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/vda6 scanned by mount (927) Mar 19 11:43:33.885603 kernel: BTRFS info (device vda6): first mount of filesystem eeeb6da4-27b1-474b-8015-c667e85f7b18 Mar 19 11:43:33.885614 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Mar 19 11:43:33.886708 kernel: BTRFS info (device vda6): using free space tree Mar 19 11:43:33.889278 kernel: BTRFS info (device vda6): auto enabling async discard Mar 19 11:43:33.889717 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Mar 19 11:43:33.904812 ignition[944]: INFO : Ignition 2.20.0 Mar 19 11:43:33.904812 ignition[944]: INFO : Stage: files Mar 19 11:43:33.906086 ignition[944]: INFO : no configs at "/usr/lib/ignition/base.d" Mar 19 11:43:33.906086 ignition[944]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Mar 19 11:43:33.906086 ignition[944]: DEBUG : files: compiled without relabeling support, skipping Mar 19 11:43:33.908754 ignition[944]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Mar 19 11:43:33.908754 ignition[944]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Mar 19 11:43:33.911435 ignition[944]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Mar 19 11:43:33.912461 ignition[944]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Mar 19 11:43:33.912461 ignition[944]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Mar 19 11:43:33.911925 unknown[944]: wrote ssh authorized keys file for user: core Mar 19 11:43:33.915136 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Mar 19 11:43:33.915136 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1
Mar 19 11:43:33.995619 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Mar 19 11:43:34.138408 systemd-networkd[764]: eth0: Gained IPv6LL Mar 19 11:43:34.164204 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Mar 19 11:43:34.164204 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Mar 19 11:43:34.166905 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1 Mar 19 11:43:34.496191 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Mar 19 11:43:34.544761 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Mar 19 11:43:34.546273 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Mar 19 11:43:34.546273 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Mar 19 11:43:34.546273 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Mar 19 11:43:34.546273 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Mar 19 11:43:34.546273 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Mar 19 11:43:34.546273 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Mar 19 11:43:34.546273 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Mar 19 11:43:34.546273 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Mar 19 11:43:34.546273 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Mar 19 11:43:34.546273 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Mar 19 11:43:34.546273 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw" Mar 19 11:43:34.546273 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw" Mar 19 11:43:34.546273 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw" Mar 19 11:43:34.546273 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.31.0-arm64.raw: attempt #1 Mar 19 11:43:34.770340 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Mar 19 11:43:34.969579 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw" Mar 19 11:43:34.969579 ignition[944]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Mar 19 11:43:34.972681 ignition[944]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Mar 19 11:43:34.972681 ignition[944]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Mar 19 11:43:34.972681 ignition[944]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" Mar 19 11:43:34.972681 ignition[944]: INFO : files: op(e): [started] processing unit "coreos-metadata.service" Mar 19 11:43:34.972681 ignition[944]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Mar 19 11:43:34.972681 ignition[944]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Mar 19 11:43:34.972681 ignition[944]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service" Mar 19 11:43:34.972681 ignition[944]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service" Mar 19 11:43:34.986740 ignition[944]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service" Mar 19 11:43:34.989684 ignition[944]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service" Mar 19 11:43:34.991425 ignition[944]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service" Mar 19 11:43:34.991425 ignition[944]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service" Mar 19 11:43:34.991425 ignition[944]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service" Mar 19 11:43:34.991425 ignition[944]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json" Mar 19 11:43:34.991425 ignition[944]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json" Mar 19 11:43:34.991425 ignition[944]: INFO : files: files passed Mar 19 11:43:34.991425 ignition[944]: INFO : Ignition finished successfully
Mar 19 11:43:34.992125 systemd[1]: Finished ignition-files.service - Ignition (files). Mar 19 11:43:35.006469 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Mar 19 11:43:35.008531 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Mar 19 11:43:35.009628 systemd[1]: ignition-quench.service: Deactivated successfully. Mar 19 11:43:35.009700 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Mar 19 11:43:35.015577 initrd-setup-root-after-ignition[973]: grep: /sysroot/oem/oem-release: No such file or directory Mar 19 11:43:35.017556 initrd-setup-root-after-ignition[975]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Mar 19 11:43:35.017556 initrd-setup-root-after-ignition[975]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Mar 19 11:43:35.020138 initrd-setup-root-after-ignition[979]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Mar 19 11:43:35.020724 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Mar 19 11:43:35.022226 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Mar 19 11:43:35.033468 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Mar 19 11:43:35.049381 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Mar 19 11:43:35.049490 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Mar 19 11:43:35.051652 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Mar 19 11:43:35.052560 systemd[1]: Reached target initrd.target - Initrd Default Target. Mar 19 11:43:35.053878 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Mar 19 11:43:35.054668 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Mar 19 11:43:35.068800 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Mar 19 11:43:35.081454 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Mar 19 11:43:35.089392 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Mar 19 11:43:35.090316 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Mar 19 11:43:35.091878 systemd[1]: Stopped target timers.target - Timer Units. Mar 19 11:43:35.093161 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Mar 19 11:43:35.093296 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Mar 19 11:43:35.095151 systemd[1]: Stopped target initrd.target - Initrd Default Target. Mar 19 11:43:35.096662 systemd[1]: Stopped target basic.target - Basic System. Mar 19 11:43:35.097937 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Mar 19 11:43:35.099187 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Mar 19 11:43:35.100759 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Mar 19 11:43:35.102155 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Mar 19 11:43:35.103499 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Mar 19 11:43:35.105387 systemd[1]: Stopped target sysinit.target - System Initialization. Mar 19 11:43:35.107014 systemd[1]: Stopped target local-fs.target - Local File Systems. Mar 19 11:43:35.108253 systemd[1]: Stopped target swap.target - Swaps. Mar 19 11:43:35.109468 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Mar 19 11:43:35.109584 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Mar 19 11:43:35.111293 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Mar 19 11:43:35.112885 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). 
Mar 19 11:43:35.114451 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Mar 19 11:43:35.115321 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Mar 19 11:43:35.116891 systemd[1]: dracut-initqueue.service: Deactivated successfully. Mar 19 11:43:35.117000 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Mar 19 11:43:35.119333 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Mar 19 11:43:35.119457 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Mar 19 11:43:35.120898 systemd[1]: Stopped target paths.target - Path Units. Mar 19 11:43:35.122034 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Mar 19 11:43:35.125332 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Mar 19 11:43:35.127174 systemd[1]: Stopped target slices.target - Slice Units. Mar 19 11:43:35.127926 systemd[1]: Stopped target sockets.target - Socket Units. Mar 19 11:43:35.129037 systemd[1]: iscsid.socket: Deactivated successfully. Mar 19 11:43:35.129119 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Mar 19 11:43:35.130200 systemd[1]: iscsiuio.socket: Deactivated successfully. Mar 19 11:43:35.130292 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Mar 19 11:43:35.131381 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Mar 19 11:43:35.131486 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Mar 19 11:43:35.132778 systemd[1]: ignition-files.service: Deactivated successfully. Mar 19 11:43:35.132878 systemd[1]: Stopped ignition-files.service - Ignition (files). Mar 19 11:43:35.148474 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Mar 19 11:43:35.149158 systemd[1]: kmod-static-nodes.service: Deactivated successfully. 
Mar 19 11:43:35.149303 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Mar 19 11:43:35.152482 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Mar 19 11:43:35.153877 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Mar 19 11:43:35.154754 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Mar 19 11:43:35.156672 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Mar 19 11:43:35.157530 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Mar 19 11:43:35.161553 ignition[1000]: INFO : Ignition 2.20.0 Mar 19 11:43:35.161553 ignition[1000]: INFO : Stage: umount Mar 19 11:43:35.161553 ignition[1000]: INFO : no configs at "/usr/lib/ignition/base.d" Mar 19 11:43:35.161553 ignition[1000]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Mar 19 11:43:35.165826 ignition[1000]: INFO : umount: umount passed Mar 19 11:43:35.165826 ignition[1000]: INFO : Ignition finished successfully Mar 19 11:43:35.162600 systemd[1]: initrd-cleanup.service: Deactivated successfully. Mar 19 11:43:35.162686 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Mar 19 11:43:35.164584 systemd[1]: sysroot-boot.mount: Deactivated successfully. Mar 19 11:43:35.165049 systemd[1]: ignition-mount.service: Deactivated successfully. Mar 19 11:43:35.165122 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Mar 19 11:43:35.168380 systemd[1]: Stopped target network.target - Network. Mar 19 11:43:35.169119 systemd[1]: ignition-disks.service: Deactivated successfully. Mar 19 11:43:35.169179 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Mar 19 11:43:35.170819 systemd[1]: ignition-kargs.service: Deactivated successfully. Mar 19 11:43:35.170865 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Mar 19 11:43:35.172022 systemd[1]: ignition-setup.service: Deactivated successfully. 
Mar 19 11:43:35.172063 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Mar 19 11:43:35.173271 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Mar 19 11:43:35.173319 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Mar 19 11:43:35.174884 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Mar 19 11:43:35.176117 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Mar 19 11:43:35.179537 systemd[1]: systemd-resolved.service: Deactivated successfully. Mar 19 11:43:35.179660 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Mar 19 11:43:35.182441 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Mar 19 11:43:35.182684 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Mar 19 11:43:35.182721 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Mar 19 11:43:35.185122 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Mar 19 11:43:35.188835 systemd[1]: systemd-networkd.service: Deactivated successfully. Mar 19 11:43:35.188954 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Mar 19 11:43:35.191315 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Mar 19 11:43:35.191535 systemd[1]: systemd-networkd.socket: Deactivated successfully. Mar 19 11:43:35.191563 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Mar 19 11:43:35.204347 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Mar 19 11:43:35.205050 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Mar 19 11:43:35.205104 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Mar 19 11:43:35.206562 systemd[1]: systemd-sysctl.service: Deactivated successfully. 
Mar 19 11:43:35.206604 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Mar 19 11:43:35.208986 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Mar 19 11:43:35.209028 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Mar 19 11:43:35.210662 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 19 11:43:35.213532 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Mar 19 11:43:35.220561 systemd[1]: network-cleanup.service: Deactivated successfully.
Mar 19 11:43:35.220693 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Mar 19 11:43:35.225925 systemd[1]: systemd-udevd.service: Deactivated successfully.
Mar 19 11:43:35.226062 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 19 11:43:35.227750 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Mar 19 11:43:35.227792 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Mar 19 11:43:35.229185 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Mar 19 11:43:35.229219 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 19 11:43:35.230609 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Mar 19 11:43:35.230654 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Mar 19 11:43:35.232605 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Mar 19 11:43:35.232645 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Mar 19 11:43:35.234521 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Mar 19 11:43:35.234562 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 19 11:43:35.249493 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Mar 19 11:43:35.250317 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Mar 19 11:43:35.250381 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 19 11:43:35.252891 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 19 11:43:35.252935 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 19 11:43:35.255706 systemd[1]: sysroot-boot.service: Deactivated successfully.
Mar 19 11:43:35.255791 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Mar 19 11:43:35.257117 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Mar 19 11:43:35.257189 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Mar 19 11:43:35.259054 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Mar 19 11:43:35.260574 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Mar 19 11:43:35.260641 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Mar 19 11:43:35.262899 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Mar 19 11:43:35.272990 systemd[1]: Switching root.
Mar 19 11:43:35.307156 systemd-journald[239]: Journal stopped
Mar 19 11:43:36.052588 systemd-journald[239]: Received SIGTERM from PID 1 (systemd).
Mar 19 11:43:36.052667 kernel: SELinux: policy capability network_peer_controls=1
Mar 19 11:43:36.052680 kernel: SELinux: policy capability open_perms=1
Mar 19 11:43:36.052690 kernel: SELinux: policy capability extended_socket_class=1
Mar 19 11:43:36.052700 kernel: SELinux: policy capability always_check_network=0
Mar 19 11:43:36.052710 kernel: SELinux: policy capability cgroup_seclabel=1
Mar 19 11:43:36.052720 kernel: SELinux: policy capability nnp_nosuid_transition=1
Mar 19 11:43:36.052730 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Mar 19 11:43:36.052740 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Mar 19 11:43:36.052753 kernel: audit: type=1403 audit(1742384615.460:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Mar 19 11:43:36.052764 systemd[1]: Successfully loaded SELinux policy in 29.749ms.
Mar 19 11:43:36.052780 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 9.558ms.
Mar 19 11:43:36.052792 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Mar 19 11:43:36.052803 systemd[1]: Detected virtualization kvm.
Mar 19 11:43:36.052814 systemd[1]: Detected architecture arm64.
Mar 19 11:43:36.052824 systemd[1]: Detected first boot.
Mar 19 11:43:36.052834 systemd[1]: Initializing machine ID from VM UUID.
Mar 19 11:43:36.052845 zram_generator::config[1046]: No configuration found.
Mar 19 11:43:36.052857 kernel: NET: Registered PF_VSOCK protocol family
Mar 19 11:43:36.052868 systemd[1]: Populated /etc with preset unit settings.
Mar 19 11:43:36.052881 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Mar 19 11:43:36.052892 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Mar 19 11:43:36.052902 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Mar 19 11:43:36.052913 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Mar 19 11:43:36.052924 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Mar 19 11:43:36.052934 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Mar 19 11:43:36.052947 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Mar 19 11:43:36.052957 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Mar 19 11:43:36.052968 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Mar 19 11:43:36.052978 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Mar 19 11:43:36.052989 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Mar 19 11:43:36.052999 systemd[1]: Created slice user.slice - User and Session Slice.
Mar 19 11:43:36.053010 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 19 11:43:36.053020 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 19 11:43:36.053099 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Mar 19 11:43:36.053111 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Mar 19 11:43:36.053122 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Mar 19 11:43:36.053132 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Mar 19 11:43:36.053143 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0...
Mar 19 11:43:36.053153 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 19 11:43:36.053164 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Mar 19 11:43:36.053177 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Mar 19 11:43:36.053189 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Mar 19 11:43:36.053199 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Mar 19 11:43:36.053210 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 19 11:43:36.053220 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Mar 19 11:43:36.053231 systemd[1]: Reached target slices.target - Slice Units.
Mar 19 11:43:36.053251 systemd[1]: Reached target swap.target - Swaps.
Mar 19 11:43:36.053265 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Mar 19 11:43:36.053276 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Mar 19 11:43:36.053286 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Mar 19 11:43:36.053300 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Mar 19 11:43:36.053311 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Mar 19 11:43:36.053321 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 19 11:43:36.053332 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Mar 19 11:43:36.053342 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Mar 19 11:43:36.053353 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Mar 19 11:43:36.053370 systemd[1]: Mounting media.mount - External Media Directory...
Mar 19 11:43:36.053382 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Mar 19 11:43:36.053392 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Mar 19 11:43:36.053405 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Mar 19 11:43:36.053415 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Mar 19 11:43:36.053427 systemd[1]: Reached target machines.target - Containers.
Mar 19 11:43:36.053437 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Mar 19 11:43:36.053448 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 19 11:43:36.053458 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Mar 19 11:43:36.053468 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Mar 19 11:43:36.053478 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 19 11:43:36.053490 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Mar 19 11:43:36.053501 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 19 11:43:36.053511 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Mar 19 11:43:36.053521 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Mar 19 11:43:36.053532 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Mar 19 11:43:36.053542 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Mar 19 11:43:36.053552 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Mar 19 11:43:36.053562 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Mar 19 11:43:36.053572 kernel: fuse: init (API version 7.39)
Mar 19 11:43:36.053584 systemd[1]: Stopped systemd-fsck-usr.service.
Mar 19 11:43:36.053595 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Mar 19 11:43:36.053604 kernel: loop: module loaded
Mar 19 11:43:36.053614 kernel: ACPI: bus type drm_connector registered
Mar 19 11:43:36.053623 systemd[1]: Starting systemd-journald.service - Journal Service...
Mar 19 11:43:36.053633 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Mar 19 11:43:36.053644 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Mar 19 11:43:36.053655 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Mar 19 11:43:36.053665 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Mar 19 11:43:36.053677 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Mar 19 11:43:36.053717 systemd-journald[1122]: Collecting audit messages is disabled.
Mar 19 11:43:36.053748 systemd[1]: verity-setup.service: Deactivated successfully.
Mar 19 11:43:36.053761 systemd[1]: Stopped verity-setup.service.
Mar 19 11:43:36.053772 systemd-journald[1122]: Journal started
Mar 19 11:43:36.053793 systemd-journald[1122]: Runtime Journal (/run/log/journal/087327576a8b482c8c06e77b3efa8b78) is 5.9M, max 47.3M, 41.4M free.
Mar 19 11:43:35.849130 systemd[1]: Queued start job for default target multi-user.target.
Mar 19 11:43:35.867410 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Mar 19 11:43:35.867815 systemd[1]: systemd-journald.service: Deactivated successfully.
Mar 19 11:43:36.057260 systemd[1]: Started systemd-journald.service - Journal Service.
Mar 19 11:43:36.057844 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Mar 19 11:43:36.058761 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Mar 19 11:43:36.059787 systemd[1]: Mounted media.mount - External Media Directory.
Mar 19 11:43:36.060712 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Mar 19 11:43:36.061729 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Mar 19 11:43:36.062703 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Mar 19 11:43:36.064474 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Mar 19 11:43:36.065715 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 19 11:43:36.066932 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Mar 19 11:43:36.067136 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Mar 19 11:43:36.068434 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 19 11:43:36.068599 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 19 11:43:36.069697 systemd[1]: modprobe@drm.service: Deactivated successfully.
Mar 19 11:43:36.069866 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Mar 19 11:43:36.071642 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 19 11:43:36.071849 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Mar 19 11:43:36.073046 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Mar 19 11:43:36.073221 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Mar 19 11:43:36.074324 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 19 11:43:36.074505 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Mar 19 11:43:36.075900 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Mar 19 11:43:36.077150 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Mar 19 11:43:36.079598 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Mar 19 11:43:36.081279 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Mar 19 11:43:36.095050 systemd[1]: Reached target network-pre.target - Preparation for Network.
Mar 19 11:43:36.111566 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Mar 19 11:43:36.113625 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Mar 19 11:43:36.114629 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Mar 19 11:43:36.114673 systemd[1]: Reached target local-fs.target - Local File Systems.
Mar 19 11:43:36.116667 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Mar 19 11:43:36.118828 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Mar 19 11:43:36.120897 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Mar 19 11:43:36.121877 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 19 11:43:36.123260 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Mar 19 11:43:36.125003 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Mar 19 11:43:36.126103 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Mar 19 11:43:36.129465 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Mar 19 11:43:36.130548 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Mar 19 11:43:36.132270 systemd-journald[1122]: Time spent on flushing to /var/log/journal/087327576a8b482c8c06e77b3efa8b78 is 14.852ms for 868 entries.
Mar 19 11:43:36.132270 systemd-journald[1122]: System Journal (/var/log/journal/087327576a8b482c8c06e77b3efa8b78) is 8M, max 195.6M, 187.6M free.
Mar 19 11:43:36.153734 systemd-journald[1122]: Received client request to flush runtime journal.
Mar 19 11:43:36.133532 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Mar 19 11:43:36.136757 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Mar 19 11:43:36.153784 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Mar 19 11:43:36.158835 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 19 11:43:36.160279 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Mar 19 11:43:36.161355 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Mar 19 11:43:36.164374 kernel: loop0: detected capacity change from 0 to 123192
Mar 19 11:43:36.164320 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Mar 19 11:43:36.166235 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Mar 19 11:43:36.167715 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Mar 19 11:43:36.169077 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Mar 19 11:43:36.174709 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Mar 19 11:43:36.186261 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Mar 19 11:43:36.188414 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Mar 19 11:43:36.190677 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Mar 19 11:43:36.192081 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Mar 19 11:43:36.204445 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Mar 19 11:43:36.207620 udevadm[1180]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Mar 19 11:43:36.211931 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Mar 19 11:43:36.222501 kernel: loop1: detected capacity change from 0 to 113512
Mar 19 11:43:36.230595 systemd-tmpfiles[1182]: ACLs are not supported, ignoring.
Mar 19 11:43:36.230615 systemd-tmpfiles[1182]: ACLs are not supported, ignoring.
Mar 19 11:43:36.234871 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 19 11:43:36.244312 kernel: loop2: detected capacity change from 0 to 189592
Mar 19 11:43:36.281382 kernel: loop3: detected capacity change from 0 to 123192
Mar 19 11:43:36.286982 kernel: loop4: detected capacity change from 0 to 113512
Mar 19 11:43:36.291383 kernel: loop5: detected capacity change from 0 to 189592
Mar 19 11:43:36.297283 (sd-merge)[1189]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Mar 19 11:43:36.297691 (sd-merge)[1189]: Merged extensions into '/usr'.
Mar 19 11:43:36.301592 systemd[1]: Reload requested from client PID 1164 ('systemd-sysext') (unit systemd-sysext.service)...
Mar 19 11:43:36.301616 systemd[1]: Reloading...
Mar 19 11:43:36.358197 zram_generator::config[1214]: No configuration found.
Mar 19 11:43:36.412824 ldconfig[1159]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Mar 19 11:43:36.465384 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Mar 19 11:43:36.514409 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Mar 19 11:43:36.514781 systemd[1]: Reloading finished in 212 ms.
Mar 19 11:43:36.531869 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Mar 19 11:43:36.533128 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Mar 19 11:43:36.547636 systemd[1]: Starting ensure-sysext.service...
Mar 19 11:43:36.550368 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Mar 19 11:43:36.564338 systemd-tmpfiles[1253]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Mar 19 11:43:36.564564 systemd-tmpfiles[1253]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Mar 19 11:43:36.565176 systemd-tmpfiles[1253]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Mar 19 11:43:36.565401 systemd-tmpfiles[1253]: ACLs are not supported, ignoring.
Mar 19 11:43:36.565449 systemd-tmpfiles[1253]: ACLs are not supported, ignoring.
Mar 19 11:43:36.567939 systemd-tmpfiles[1253]: Detected autofs mount point /boot during canonicalization of boot.
Mar 19 11:43:36.567952 systemd-tmpfiles[1253]: Skipping /boot
Mar 19 11:43:36.577047 systemd-tmpfiles[1253]: Detected autofs mount point /boot during canonicalization of boot.
Mar 19 11:43:36.577065 systemd-tmpfiles[1253]: Skipping /boot
Mar 19 11:43:36.578784 systemd[1]: Reload requested from client PID 1252 ('systemctl') (unit ensure-sysext.service)...
Mar 19 11:43:36.578799 systemd[1]: Reloading...
Mar 19 11:43:36.626443 zram_generator::config[1282]: No configuration found.
Mar 19 11:43:36.700832 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Mar 19 11:43:36.750854 systemd[1]: Reloading finished in 171 ms.
Mar 19 11:43:36.762898 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Mar 19 11:43:36.778688 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 19 11:43:36.788750 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Mar 19 11:43:36.791275 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Mar 19 11:43:36.793442 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Mar 19 11:43:36.796937 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Mar 19 11:43:36.810888 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 19 11:43:36.824567 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Mar 19 11:43:36.830602 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Mar 19 11:43:36.838412 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 19 11:43:36.848584 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 19 11:43:36.850549 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 19 11:43:36.855541 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Mar 19 11:43:36.856414 systemd-udevd[1327]: Using default interface naming scheme 'v255'.
Mar 19 11:43:36.856653 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 19 11:43:36.856824 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Mar 19 11:43:36.859607 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Mar 19 11:43:36.867616 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Mar 19 11:43:36.871969 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Mar 19 11:43:36.874879 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 19 11:43:36.875058 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 19 11:43:36.876737 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 19 11:43:36.878607 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 19 11:43:36.878764 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Mar 19 11:43:36.882649 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 19 11:43:36.882725 augenrules[1351]: No rules
Mar 19 11:43:36.882804 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Mar 19 11:43:36.884470 systemd[1]: audit-rules.service: Deactivated successfully.
Mar 19 11:43:36.884643 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Mar 19 11:43:36.886170 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Mar 19 11:43:36.890168 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Mar 19 11:43:36.908878 systemd[1]: Finished ensure-sysext.service.
Mar 19 11:43:36.923508 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Mar 19 11:43:36.925518 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 19 11:43:36.926528 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 19 11:43:36.930499 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Mar 19 11:43:36.933441 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 19 11:43:36.936439 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Mar 19 11:43:36.938447 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 19 11:43:36.938496 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Mar 19 11:43:36.940225 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Mar 19 11:43:36.947417 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Mar 19 11:43:36.948521 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Mar 19 11:43:36.949891 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Mar 19 11:43:36.951728 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 19 11:43:36.951925 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 19 11:43:36.955292 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (1373)
Mar 19 11:43:36.956670 systemd[1]: modprobe@drm.service: Deactivated successfully.
Mar 19 11:43:36.958302 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Mar 19 11:43:36.958504 augenrules[1381]: /sbin/augenrules: No change
Mar 19 11:43:36.959597 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 19 11:43:36.959758 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Mar 19 11:43:36.966927 augenrules[1413]: No rules
Mar 19 11:43:36.965341 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 19 11:43:36.965517 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Mar 19 11:43:36.967518 systemd[1]: audit-rules.service: Deactivated successfully.
Mar 19 11:43:36.967714 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Mar 19 11:43:36.971987 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped.
Mar 19 11:43:36.999274 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Mar 19 11:43:36.999346 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Mar 19 11:43:37.022571 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Mar 19 11:43:37.039087 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Mar 19 11:43:37.040828 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Mar 19 11:43:37.041734 systemd-networkd[1398]: lo: Link UP
Mar 19 11:43:37.041740 systemd-networkd[1398]: lo: Gained carrier
Mar 19 11:43:37.042263 systemd[1]: Reached target time-set.target - System Time Set.
Mar 19 11:43:37.042815 systemd-networkd[1398]: Enumeration completed
Mar 19 11:43:37.043309 systemd[1]: Started systemd-networkd.service - Network Configuration.
Mar 19 11:43:37.047590 systemd-networkd[1398]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 19 11:43:37.047599 systemd-networkd[1398]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Mar 19 11:43:37.048028 systemd-networkd[1398]: eth0: Link UP
Mar 19 11:43:37.048032 systemd-networkd[1398]: eth0: Gained carrier
Mar 19 11:43:37.048045 systemd-networkd[1398]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 19 11:43:37.050688 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Mar 19 11:43:37.053958 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Mar 19 11:43:37.056793 systemd-resolved[1321]: Positive Trust Anchors:
Mar 19 11:43:37.058747 systemd-resolved[1321]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Mar 19 11:43:37.058787 systemd-resolved[1321]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Mar 19 11:43:37.069322 systemd-networkd[1398]: eth0: DHCPv4 address 10.0.0.94/16, gateway 10.0.0.1 acquired from 10.0.0.1
Mar 19 11:43:37.070476 systemd-timesyncd[1400]: Network configuration changed, trying to establish connection.
Mar 19 11:43:37.071077 systemd-resolved[1321]: Defaulting to hostname 'linux'.
Mar 19 11:43:37.071385 systemd-timesyncd[1400]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Mar 19 11:43:37.071536 systemd-timesyncd[1400]: Initial clock synchronization to Wed 2025-03-19 11:43:37.256370 UTC.
Mar 19 11:43:37.071989 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Mar 19 11:43:37.074068 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Mar 19 11:43:37.077726 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Mar 19 11:43:37.078742 systemd[1]: Reached target network.target - Network. Mar 19 11:43:37.079984 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Mar 19 11:43:37.111509 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Mar 19 11:43:37.121541 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Mar 19 11:43:37.124464 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Mar 19 11:43:37.144773 lvm[1441]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Mar 19 11:43:37.155037 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Mar 19 11:43:37.191228 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Mar 19 11:43:37.192880 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Mar 19 11:43:37.194381 systemd[1]: Reached target sysinit.target - System Initialization. Mar 19 11:43:37.195426 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Mar 19 11:43:37.196532 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Mar 19 11:43:37.197813 systemd[1]: Started logrotate.timer - Daily rotation of log files. Mar 19 11:43:37.198890 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Mar 19 11:43:37.199954 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. 
Mar 19 11:43:37.201014 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Mar 19 11:43:37.201114 systemd[1]: Reached target paths.target - Path Units. Mar 19 11:43:37.201928 systemd[1]: Reached target timers.target - Timer Units. Mar 19 11:43:37.203786 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Mar 19 11:43:37.205933 systemd[1]: Starting docker.socket - Docker Socket for the API... Mar 19 11:43:37.208919 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Mar 19 11:43:37.210088 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Mar 19 11:43:37.211107 systemd[1]: Reached target ssh-access.target - SSH Access Available. Mar 19 11:43:37.213964 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Mar 19 11:43:37.215142 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Mar 19 11:43:37.217139 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Mar 19 11:43:37.218576 systemd[1]: Listening on docker.socket - Docker Socket for the API. Mar 19 11:43:37.219457 systemd[1]: Reached target sockets.target - Socket Units. Mar 19 11:43:37.220177 systemd[1]: Reached target basic.target - Basic System. Mar 19 11:43:37.220991 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Mar 19 11:43:37.221027 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Mar 19 11:43:37.221958 systemd[1]: Starting containerd.service - containerd container runtime... Mar 19 11:43:37.223773 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Mar 19 11:43:37.226379 lvm[1448]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. 
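The socket units above (dbus.socket, docker.socket, the sshd sockets) use systemd socket activation: systemd binds the sockets itself and hands them to the service as inherited file descriptors, advertised through the `LISTEN_PID`/`LISTEN_FDS` environment variables with the first fd at 3. A minimal sketch of the receiving side, assuming a hypothetical helper name (real services typically call `sd_listen_fds()` from libsystemd):

```python
import os

SD_LISTEN_FDS_START = 3  # first inherited socket fd, per the sd_listen_fds(3) convention

def listen_fds() -> list:
    """Return the fd numbers systemd passed to this process, or [] if none.

    Sketch of the sd_listen_fds(3) check: the fds only belong to this
    process if LISTEN_PID names its own pid.
    """
    if os.environ.get("LISTEN_PID") != str(os.getpid()):
        return []
    n = int(os.environ.get("LISTEN_FDS", "0"))
    return list(range(SD_LISTEN_FDS_START, SD_LISTEN_FDS_START + n))
```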
Mar 19 11:43:37.226385 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Mar 19 11:43:37.229144 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Mar 19 11:43:37.231379 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Mar 19 11:43:37.232642 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Mar 19 11:43:37.236106 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Mar 19 11:43:37.240445 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Mar 19 11:43:37.242641 jq[1451]: false Mar 19 11:43:37.242688 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Mar 19 11:43:37.247427 systemd[1]: Starting systemd-logind.service - User Login Management... Mar 19 11:43:37.248535 extend-filesystems[1452]: Found loop3 Mar 19 11:43:37.249670 extend-filesystems[1452]: Found loop4 Mar 19 11:43:37.249670 extend-filesystems[1452]: Found loop5 Mar 19 11:43:37.249670 extend-filesystems[1452]: Found vda Mar 19 11:43:37.249670 extend-filesystems[1452]: Found vda1 Mar 19 11:43:37.249670 extend-filesystems[1452]: Found vda2 Mar 19 11:43:37.249670 extend-filesystems[1452]: Found vda3 Mar 19 11:43:37.249670 extend-filesystems[1452]: Found usr Mar 19 11:43:37.249670 extend-filesystems[1452]: Found vda4 Mar 19 11:43:37.249670 extend-filesystems[1452]: Found vda6 Mar 19 11:43:37.249670 extend-filesystems[1452]: Found vda7 Mar 19 11:43:37.249670 extend-filesystems[1452]: Found vda9 Mar 19 11:43:37.249670 extend-filesystems[1452]: Checking size of /dev/vda9 Mar 19 11:43:37.249864 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). 
Mar 19 11:43:37.256186 dbus-daemon[1450]: [system] SELinux support is enabled Mar 19 11:43:37.250291 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Mar 19 11:43:37.251632 systemd[1]: Starting update-engine.service - Update Engine... Mar 19 11:43:37.256399 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Mar 19 11:43:37.268051 jq[1467]: true Mar 19 11:43:37.259911 systemd[1]: Started dbus.service - D-Bus System Message Bus. Mar 19 11:43:37.264287 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Mar 19 11:43:37.283674 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Mar 19 11:43:37.283870 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Mar 19 11:43:37.284152 systemd[1]: motdgen.service: Deactivated successfully. Mar 19 11:43:37.284327 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Mar 19 11:43:37.286159 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Mar 19 11:43:37.286377 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Mar 19 11:43:37.293389 extend-filesystems[1452]: Resized partition /dev/vda9 Mar 19 11:43:37.300062 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (1358) Mar 19 11:43:37.299403 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Mar 19 11:43:37.299432 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. 
Mar 19 11:43:37.300778 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Mar 19 11:43:37.300812 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Mar 19 11:43:37.310496 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Mar 19 11:43:37.310564 extend-filesystems[1485]: resize2fs 1.47.1 (20-May-2024) Mar 19 11:43:37.321312 update_engine[1464]: I20250319 11:43:37.309873 1464 main.cc:92] Flatcar Update Engine starting Mar 19 11:43:37.321047 (ntainerd)[1476]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Mar 19 11:43:37.321671 tar[1474]: linux-arm64/helm Mar 19 11:43:37.322843 systemd[1]: Started update-engine.service - Update Engine. Mar 19 11:43:37.323050 update_engine[1464]: I20250319 11:43:37.322820 1464 update_check_scheduler.cc:74] Next update check in 2m55s Mar 19 11:43:37.323826 jq[1475]: true Mar 19 11:43:37.330475 systemd[1]: Started locksmithd.service - Cluster reboot manager. Mar 19 11:43:37.347297 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Mar 19 11:43:37.366518 systemd-logind[1460]: Watching system buttons on /dev/input/event0 (Power Button) Mar 19 11:43:37.367049 extend-filesystems[1485]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Mar 19 11:43:37.367049 extend-filesystems[1485]: old_desc_blocks = 1, new_desc_blocks = 1 Mar 19 11:43:37.367049 extend-filesystems[1485]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Mar 19 11:43:37.373742 extend-filesystems[1452]: Resized filesystem in /dev/vda9 Mar 19 11:43:37.368461 systemd[1]: extend-filesystems.service: Deactivated successfully. Mar 19 11:43:37.368778 systemd-logind[1460]: New seat seat0. Mar 19 11:43:37.368835 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. 
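The resize2fs figures above are internally consistent: at the 4 KiB block size the extend-filesystems log reports, growing /dev/vda9 from 553472 to 1864699 blocks takes the root filesystem from roughly 2.1 GiB to roughly 7.1 GiB. A quick sanity check of that arithmetic:

```python
# Sanity-check the online-resize numbers from the extend-filesystems log.
BLOCK_SIZE = 4096            # "(4k) blocks" per the log above
OLD_BLOCKS = 553472
NEW_BLOCKS = 1864699

def blocks_to_gib(blocks: int, block_size: int = BLOCK_SIZE) -> float:
    """Convert a filesystem block count to GiB."""
    return blocks * block_size / 2**30

old_gib = blocks_to_gib(OLD_BLOCKS)  # before the resize
new_gib = blocks_to_gib(NEW_BLOCKS)  # after the resize
```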
Mar 19 11:43:37.373419 systemd[1]: Started systemd-logind.service - User Login Management. Mar 19 11:43:37.385639 bash[1504]: Updated "/home/core/.ssh/authorized_keys" Mar 19 11:43:37.387820 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Mar 19 11:43:37.393154 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Mar 19 11:43:37.434277 locksmithd[1490]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Mar 19 11:43:37.539239 containerd[1476]: time="2025-03-19T11:43:37.539118240Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23 Mar 19 11:43:37.565819 containerd[1476]: time="2025-03-19T11:43:37.565735000Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Mar 19 11:43:37.567252 containerd[1476]: time="2025-03-19T11:43:37.567175280Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.83-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Mar 19 11:43:37.567252 containerd[1476]: time="2025-03-19T11:43:37.567207280Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Mar 19 11:43:37.567252 containerd[1476]: time="2025-03-19T11:43:37.567224360Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Mar 19 11:43:37.567452 containerd[1476]: time="2025-03-19T11:43:37.567413560Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Mar 19 11:43:37.567452 containerd[1476]: time="2025-03-19T11:43:37.567439040Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." 
type=io.containerd.snapshotter.v1 Mar 19 11:43:37.567510 containerd[1476]: time="2025-03-19T11:43:37.567494000Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Mar 19 11:43:37.567532 containerd[1476]: time="2025-03-19T11:43:37.567509240Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Mar 19 11:43:37.567717 containerd[1476]: time="2025-03-19T11:43:37.567697760Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Mar 19 11:43:37.567747 containerd[1476]: time="2025-03-19T11:43:37.567717880Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Mar 19 11:43:37.567747 containerd[1476]: time="2025-03-19T11:43:37.567730800Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Mar 19 11:43:37.567747 containerd[1476]: time="2025-03-19T11:43:37.567740320Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Mar 19 11:43:37.567851 containerd[1476]: time="2025-03-19T11:43:37.567807880Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Mar 19 11:43:37.568027 containerd[1476]: time="2025-03-19T11:43:37.568008240Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Mar 19 11:43:37.568148 containerd[1476]: time="2025-03-19T11:43:37.568131040Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Mar 19 11:43:37.568170 containerd[1476]: time="2025-03-19T11:43:37.568148960Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Mar 19 11:43:37.568234 containerd[1476]: time="2025-03-19T11:43:37.568218840Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Mar 19 11:43:37.568306 containerd[1476]: time="2025-03-19T11:43:37.568290960Z" level=info msg="metadata content store policy set" policy=shared Mar 19 11:43:37.571347 containerd[1476]: time="2025-03-19T11:43:37.571302560Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Mar 19 11:43:37.571347 containerd[1476]: time="2025-03-19T11:43:37.571353240Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Mar 19 11:43:37.571430 containerd[1476]: time="2025-03-19T11:43:37.571376880Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Mar 19 11:43:37.571430 containerd[1476]: time="2025-03-19T11:43:37.571392480Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Mar 19 11:43:37.571430 containerd[1476]: time="2025-03-19T11:43:37.571406680Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Mar 19 11:43:37.571592 containerd[1476]: time="2025-03-19T11:43:37.571566920Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Mar 19 11:43:37.571847 containerd[1476]: time="2025-03-19T11:43:37.571827720Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." 
type=io.containerd.runtime.v2 Mar 19 11:43:37.571949 containerd[1476]: time="2025-03-19T11:43:37.571929000Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Mar 19 11:43:37.571973 containerd[1476]: time="2025-03-19T11:43:37.571949320Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Mar 19 11:43:37.571973 containerd[1476]: time="2025-03-19T11:43:37.571965360Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Mar 19 11:43:37.572005 containerd[1476]: time="2025-03-19T11:43:37.571978440Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Mar 19 11:43:37.572005 containerd[1476]: time="2025-03-19T11:43:37.571991840Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Mar 19 11:43:37.572041 containerd[1476]: time="2025-03-19T11:43:37.572004000Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Mar 19 11:43:37.572041 containerd[1476]: time="2025-03-19T11:43:37.572017680Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Mar 19 11:43:37.572041 containerd[1476]: time="2025-03-19T11:43:37.572032200Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Mar 19 11:43:37.572100 containerd[1476]: time="2025-03-19T11:43:37.572044400Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Mar 19 11:43:37.572100 containerd[1476]: time="2025-03-19T11:43:37.572064520Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." 
type=io.containerd.service.v1 Mar 19 11:43:37.572100 containerd[1476]: time="2025-03-19T11:43:37.572093200Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Mar 19 11:43:37.572146 containerd[1476]: time="2025-03-19T11:43:37.572113800Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Mar 19 11:43:37.572146 containerd[1476]: time="2025-03-19T11:43:37.572128200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Mar 19 11:43:37.572146 containerd[1476]: time="2025-03-19T11:43:37.572140120Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Mar 19 11:43:37.572199 containerd[1476]: time="2025-03-19T11:43:37.572151280Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Mar 19 11:43:37.572199 containerd[1476]: time="2025-03-19T11:43:37.572163200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Mar 19 11:43:37.572199 containerd[1476]: time="2025-03-19T11:43:37.572175160Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Mar 19 11:43:37.572199 containerd[1476]: time="2025-03-19T11:43:37.572186040Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Mar 19 11:43:37.572423 containerd[1476]: time="2025-03-19T11:43:37.572199440Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Mar 19 11:43:37.572423 containerd[1476]: time="2025-03-19T11:43:37.572212160Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Mar 19 11:43:37.572423 containerd[1476]: time="2025-03-19T11:43:37.572225280Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." 
type=io.containerd.grpc.v1 Mar 19 11:43:37.572423 containerd[1476]: time="2025-03-19T11:43:37.572236440Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Mar 19 11:43:37.572423 containerd[1476]: time="2025-03-19T11:43:37.572263160Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Mar 19 11:43:37.572423 containerd[1476]: time="2025-03-19T11:43:37.572276680Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Mar 19 11:43:37.572423 containerd[1476]: time="2025-03-19T11:43:37.572291080Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Mar 19 11:43:37.572423 containerd[1476]: time="2025-03-19T11:43:37.572311120Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Mar 19 11:43:37.572423 containerd[1476]: time="2025-03-19T11:43:37.572328320Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Mar 19 11:43:37.572423 containerd[1476]: time="2025-03-19T11:43:37.572339400Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Mar 19 11:43:37.573105 containerd[1476]: time="2025-03-19T11:43:37.573063160Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Mar 19 11:43:37.573105 containerd[1476]: time="2025-03-19T11:43:37.573095680Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Mar 19 11:43:37.573105 containerd[1476]: time="2025-03-19T11:43:37.573106080Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." 
type=io.containerd.internal.v1 Mar 19 11:43:37.573191 containerd[1476]: time="2025-03-19T11:43:37.573120440Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Mar 19 11:43:37.573191 containerd[1476]: time="2025-03-19T11:43:37.573133360Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Mar 19 11:43:37.573191 containerd[1476]: time="2025-03-19T11:43:37.573152960Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Mar 19 11:43:37.573191 containerd[1476]: time="2025-03-19T11:43:37.573163120Z" level=info msg="NRI interface is disabled by configuration." Mar 19 11:43:37.573191 containerd[1476]: time="2025-03-19T11:43:37.573173800Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Mar 19 11:43:37.573561 containerd[1476]: time="2025-03-19T11:43:37.573496640Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 
Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Mar 19 11:43:37.573561 containerd[1476]: time="2025-03-19T11:43:37.573556840Z" level=info msg="Connect containerd service" Mar 19 11:43:37.573695 containerd[1476]: time="2025-03-19T11:43:37.573589200Z" level=info msg="using legacy CRI server" Mar 19 11:43:37.573695 containerd[1476]: time="2025-03-19T11:43:37.573596160Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Mar 19 11:43:37.573880 containerd[1476]: 
time="2025-03-19T11:43:37.573860880Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Mar 19 11:43:37.575553 containerd[1476]: time="2025-03-19T11:43:37.575514520Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Mar 19 11:43:37.576059 containerd[1476]: time="2025-03-19T11:43:37.575874200Z" level=info msg="Start subscribing containerd event" Mar 19 11:43:37.576059 containerd[1476]: time="2025-03-19T11:43:37.575938680Z" level=info msg="Start recovering state" Mar 19 11:43:37.576059 containerd[1476]: time="2025-03-19T11:43:37.576025920Z" level=info msg="Start event monitor" Mar 19 11:43:37.576059 containerd[1476]: time="2025-03-19T11:43:37.576044680Z" level=info msg="Start snapshots syncer" Mar 19 11:43:37.577011 containerd[1476]: time="2025-03-19T11:43:37.576029520Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Mar 19 11:43:37.577011 containerd[1476]: time="2025-03-19T11:43:37.576379040Z" level=info msg=serving... address=/run/containerd/containerd.sock Mar 19 11:43:37.577741 containerd[1476]: time="2025-03-19T11:43:37.577716960Z" level=info msg="Start cni network conf syncer for default" Mar 19 11:43:37.577815 containerd[1476]: time="2025-03-19T11:43:37.577801960Z" level=info msg="Start streaming server" Mar 19 11:43:37.578939 containerd[1476]: time="2025-03-19T11:43:37.577981920Z" level=info msg="containerd successfully booted in 0.039794s" Mar 19 11:43:37.578082 systemd[1]: Started containerd.service - containerd container runtime. Mar 19 11:43:37.676772 tar[1474]: linux-arm64/LICENSE Mar 19 11:43:37.676980 tar[1474]: linux-arm64/README.md Mar 19 11:43:37.688600 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. 
Mar 19 11:43:38.365306 systemd-networkd[1398]: eth0: Gained IPv6LL Mar 19 11:43:38.372297 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Mar 19 11:43:38.374137 systemd[1]: Reached target network-online.target - Network is Online. Mar 19 11:43:38.384534 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Mar 19 11:43:38.386745 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 19 11:43:38.388576 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Mar 19 11:43:38.406692 systemd[1]: coreos-metadata.service: Deactivated successfully. Mar 19 11:43:38.407520 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Mar 19 11:43:38.411314 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Mar 19 11:43:38.413975 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Mar 19 11:43:38.904166 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 19 11:43:38.907722 (kubelet)[1547]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 19 11:43:39.369965 kubelet[1547]: E0319 11:43:39.369856 1547 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 19 11:43:39.371851 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 19 11:43:39.371976 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 19 11:43:39.372602 systemd[1]: kubelet.service: Consumed 808ms CPU time, 234.7M memory peak. 
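The kubelet exit above is expected on a node that has not yet been joined to a cluster: /var/lib/kubelet/config.yaml is normally written by `kubeadm init` or `kubeadm join`, so the unit fails until one of those runs. For reference, a KubeletConfiguration file has roughly this shape (an illustrative sketch with assumed values, not the file kubeadm would generate):

```yaml
# /var/lib/kubelet/config.yaml — minimal illustrative sketch; in practice
# this file is generated by kubeadm rather than written by hand.
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd        # matches the SystemdCgroup:true runc option in the containerd config above
containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
```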
Mar 19 11:43:39.495924 sshd_keygen[1471]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Mar 19 11:43:39.515354 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Mar 19 11:43:39.527536 systemd[1]: Starting issuegen.service - Generate /run/issue... Mar 19 11:43:39.532946 systemd[1]: issuegen.service: Deactivated successfully. Mar 19 11:43:39.533144 systemd[1]: Finished issuegen.service - Generate /run/issue. Mar 19 11:43:39.535574 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Mar 19 11:43:39.549291 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Mar 19 11:43:39.561631 systemd[1]: Started getty@tty1.service - Getty on tty1. Mar 19 11:43:39.563457 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Mar 19 11:43:39.564528 systemd[1]: Reached target getty.target - Login Prompts. Mar 19 11:43:39.565340 systemd[1]: Reached target multi-user.target - Multi-User System. Mar 19 11:43:39.566195 systemd[1]: Startup finished in 521ms (kernel) + 4.768s (initrd) + 4.137s (userspace) = 9.426s. Mar 19 11:43:43.746950 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Mar 19 11:43:43.748152 systemd[1]: Started sshd@0-10.0.0.94:22-10.0.0.1:58890.service - OpenSSH per-connection server daemon (10.0.0.1:58890). Mar 19 11:43:43.803648 sshd[1577]: Accepted publickey for core from 10.0.0.1 port 58890 ssh2: RSA SHA256:m+OHt/J3MiNhmiRtwZE4O3bs/RIw4O7lQdgYDDHmuIE Mar 19 11:43:43.805250 sshd-session[1577]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 19 11:43:43.814878 systemd-logind[1460]: New session 1 of user core. Mar 19 11:43:43.815811 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Mar 19 11:43:43.824484 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Mar 19 11:43:43.832709 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. 
Mar 19 11:43:43.834701 systemd[1]: Starting user@500.service - User Manager for UID 500... Mar 19 11:43:43.840448 (systemd)[1581]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Mar 19 11:43:43.842329 systemd-logind[1460]: New session c1 of user core. Mar 19 11:43:43.945073 systemd[1581]: Queued start job for default target default.target. Mar 19 11:43:43.953240 systemd[1581]: Created slice app.slice - User Application Slice. Mar 19 11:43:43.953298 systemd[1581]: Reached target paths.target - Paths. Mar 19 11:43:43.953338 systemd[1581]: Reached target timers.target - Timers. Mar 19 11:43:43.954559 systemd[1581]: Starting dbus.socket - D-Bus User Message Bus Socket... Mar 19 11:43:43.963719 systemd[1581]: Listening on dbus.socket - D-Bus User Message Bus Socket. Mar 19 11:43:43.963780 systemd[1581]: Reached target sockets.target - Sockets. Mar 19 11:43:43.963816 systemd[1581]: Reached target basic.target - Basic System. Mar 19 11:43:43.963848 systemd[1581]: Reached target default.target - Main User Target. Mar 19 11:43:43.963873 systemd[1581]: Startup finished in 116ms. Mar 19 11:43:43.964107 systemd[1]: Started user@500.service - User Manager for UID 500. Mar 19 11:43:43.965719 systemd[1]: Started session-1.scope - Session 1 of User core. Mar 19 11:43:44.031459 systemd[1]: Started sshd@1-10.0.0.94:22-10.0.0.1:58896.service - OpenSSH per-connection server daemon (10.0.0.1:58896). Mar 19 11:43:44.072543 sshd[1592]: Accepted publickey for core from 10.0.0.1 port 58896 ssh2: RSA SHA256:m+OHt/J3MiNhmiRtwZE4O3bs/RIw4O7lQdgYDDHmuIE Mar 19 11:43:44.073680 sshd-session[1592]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 19 11:43:44.077016 systemd-logind[1460]: New session 2 of user core. Mar 19 11:43:44.092471 systemd[1]: Started session-2.scope - Session 2 of User core. 
Mar 19 11:43:44.142282 sshd[1594]: Connection closed by 10.0.0.1 port 58896
Mar 19 11:43:44.142701 sshd-session[1592]: pam_unix(sshd:session): session closed for user core
Mar 19 11:43:44.154214 systemd[1]: Started sshd@2-10.0.0.94:22-10.0.0.1:58898.service - OpenSSH per-connection server daemon (10.0.0.1:58898).
Mar 19 11:43:44.154635 systemd[1]: sshd@1-10.0.0.94:22-10.0.0.1:58896.service: Deactivated successfully.
Mar 19 11:43:44.155840 systemd[1]: session-2.scope: Deactivated successfully.
Mar 19 11:43:44.158476 systemd-logind[1460]: Session 2 logged out. Waiting for processes to exit.
Mar 19 11:43:44.159575 systemd-logind[1460]: Removed session 2.
Mar 19 11:43:44.193771 sshd[1597]: Accepted publickey for core from 10.0.0.1 port 58898 ssh2: RSA SHA256:m+OHt/J3MiNhmiRtwZE4O3bs/RIw4O7lQdgYDDHmuIE
Mar 19 11:43:44.194865 sshd-session[1597]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 19 11:43:44.198310 systemd-logind[1460]: New session 3 of user core.
Mar 19 11:43:44.209446 systemd[1]: Started session-3.scope - Session 3 of User core.
Mar 19 11:43:44.256971 sshd[1602]: Connection closed by 10.0.0.1 port 58898
Mar 19 11:43:44.257349 sshd-session[1597]: pam_unix(sshd:session): session closed for user core
Mar 19 11:43:44.266137 systemd[1]: sshd@2-10.0.0.94:22-10.0.0.1:58898.service: Deactivated successfully.
Mar 19 11:43:44.267418 systemd[1]: session-3.scope: Deactivated successfully.
Mar 19 11:43:44.268926 systemd-logind[1460]: Session 3 logged out. Waiting for processes to exit.
Mar 19 11:43:44.285484 systemd[1]: Started sshd@3-10.0.0.94:22-10.0.0.1:58912.service - OpenSSH per-connection server daemon (10.0.0.1:58912).
Mar 19 11:43:44.286409 systemd-logind[1460]: Removed session 3.
Mar 19 11:43:44.322258 sshd[1607]: Accepted publickey for core from 10.0.0.1 port 58912 ssh2: RSA SHA256:m+OHt/J3MiNhmiRtwZE4O3bs/RIw4O7lQdgYDDHmuIE
Mar 19 11:43:44.323373 sshd-session[1607]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 19 11:43:44.327139 systemd-logind[1460]: New session 4 of user core.
Mar 19 11:43:44.338382 systemd[1]: Started session-4.scope - Session 4 of User core.
Mar 19 11:43:44.388835 sshd[1610]: Connection closed by 10.0.0.1 port 58912
Mar 19 11:43:44.389241 sshd-session[1607]: pam_unix(sshd:session): session closed for user core
Mar 19 11:43:44.406306 systemd[1]: sshd@3-10.0.0.94:22-10.0.0.1:58912.service: Deactivated successfully.
Mar 19 11:43:44.408267 systemd[1]: session-4.scope: Deactivated successfully.
Mar 19 11:43:44.408974 systemd-logind[1460]: Session 4 logged out. Waiting for processes to exit.
Mar 19 11:43:44.424614 systemd[1]: Started sshd@4-10.0.0.94:22-10.0.0.1:58924.service - OpenSSH per-connection server daemon (10.0.0.1:58924).
Mar 19 11:43:44.425524 systemd-logind[1460]: Removed session 4.
Mar 19 11:43:44.461057 sshd[1615]: Accepted publickey for core from 10.0.0.1 port 58924 ssh2: RSA SHA256:m+OHt/J3MiNhmiRtwZE4O3bs/RIw4O7lQdgYDDHmuIE
Mar 19 11:43:44.462082 sshd-session[1615]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 19 11:43:44.465958 systemd-logind[1460]: New session 5 of user core.
Mar 19 11:43:44.485401 systemd[1]: Started session-5.scope - Session 5 of User core.
Mar 19 11:43:44.548993 sudo[1619]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Mar 19 11:43:44.549560 sudo[1619]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Mar 19 11:43:44.563108 sudo[1619]: pam_unix(sudo:session): session closed for user root
Mar 19 11:43:44.564483 sshd[1618]: Connection closed by 10.0.0.1 port 58924
Mar 19 11:43:44.564896 sshd-session[1615]: pam_unix(sshd:session): session closed for user core
Mar 19 11:43:44.578327 systemd[1]: sshd@4-10.0.0.94:22-10.0.0.1:58924.service: Deactivated successfully.
Mar 19 11:43:44.579655 systemd[1]: session-5.scope: Deactivated successfully.
Mar 19 11:43:44.580359 systemd-logind[1460]: Session 5 logged out. Waiting for processes to exit.
Mar 19 11:43:44.590627 systemd[1]: Started sshd@5-10.0.0.94:22-10.0.0.1:58936.service - OpenSSH per-connection server daemon (10.0.0.1:58936).
Mar 19 11:43:44.591921 systemd-logind[1460]: Removed session 5.
Mar 19 11:43:44.628570 sshd[1624]: Accepted publickey for core from 10.0.0.1 port 58936 ssh2: RSA SHA256:m+OHt/J3MiNhmiRtwZE4O3bs/RIw4O7lQdgYDDHmuIE
Mar 19 11:43:44.629706 sshd-session[1624]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 19 11:43:44.633828 systemd-logind[1460]: New session 6 of user core.
Mar 19 11:43:44.644398 systemd[1]: Started session-6.scope - Session 6 of User core.
Mar 19 11:43:44.695614 sudo[1629]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Mar 19 11:43:44.695891 sudo[1629]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Mar 19 11:43:44.698703 sudo[1629]: pam_unix(sudo:session): session closed for user root
Mar 19 11:43:44.703223 sudo[1628]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
Mar 19 11:43:44.703747 sudo[1628]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Mar 19 11:43:44.718713 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Mar 19 11:43:44.740550 augenrules[1651]: No rules
Mar 19 11:43:44.741562 systemd[1]: audit-rules.service: Deactivated successfully.
Mar 19 11:43:44.742338 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Mar 19 11:43:44.743181 sudo[1628]: pam_unix(sudo:session): session closed for user root
Mar 19 11:43:44.744305 sshd[1627]: Connection closed by 10.0.0.1 port 58936
Mar 19 11:43:44.744678 sshd-session[1624]: pam_unix(sshd:session): session closed for user core
Mar 19 11:43:44.754306 systemd[1]: sshd@5-10.0.0.94:22-10.0.0.1:58936.service: Deactivated successfully.
Mar 19 11:43:44.755854 systemd[1]: session-6.scope: Deactivated successfully.
Mar 19 11:43:44.757135 systemd-logind[1460]: Session 6 logged out. Waiting for processes to exit.
Mar 19 11:43:44.766621 systemd[1]: Started sshd@6-10.0.0.94:22-10.0.0.1:58948.service - OpenSSH per-connection server daemon (10.0.0.1:58948).
Mar 19 11:43:44.767578 systemd-logind[1460]: Removed session 6.
Mar 19 11:43:44.803453 sshd[1659]: Accepted publickey for core from 10.0.0.1 port 58948 ssh2: RSA SHA256:m+OHt/J3MiNhmiRtwZE4O3bs/RIw4O7lQdgYDDHmuIE
Mar 19 11:43:44.804755 sshd-session[1659]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 19 11:43:44.809085 systemd-logind[1460]: New session 7 of user core.
Mar 19 11:43:44.819458 systemd[1]: Started session-7.scope - Session 7 of User core.
Mar 19 11:43:44.870060 sudo[1664]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Mar 19 11:43:44.870646 sudo[1664]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Mar 19 11:43:45.217601 (dockerd)[1684]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Mar 19 11:43:45.217727 systemd[1]: Starting docker.service - Docker Application Container Engine...
Mar 19 11:43:45.482951 dockerd[1684]: time="2025-03-19T11:43:45.482829475Z" level=info msg="Starting up"
Mar 19 11:43:45.626103 dockerd[1684]: time="2025-03-19T11:43:45.625839808Z" level=info msg="Loading containers: start."
Mar 19 11:43:45.756300 kernel: Initializing XFRM netlink socket
Mar 19 11:43:45.813248 systemd-networkd[1398]: docker0: Link UP
Mar 19 11:43:45.853342 dockerd[1684]: time="2025-03-19T11:43:45.853294039Z" level=info msg="Loading containers: done."
Mar 19 11:43:45.866142 dockerd[1684]: time="2025-03-19T11:43:45.866093598Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Mar 19 11:43:45.866304 dockerd[1684]: time="2025-03-19T11:43:45.866177998Z" level=info msg="Docker daemon" commit=41ca978a0a5400cc24b274137efa9f25517fcc0b containerd-snapshotter=false storage-driver=overlay2 version=27.3.1
Mar 19 11:43:45.866401 dockerd[1684]: time="2025-03-19T11:43:45.866370411Z" level=info msg="Daemon has completed initialization"
Mar 19 11:43:45.892036 dockerd[1684]: time="2025-03-19T11:43:45.891981677Z" level=info msg="API listen on /run/docker.sock"
Mar 19 11:43:45.892139 systemd[1]: Started docker.service - Docker Application Container Engine.
Mar 19 11:43:46.479512 containerd[1476]: time="2025-03-19T11:43:46.479460424Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.7\""
Mar 19 11:43:47.044351 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1355719417.mount: Deactivated successfully.
Mar 19 11:43:48.004414 containerd[1476]: time="2025-03-19T11:43:48.004320075Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.7: active requests=0, bytes read=25552768"
Mar 19 11:43:48.004736 containerd[1476]: time="2025-03-19T11:43:48.004431228Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 19 11:43:48.007320 containerd[1476]: time="2025-03-19T11:43:48.007287030Z" level=info msg="ImageCreate event name:\"sha256:26ae5fde2308729bfda71fa20aa73cb5a1a4490f107f62dc7e1c4c49823cc084\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 19 11:43:48.008628 containerd[1476]: time="2025-03-19T11:43:48.008514860Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.7\" with image id \"sha256:26ae5fde2308729bfda71fa20aa73cb5a1a4490f107f62dc7e1c4c49823cc084\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.7\", repo digest \"registry.k8s.io/kube-apiserver@sha256:22c19cc70fe5806d0a2cb28a6b6b33fd34e6f9e50616bdf6d53649bcfafbc277\", size \"25549566\" in 1.529010669s"
Mar 19 11:43:48.008628 containerd[1476]: time="2025-03-19T11:43:48.008554902Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.7\" returns image reference \"sha256:26ae5fde2308729bfda71fa20aa73cb5a1a4490f107f62dc7e1c4c49823cc084\""
Mar 19 11:43:48.009233 containerd[1476]: time="2025-03-19T11:43:48.009211838Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.7\""
Mar 19 11:43:48.009526 containerd[1476]: time="2025-03-19T11:43:48.009502718Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:22c19cc70fe5806d0a2cb28a6b6b33fd34e6f9e50616bdf6d53649bcfafbc277\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 19 11:43:49.159378 containerd[1476]: time="2025-03-19T11:43:49.159314015Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 19 11:43:49.160150 containerd[1476]: time="2025-03-19T11:43:49.160110677Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.7: active requests=0, bytes read=22458980"
Mar 19 11:43:49.160739 containerd[1476]: time="2025-03-19T11:43:49.160707479Z" level=info msg="ImageCreate event name:\"sha256:3f2886c2c7c101461e78c37591f8beb12ac073f8dcf5e32c95da9e9689d0c1d3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 19 11:43:49.164735 containerd[1476]: time="2025-03-19T11:43:49.164696699Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:6abe7a0accecf29db6ebab18a10f844678ffed693d79e2e51a18a6f2b4530cbb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 19 11:43:49.165582 containerd[1476]: time="2025-03-19T11:43:49.165415508Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.7\" with image id \"sha256:3f2886c2c7c101461e78c37591f8beb12ac073f8dcf5e32c95da9e9689d0c1d3\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.7\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:6abe7a0accecf29db6ebab18a10f844678ffed693d79e2e51a18a6f2b4530cbb\", size \"23899774\" in 1.156175385s"
Mar 19 11:43:49.165582 containerd[1476]: time="2025-03-19T11:43:49.165447035Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.7\" returns image reference \"sha256:3f2886c2c7c101461e78c37591f8beb12ac073f8dcf5e32c95da9e9689d0c1d3\""
Mar 19 11:43:49.165887 containerd[1476]: time="2025-03-19T11:43:49.165830708Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.7\""
Mar 19 11:43:49.438018 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Mar 19 11:43:49.448449 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 19 11:43:49.544732 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 19 11:43:49.548198 (kubelet)[1945]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 19 11:43:49.586307 kubelet[1945]: E0319 11:43:49.586232 1945 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 19 11:43:49.589433 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 19 11:43:49.589583 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 19 11:43:49.590053 systemd[1]: kubelet.service: Consumed 128ms CPU time, 97.3M memory peak.
Mar 19 11:43:50.280179 containerd[1476]: time="2025-03-19T11:43:50.279983683Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 19 11:43:50.280804 containerd[1476]: time="2025-03-19T11:43:50.280755785Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.7: active requests=0, bytes read=17125831"
Mar 19 11:43:50.284346 containerd[1476]: time="2025-03-19T11:43:50.284308188Z" level=info msg="ImageCreate event name:\"sha256:3dd474fdc8c0d007008dd47bafecdd344fbdace928731ae8b09f58f633f4a30f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 19 11:43:50.289353 containerd[1476]: time="2025-03-19T11:43:50.289307986Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:fb80249bcb77ee72b1c9fa5b70bc28a83ed107c9ca71957841ad91db379963bf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 19 11:43:50.290495 containerd[1476]: time="2025-03-19T11:43:50.290457319Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.7\" with image id \"sha256:3dd474fdc8c0d007008dd47bafecdd344fbdace928731ae8b09f58f633f4a30f\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.7\", repo digest \"registry.k8s.io/kube-scheduler@sha256:fb80249bcb77ee72b1c9fa5b70bc28a83ed107c9ca71957841ad91db379963bf\", size \"18566643\" in 1.124592476s"
Mar 19 11:43:50.290537 containerd[1476]: time="2025-03-19T11:43:50.290491678Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.7\" returns image reference \"sha256:3dd474fdc8c0d007008dd47bafecdd344fbdace928731ae8b09f58f633f4a30f\""
Mar 19 11:43:50.290952 containerd[1476]: time="2025-03-19T11:43:50.290926657Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.7\""
Mar 19 11:43:51.239503 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1445192304.mount: Deactivated successfully.
Mar 19 11:43:51.516019 containerd[1476]: time="2025-03-19T11:43:51.515880937Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 19 11:43:51.516726 containerd[1476]: time="2025-03-19T11:43:51.516675846Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.7: active requests=0, bytes read=26871917"
Mar 19 11:43:51.517234 containerd[1476]: time="2025-03-19T11:43:51.517200577Z" level=info msg="ImageCreate event name:\"sha256:939054a0dc9c7c1596b061fc2380758139ce62751b44a0b21b3afc7abd7eb3ff\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 19 11:43:51.519209 containerd[1476]: time="2025-03-19T11:43:51.519160659Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:e5839270c96c3ad1bea1dce4935126d3281297527f3655408d2970aa4b5cf178\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 19 11:43:51.520000 containerd[1476]: time="2025-03-19T11:43:51.519953319Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.7\" with image id \"sha256:939054a0dc9c7c1596b061fc2380758139ce62751b44a0b21b3afc7abd7eb3ff\", repo tag \"registry.k8s.io/kube-proxy:v1.31.7\", repo digest \"registry.k8s.io/kube-proxy@sha256:e5839270c96c3ad1bea1dce4935126d3281297527f3655408d2970aa4b5cf178\", size \"26870934\" in 1.228989857s"
Mar 19 11:43:51.520000 containerd[1476]: time="2025-03-19T11:43:51.519992358Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.7\" returns image reference \"sha256:939054a0dc9c7c1596b061fc2380758139ce62751b44a0b21b3afc7abd7eb3ff\""
Mar 19 11:43:51.520669 containerd[1476]: time="2025-03-19T11:43:51.520638222Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\""
Mar 19 11:43:52.068885 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3140048247.mount: Deactivated successfully.
Mar 19 11:43:52.647333 containerd[1476]: time="2025-03-19T11:43:52.647273418Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 19 11:43:52.647736 containerd[1476]: time="2025-03-19T11:43:52.647690220Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=16485383"
Mar 19 11:43:52.648740 containerd[1476]: time="2025-03-19T11:43:52.648706715Z" level=info msg="ImageCreate event name:\"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 19 11:43:52.653605 containerd[1476]: time="2025-03-19T11:43:52.653572620Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 19 11:43:52.654782 containerd[1476]: time="2025-03-19T11:43:52.654723071Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"16482581\" in 1.134050113s"
Mar 19 11:43:52.654782 containerd[1476]: time="2025-03-19T11:43:52.654763414Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\""
Mar 19 11:43:52.655222 containerd[1476]: time="2025-03-19T11:43:52.655196434Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Mar 19 11:43:53.092766 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4109001043.mount: Deactivated successfully.
Mar 19 11:43:53.096339 containerd[1476]: time="2025-03-19T11:43:53.096292278Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 19 11:43:53.097061 containerd[1476]: time="2025-03-19T11:43:53.097016332Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268705"
Mar 19 11:43:53.097669 containerd[1476]: time="2025-03-19T11:43:53.097636061Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 19 11:43:53.100234 containerd[1476]: time="2025-03-19T11:43:53.100186963Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 19 11:43:53.100846 containerd[1476]: time="2025-03-19T11:43:53.100731698Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 445.505883ms"
Mar 19 11:43:53.100846 containerd[1476]: time="2025-03-19T11:43:53.100756696Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\""
Mar 19 11:43:53.101189 containerd[1476]: time="2025-03-19T11:43:53.101170143Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\""
Mar 19 11:43:53.614294 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount157594389.mount: Deactivated successfully.
Mar 19 11:43:55.181925 containerd[1476]: time="2025-03-19T11:43:55.181862752Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 19 11:43:55.182402 containerd[1476]: time="2025-03-19T11:43:55.182359697Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=66406427"
Mar 19 11:43:55.183508 containerd[1476]: time="2025-03-19T11:43:55.183474797Z" level=info msg="ImageCreate event name:\"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 19 11:43:55.186768 containerd[1476]: time="2025-03-19T11:43:55.186713040Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 19 11:43:55.188092 containerd[1476]: time="2025-03-19T11:43:55.188069917Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"66535646\" in 2.086800912s"
Mar 19 11:43:55.188163 containerd[1476]: time="2025-03-19T11:43:55.188097463Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\""
Mar 19 11:43:59.688878 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Mar 19 11:43:59.698595 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 19 11:43:59.821107 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 19 11:43:59.824375 (kubelet)[2099]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 19 11:43:59.855558 kubelet[2099]: E0319 11:43:59.855504 2099 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 19 11:43:59.857922 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 19 11:43:59.858071 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 19 11:43:59.858376 systemd[1]: kubelet.service: Consumed 114ms CPU time, 96.4M memory peak.
Mar 19 11:44:00.156161 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 19 11:44:00.156309 systemd[1]: kubelet.service: Consumed 114ms CPU time, 96.4M memory peak.
Mar 19 11:44:00.169457 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 19 11:44:00.189738 systemd[1]: Reload requested from client PID 2114 ('systemctl') (unit session-7.scope)...
Mar 19 11:44:00.189756 systemd[1]: Reloading...
Mar 19 11:44:00.259346 zram_generator::config[2161]: No configuration found.
Mar 19 11:44:00.343528 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Mar 19 11:44:00.417221 systemd[1]: Reloading finished in 227 ms.
Mar 19 11:44:00.457343 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 19 11:44:00.459688 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 19 11:44:00.461122 systemd[1]: kubelet.service: Deactivated successfully.
Mar 19 11:44:00.461359 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 19 11:44:00.461396 systemd[1]: kubelet.service: Consumed 76ms CPU time, 82.4M memory peak.
Mar 19 11:44:00.462997 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 19 11:44:00.560036 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 19 11:44:00.564564 (kubelet)[2205]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Mar 19 11:44:00.602970 kubelet[2205]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 19 11:44:00.602970 kubelet[2205]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Mar 19 11:44:00.602970 kubelet[2205]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 19 11:44:00.603320 kubelet[2205]: I0319 11:44:00.603145 2205 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Mar 19 11:44:02.078103 kubelet[2205]: I0319 11:44:02.078041 2205 server.go:486] "Kubelet version" kubeletVersion="v1.31.0"
Mar 19 11:44:02.078103 kubelet[2205]: I0319 11:44:02.078089 2205 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Mar 19 11:44:02.078529 kubelet[2205]: I0319 11:44:02.078346 2205 server.go:929] "Client rotation is on, will bootstrap in background"
Mar 19 11:44:02.119158 kubelet[2205]: E0319 11:44:02.119084 2205 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.94:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.94:6443: connect: connection refused" logger="UnhandledError"
Mar 19 11:44:02.120096 kubelet[2205]: I0319 11:44:02.119877 2205 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Mar 19 11:44:02.128155 kubelet[2205]: E0319 11:44:02.128104 2205 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Mar 19 11:44:02.128155 kubelet[2205]: I0319 11:44:02.128149 2205 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Mar 19 11:44:02.131790 kubelet[2205]: I0319 11:44:02.131766 2205 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Mar 19 11:44:02.132087 kubelet[2205]: I0319 11:44:02.132065 2205 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Mar 19 11:44:02.132231 kubelet[2205]: I0319 11:44:02.132189 2205 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Mar 19 11:44:02.132409 kubelet[2205]: I0319 11:44:02.132223 2205 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Mar 19 11:44:02.132501 kubelet[2205]: I0319 11:44:02.132415 2205 topology_manager.go:138] "Creating topology manager with none policy"
Mar 19 11:44:02.132501 kubelet[2205]: I0319 11:44:02.132426 2205 container_manager_linux.go:300] "Creating device plugin manager"
Mar 19 11:44:02.132640 kubelet[2205]: I0319 11:44:02.132615 2205 state_mem.go:36] "Initialized new in-memory state store"
Mar 19 11:44:02.134150 kubelet[2205]: I0319 11:44:02.134118 2205 kubelet.go:408] "Attempting to sync node with API server"
Mar 19 11:44:02.134150 kubelet[2205]: I0319 11:44:02.134149 2205 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests"
Mar 19 11:44:02.134815 kubelet[2205]: I0319 11:44:02.134485 2205 kubelet.go:314] "Adding apiserver pod source"
Mar 19 11:44:02.134815 kubelet[2205]: I0319 11:44:02.134504 2205 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Mar 19 11:44:02.135945 kubelet[2205]: W0319 11:44:02.135831 2205 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.94:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.94:6443: connect: connection refused
Mar 19 11:44:02.135945 kubelet[2205]: E0319 11:44:02.135895 2205 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.94:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.94:6443: connect: connection refused" logger="UnhandledError"
Mar 19 11:44:02.136844 kubelet[2205]: I0319 11:44:02.136766 2205 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1"
Mar 19 11:44:02.136965 kubelet[2205]: W0319 11:44:02.136836 2205 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.94:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.94:6443: connect: connection refused
Mar 19 11:44:02.136965 kubelet[2205]: E0319 11:44:02.136882 2205 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.94:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.94:6443: connect: connection refused" logger="UnhandledError"
Mar 19 11:44:02.138970 kubelet[2205]: I0319 11:44:02.138903 2205 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Mar 19 11:44:02.142095 kubelet[2205]: W0319 11:44:02.142066 2205 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Mar 19 11:44:02.146499 kubelet[2205]: I0319 11:44:02.146362 2205 server.go:1269] "Started kubelet"
Mar 19 11:44:02.147087 kubelet[2205]: I0319 11:44:02.146601 2205 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Mar 19 11:44:02.147087 kubelet[2205]: I0319 11:44:02.146724 2205 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Mar 19 11:44:02.147597 kubelet[2205]: I0319 11:44:02.147573 2205 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Mar 19 11:44:02.148387 kubelet[2205]: I0319 11:44:02.148360 2205 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Mar 19 11:44:02.148554 kubelet[2205]: I0319 11:44:02.148515 2205 server.go:460] "Adding debug handlers to kubelet server"
Mar 19 11:44:02.149586 kubelet[2205]: I0319 11:44:02.149561 2205 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Mar 19 11:44:02.150334 kubelet[2205]: E0319
11:44:02.148781 2205 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.94:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.94:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.182e3199b93a66e6 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-03-19 11:44:02.146330342 +0000 UTC m=+1.578759221,LastTimestamp:2025-03-19 11:44:02.146330342 +0000 UTC m=+1.578759221,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Mar 19 11:44:02.150885 kubelet[2205]: I0319 11:44:02.150856 2205 volume_manager.go:289] "Starting Kubelet Volume Manager" Mar 19 11:44:02.150951 kubelet[2205]: E0319 11:44:02.150928 2205 kubelet.go:1478] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Mar 19 11:44:02.150977 kubelet[2205]: I0319 11:44:02.150969 2205 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Mar 19 11:44:02.151256 kubelet[2205]: I0319 11:44:02.151095 2205 reconciler.go:26] "Reconciler: start to sync state" Mar 19 11:44:02.151323 kubelet[2205]: E0319 11:44:02.151277 2205 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 19 11:44:02.151505 kubelet[2205]: E0319 11:44:02.151456 2205 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.94:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.94:6443: connect: connection refused" interval="200ms" Mar 19 11:44:02.151505 kubelet[2205]: I0319 11:44:02.151490 2205 factory.go:221] Registration of the systemd container factory successfully Mar 19 11:44:02.151571 kubelet[2205]: I0319 11:44:02.151559 2205 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Mar 19 11:44:02.152230 kubelet[2205]: W0319 11:44:02.152092 2205 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.94:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.94:6443: connect: connection refused Mar 19 11:44:02.152511 kubelet[2205]: E0319 11:44:02.152445 2205 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.94:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.94:6443: connect: connection refused" logger="UnhandledError" Mar 19 11:44:02.152591 kubelet[2205]: I0319 11:44:02.152571 2205 
factory.go:221] Registration of the containerd container factory successfully Mar 19 11:44:02.164062 kubelet[2205]: I0319 11:44:02.163981 2205 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Mar 19 11:44:02.165231 kubelet[2205]: I0319 11:44:02.165204 2205 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Mar 19 11:44:02.165231 kubelet[2205]: I0319 11:44:02.165229 2205 status_manager.go:217] "Starting to sync pod status with apiserver" Mar 19 11:44:02.165424 kubelet[2205]: I0319 11:44:02.165325 2205 kubelet.go:2321] "Starting kubelet main sync loop" Mar 19 11:44:02.165424 kubelet[2205]: E0319 11:44:02.165364 2205 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Mar 19 11:44:02.166136 kubelet[2205]: I0319 11:44:02.165931 2205 cpu_manager.go:214] "Starting CPU manager" policy="none" Mar 19 11:44:02.166136 kubelet[2205]: I0319 11:44:02.165948 2205 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Mar 19 11:44:02.166136 kubelet[2205]: I0319 11:44:02.165965 2205 state_mem.go:36] "Initialized new in-memory state store" Mar 19 11:44:02.166136 kubelet[2205]: W0319 11:44:02.165959 2205 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.94:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.94:6443: connect: connection refused Mar 19 11:44:02.166136 kubelet[2205]: E0319 11:44:02.166013 2205 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.94:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.94:6443: connect: connection refused" logger="UnhandledError" Mar 19 11:44:02.251559 kubelet[2205]: E0319 11:44:02.251520 2205 kubelet_node_status.go:453] "Error 
getting the current node from lister" err="node \"localhost\" not found" Mar 19 11:44:02.265822 kubelet[2205]: E0319 11:44:02.265790 2205 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Mar 19 11:44:02.352450 kubelet[2205]: E0319 11:44:02.352370 2205 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 19 11:44:02.353200 kubelet[2205]: E0319 11:44:02.352610 2205 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.94:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.94:6443: connect: connection refused" interval="400ms" Mar 19 11:44:02.370203 kubelet[2205]: I0319 11:44:02.370161 2205 policy_none.go:49] "None policy: Start" Mar 19 11:44:02.370988 kubelet[2205]: I0319 11:44:02.370956 2205 memory_manager.go:170] "Starting memorymanager" policy="None" Mar 19 11:44:02.370988 kubelet[2205]: I0319 11:44:02.370981 2205 state_mem.go:35] "Initializing new in-memory state store" Mar 19 11:44:02.377755 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Mar 19 11:44:02.392842 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Mar 19 11:44:02.404648 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. 
Mar 19 11:44:02.406040 kubelet[2205]: I0319 11:44:02.405985 2205 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Mar 19 11:44:02.406693 kubelet[2205]: I0319 11:44:02.406178 2205 eviction_manager.go:189] "Eviction manager: starting control loop" Mar 19 11:44:02.406693 kubelet[2205]: I0319 11:44:02.406196 2205 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 19 11:44:02.406693 kubelet[2205]: I0319 11:44:02.406479 2205 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Mar 19 11:44:02.407855 kubelet[2205]: E0319 11:44:02.407832 2205 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Mar 19 11:44:02.474098 systemd[1]: Created slice kubepods-burstable-podc1dd749d8488daed43bbe081d7679478.slice - libcontainer container kubepods-burstable-podc1dd749d8488daed43bbe081d7679478.slice. Mar 19 11:44:02.486375 systemd[1]: Created slice kubepods-burstable-pod60762308083b5ef6c837b1be48ec53d6.slice - libcontainer container kubepods-burstable-pod60762308083b5ef6c837b1be48ec53d6.slice. Mar 19 11:44:02.501597 systemd[1]: Created slice kubepods-burstable-pod6f32907a07e55aea05abdc5cd284a8d5.slice - libcontainer container kubepods-burstable-pod6f32907a07e55aea05abdc5cd284a8d5.slice. 
Mar 19 11:44:02.508227 kubelet[2205]: I0319 11:44:02.508163 2205 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Mar 19 11:44:02.508624 kubelet[2205]: E0319 11:44:02.508583 2205 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.94:6443/api/v1/nodes\": dial tcp 10.0.0.94:6443: connect: connection refused" node="localhost" Mar 19 11:44:02.552966 kubelet[2205]: I0319 11:44:02.552932 2205 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/60762308083b5ef6c837b1be48ec53d6-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"60762308083b5ef6c837b1be48ec53d6\") " pod="kube-system/kube-controller-manager-localhost" Mar 19 11:44:02.553047 kubelet[2205]: I0319 11:44:02.552983 2205 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/60762308083b5ef6c837b1be48ec53d6-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"60762308083b5ef6c837b1be48ec53d6\") " pod="kube-system/kube-controller-manager-localhost" Mar 19 11:44:02.553047 kubelet[2205]: I0319 11:44:02.553004 2205 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/60762308083b5ef6c837b1be48ec53d6-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"60762308083b5ef6c837b1be48ec53d6\") " pod="kube-system/kube-controller-manager-localhost" Mar 19 11:44:02.553047 kubelet[2205]: I0319 11:44:02.553021 2205 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c1dd749d8488daed43bbe081d7679478-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"c1dd749d8488daed43bbe081d7679478\") " pod="kube-system/kube-apiserver-localhost" Mar 19 
11:44:02.553047 kubelet[2205]: I0319 11:44:02.553037 2205 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c1dd749d8488daed43bbe081d7679478-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"c1dd749d8488daed43bbe081d7679478\") " pod="kube-system/kube-apiserver-localhost" Mar 19 11:44:02.553128 kubelet[2205]: I0319 11:44:02.553052 2205 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/60762308083b5ef6c837b1be48ec53d6-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"60762308083b5ef6c837b1be48ec53d6\") " pod="kube-system/kube-controller-manager-localhost" Mar 19 11:44:02.553128 kubelet[2205]: I0319 11:44:02.553068 2205 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/60762308083b5ef6c837b1be48ec53d6-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"60762308083b5ef6c837b1be48ec53d6\") " pod="kube-system/kube-controller-manager-localhost" Mar 19 11:44:02.553128 kubelet[2205]: I0319 11:44:02.553083 2205 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6f32907a07e55aea05abdc5cd284a8d5-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"6f32907a07e55aea05abdc5cd284a8d5\") " pod="kube-system/kube-scheduler-localhost" Mar 19 11:44:02.553128 kubelet[2205]: I0319 11:44:02.553098 2205 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c1dd749d8488daed43bbe081d7679478-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"c1dd749d8488daed43bbe081d7679478\") " 
pod="kube-system/kube-apiserver-localhost" Mar 19 11:44:02.710312 kubelet[2205]: I0319 11:44:02.710286 2205 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Mar 19 11:44:02.710638 kubelet[2205]: E0319 11:44:02.710606 2205 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.94:6443/api/v1/nodes\": dial tcp 10.0.0.94:6443: connect: connection refused" node="localhost" Mar 19 11:44:02.754221 kubelet[2205]: E0319 11:44:02.754177 2205 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.94:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.94:6443: connect: connection refused" interval="800ms" Mar 19 11:44:02.786251 containerd[1476]: time="2025-03-19T11:44:02.786207319Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:c1dd749d8488daed43bbe081d7679478,Namespace:kube-system,Attempt:0,}" Mar 19 11:44:02.800655 containerd[1476]: time="2025-03-19T11:44:02.800614233Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:60762308083b5ef6c837b1be48ec53d6,Namespace:kube-system,Attempt:0,}" Mar 19 11:44:02.804262 containerd[1476]: time="2025-03-19T11:44:02.804150750Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:6f32907a07e55aea05abdc5cd284a8d5,Namespace:kube-system,Attempt:0,}" Mar 19 11:44:03.112359 kubelet[2205]: I0319 11:44:03.111922 2205 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Mar 19 11:44:03.112359 kubelet[2205]: E0319 11:44:03.112287 2205 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.94:6443/api/v1/nodes\": dial tcp 10.0.0.94:6443: connect: connection refused" node="localhost" Mar 19 11:44:03.192355 kubelet[2205]: W0319 11:44:03.192287 2205 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed 
to list *v1.Service: Get "https://10.0.0.94:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.94:6443: connect: connection refused Mar 19 11:44:03.192355 kubelet[2205]: E0319 11:44:03.192352 2205 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.94:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.94:6443: connect: connection refused" logger="UnhandledError" Mar 19 11:44:03.203019 kubelet[2205]: W0319 11:44:03.202961 2205 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.94:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.94:6443: connect: connection refused Mar 19 11:44:03.203080 kubelet[2205]: E0319 11:44:03.203023 2205 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.94:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.94:6443: connect: connection refused" logger="UnhandledError" Mar 19 11:44:03.208953 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount795180344.mount: Deactivated successfully. 
Mar 19 11:44:03.212494 containerd[1476]: time="2025-03-19T11:44:03.212434586Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 19 11:44:03.214868 containerd[1476]: time="2025-03-19T11:44:03.214821986Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269175" Mar 19 11:44:03.218765 containerd[1476]: time="2025-03-19T11:44:03.218685518Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 19 11:44:03.220020 containerd[1476]: time="2025-03-19T11:44:03.219988067Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 19 11:44:03.220555 containerd[1476]: time="2025-03-19T11:44:03.220510336Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Mar 19 11:44:03.221079 containerd[1476]: time="2025-03-19T11:44:03.221054983Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 19 11:44:03.221608 containerd[1476]: time="2025-03-19T11:44:03.221572087Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Mar 19 11:44:03.223773 containerd[1476]: time="2025-03-19T11:44:03.223705518Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 19 11:44:03.224985 
containerd[1476]: time="2025-03-19T11:44:03.224955825Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 438.656221ms" Mar 19 11:44:03.231443 containerd[1476]: time="2025-03-19T11:44:03.231177332Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 426.96957ms" Mar 19 11:44:03.236312 containerd[1476]: time="2025-03-19T11:44:03.235986399Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 435.307908ms" Mar 19 11:44:03.360160 containerd[1476]: time="2025-03-19T11:44:03.360072017Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 19 11:44:03.360160 containerd[1476]: time="2025-03-19T11:44:03.360137270Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 19 11:44:03.360596 containerd[1476]: time="2025-03-19T11:44:03.360151602Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 19 11:44:03.362296 containerd[1476]: time="2025-03-19T11:44:03.362221741Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 19 11:44:03.363093 containerd[1476]: time="2025-03-19T11:44:03.362966392Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 19 11:44:03.363093 containerd[1476]: time="2025-03-19T11:44:03.363017995Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 19 11:44:03.363093 containerd[1476]: time="2025-03-19T11:44:03.363033968Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 19 11:44:03.363964 containerd[1476]: time="2025-03-19T11:44:03.363876419Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 19 11:44:03.363964 containerd[1476]: time="2025-03-19T11:44:03.363924259Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 19 11:44:03.363964 containerd[1476]: time="2025-03-19T11:44:03.363935548Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 19 11:44:03.364039 containerd[1476]: time="2025-03-19T11:44:03.364001922Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 19 11:44:03.364547 containerd[1476]: time="2025-03-19T11:44:03.364359816Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 19 11:44:03.389436 systemd[1]: Started cri-containerd-4e9c2b0626ccc413ce916137010a1a4867491f3c05cbcecc2a0c749c1a0d6654.scope - libcontainer container 4e9c2b0626ccc413ce916137010a1a4867491f3c05cbcecc2a0c749c1a0d6654. 
Mar 19 11:44:03.390639 systemd[1]: Started cri-containerd-4f9a2080453b21a2630431ca5cac0e68edf1b1af33118fcc5f6a1541875ee5bd.scope - libcontainer container 4f9a2080453b21a2630431ca5cac0e68edf1b1af33118fcc5f6a1541875ee5bd. Mar 19 11:44:03.391678 systemd[1]: Started cri-containerd-77bd7a9cb98cc30d939c2457f1e8896967dc1c7a1f8211c76b995027be573716.scope - libcontainer container 77bd7a9cb98cc30d939c2457f1e8896967dc1c7a1f8211c76b995027be573716. Mar 19 11:44:03.424699 containerd[1476]: time="2025-03-19T11:44:03.423782234Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:60762308083b5ef6c837b1be48ec53d6,Namespace:kube-system,Attempt:0,} returns sandbox id \"4f9a2080453b21a2630431ca5cac0e68edf1b1af33118fcc5f6a1541875ee5bd\"" Mar 19 11:44:03.426719 containerd[1476]: time="2025-03-19T11:44:03.426504709Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:6f32907a07e55aea05abdc5cd284a8d5,Namespace:kube-system,Attempt:0,} returns sandbox id \"4e9c2b0626ccc413ce916137010a1a4867491f3c05cbcecc2a0c749c1a0d6654\"" Mar 19 11:44:03.429889 containerd[1476]: time="2025-03-19T11:44:03.429693086Z" level=info msg="CreateContainer within sandbox \"4e9c2b0626ccc413ce916137010a1a4867491f3c05cbcecc2a0c749c1a0d6654\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Mar 19 11:44:03.429982 containerd[1476]: time="2025-03-19T11:44:03.429776554Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:c1dd749d8488daed43bbe081d7679478,Namespace:kube-system,Attempt:0,} returns sandbox id \"77bd7a9cb98cc30d939c2457f1e8896967dc1c7a1f8211c76b995027be573716\"" Mar 19 11:44:03.431028 containerd[1476]: time="2025-03-19T11:44:03.430994474Z" level=info msg="CreateContainer within sandbox \"4f9a2080453b21a2630431ca5cac0e68edf1b1af33118fcc5f6a1541875ee5bd\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Mar 19 11:44:03.433199 containerd[1476]: 
time="2025-03-19T11:44:03.433168018Z" level=info msg="CreateContainer within sandbox \"77bd7a9cb98cc30d939c2457f1e8896967dc1c7a1f8211c76b995027be573716\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Mar 19 11:44:03.445467 containerd[1476]: time="2025-03-19T11:44:03.445419115Z" level=info msg="CreateContainer within sandbox \"4e9c2b0626ccc413ce916137010a1a4867491f3c05cbcecc2a0c749c1a0d6654\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"d66b3356bfbca915f12abc8589ffe702a55b2eb050acd499546d04a7b2dd3bd0\"" Mar 19 11:44:03.448274 containerd[1476]: time="2025-03-19T11:44:03.448236347Z" level=info msg="StartContainer for \"d66b3356bfbca915f12abc8589ffe702a55b2eb050acd499546d04a7b2dd3bd0\"" Mar 19 11:44:03.450557 containerd[1476]: time="2025-03-19T11:44:03.449950594Z" level=info msg="CreateContainer within sandbox \"4f9a2080453b21a2630431ca5cac0e68edf1b1af33118fcc5f6a1541875ee5bd\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"088005bb378745644c14affaee80c8e635b0ef8bc910dcef6d8a8ca2ac1beae0\"" Mar 19 11:44:03.450650 containerd[1476]: time="2025-03-19T11:44:03.450631714Z" level=info msg="StartContainer for \"088005bb378745644c14affaee80c8e635b0ef8bc910dcef6d8a8ca2ac1beae0\"" Mar 19 11:44:03.453862 containerd[1476]: time="2025-03-19T11:44:03.453813005Z" level=info msg="CreateContainer within sandbox \"77bd7a9cb98cc30d939c2457f1e8896967dc1c7a1f8211c76b995027be573716\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"39845fff819a257397ccb71c7b63aafa24836c381134efa748645f9867b62378\"" Mar 19 11:44:03.454551 containerd[1476]: time="2025-03-19T11:44:03.454431393Z" level=info msg="StartContainer for \"39845fff819a257397ccb71c7b63aafa24836c381134efa748645f9867b62378\"" Mar 19 11:44:03.472430 systemd[1]: Started cri-containerd-d66b3356bfbca915f12abc8589ffe702a55b2eb050acd499546d04a7b2dd3bd0.scope - libcontainer container 
d66b3356bfbca915f12abc8589ffe702a55b2eb050acd499546d04a7b2dd3bd0. Mar 19 11:44:03.476310 systemd[1]: Started cri-containerd-088005bb378745644c14affaee80c8e635b0ef8bc910dcef6d8a8ca2ac1beae0.scope - libcontainer container 088005bb378745644c14affaee80c8e635b0ef8bc910dcef6d8a8ca2ac1beae0. Mar 19 11:44:03.477214 systemd[1]: Started cri-containerd-39845fff819a257397ccb71c7b63aafa24836c381134efa748645f9867b62378.scope - libcontainer container 39845fff819a257397ccb71c7b63aafa24836c381134efa748645f9867b62378. Mar 19 11:44:03.535315 containerd[1476]: time="2025-03-19T11:44:03.535273313Z" level=info msg="StartContainer for \"d66b3356bfbca915f12abc8589ffe702a55b2eb050acd499546d04a7b2dd3bd0\" returns successfully" Mar 19 11:44:03.552237 containerd[1476]: time="2025-03-19T11:44:03.551696674Z" level=info msg="StartContainer for \"39845fff819a257397ccb71c7b63aafa24836c381134efa748645f9867b62378\" returns successfully" Mar 19 11:44:03.552237 containerd[1476]: time="2025-03-19T11:44:03.551696954Z" level=info msg="StartContainer for \"088005bb378745644c14affaee80c8e635b0ef8bc910dcef6d8a8ca2ac1beae0\" returns successfully" Mar 19 11:44:03.559306 kubelet[2205]: E0319 11:44:03.555471 2205 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.94:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.94:6443: connect: connection refused" interval="1.6s" Mar 19 11:44:03.638702 kubelet[2205]: W0319 11:44:03.638016 2205 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.94:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.94:6443: connect: connection refused Mar 19 11:44:03.638702 kubelet[2205]: E0319 11:44:03.638087 2205 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get 
\"https://10.0.0.94:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.94:6443: connect: connection refused" logger="UnhandledError" Mar 19 11:44:03.914198 kubelet[2205]: I0319 11:44:03.914092 2205 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Mar 19 11:44:05.136774 kubelet[2205]: I0319 11:44:05.136734 2205 apiserver.go:52] "Watching apiserver" Mar 19 11:44:05.159272 kubelet[2205]: I0319 11:44:05.158825 2205 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Mar 19 11:44:05.219367 kubelet[2205]: E0319 11:44:05.219331 2205 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Mar 19 11:44:05.229363 kubelet[2205]: I0319 11:44:05.229333 2205 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Mar 19 11:44:07.525444 systemd[1]: Reload requested from client PID 2483 ('systemctl') (unit session-7.scope)... Mar 19 11:44:07.525461 systemd[1]: Reloading... Mar 19 11:44:07.599273 zram_generator::config[2530]: No configuration found. Mar 19 11:44:07.678492 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 19 11:44:07.763610 systemd[1]: Reloading finished in 237 ms. Mar 19 11:44:07.785803 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Mar 19 11:44:07.801409 systemd[1]: kubelet.service: Deactivated successfully. Mar 19 11:44:07.801663 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 19 11:44:07.801726 systemd[1]: kubelet.service: Consumed 1.927s CPU time, 117.5M memory peak. Mar 19 11:44:07.812479 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Mar 19 11:44:07.913033 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 19 11:44:07.918441 (kubelet)[2569]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Mar 19 11:44:07.956536 kubelet[2569]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 19 11:44:07.956536 kubelet[2569]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Mar 19 11:44:07.956536 kubelet[2569]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 19 11:44:07.956910 kubelet[2569]: I0319 11:44:07.956583 2569 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Mar 19 11:44:07.963103 kubelet[2569]: I0319 11:44:07.963067 2569 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" Mar 19 11:44:07.963103 kubelet[2569]: I0319 11:44:07.963093 2569 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Mar 19 11:44:07.963337 kubelet[2569]: I0319 11:44:07.963313 2569 server.go:929] "Client rotation is on, will bootstrap in background" Mar 19 11:44:07.964649 kubelet[2569]: I0319 11:44:07.964624 2569 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
Mar 19 11:44:07.967909 kubelet[2569]: I0319 11:44:07.967873 2569 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Mar 19 11:44:07.971233 kubelet[2569]: E0319 11:44:07.971184 2569 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Mar 19 11:44:07.971233 kubelet[2569]: I0319 11:44:07.971217 2569 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Mar 19 11:44:07.974131 kubelet[2569]: I0319 11:44:07.974031 2569 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Mar 19 11:44:07.974599 kubelet[2569]: I0319 11:44:07.974459 2569 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Mar 19 11:44:07.974599 kubelet[2569]: I0319 11:44:07.974582 2569 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Mar 19 11:44:07.974786 kubelet[2569]: I0319 11:44:07.974601 2569 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" 
nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Mar 19 11:44:07.974861 kubelet[2569]: I0319 11:44:07.974796 2569 topology_manager.go:138] "Creating topology manager with none policy" Mar 19 11:44:07.974861 kubelet[2569]: I0319 11:44:07.974805 2569 container_manager_linux.go:300] "Creating device plugin manager" Mar 19 11:44:07.974861 kubelet[2569]: I0319 11:44:07.974834 2569 state_mem.go:36] "Initialized new in-memory state store" Mar 19 11:44:07.974935 kubelet[2569]: I0319 11:44:07.974928 2569 kubelet.go:408] "Attempting 
to sync node with API server" Mar 19 11:44:07.974963 kubelet[2569]: I0319 11:44:07.974940 2569 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Mar 19 11:44:07.974963 kubelet[2569]: I0319 11:44:07.974960 2569 kubelet.go:314] "Adding apiserver pod source" Mar 19 11:44:07.975005 kubelet[2569]: I0319 11:44:07.974971 2569 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Mar 19 11:44:07.975467 kubelet[2569]: I0319 11:44:07.975437 2569 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Mar 19 11:44:07.975885 kubelet[2569]: I0319 11:44:07.975866 2569 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Mar 19 11:44:07.976281 kubelet[2569]: I0319 11:44:07.976262 2569 server.go:1269] "Started kubelet" Mar 19 11:44:07.977268 kubelet[2569]: I0319 11:44:07.977223 2569 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Mar 19 11:44:07.982367 kubelet[2569]: I0319 11:44:07.979710 2569 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Mar 19 11:44:07.988000 kubelet[2569]: I0319 11:44:07.986287 2569 server.go:460] "Adding debug handlers to kubelet server" Mar 19 11:44:07.988000 kubelet[2569]: I0319 11:44:07.987071 2569 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Mar 19 11:44:07.988000 kubelet[2569]: I0319 11:44:07.987290 2569 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Mar 19 11:44:07.988000 kubelet[2569]: I0319 11:44:07.987468 2569 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Mar 19 11:44:07.988174 kubelet[2569]: E0319 11:44:07.988101 2569 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 19 
11:44:07.988502 kubelet[2569]: I0319 11:44:07.988480 2569 volume_manager.go:289] "Starting Kubelet Volume Manager" Mar 19 11:44:07.988788 kubelet[2569]: I0319 11:44:07.988770 2569 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Mar 19 11:44:07.988979 kubelet[2569]: I0319 11:44:07.988966 2569 reconciler.go:26] "Reconciler: start to sync state" Mar 19 11:44:07.992047 kubelet[2569]: I0319 11:44:07.992015 2569 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Mar 19 11:44:07.992145 kubelet[2569]: I0319 11:44:07.992112 2569 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Mar 19 11:44:07.994234 kubelet[2569]: I0319 11:44:07.994210 2569 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Mar 19 11:44:07.994234 kubelet[2569]: I0319 11:44:07.994233 2569 status_manager.go:217] "Starting to sync pod status with apiserver" Mar 19 11:44:07.994373 kubelet[2569]: I0319 11:44:07.994305 2569 kubelet.go:2321] "Starting kubelet main sync loop" Mar 19 11:44:07.994373 kubelet[2569]: E0319 11:44:07.994343 2569 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Mar 19 11:44:07.994755 kubelet[2569]: E0319 11:44:07.994731 2569 kubelet.go:1478] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Mar 19 11:44:07.994868 kubelet[2569]: I0319 11:44:07.994849 2569 factory.go:221] Registration of the containerd container factory successfully Mar 19 11:44:07.994868 kubelet[2569]: I0319 11:44:07.994868 2569 factory.go:221] Registration of the systemd container factory successfully Mar 19 11:44:08.032779 kubelet[2569]: I0319 11:44:08.032749 2569 cpu_manager.go:214] "Starting CPU manager" policy="none" Mar 19 11:44:08.032779 kubelet[2569]: I0319 11:44:08.032770 2569 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Mar 19 11:44:08.032779 kubelet[2569]: I0319 11:44:08.032791 2569 state_mem.go:36] "Initialized new in-memory state store" Mar 19 11:44:08.032969 kubelet[2569]: I0319 11:44:08.032946 2569 state_mem.go:88] "Updated default CPUSet" cpuSet="" Mar 19 11:44:08.032999 kubelet[2569]: I0319 11:44:08.032968 2569 state_mem.go:96] "Updated CPUSet assignments" assignments={} Mar 19 11:44:08.032999 kubelet[2569]: I0319 11:44:08.032986 2569 policy_none.go:49] "None policy: Start" Mar 19 11:44:08.033550 kubelet[2569]: I0319 11:44:08.033531 2569 memory_manager.go:170] "Starting memorymanager" policy="None" Mar 19 11:44:08.033594 kubelet[2569]: I0319 11:44:08.033559 2569 state_mem.go:35] "Initializing new in-memory state store" Mar 19 11:44:08.033707 kubelet[2569]: I0319 11:44:08.033692 2569 state_mem.go:75] "Updated machine memory state" Mar 19 11:44:08.038009 kubelet[2569]: I0319 11:44:08.037890 2569 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Mar 19 11:44:08.038077 kubelet[2569]: I0319 11:44:08.038050 2569 eviction_manager.go:189] "Eviction manager: starting control loop" Mar 19 11:44:08.038098 kubelet[2569]: I0319 11:44:08.038061 2569 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 19 11:44:08.038347 kubelet[2569]: I0319 
11:44:08.038265 2569 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Mar 19 11:44:08.101554 kubelet[2569]: E0319 11:44:08.101508 2569 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Mar 19 11:44:08.139613 kubelet[2569]: I0319 11:44:08.139573 2569 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Mar 19 11:44:08.145544 kubelet[2569]: I0319 11:44:08.145518 2569 kubelet_node_status.go:111] "Node was previously registered" node="localhost" Mar 19 11:44:08.145633 kubelet[2569]: I0319 11:44:08.145589 2569 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Mar 19 11:44:08.190759 kubelet[2569]: I0319 11:44:08.190720 2569 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/60762308083b5ef6c837b1be48ec53d6-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"60762308083b5ef6c837b1be48ec53d6\") " pod="kube-system/kube-controller-manager-localhost" Mar 19 11:44:08.190840 kubelet[2569]: I0319 11:44:08.190771 2569 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6f32907a07e55aea05abdc5cd284a8d5-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"6f32907a07e55aea05abdc5cd284a8d5\") " pod="kube-system/kube-scheduler-localhost" Mar 19 11:44:08.190840 kubelet[2569]: I0319 11:44:08.190804 2569 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c1dd749d8488daed43bbe081d7679478-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"c1dd749d8488daed43bbe081d7679478\") " pod="kube-system/kube-apiserver-localhost" Mar 19 11:44:08.190840 kubelet[2569]: I0319 11:44:08.190831 2569 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c1dd749d8488daed43bbe081d7679478-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"c1dd749d8488daed43bbe081d7679478\") " pod="kube-system/kube-apiserver-localhost" Mar 19 11:44:08.190912 kubelet[2569]: I0319 11:44:08.190859 2569 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c1dd749d8488daed43bbe081d7679478-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"c1dd749d8488daed43bbe081d7679478\") " pod="kube-system/kube-apiserver-localhost" Mar 19 11:44:08.190912 kubelet[2569]: I0319 11:44:08.190888 2569 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/60762308083b5ef6c837b1be48ec53d6-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"60762308083b5ef6c837b1be48ec53d6\") " pod="kube-system/kube-controller-manager-localhost" Mar 19 11:44:08.190912 kubelet[2569]: I0319 11:44:08.190905 2569 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/60762308083b5ef6c837b1be48ec53d6-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"60762308083b5ef6c837b1be48ec53d6\") " pod="kube-system/kube-controller-manager-localhost" Mar 19 11:44:08.190974 kubelet[2569]: I0319 11:44:08.190919 2569 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/60762308083b5ef6c837b1be48ec53d6-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"60762308083b5ef6c837b1be48ec53d6\") " pod="kube-system/kube-controller-manager-localhost" Mar 19 11:44:08.190974 kubelet[2569]: I0319 11:44:08.190935 2569 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/60762308083b5ef6c837b1be48ec53d6-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"60762308083b5ef6c837b1be48ec53d6\") " pod="kube-system/kube-controller-manager-localhost" Mar 19 11:44:08.530287 sudo[2603]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Mar 19 11:44:08.530573 sudo[2603]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Mar 19 11:44:08.958276 sudo[2603]: pam_unix(sudo:session): session closed for user root Mar 19 11:44:08.977326 kubelet[2569]: I0319 11:44:08.976174 2569 apiserver.go:52] "Watching apiserver" Mar 19 11:44:08.989130 kubelet[2569]: I0319 11:44:08.989058 2569 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Mar 19 11:44:09.027617 kubelet[2569]: E0319 11:44:09.027388 2569 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Mar 19 11:44:09.042453 kubelet[2569]: I0319 11:44:09.042396 2569 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.042381605 podStartE2EDuration="1.042381605s" podCreationTimestamp="2025-03-19 11:44:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-19 11:44:09.041062279 +0000 UTC m=+1.119129723" watchObservedRunningTime="2025-03-19 11:44:09.042381605 +0000 UTC m=+1.120449089" Mar 19 11:44:09.056765 kubelet[2569]: I0319 11:44:09.056447 2569 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.056431063 podStartE2EDuration="1.056431063s" podCreationTimestamp="2025-03-19 11:44:08 +0000 
UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-19 11:44:09.048858112 +0000 UTC m=+1.126925596" watchObservedRunningTime="2025-03-19 11:44:09.056431063 +0000 UTC m=+1.134498547" Mar 19 11:44:09.066813 kubelet[2569]: I0319 11:44:09.066143 2569 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=2.066127037 podStartE2EDuration="2.066127037s" podCreationTimestamp="2025-03-19 11:44:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-19 11:44:09.057041848 +0000 UTC m=+1.135109332" watchObservedRunningTime="2025-03-19 11:44:09.066127037 +0000 UTC m=+1.144194521" Mar 19 11:44:10.877200 sudo[1664]: pam_unix(sudo:session): session closed for user root Mar 19 11:44:10.878320 sshd[1663]: Connection closed by 10.0.0.1 port 58948 Mar 19 11:44:10.878659 sshd-session[1659]: pam_unix(sshd:session): session closed for user core Mar 19 11:44:10.881913 systemd[1]: sshd@6-10.0.0.94:22-10.0.0.1:58948.service: Deactivated successfully. Mar 19 11:44:10.883760 systemd[1]: session-7.scope: Deactivated successfully. Mar 19 11:44:10.883996 systemd[1]: session-7.scope: Consumed 7.466s CPU time, 261.9M memory peak. Mar 19 11:44:10.884925 systemd-logind[1460]: Session 7 logged out. Waiting for processes to exit. Mar 19 11:44:10.885776 systemd-logind[1460]: Removed session 7. Mar 19 11:44:14.382745 kubelet[2569]: I0319 11:44:14.382661 2569 kuberuntime_manager.go:1633] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Mar 19 11:44:14.383158 containerd[1476]: time="2025-03-19T11:44:14.382998087Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Mar 19 11:44:14.383391 kubelet[2569]: I0319 11:44:14.383155 2569 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Mar 19 11:44:15.341672 kubelet[2569]: I0319 11:44:15.341642 2569 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/78fc3a44-fc2c-4a39-afe2-be8e9460c12e-xtables-lock\") pod \"kube-proxy-phbzx\" (UID: \"78fc3a44-fc2c-4a39-afe2-be8e9460c12e\") " pod="kube-system/kube-proxy-phbzx" Mar 19 11:44:15.341792 kubelet[2569]: I0319 11:44:15.341674 2569 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/78fc3a44-fc2c-4a39-afe2-be8e9460c12e-kube-proxy\") pod \"kube-proxy-phbzx\" (UID: \"78fc3a44-fc2c-4a39-afe2-be8e9460c12e\") " pod="kube-system/kube-proxy-phbzx" Mar 19 11:44:15.341792 kubelet[2569]: I0319 11:44:15.341694 2569 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/78fc3a44-fc2c-4a39-afe2-be8e9460c12e-lib-modules\") pod \"kube-proxy-phbzx\" (UID: \"78fc3a44-fc2c-4a39-afe2-be8e9460c12e\") " pod="kube-system/kube-proxy-phbzx" Mar 19 11:44:15.341792 kubelet[2569]: I0319 11:44:15.341712 2569 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5v4bt\" (UniqueName: \"kubernetes.io/projected/78fc3a44-fc2c-4a39-afe2-be8e9460c12e-kube-api-access-5v4bt\") pod \"kube-proxy-phbzx\" (UID: \"78fc3a44-fc2c-4a39-afe2-be8e9460c12e\") " pod="kube-system/kube-proxy-phbzx" Mar 19 11:44:15.342104 systemd[1]: Created slice kubepods-besteffort-pod78fc3a44_fc2c_4a39_afe2_be8e9460c12e.slice - libcontainer container kubepods-besteffort-pod78fc3a44_fc2c_4a39_afe2_be8e9460c12e.slice. 
Mar 19 11:44:15.351680 systemd[1]: Created slice kubepods-burstable-pod29f74f42_e131_48ab_8b9b_8fd9f5ae22d6.slice - libcontainer container kubepods-burstable-pod29f74f42_e131_48ab_8b9b_8fd9f5ae22d6.slice. Mar 19 11:44:15.441913 kubelet[2569]: I0319 11:44:15.441875 2569 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/29f74f42-e131-48ab-8b9b-8fd9f5ae22d6-cilium-run\") pod \"cilium-wlr4t\" (UID: \"29f74f42-e131-48ab-8b9b-8fd9f5ae22d6\") " pod="kube-system/cilium-wlr4t" Mar 19 11:44:15.441913 kubelet[2569]: I0319 11:44:15.441911 2569 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d2abdda0-cbe1-4dc3-87be-723205733bdd-cilium-config-path\") pod \"cilium-operator-5d85765b45-dbd4k\" (UID: \"d2abdda0-cbe1-4dc3-87be-723205733bdd\") " pod="kube-system/cilium-operator-5d85765b45-dbd4k" Mar 19 11:44:15.442205 kubelet[2569]: I0319 11:44:15.441957 2569 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/29f74f42-e131-48ab-8b9b-8fd9f5ae22d6-bpf-maps\") pod \"cilium-wlr4t\" (UID: \"29f74f42-e131-48ab-8b9b-8fd9f5ae22d6\") " pod="kube-system/cilium-wlr4t" Mar 19 11:44:15.442205 kubelet[2569]: I0319 11:44:15.441978 2569 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/29f74f42-e131-48ab-8b9b-8fd9f5ae22d6-host-proc-sys-kernel\") pod \"cilium-wlr4t\" (UID: \"29f74f42-e131-48ab-8b9b-8fd9f5ae22d6\") " pod="kube-system/cilium-wlr4t" Mar 19 11:44:15.442205 kubelet[2569]: I0319 11:44:15.441994 2569 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: 
\"kubernetes.io/projected/29f74f42-e131-48ab-8b9b-8fd9f5ae22d6-hubble-tls\") pod \"cilium-wlr4t\" (UID: \"29f74f42-e131-48ab-8b9b-8fd9f5ae22d6\") " pod="kube-system/cilium-wlr4t" Mar 19 11:44:15.442205 kubelet[2569]: I0319 11:44:15.442008 2569 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/29f74f42-e131-48ab-8b9b-8fd9f5ae22d6-hostproc\") pod \"cilium-wlr4t\" (UID: \"29f74f42-e131-48ab-8b9b-8fd9f5ae22d6\") " pod="kube-system/cilium-wlr4t" Mar 19 11:44:15.442205 kubelet[2569]: I0319 11:44:15.442022 2569 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/29f74f42-e131-48ab-8b9b-8fd9f5ae22d6-cilium-config-path\") pod \"cilium-wlr4t\" (UID: \"29f74f42-e131-48ab-8b9b-8fd9f5ae22d6\") " pod="kube-system/cilium-wlr4t" Mar 19 11:44:15.442205 kubelet[2569]: I0319 11:44:15.442036 2569 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/29f74f42-e131-48ab-8b9b-8fd9f5ae22d6-etc-cni-netd\") pod \"cilium-wlr4t\" (UID: \"29f74f42-e131-48ab-8b9b-8fd9f5ae22d6\") " pod="kube-system/cilium-wlr4t" Mar 19 11:44:15.442368 kubelet[2569]: I0319 11:44:15.442050 2569 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jjd2c\" (UniqueName: \"kubernetes.io/projected/29f74f42-e131-48ab-8b9b-8fd9f5ae22d6-kube-api-access-jjd2c\") pod \"cilium-wlr4t\" (UID: \"29f74f42-e131-48ab-8b9b-8fd9f5ae22d6\") " pod="kube-system/cilium-wlr4t" Mar 19 11:44:15.442368 kubelet[2569]: I0319 11:44:15.442064 2569 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/29f74f42-e131-48ab-8b9b-8fd9f5ae22d6-cilium-cgroup\") pod \"cilium-wlr4t\" (UID: 
\"29f74f42-e131-48ab-8b9b-8fd9f5ae22d6\") " pod="kube-system/cilium-wlr4t" Mar 19 11:44:15.442368 kubelet[2569]: I0319 11:44:15.442131 2569 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/29f74f42-e131-48ab-8b9b-8fd9f5ae22d6-host-proc-sys-net\") pod \"cilium-wlr4t\" (UID: \"29f74f42-e131-48ab-8b9b-8fd9f5ae22d6\") " pod="kube-system/cilium-wlr4t" Mar 19 11:44:15.442368 kubelet[2569]: I0319 11:44:15.442146 2569 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-274nb\" (UniqueName: \"kubernetes.io/projected/d2abdda0-cbe1-4dc3-87be-723205733bdd-kube-api-access-274nb\") pod \"cilium-operator-5d85765b45-dbd4k\" (UID: \"d2abdda0-cbe1-4dc3-87be-723205733bdd\") " pod="kube-system/cilium-operator-5d85765b45-dbd4k" Mar 19 11:44:15.442368 kubelet[2569]: I0319 11:44:15.442227 2569 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/29f74f42-e131-48ab-8b9b-8fd9f5ae22d6-cni-path\") pod \"cilium-wlr4t\" (UID: \"29f74f42-e131-48ab-8b9b-8fd9f5ae22d6\") " pod="kube-system/cilium-wlr4t" Mar 19 11:44:15.442471 kubelet[2569]: I0319 11:44:15.442273 2569 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/29f74f42-e131-48ab-8b9b-8fd9f5ae22d6-xtables-lock\") pod \"cilium-wlr4t\" (UID: \"29f74f42-e131-48ab-8b9b-8fd9f5ae22d6\") " pod="kube-system/cilium-wlr4t" Mar 19 11:44:15.442471 kubelet[2569]: I0319 11:44:15.442291 2569 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/29f74f42-e131-48ab-8b9b-8fd9f5ae22d6-clustermesh-secrets\") pod \"cilium-wlr4t\" (UID: \"29f74f42-e131-48ab-8b9b-8fd9f5ae22d6\") " 
pod="kube-system/cilium-wlr4t" Mar 19 11:44:15.442471 kubelet[2569]: I0319 11:44:15.442309 2569 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/29f74f42-e131-48ab-8b9b-8fd9f5ae22d6-lib-modules\") pod \"cilium-wlr4t\" (UID: \"29f74f42-e131-48ab-8b9b-8fd9f5ae22d6\") " pod="kube-system/cilium-wlr4t" Mar 19 11:44:15.442794 systemd[1]: Created slice kubepods-besteffort-podd2abdda0_cbe1_4dc3_87be_723205733bdd.slice - libcontainer container kubepods-besteffort-podd2abdda0_cbe1_4dc3_87be_723205733bdd.slice. Mar 19 11:44:15.649912 containerd[1476]: time="2025-03-19T11:44:15.649806081Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-phbzx,Uid:78fc3a44-fc2c-4a39-afe2-be8e9460c12e,Namespace:kube-system,Attempt:0,}" Mar 19 11:44:15.654800 containerd[1476]: time="2025-03-19T11:44:15.654760315Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-wlr4t,Uid:29f74f42-e131-48ab-8b9b-8fd9f5ae22d6,Namespace:kube-system,Attempt:0,}" Mar 19 11:44:15.668064 containerd[1476]: time="2025-03-19T11:44:15.667906056Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 19 11:44:15.668064 containerd[1476]: time="2025-03-19T11:44:15.667972873Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 19 11:44:15.668064 containerd[1476]: time="2025-03-19T11:44:15.667990677Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 19 11:44:15.668204 containerd[1476]: time="2025-03-19T11:44:15.668105147Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 19 11:44:15.673704 containerd[1476]: time="2025-03-19T11:44:15.673317528Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 19 11:44:15.673704 containerd[1476]: time="2025-03-19T11:44:15.673525821Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 19 11:44:15.673704 containerd[1476]: time="2025-03-19T11:44:15.673541705Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 19 11:44:15.673704 containerd[1476]: time="2025-03-19T11:44:15.673615844Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 19 11:44:15.684397 systemd[1]: Started cri-containerd-25d7619ec96de09a593a5c5c2b659926a049233ab36c99955917bb8edd2807a3.scope - libcontainer container 25d7619ec96de09a593a5c5c2b659926a049233ab36c99955917bb8edd2807a3. Mar 19 11:44:15.686811 systemd[1]: Started cri-containerd-3c2730ae58d976cb7cd9c6e8486f8eff44777926b448c2b93c8766d5d2719215.scope - libcontainer container 3c2730ae58d976cb7cd9c6e8486f8eff44777926b448c2b93c8766d5d2719215. 
Mar 19 11:44:15.702664 containerd[1476]: time="2025-03-19T11:44:15.702624425Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-phbzx,Uid:78fc3a44-fc2c-4a39-afe2-be8e9460c12e,Namespace:kube-system,Attempt:0,} returns sandbox id \"25d7619ec96de09a593a5c5c2b659926a049233ab36c99955917bb8edd2807a3\"" Mar 19 11:44:15.705568 containerd[1476]: time="2025-03-19T11:44:15.705536094Z" level=info msg="CreateContainer within sandbox \"25d7619ec96de09a593a5c5c2b659926a049233ab36c99955917bb8edd2807a3\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Mar 19 11:44:15.709915 containerd[1476]: time="2025-03-19T11:44:15.709880371Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-wlr4t,Uid:29f74f42-e131-48ab-8b9b-8fd9f5ae22d6,Namespace:kube-system,Attempt:0,} returns sandbox id \"3c2730ae58d976cb7cd9c6e8486f8eff44777926b448c2b93c8766d5d2719215\"" Mar 19 11:44:15.711124 containerd[1476]: time="2025-03-19T11:44:15.711097884Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Mar 19 11:44:15.733430 containerd[1476]: time="2025-03-19T11:44:15.733359650Z" level=info msg="CreateContainer within sandbox \"25d7619ec96de09a593a5c5c2b659926a049233ab36c99955917bb8edd2807a3\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"427bc8037f33d8231bf823ac094914e30136dba753025d6c303cd0a78404d6c6\"" Mar 19 11:44:15.733863 containerd[1476]: time="2025-03-19T11:44:15.733837573Z" level=info msg="StartContainer for \"427bc8037f33d8231bf823ac094914e30136dba753025d6c303cd0a78404d6c6\"" Mar 19 11:44:15.746141 containerd[1476]: time="2025-03-19T11:44:15.745848662Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-dbd4k,Uid:d2abdda0-cbe1-4dc3-87be-723205733bdd,Namespace:kube-system,Attempt:0,}" Mar 19 11:44:15.759390 systemd[1]: Started cri-containerd-427bc8037f33d8231bf823ac094914e30136dba753025d6c303cd0a78404d6c6.scope 
- libcontainer container 427bc8037f33d8231bf823ac094914e30136dba753025d6c303cd0a78404d6c6. Mar 19 11:44:15.764750 containerd[1476]: time="2025-03-19T11:44:15.764239232Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 19 11:44:15.764750 containerd[1476]: time="2025-03-19T11:44:15.764690748Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 19 11:44:15.764750 containerd[1476]: time="2025-03-19T11:44:15.764706152Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 19 11:44:15.765007 containerd[1476]: time="2025-03-19T11:44:15.764779451Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 19 11:44:15.783389 systemd[1]: Started cri-containerd-8c1539e1ba8e507e4ef7bb148b23f7be560fb77389f5a5ed0f78faaaa7a5e6b9.scope - libcontainer container 8c1539e1ba8e507e4ef7bb148b23f7be560fb77389f5a5ed0f78faaaa7a5e6b9. 
Mar 19 11:44:15.800578 containerd[1476]: time="2025-03-19T11:44:15.800531566Z" level=info msg="StartContainer for \"427bc8037f33d8231bf823ac094914e30136dba753025d6c303cd0a78404d6c6\" returns successfully" Mar 19 11:44:15.826627 containerd[1476]: time="2025-03-19T11:44:15.825275890Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-dbd4k,Uid:d2abdda0-cbe1-4dc3-87be-723205733bdd,Namespace:kube-system,Attempt:0,} returns sandbox id \"8c1539e1ba8e507e4ef7bb148b23f7be560fb77389f5a5ed0f78faaaa7a5e6b9\"" Mar 19 11:44:16.043372 kubelet[2569]: I0319 11:44:16.042877 2569 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-phbzx" podStartSLOduration=1.042859 podStartE2EDuration="1.042859s" podCreationTimestamp="2025-03-19 11:44:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-19 11:44:16.042436977 +0000 UTC m=+8.120504501" watchObservedRunningTime="2025-03-19 11:44:16.042859 +0000 UTC m=+8.120926484" Mar 19 11:44:21.204914 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1118617193.mount: Deactivated successfully. Mar 19 11:44:22.145418 update_engine[1464]: I20250319 11:44:22.145274 1464 update_attempter.cc:509] Updating boot flags... 
Mar 19 11:44:22.433267 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (2973) Mar 19 11:44:22.452223 containerd[1476]: time="2025-03-19T11:44:22.452178063Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 19 11:44:22.453845 containerd[1476]: time="2025-03-19T11:44:22.453791909Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157646710" Mar 19 11:44:22.457101 containerd[1476]: time="2025-03-19T11:44:22.457060930Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 19 11:44:22.458658 containerd[1476]: time="2025-03-19T11:44:22.458516908Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 6.747385975s" Mar 19 11:44:22.458658 containerd[1476]: time="2025-03-19T11:44:22.458550314Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" Mar 19 11:44:22.465067 containerd[1476]: time="2025-03-19T11:44:22.464889719Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Mar 19 11:44:22.471325 containerd[1476]: time="2025-03-19T11:44:22.471184796Z" level=info 
msg="CreateContainer within sandbox \"3c2730ae58d976cb7cd9c6e8486f8eff44777926b448c2b93c8766d5d2719215\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Mar 19 11:44:22.487756 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (2973) Mar 19 11:44:22.512331 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1399370751.mount: Deactivated successfully. Mar 19 11:44:22.517432 containerd[1476]: time="2025-03-19T11:44:22.517381355Z" level=info msg="CreateContainer within sandbox \"3c2730ae58d976cb7cd9c6e8486f8eff44777926b448c2b93c8766d5d2719215\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"9dd87ae111433cfcdc4dfa793fed4d5e00d5b43b3cf49e61447bfe8be1801060\"" Mar 19 11:44:22.523294 containerd[1476]: time="2025-03-19T11:44:22.523221231Z" level=info msg="StartContainer for \"9dd87ae111433cfcdc4dfa793fed4d5e00d5b43b3cf49e61447bfe8be1801060\"" Mar 19 11:44:22.566421 systemd[1]: Started cri-containerd-9dd87ae111433cfcdc4dfa793fed4d5e00d5b43b3cf49e61447bfe8be1801060.scope - libcontainer container 9dd87ae111433cfcdc4dfa793fed4d5e00d5b43b3cf49e61447bfe8be1801060. Mar 19 11:44:22.625680 containerd[1476]: time="2025-03-19T11:44:22.625626966Z" level=info msg="StartContainer for \"9dd87ae111433cfcdc4dfa793fed4d5e00d5b43b3cf49e61447bfe8be1801060\" returns successfully" Mar 19 11:44:22.651653 systemd[1]: cri-containerd-9dd87ae111433cfcdc4dfa793fed4d5e00d5b43b3cf49e61447bfe8be1801060.scope: Deactivated successfully. 
Mar 19 11:44:22.693596 containerd[1476]: time="2025-03-19T11:44:22.688897515Z" level=info msg="shim disconnected" id=9dd87ae111433cfcdc4dfa793fed4d5e00d5b43b3cf49e61447bfe8be1801060 namespace=k8s.io Mar 19 11:44:22.693596 containerd[1476]: time="2025-03-19T11:44:22.693596229Z" level=warning msg="cleaning up after shim disconnected" id=9dd87ae111433cfcdc4dfa793fed4d5e00d5b43b3cf49e61447bfe8be1801060 namespace=k8s.io Mar 19 11:44:22.693783 containerd[1476]: time="2025-03-19T11:44:22.693607911Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 19 11:44:23.064422 containerd[1476]: time="2025-03-19T11:44:23.064226178Z" level=info msg="CreateContainer within sandbox \"3c2730ae58d976cb7cd9c6e8486f8eff44777926b448c2b93c8766d5d2719215\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Mar 19 11:44:23.094236 containerd[1476]: time="2025-03-19T11:44:23.094180393Z" level=info msg="CreateContainer within sandbox \"3c2730ae58d976cb7cd9c6e8486f8eff44777926b448c2b93c8766d5d2719215\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"bdd2e46ba177457c1cb9d3032c46c0320e91694d2401ab2c17f0cebe6578be6b\"" Mar 19 11:44:23.096816 containerd[1476]: time="2025-03-19T11:44:23.095696209Z" level=info msg="StartContainer for \"bdd2e46ba177457c1cb9d3032c46c0320e91694d2401ab2c17f0cebe6578be6b\"" Mar 19 11:44:23.123417 systemd[1]: Started cri-containerd-bdd2e46ba177457c1cb9d3032c46c0320e91694d2401ab2c17f0cebe6578be6b.scope - libcontainer container bdd2e46ba177457c1cb9d3032c46c0320e91694d2401ab2c17f0cebe6578be6b. Mar 19 11:44:23.150064 containerd[1476]: time="2025-03-19T11:44:23.149928960Z" level=info msg="StartContainer for \"bdd2e46ba177457c1cb9d3032c46c0320e91694d2401ab2c17f0cebe6578be6b\" returns successfully" Mar 19 11:44:23.168342 systemd[1]: systemd-sysctl.service: Deactivated successfully. Mar 19 11:44:23.168561 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. 
Mar 19 11:44:23.174005 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Mar 19 11:44:23.182549 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Mar 19 11:44:23.182740 systemd[1]: cri-containerd-bdd2e46ba177457c1cb9d3032c46c0320e91694d2401ab2c17f0cebe6578be6b.scope: Deactivated successfully. Mar 19 11:44:23.193305 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Mar 19 11:44:23.203122 containerd[1476]: time="2025-03-19T11:44:23.203068208Z" level=info msg="shim disconnected" id=bdd2e46ba177457c1cb9d3032c46c0320e91694d2401ab2c17f0cebe6578be6b namespace=k8s.io Mar 19 11:44:23.203122 containerd[1476]: time="2025-03-19T11:44:23.203121737Z" level=warning msg="cleaning up after shim disconnected" id=bdd2e46ba177457c1cb9d3032c46c0320e91694d2401ab2c17f0cebe6578be6b namespace=k8s.io Mar 19 11:44:23.203599 containerd[1476]: time="2025-03-19T11:44:23.203130018Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 19 11:44:23.512335 systemd[1]: run-containerd-runc-k8s.io-9dd87ae111433cfcdc4dfa793fed4d5e00d5b43b3cf49e61447bfe8be1801060-runc.ov1rv9.mount: Deactivated successfully. Mar 19 11:44:23.512652 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9dd87ae111433cfcdc4dfa793fed4d5e00d5b43b3cf49e61447bfe8be1801060-rootfs.mount: Deactivated successfully. Mar 19 11:44:23.667055 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2101340009.mount: Deactivated successfully. 
Mar 19 11:44:24.071929 containerd[1476]: time="2025-03-19T11:44:24.071683765Z" level=info msg="CreateContainer within sandbox \"3c2730ae58d976cb7cd9c6e8486f8eff44777926b448c2b93c8766d5d2719215\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Mar 19 11:44:24.093514 containerd[1476]: time="2025-03-19T11:44:24.093471423Z" level=info msg="CreateContainer within sandbox \"3c2730ae58d976cb7cd9c6e8486f8eff44777926b448c2b93c8766d5d2719215\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"c89ee235d85ae8389e8b54264768afecca894d9e78e8365dd493091f45b8a044\"" Mar 19 11:44:24.094131 containerd[1476]: time="2025-03-19T11:44:24.094057437Z" level=info msg="StartContainer for \"c89ee235d85ae8389e8b54264768afecca894d9e78e8365dd493091f45b8a044\"" Mar 19 11:44:24.125737 systemd[1]: Started cri-containerd-c89ee235d85ae8389e8b54264768afecca894d9e78e8365dd493091f45b8a044.scope - libcontainer container c89ee235d85ae8389e8b54264768afecca894d9e78e8365dd493091f45b8a044. Mar 19 11:44:24.178184 systemd[1]: cri-containerd-c89ee235d85ae8389e8b54264768afecca894d9e78e8365dd493091f45b8a044.scope: Deactivated successfully. 
Mar 19 11:44:24.191984 containerd[1476]: time="2025-03-19T11:44:24.191938794Z" level=info msg="StartContainer for \"c89ee235d85ae8389e8b54264768afecca894d9e78e8365dd493091f45b8a044\" returns successfully" Mar 19 11:44:24.236026 containerd[1476]: time="2025-03-19T11:44:24.235891732Z" level=info msg="shim disconnected" id=c89ee235d85ae8389e8b54264768afecca894d9e78e8365dd493091f45b8a044 namespace=k8s.io Mar 19 11:44:24.236026 containerd[1476]: time="2025-03-19T11:44:24.235975625Z" level=warning msg="cleaning up after shim disconnected" id=c89ee235d85ae8389e8b54264768afecca894d9e78e8365dd493091f45b8a044 namespace=k8s.io Mar 19 11:44:24.236026 containerd[1476]: time="2025-03-19T11:44:24.235987507Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 19 11:44:24.246849 containerd[1476]: time="2025-03-19T11:44:24.246641538Z" level=warning msg="cleanup warnings time=\"2025-03-19T11:44:24Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Mar 19 11:44:24.252190 containerd[1476]: time="2025-03-19T11:44:24.252149703Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 19 11:44:24.252875 containerd[1476]: time="2025-03-19T11:44:24.252677147Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17135306" Mar 19 11:44:24.253794 containerd[1476]: time="2025-03-19T11:44:24.253731437Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 19 11:44:24.255380 containerd[1476]: time="2025-03-19T11:44:24.255237638Z" level=info msg="Pulled image 
\"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 1.790311193s" Mar 19 11:44:24.255380 containerd[1476]: time="2025-03-19T11:44:24.255292967Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" Mar 19 11:44:24.264054 containerd[1476]: time="2025-03-19T11:44:24.264007447Z" level=info msg="CreateContainer within sandbox \"8c1539e1ba8e507e4ef7bb148b23f7be560fb77389f5a5ed0f78faaaa7a5e6b9\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Mar 19 11:44:24.275165 containerd[1476]: time="2025-03-19T11:44:24.275100588Z" level=info msg="CreateContainer within sandbox \"8c1539e1ba8e507e4ef7bb148b23f7be560fb77389f5a5ed0f78faaaa7a5e6b9\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"f18e3ee560b632389eacaf35c719967e4119070bc8af3c67e238ee3e14a80a77\"" Mar 19 11:44:24.275771 containerd[1476]: time="2025-03-19T11:44:24.275599668Z" level=info msg="StartContainer for \"f18e3ee560b632389eacaf35c719967e4119070bc8af3c67e238ee3e14a80a77\"" Mar 19 11:44:24.302445 systemd[1]: Started cri-containerd-f18e3ee560b632389eacaf35c719967e4119070bc8af3c67e238ee3e14a80a77.scope - libcontainer container f18e3ee560b632389eacaf35c719967e4119070bc8af3c67e238ee3e14a80a77. 
Mar 19 11:44:24.333221 containerd[1476]: time="2025-03-19T11:44:24.333116224Z" level=info msg="StartContainer for \"f18e3ee560b632389eacaf35c719967e4119070bc8af3c67e238ee3e14a80a77\" returns successfully" Mar 19 11:44:25.079580 containerd[1476]: time="2025-03-19T11:44:25.079541791Z" level=info msg="CreateContainer within sandbox \"3c2730ae58d976cb7cd9c6e8486f8eff44777926b448c2b93c8766d5d2719215\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Mar 19 11:44:25.085123 kubelet[2569]: I0319 11:44:25.084747 2569 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-5d85765b45-dbd4k" podStartSLOduration=1.6545059370000001 podStartE2EDuration="10.084729305s" podCreationTimestamp="2025-03-19 11:44:15 +0000 UTC" firstStartedPulling="2025-03-19 11:44:15.826347765 +0000 UTC m=+7.904415249" lastFinishedPulling="2025-03-19 11:44:24.256571132 +0000 UTC m=+16.334638617" observedRunningTime="2025-03-19 11:44:25.083036686 +0000 UTC m=+17.161104170" watchObservedRunningTime="2025-03-19 11:44:25.084729305 +0000 UTC m=+17.162796789" Mar 19 11:44:25.098960 containerd[1476]: time="2025-03-19T11:44:25.098899471Z" level=info msg="CreateContainer within sandbox \"3c2730ae58d976cb7cd9c6e8486f8eff44777926b448c2b93c8766d5d2719215\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"6b6bfa6abc4479e3c35c40b1ca96264ad876221251c9253cf6f3b95364758395\"" Mar 19 11:44:25.099547 containerd[1476]: time="2025-03-19T11:44:25.099507444Z" level=info msg="StartContainer for \"6b6bfa6abc4479e3c35c40b1ca96264ad876221251c9253cf6f3b95364758395\"" Mar 19 11:44:25.128390 systemd[1]: Started cri-containerd-6b6bfa6abc4479e3c35c40b1ca96264ad876221251c9253cf6f3b95364758395.scope - libcontainer container 6b6bfa6abc4479e3c35c40b1ca96264ad876221251c9253cf6f3b95364758395. Mar 19 11:44:25.145879 systemd[1]: cri-containerd-6b6bfa6abc4479e3c35c40b1ca96264ad876221251c9253cf6f3b95364758395.scope: Deactivated successfully. 
Mar 19 11:44:25.147573 containerd[1476]: time="2025-03-19T11:44:25.147484500Z" level=info msg="StartContainer for \"6b6bfa6abc4479e3c35c40b1ca96264ad876221251c9253cf6f3b95364758395\" returns successfully" Mar 19 11:44:25.171942 containerd[1476]: time="2025-03-19T11:44:25.171821421Z" level=info msg="shim disconnected" id=6b6bfa6abc4479e3c35c40b1ca96264ad876221251c9253cf6f3b95364758395 namespace=k8s.io Mar 19 11:44:25.171942 containerd[1476]: time="2025-03-19T11:44:25.171874629Z" level=warning msg="cleaning up after shim disconnected" id=6b6bfa6abc4479e3c35c40b1ca96264ad876221251c9253cf6f3b95364758395 namespace=k8s.io Mar 19 11:44:25.171942 containerd[1476]: time="2025-03-19T11:44:25.171882471Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 19 11:44:25.512729 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6b6bfa6abc4479e3c35c40b1ca96264ad876221251c9253cf6f3b95364758395-rootfs.mount: Deactivated successfully. Mar 19 11:44:26.080719 containerd[1476]: time="2025-03-19T11:44:26.080662897Z" level=info msg="CreateContainer within sandbox \"3c2730ae58d976cb7cd9c6e8486f8eff44777926b448c2b93c8766d5d2719215\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Mar 19 11:44:26.113397 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3636832942.mount: Deactivated successfully. 
Mar 19 11:44:26.115543 containerd[1476]: time="2025-03-19T11:44:26.115418201Z" level=info msg="CreateContainer within sandbox \"3c2730ae58d976cb7cd9c6e8486f8eff44777926b448c2b93c8766d5d2719215\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"bf70da2d14bbe116e862660afbad9261649c1ca2a0d3dec27714c99771ca8a3a\"" Mar 19 11:44:26.115967 containerd[1476]: time="2025-03-19T11:44:26.115940677Z" level=info msg="StartContainer for \"bf70da2d14bbe116e862660afbad9261649c1ca2a0d3dec27714c99771ca8a3a\"" Mar 19 11:44:26.147390 systemd[1]: Started cri-containerd-bf70da2d14bbe116e862660afbad9261649c1ca2a0d3dec27714c99771ca8a3a.scope - libcontainer container bf70da2d14bbe116e862660afbad9261649c1ca2a0d3dec27714c99771ca8a3a. Mar 19 11:44:26.174011 containerd[1476]: time="2025-03-19T11:44:26.173962372Z" level=info msg="StartContainer for \"bf70da2d14bbe116e862660afbad9261649c1ca2a0d3dec27714c99771ca8a3a\" returns successfully" Mar 19 11:44:26.287738 kubelet[2569]: I0319 11:44:26.287697 2569 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Mar 19 11:44:26.322635 systemd[1]: Created slice kubepods-burstable-pod40dba81f_33c4_4f75_bae3_c332dee43f38.slice - libcontainer container kubepods-burstable-pod40dba81f_33c4_4f75_bae3_c332dee43f38.slice. 
Mar 19 11:44:26.323269 kubelet[2569]: I0319 11:44:26.323227 2569 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/40dba81f-33c4-4f75-bae3-c332dee43f38-config-volume\") pod \"coredns-6f6b679f8f-9685k\" (UID: \"40dba81f-33c4-4f75-bae3-c332dee43f38\") " pod="kube-system/coredns-6f6b679f8f-9685k" Mar 19 11:44:26.323373 kubelet[2569]: I0319 11:44:26.323278 2569 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/99206072-bf3f-4d89-a388-20a224e0ebec-config-volume\") pod \"coredns-6f6b679f8f-k7wgf\" (UID: \"99206072-bf3f-4d89-a388-20a224e0ebec\") " pod="kube-system/coredns-6f6b679f8f-k7wgf" Mar 19 11:44:26.323373 kubelet[2569]: I0319 11:44:26.323298 2569 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ms4m2\" (UniqueName: \"kubernetes.io/projected/99206072-bf3f-4d89-a388-20a224e0ebec-kube-api-access-ms4m2\") pod \"coredns-6f6b679f8f-k7wgf\" (UID: \"99206072-bf3f-4d89-a388-20a224e0ebec\") " pod="kube-system/coredns-6f6b679f8f-k7wgf" Mar 19 11:44:26.323373 kubelet[2569]: I0319 11:44:26.323318 2569 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4h8h4\" (UniqueName: \"kubernetes.io/projected/40dba81f-33c4-4f75-bae3-c332dee43f38-kube-api-access-4h8h4\") pod \"coredns-6f6b679f8f-9685k\" (UID: \"40dba81f-33c4-4f75-bae3-c332dee43f38\") " pod="kube-system/coredns-6f6b679f8f-9685k" Mar 19 11:44:26.330647 systemd[1]: Created slice kubepods-burstable-pod99206072_bf3f_4d89_a388_20a224e0ebec.slice - libcontainer container kubepods-burstable-pod99206072_bf3f_4d89_a388_20a224e0ebec.slice. 
Mar 19 11:44:26.627729 containerd[1476]: time="2025-03-19T11:44:26.627555707Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-9685k,Uid:40dba81f-33c4-4f75-bae3-c332dee43f38,Namespace:kube-system,Attempt:0,}" Mar 19 11:44:26.635356 containerd[1476]: time="2025-03-19T11:44:26.635305797Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-k7wgf,Uid:99206072-bf3f-4d89-a388-20a224e0ebec,Namespace:kube-system,Attempt:0,}" Mar 19 11:44:27.099530 kubelet[2569]: I0319 11:44:27.099474 2569 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-wlr4t" podStartSLOduration=5.345516059 podStartE2EDuration="12.099458591s" podCreationTimestamp="2025-03-19 11:44:15 +0000 UTC" firstStartedPulling="2025-03-19 11:44:15.710716626 +0000 UTC m=+7.788784070" lastFinishedPulling="2025-03-19 11:44:22.464659118 +0000 UTC m=+14.542726602" observedRunningTime="2025-03-19 11:44:27.096993768 +0000 UTC m=+19.175061252" watchObservedRunningTime="2025-03-19 11:44:27.099458591 +0000 UTC m=+19.177526075" Mar 19 11:44:28.363348 systemd-networkd[1398]: cilium_host: Link UP Mar 19 11:44:28.363472 systemd-networkd[1398]: cilium_net: Link UP Mar 19 11:44:28.363590 systemd-networkd[1398]: cilium_net: Gained carrier Mar 19 11:44:28.363831 systemd-networkd[1398]: cilium_host: Gained carrier Mar 19 11:44:28.445575 systemd-networkd[1398]: cilium_vxlan: Link UP Mar 19 11:44:28.445582 systemd-networkd[1398]: cilium_vxlan: Gained carrier Mar 19 11:44:28.757284 kernel: NET: Registered PF_ALG protocol family Mar 19 11:44:28.923360 systemd-networkd[1398]: cilium_net: Gained IPv6LL Mar 19 11:44:28.987193 systemd-networkd[1398]: cilium_host: Gained IPv6LL Mar 19 11:44:29.313570 systemd-networkd[1398]: lxc_health: Link UP Mar 19 11:44:29.315370 systemd-networkd[1398]: lxc_health: Gained carrier Mar 19 11:44:29.794274 kernel: eth0: renamed from tmp47b27 Mar 19 11:44:29.800365 kernel: eth0: renamed from tmpdd0ad Mar 19 
11:44:29.809745 systemd-networkd[1398]: lxcb01ffe84a639: Link UP Mar 19 11:44:29.811518 systemd-networkd[1398]: lxc261d4f656a66: Link UP Mar 19 11:44:29.811726 systemd-networkd[1398]: lxc261d4f656a66: Gained carrier Mar 19 11:44:29.811841 systemd-networkd[1398]: lxcb01ffe84a639: Gained carrier Mar 19 11:44:30.266394 systemd-networkd[1398]: cilium_vxlan: Gained IPv6LL Mar 19 11:44:30.522429 systemd-networkd[1398]: lxc_health: Gained IPv6LL Mar 19 11:44:31.162448 systemd-networkd[1398]: lxc261d4f656a66: Gained IPv6LL Mar 19 11:44:31.290433 systemd-networkd[1398]: lxcb01ffe84a639: Gained IPv6LL Mar 19 11:44:33.227521 containerd[1476]: time="2025-03-19T11:44:33.227436874Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 19 11:44:33.227521 containerd[1476]: time="2025-03-19T11:44:33.227498641Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 19 11:44:33.227521 containerd[1476]: time="2025-03-19T11:44:33.227513682Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 19 11:44:33.228328 containerd[1476]: time="2025-03-19T11:44:33.227597971Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 19 11:44:33.231525 containerd[1476]: time="2025-03-19T11:44:33.229701475Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 19 11:44:33.231525 containerd[1476]: time="2025-03-19T11:44:33.230481158Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 19 11:44:33.231525 containerd[1476]: time="2025-03-19T11:44:33.230505521Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 19 11:44:33.231525 containerd[1476]: time="2025-03-19T11:44:33.230576648Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 19 11:44:33.250434 systemd[1]: Started cri-containerd-dd0add9e1d831a6ae3addb5e919f379f828ee186a281a266fe7ec110ef6309ba.scope - libcontainer container dd0add9e1d831a6ae3addb5e919f379f828ee186a281a266fe7ec110ef6309ba. Mar 19 11:44:33.252856 systemd[1]: Started cri-containerd-47b27c7db9edf9c634fc06cc6cbb92a3a522d63777a0d9714ff6dd1fb1c70740.scope - libcontainer container 47b27c7db9edf9c634fc06cc6cbb92a3a522d63777a0d9714ff6dd1fb1c70740. Mar 19 11:44:33.262761 systemd-resolved[1321]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 19 11:44:33.263645 systemd-resolved[1321]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 19 11:44:33.279666 containerd[1476]: time="2025-03-19T11:44:33.279583308Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-k7wgf,Uid:99206072-bf3f-4d89-a388-20a224e0ebec,Namespace:kube-system,Attempt:0,} returns sandbox id \"47b27c7db9edf9c634fc06cc6cbb92a3a522d63777a0d9714ff6dd1fb1c70740\"" Mar 19 11:44:33.280903 containerd[1476]: time="2025-03-19T11:44:33.280860484Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-9685k,Uid:40dba81f-33c4-4f75-bae3-c332dee43f38,Namespace:kube-system,Attempt:0,} returns sandbox id \"dd0add9e1d831a6ae3addb5e919f379f828ee186a281a266fe7ec110ef6309ba\"" Mar 19 11:44:33.283811 containerd[1476]: time="2025-03-19T11:44:33.283759393Z" level=info msg="CreateContainer within sandbox \"47b27c7db9edf9c634fc06cc6cbb92a3a522d63777a0d9714ff6dd1fb1c70740\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Mar 19 11:44:33.285622 containerd[1476]: time="2025-03-19T11:44:33.285579507Z" level=info 
msg="CreateContainer within sandbox \"dd0add9e1d831a6ae3addb5e919f379f828ee186a281a266fe7ec110ef6309ba\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Mar 19 11:44:33.298252 containerd[1476]: time="2025-03-19T11:44:33.298201132Z" level=info msg="CreateContainer within sandbox \"47b27c7db9edf9c634fc06cc6cbb92a3a522d63777a0d9714ff6dd1fb1c70740\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"1d94e1eb6665e79499a36ec2bcf91ecd0c670b7f1b5d734da8b55ee137ab0d5b\"" Mar 19 11:44:33.298711 containerd[1476]: time="2025-03-19T11:44:33.298685103Z" level=info msg="StartContainer for \"1d94e1eb6665e79499a36ec2bcf91ecd0c670b7f1b5d734da8b55ee137ab0d5b\"" Mar 19 11:44:33.303430 containerd[1476]: time="2025-03-19T11:44:33.303310796Z" level=info msg="CreateContainer within sandbox \"dd0add9e1d831a6ae3addb5e919f379f828ee186a281a266fe7ec110ef6309ba\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"be33bf69917787927284e772516b9e4ce4d4ca1ab09d5cc919d3f878af10cafe\"" Mar 19 11:44:33.304696 containerd[1476]: time="2025-03-19T11:44:33.304165287Z" level=info msg="StartContainer for \"be33bf69917787927284e772516b9e4ce4d4ca1ab09d5cc919d3f878af10cafe\"" Mar 19 11:44:33.324431 systemd[1]: Started cri-containerd-1d94e1eb6665e79499a36ec2bcf91ecd0c670b7f1b5d734da8b55ee137ab0d5b.scope - libcontainer container 1d94e1eb6665e79499a36ec2bcf91ecd0c670b7f1b5d734da8b55ee137ab0d5b. Mar 19 11:44:33.327017 systemd[1]: Started cri-containerd-be33bf69917787927284e772516b9e4ce4d4ca1ab09d5cc919d3f878af10cafe.scope - libcontainer container be33bf69917787927284e772516b9e4ce4d4ca1ab09d5cc919d3f878af10cafe. 
Mar 19 11:44:33.348654 containerd[1476]: time="2025-03-19T11:44:33.348518411Z" level=info msg="StartContainer for \"1d94e1eb6665e79499a36ec2bcf91ecd0c670b7f1b5d734da8b55ee137ab0d5b\" returns successfully" Mar 19 11:44:33.358028 containerd[1476]: time="2025-03-19T11:44:33.357987220Z" level=info msg="StartContainer for \"be33bf69917787927284e772516b9e4ce4d4ca1ab09d5cc919d3f878af10cafe\" returns successfully" Mar 19 11:44:34.112625 kubelet[2569]: I0319 11:44:34.112554 2569 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-k7wgf" podStartSLOduration=19.112538033 podStartE2EDuration="19.112538033s" podCreationTimestamp="2025-03-19 11:44:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-19 11:44:34.111992938 +0000 UTC m=+26.190060422" watchObservedRunningTime="2025-03-19 11:44:34.112538033 +0000 UTC m=+26.190605517" Mar 19 11:44:34.134962 kubelet[2569]: I0319 11:44:34.134904 2569 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-9685k" podStartSLOduration=19.134885358 podStartE2EDuration="19.134885358s" podCreationTimestamp="2025-03-19 11:44:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-19 11:44:34.125254413 +0000 UTC m=+26.203321937" watchObservedRunningTime="2025-03-19 11:44:34.134885358 +0000 UTC m=+26.212952802" Mar 19 11:44:35.464731 systemd[1]: Started sshd@7-10.0.0.94:22-10.0.0.1:43984.service - OpenSSH per-connection server daemon (10.0.0.1:43984). 
Mar 19 11:44:35.511610 sshd[3981]: Accepted publickey for core from 10.0.0.1 port 43984 ssh2: RSA SHA256:m+OHt/J3MiNhmiRtwZE4O3bs/RIw4O7lQdgYDDHmuIE Mar 19 11:44:35.513212 sshd-session[3981]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 19 11:44:35.517434 systemd-logind[1460]: New session 8 of user core. Mar 19 11:44:35.530527 systemd[1]: Started session-8.scope - Session 8 of User core. Mar 19 11:44:35.655847 sshd[3983]: Connection closed by 10.0.0.1 port 43984 Mar 19 11:44:35.656194 sshd-session[3981]: pam_unix(sshd:session): session closed for user core Mar 19 11:44:35.658893 systemd[1]: sshd@7-10.0.0.94:22-10.0.0.1:43984.service: Deactivated successfully. Mar 19 11:44:35.660507 systemd[1]: session-8.scope: Deactivated successfully. Mar 19 11:44:35.661696 systemd-logind[1460]: Session 8 logged out. Waiting for processes to exit. Mar 19 11:44:35.662471 systemd-logind[1460]: Removed session 8. Mar 19 11:44:40.684545 systemd[1]: Started sshd@8-10.0.0.94:22-10.0.0.1:44000.service - OpenSSH per-connection server daemon (10.0.0.1:44000). Mar 19 11:44:40.724065 sshd[4005]: Accepted publickey for core from 10.0.0.1 port 44000 ssh2: RSA SHA256:m+OHt/J3MiNhmiRtwZE4O3bs/RIw4O7lQdgYDDHmuIE Mar 19 11:44:40.725428 sshd-session[4005]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 19 11:44:40.730058 systemd-logind[1460]: New session 9 of user core. Mar 19 11:44:40.739505 systemd[1]: Started session-9.scope - Session 9 of User core. Mar 19 11:44:40.852331 sshd[4007]: Connection closed by 10.0.0.1 port 44000 Mar 19 11:44:40.852845 sshd-session[4005]: pam_unix(sshd:session): session closed for user core Mar 19 11:44:40.856212 systemd-logind[1460]: Session 9 logged out. Waiting for processes to exit. Mar 19 11:44:40.856653 systemd[1]: sshd@8-10.0.0.94:22-10.0.0.1:44000.service: Deactivated successfully. Mar 19 11:44:40.859117 systemd[1]: session-9.scope: Deactivated successfully. 
Mar 19 11:44:40.859895 systemd-logind[1460]: Removed session 9. Mar 19 11:44:45.865445 systemd[1]: Started sshd@9-10.0.0.94:22-10.0.0.1:54084.service - OpenSSH per-connection server daemon (10.0.0.1:54084). Mar 19 11:44:45.907007 sshd[4021]: Accepted publickey for core from 10.0.0.1 port 54084 ssh2: RSA SHA256:m+OHt/J3MiNhmiRtwZE4O3bs/RIw4O7lQdgYDDHmuIE Mar 19 11:44:45.908092 sshd-session[4021]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 19 11:44:45.912447 systemd-logind[1460]: New session 10 of user core. Mar 19 11:44:45.927403 systemd[1]: Started session-10.scope - Session 10 of User core. Mar 19 11:44:46.042872 sshd[4023]: Connection closed by 10.0.0.1 port 54084 Mar 19 11:44:46.043583 sshd-session[4021]: pam_unix(sshd:session): session closed for user core Mar 19 11:44:46.046696 systemd-logind[1460]: Session 10 logged out. Waiting for processes to exit. Mar 19 11:44:46.046957 systemd[1]: sshd@9-10.0.0.94:22-10.0.0.1:54084.service: Deactivated successfully. Mar 19 11:44:46.048535 systemd[1]: session-10.scope: Deactivated successfully. Mar 19 11:44:46.049606 systemd-logind[1460]: Removed session 10. Mar 19 11:44:51.068483 systemd[1]: Started sshd@10-10.0.0.94:22-10.0.0.1:54088.service - OpenSSH per-connection server daemon (10.0.0.1:54088). Mar 19 11:44:51.115148 sshd[4039]: Accepted publickey for core from 10.0.0.1 port 54088 ssh2: RSA SHA256:m+OHt/J3MiNhmiRtwZE4O3bs/RIw4O7lQdgYDDHmuIE Mar 19 11:44:51.116364 sshd-session[4039]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 19 11:44:51.120394 systemd-logind[1460]: New session 11 of user core. Mar 19 11:44:51.134413 systemd[1]: Started session-11.scope - Session 11 of User core. 
Mar 19 11:44:51.251412 sshd[4041]: Connection closed by 10.0.0.1 port 54088 Mar 19 11:44:51.252021 sshd-session[4039]: pam_unix(sshd:session): session closed for user core Mar 19 11:44:51.265431 systemd[1]: sshd@10-10.0.0.94:22-10.0.0.1:54088.service: Deactivated successfully. Mar 19 11:44:51.267044 systemd[1]: session-11.scope: Deactivated successfully. Mar 19 11:44:51.267722 systemd-logind[1460]: Session 11 logged out. Waiting for processes to exit. Mar 19 11:44:51.273631 systemd[1]: Started sshd@11-10.0.0.94:22-10.0.0.1:54096.service - OpenSSH per-connection server daemon (10.0.0.1:54096). Mar 19 11:44:51.274774 systemd-logind[1460]: Removed session 11. Mar 19 11:44:51.310922 sshd[4055]: Accepted publickey for core from 10.0.0.1 port 54096 ssh2: RSA SHA256:m+OHt/J3MiNhmiRtwZE4O3bs/RIw4O7lQdgYDDHmuIE Mar 19 11:44:51.312088 sshd-session[4055]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 19 11:44:51.315913 systemd-logind[1460]: New session 12 of user core. Mar 19 11:44:51.327408 systemd[1]: Started session-12.scope - Session 12 of User core. Mar 19 11:44:51.471219 sshd[4058]: Connection closed by 10.0.0.1 port 54096 Mar 19 11:44:51.472284 sshd-session[4055]: pam_unix(sshd:session): session closed for user core Mar 19 11:44:51.485398 systemd[1]: sshd@11-10.0.0.94:22-10.0.0.1:54096.service: Deactivated successfully. Mar 19 11:44:51.489718 systemd[1]: session-12.scope: Deactivated successfully. Mar 19 11:44:51.490471 systemd-logind[1460]: Session 12 logged out. Waiting for processes to exit. Mar 19 11:44:51.500582 systemd[1]: Started sshd@12-10.0.0.94:22-10.0.0.1:54102.service - OpenSSH per-connection server daemon (10.0.0.1:54102). Mar 19 11:44:51.502680 systemd-logind[1460]: Removed session 12. 
Mar 19 11:44:51.540678 sshd[4068]: Accepted publickey for core from 10.0.0.1 port 54102 ssh2: RSA SHA256:m+OHt/J3MiNhmiRtwZE4O3bs/RIw4O7lQdgYDDHmuIE Mar 19 11:44:51.541841 sshd-session[4068]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 19 11:44:51.545969 systemd-logind[1460]: New session 13 of user core. Mar 19 11:44:51.553391 systemd[1]: Started session-13.scope - Session 13 of User core. Mar 19 11:44:51.661691 sshd[4072]: Connection closed by 10.0.0.1 port 54102 Mar 19 11:44:51.661975 sshd-session[4068]: pam_unix(sshd:session): session closed for user core Mar 19 11:44:51.666772 systemd[1]: sshd@12-10.0.0.94:22-10.0.0.1:54102.service: Deactivated successfully. Mar 19 11:44:51.668605 systemd[1]: session-13.scope: Deactivated successfully. Mar 19 11:44:51.670774 systemd-logind[1460]: Session 13 logged out. Waiting for processes to exit. Mar 19 11:44:51.671892 systemd-logind[1460]: Removed session 13. Mar 19 11:44:56.673530 systemd[1]: Started sshd@13-10.0.0.94:22-10.0.0.1:57790.service - OpenSSH per-connection server daemon (10.0.0.1:57790). Mar 19 11:44:56.713692 sshd[4085]: Accepted publickey for core from 10.0.0.1 port 57790 ssh2: RSA SHA256:m+OHt/J3MiNhmiRtwZE4O3bs/RIw4O7lQdgYDDHmuIE Mar 19 11:44:56.714803 sshd-session[4085]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 19 11:44:56.718184 systemd-logind[1460]: New session 14 of user core. Mar 19 11:44:56.725396 systemd[1]: Started session-14.scope - Session 14 of User core. Mar 19 11:44:56.833344 sshd[4087]: Connection closed by 10.0.0.1 port 57790 Mar 19 11:44:56.833644 sshd-session[4085]: pam_unix(sshd:session): session closed for user core Mar 19 11:44:56.836748 systemd[1]: sshd@13-10.0.0.94:22-10.0.0.1:57790.service: Deactivated successfully. Mar 19 11:44:56.840380 systemd[1]: session-14.scope: Deactivated successfully. Mar 19 11:44:56.840988 systemd-logind[1460]: Session 14 logged out. Waiting for processes to exit. 
Mar 19 11:44:56.841826 systemd-logind[1460]: Removed session 14. Mar 19 11:45:01.845577 systemd[1]: Started sshd@14-10.0.0.94:22-10.0.0.1:57802.service - OpenSSH per-connection server daemon (10.0.0.1:57802). Mar 19 11:45:01.889591 sshd[4101]: Accepted publickey for core from 10.0.0.1 port 57802 ssh2: RSA SHA256:m+OHt/J3MiNhmiRtwZE4O3bs/RIw4O7lQdgYDDHmuIE Mar 19 11:45:01.890955 sshd-session[4101]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 19 11:45:01.895820 systemd-logind[1460]: New session 15 of user core. Mar 19 11:45:01.905488 systemd[1]: Started session-15.scope - Session 15 of User core. Mar 19 11:45:02.040529 sshd[4103]: Connection closed by 10.0.0.1 port 57802 Mar 19 11:45:02.039388 sshd-session[4101]: pam_unix(sshd:session): session closed for user core Mar 19 11:45:02.054325 systemd[1]: sshd@14-10.0.0.94:22-10.0.0.1:57802.service: Deactivated successfully. Mar 19 11:45:02.055809 systemd[1]: session-15.scope: Deactivated successfully. Mar 19 11:45:02.057229 systemd-logind[1460]: Session 15 logged out. Waiting for processes to exit. Mar 19 11:45:02.068514 systemd[1]: Started sshd@15-10.0.0.94:22-10.0.0.1:57812.service - OpenSSH per-connection server daemon (10.0.0.1:57812). Mar 19 11:45:02.070081 systemd-logind[1460]: Removed session 15. Mar 19 11:45:02.110543 sshd[4116]: Accepted publickey for core from 10.0.0.1 port 57812 ssh2: RSA SHA256:m+OHt/J3MiNhmiRtwZE4O3bs/RIw4O7lQdgYDDHmuIE Mar 19 11:45:02.110925 sshd-session[4116]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 19 11:45:02.115695 systemd-logind[1460]: New session 16 of user core. Mar 19 11:45:02.123420 systemd[1]: Started session-16.scope - Session 16 of User core. 
Mar 19 11:45:02.343407 sshd[4119]: Connection closed by 10.0.0.1 port 57812 Mar 19 11:45:02.344351 sshd-session[4116]: pam_unix(sshd:session): session closed for user core Mar 19 11:45:02.357886 systemd[1]: sshd@15-10.0.0.94:22-10.0.0.1:57812.service: Deactivated successfully. Mar 19 11:45:02.359740 systemd[1]: session-16.scope: Deactivated successfully. Mar 19 11:45:02.360493 systemd-logind[1460]: Session 16 logged out. Waiting for processes to exit. Mar 19 11:45:02.363221 systemd[1]: Started sshd@16-10.0.0.94:22-10.0.0.1:57828.service - OpenSSH per-connection server daemon (10.0.0.1:57828). Mar 19 11:45:02.364177 systemd-logind[1460]: Removed session 16. Mar 19 11:45:02.411128 sshd[4129]: Accepted publickey for core from 10.0.0.1 port 57828 ssh2: RSA SHA256:m+OHt/J3MiNhmiRtwZE4O3bs/RIw4O7lQdgYDDHmuIE Mar 19 11:45:02.410893 sshd-session[4129]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 19 11:45:02.415307 systemd-logind[1460]: New session 17 of user core. Mar 19 11:45:02.425397 systemd[1]: Started session-17.scope - Session 17 of User core. Mar 19 11:45:03.650822 sshd[4132]: Connection closed by 10.0.0.1 port 57828 Mar 19 11:45:03.651612 sshd-session[4129]: pam_unix(sshd:session): session closed for user core Mar 19 11:45:03.661029 systemd[1]: sshd@16-10.0.0.94:22-10.0.0.1:57828.service: Deactivated successfully. Mar 19 11:45:03.664790 systemd[1]: session-17.scope: Deactivated successfully. Mar 19 11:45:03.668229 systemd-logind[1460]: Session 17 logged out. Waiting for processes to exit. Mar 19 11:45:03.677645 systemd[1]: Started sshd@17-10.0.0.94:22-10.0.0.1:52756.service - OpenSSH per-connection server daemon (10.0.0.1:52756). Mar 19 11:45:03.679055 systemd-logind[1460]: Removed session 17. 
Mar 19 11:45:03.717088 sshd[4154]: Accepted publickey for core from 10.0.0.1 port 52756 ssh2: RSA SHA256:m+OHt/J3MiNhmiRtwZE4O3bs/RIw4O7lQdgYDDHmuIE Mar 19 11:45:03.718410 sshd-session[4154]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 19 11:45:03.722224 systemd-logind[1460]: New session 18 of user core. Mar 19 11:45:03.733461 systemd[1]: Started session-18.scope - Session 18 of User core. Mar 19 11:45:03.953789 sshd[4157]: Connection closed by 10.0.0.1 port 52756 Mar 19 11:45:03.953701 sshd-session[4154]: pam_unix(sshd:session): session closed for user core Mar 19 11:45:03.968361 systemd[1]: sshd@17-10.0.0.94:22-10.0.0.1:52756.service: Deactivated successfully. Mar 19 11:45:03.970032 systemd[1]: session-18.scope: Deactivated successfully. Mar 19 11:45:03.972168 systemd-logind[1460]: Session 18 logged out. Waiting for processes to exit. Mar 19 11:45:03.982021 systemd[1]: Started sshd@18-10.0.0.94:22-10.0.0.1:52766.service - OpenSSH per-connection server daemon (10.0.0.1:52766). Mar 19 11:45:03.984302 systemd-logind[1460]: Removed session 18. Mar 19 11:45:04.019915 sshd[4167]: Accepted publickey for core from 10.0.0.1 port 52766 ssh2: RSA SHA256:m+OHt/J3MiNhmiRtwZE4O3bs/RIw4O7lQdgYDDHmuIE Mar 19 11:45:04.021959 sshd-session[4167]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 19 11:45:04.026313 systemd-logind[1460]: New session 19 of user core. Mar 19 11:45:04.040399 systemd[1]: Started session-19.scope - Session 19 of User core. Mar 19 11:45:04.146512 sshd[4170]: Connection closed by 10.0.0.1 port 52766 Mar 19 11:45:04.147044 sshd-session[4167]: pam_unix(sshd:session): session closed for user core Mar 19 11:45:04.150206 systemd[1]: sshd@18-10.0.0.94:22-10.0.0.1:52766.service: Deactivated successfully. Mar 19 11:45:04.152402 systemd[1]: session-19.scope: Deactivated successfully. Mar 19 11:45:04.153092 systemd-logind[1460]: Session 19 logged out. Waiting for processes to exit. 
Mar 19 11:45:04.154113 systemd-logind[1460]: Removed session 19. Mar 19 11:45:09.159126 systemd[1]: Started sshd@19-10.0.0.94:22-10.0.0.1:52776.service - OpenSSH per-connection server daemon (10.0.0.1:52776). Mar 19 11:45:09.199743 sshd[4188]: Accepted publickey for core from 10.0.0.1 port 52776 ssh2: RSA SHA256:m+OHt/J3MiNhmiRtwZE4O3bs/RIw4O7lQdgYDDHmuIE Mar 19 11:45:09.200813 sshd-session[4188]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 19 11:45:09.204305 systemd-logind[1460]: New session 20 of user core. Mar 19 11:45:09.216390 systemd[1]: Started session-20.scope - Session 20 of User core. Mar 19 11:45:09.320294 sshd[4190]: Connection closed by 10.0.0.1 port 52776 Mar 19 11:45:09.320451 sshd-session[4188]: pam_unix(sshd:session): session closed for user core Mar 19 11:45:09.323547 systemd[1]: sshd@19-10.0.0.94:22-10.0.0.1:52776.service: Deactivated successfully. Mar 19 11:45:09.325767 systemd[1]: session-20.scope: Deactivated successfully. Mar 19 11:45:09.326663 systemd-logind[1460]: Session 20 logged out. Waiting for processes to exit. Mar 19 11:45:09.327740 systemd-logind[1460]: Removed session 20. Mar 19 11:45:14.335683 systemd[1]: Started sshd@20-10.0.0.94:22-10.0.0.1:50216.service - OpenSSH per-connection server daemon (10.0.0.1:50216). Mar 19 11:45:14.376609 sshd[4203]: Accepted publickey for core from 10.0.0.1 port 50216 ssh2: RSA SHA256:m+OHt/J3MiNhmiRtwZE4O3bs/RIw4O7lQdgYDDHmuIE Mar 19 11:45:14.377831 sshd-session[4203]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 19 11:45:14.382492 systemd-logind[1460]: New session 21 of user core. Mar 19 11:45:14.395414 systemd[1]: Started session-21.scope - Session 21 of User core. 
Mar 19 11:45:14.502276 sshd[4205]: Connection closed by 10.0.0.1 port 50216 Mar 19 11:45:14.502634 sshd-session[4203]: pam_unix(sshd:session): session closed for user core Mar 19 11:45:14.505991 systemd[1]: sshd@20-10.0.0.94:22-10.0.0.1:50216.service: Deactivated successfully. Mar 19 11:45:14.507761 systemd[1]: session-21.scope: Deactivated successfully. Mar 19 11:45:14.509795 systemd-logind[1460]: Session 21 logged out. Waiting for processes to exit. Mar 19 11:45:14.510710 systemd-logind[1460]: Removed session 21. Mar 19 11:45:19.513632 systemd[1]: Started sshd@21-10.0.0.94:22-10.0.0.1:50224.service - OpenSSH per-connection server daemon (10.0.0.1:50224). Mar 19 11:45:19.554486 sshd[4220]: Accepted publickey for core from 10.0.0.1 port 50224 ssh2: RSA SHA256:m+OHt/J3MiNhmiRtwZE4O3bs/RIw4O7lQdgYDDHmuIE Mar 19 11:45:19.555571 sshd-session[4220]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 19 11:45:19.559274 systemd-logind[1460]: New session 22 of user core. Mar 19 11:45:19.573377 systemd[1]: Started session-22.scope - Session 22 of User core. Mar 19 11:45:19.678026 sshd[4222]: Connection closed by 10.0.0.1 port 50224 Mar 19 11:45:19.678484 sshd-session[4220]: pam_unix(sshd:session): session closed for user core Mar 19 11:45:19.696570 systemd[1]: sshd@21-10.0.0.94:22-10.0.0.1:50224.service: Deactivated successfully. Mar 19 11:45:19.698123 systemd[1]: session-22.scope: Deactivated successfully. Mar 19 11:45:19.698819 systemd-logind[1460]: Session 22 logged out. Waiting for processes to exit. Mar 19 11:45:19.721514 systemd[1]: Started sshd@22-10.0.0.94:22-10.0.0.1:50232.service - OpenSSH per-connection server daemon (10.0.0.1:50232). Mar 19 11:45:19.722589 systemd-logind[1460]: Removed session 22. 
Mar 19 11:45:19.758230 sshd[4234]: Accepted publickey for core from 10.0.0.1 port 50232 ssh2: RSA SHA256:m+OHt/J3MiNhmiRtwZE4O3bs/RIw4O7lQdgYDDHmuIE Mar 19 11:45:19.759312 sshd-session[4234]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 19 11:45:19.762988 systemd-logind[1460]: New session 23 of user core. Mar 19 11:45:19.783392 systemd[1]: Started session-23.scope - Session 23 of User core. Mar 19 11:45:23.367911 containerd[1476]: time="2025-03-19T11:45:23.367858074Z" level=info msg="StopContainer for \"f18e3ee560b632389eacaf35c719967e4119070bc8af3c67e238ee3e14a80a77\" with timeout 30 (s)" Mar 19 11:45:23.368857 containerd[1476]: time="2025-03-19T11:45:23.368816002Z" level=info msg="Stop container \"f18e3ee560b632389eacaf35c719967e4119070bc8af3c67e238ee3e14a80a77\" with signal terminated" Mar 19 11:45:23.381034 systemd[1]: cri-containerd-f18e3ee560b632389eacaf35c719967e4119070bc8af3c67e238ee3e14a80a77.scope: Deactivated successfully. Mar 19 11:45:23.423445 containerd[1476]: time="2025-03-19T11:45:23.423316160Z" level=info msg="StopContainer for \"bf70da2d14bbe116e862660afbad9261649c1ca2a0d3dec27714c99771ca8a3a\" with timeout 2 (s)" Mar 19 11:45:23.423657 containerd[1476]: time="2025-03-19T11:45:23.423533393Z" level=info msg="Stop container \"bf70da2d14bbe116e862660afbad9261649c1ca2a0d3dec27714c99771ca8a3a\" with signal terminated" Mar 19 11:45:23.428108 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f18e3ee560b632389eacaf35c719967e4119070bc8af3c67e238ee3e14a80a77-rootfs.mount: Deactivated successfully. 
Mar 19 11:45:23.428755 containerd[1476]: time="2025-03-19T11:45:23.428684423Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Mar 19 11:45:23.432260 systemd-networkd[1398]: lxc_health: Link DOWN Mar 19 11:45:23.432267 systemd-networkd[1398]: lxc_health: Lost carrier Mar 19 11:45:23.438532 containerd[1476]: time="2025-03-19T11:45:23.438477779Z" level=info msg="shim disconnected" id=f18e3ee560b632389eacaf35c719967e4119070bc8af3c67e238ee3e14a80a77 namespace=k8s.io Mar 19 11:45:23.438532 containerd[1476]: time="2025-03-19T11:45:23.438529617Z" level=warning msg="cleaning up after shim disconnected" id=f18e3ee560b632389eacaf35c719967e4119070bc8af3c67e238ee3e14a80a77 namespace=k8s.io Mar 19 11:45:23.438673 containerd[1476]: time="2025-03-19T11:45:23.438539097Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 19 11:45:23.458390 systemd[1]: cri-containerd-bf70da2d14bbe116e862660afbad9261649c1ca2a0d3dec27714c99771ca8a3a.scope: Deactivated successfully. Mar 19 11:45:23.461174 systemd[1]: cri-containerd-bf70da2d14bbe116e862660afbad9261649c1ca2a0d3dec27714c99771ca8a3a.scope: Consumed 6.377s CPU time, 121.7M memory peak, 148K read from disk, 12.9M written to disk. Mar 19 11:45:23.480407 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-bf70da2d14bbe116e862660afbad9261649c1ca2a0d3dec27714c99771ca8a3a-rootfs.mount: Deactivated successfully. 
Mar 19 11:45:23.486682 containerd[1476]: time="2025-03-19T11:45:23.486618428Z" level=info msg="shim disconnected" id=bf70da2d14bbe116e862660afbad9261649c1ca2a0d3dec27714c99771ca8a3a namespace=k8s.io Mar 19 11:45:23.486682 containerd[1476]: time="2025-03-19T11:45:23.486676626Z" level=warning msg="cleaning up after shim disconnected" id=bf70da2d14bbe116e862660afbad9261649c1ca2a0d3dec27714c99771ca8a3a namespace=k8s.io Mar 19 11:45:23.486842 containerd[1476]: time="2025-03-19T11:45:23.486686306Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 19 11:45:23.489710 containerd[1476]: time="2025-03-19T11:45:23.489609569Z" level=info msg="StopContainer for \"f18e3ee560b632389eacaf35c719967e4119070bc8af3c67e238ee3e14a80a77\" returns successfully" Mar 19 11:45:23.490315 containerd[1476]: time="2025-03-19T11:45:23.490276267Z" level=info msg="StopPodSandbox for \"8c1539e1ba8e507e4ef7bb148b23f7be560fb77389f5a5ed0f78faaaa7a5e6b9\"" Mar 19 11:45:23.494943 containerd[1476]: time="2025-03-19T11:45:23.494898714Z" level=info msg="Container to stop \"f18e3ee560b632389eacaf35c719967e4119070bc8af3c67e238ee3e14a80a77\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Mar 19 11:45:23.496666 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-8c1539e1ba8e507e4ef7bb148b23f7be560fb77389f5a5ed0f78faaaa7a5e6b9-shm.mount: Deactivated successfully. Mar 19 11:45:23.501043 systemd[1]: cri-containerd-8c1539e1ba8e507e4ef7bb148b23f7be560fb77389f5a5ed0f78faaaa7a5e6b9.scope: Deactivated successfully. 
Mar 19 11:45:23.503212 containerd[1476]: time="2025-03-19T11:45:23.503108723Z" level=info msg="StopContainer for \"bf70da2d14bbe116e862660afbad9261649c1ca2a0d3dec27714c99771ca8a3a\" returns successfully" Mar 19 11:45:23.503656 containerd[1476]: time="2025-03-19T11:45:23.503633345Z" level=info msg="StopPodSandbox for \"3c2730ae58d976cb7cd9c6e8486f8eff44777926b448c2b93c8766d5d2719215\"" Mar 19 11:45:23.503721 containerd[1476]: time="2025-03-19T11:45:23.503667584Z" level=info msg="Container to stop \"9dd87ae111433cfcdc4dfa793fed4d5e00d5b43b3cf49e61447bfe8be1801060\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Mar 19 11:45:23.503721 containerd[1476]: time="2025-03-19T11:45:23.503679224Z" level=info msg="Container to stop \"bf70da2d14bbe116e862660afbad9261649c1ca2a0d3dec27714c99771ca8a3a\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Mar 19 11:45:23.503721 containerd[1476]: time="2025-03-19T11:45:23.503687223Z" level=info msg="Container to stop \"6b6bfa6abc4479e3c35c40b1ca96264ad876221251c9253cf6f3b95364758395\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Mar 19 11:45:23.503721 containerd[1476]: time="2025-03-19T11:45:23.503705583Z" level=info msg="Container to stop \"bdd2e46ba177457c1cb9d3032c46c0320e91694d2401ab2c17f0cebe6578be6b\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Mar 19 11:45:23.503721 containerd[1476]: time="2025-03-19T11:45:23.503713383Z" level=info msg="Container to stop \"c89ee235d85ae8389e8b54264768afecca894d9e78e8365dd493091f45b8a044\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Mar 19 11:45:23.506927 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-3c2730ae58d976cb7cd9c6e8486f8eff44777926b448c2b93c8766d5d2719215-shm.mount: Deactivated successfully. Mar 19 11:45:23.517644 systemd[1]: cri-containerd-3c2730ae58d976cb7cd9c6e8486f8eff44777926b448c2b93c8766d5d2719215.scope: Deactivated successfully. 
Mar 19 11:45:23.531126 containerd[1476]: time="2025-03-19T11:45:23.530978121Z" level=info msg="shim disconnected" id=8c1539e1ba8e507e4ef7bb148b23f7be560fb77389f5a5ed0f78faaaa7a5e6b9 namespace=k8s.io Mar 19 11:45:23.531126 containerd[1476]: time="2025-03-19T11:45:23.531031000Z" level=warning msg="cleaning up after shim disconnected" id=8c1539e1ba8e507e4ef7bb148b23f7be560fb77389f5a5ed0f78faaaa7a5e6b9 namespace=k8s.io Mar 19 11:45:23.531126 containerd[1476]: time="2025-03-19T11:45:23.531038799Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 19 11:45:23.542226 containerd[1476]: time="2025-03-19T11:45:23.542115993Z" level=info msg="TearDown network for sandbox \"8c1539e1ba8e507e4ef7bb148b23f7be560fb77389f5a5ed0f78faaaa7a5e6b9\" successfully" Mar 19 11:45:23.542226 containerd[1476]: time="2025-03-19T11:45:23.542218150Z" level=info msg="StopPodSandbox for \"8c1539e1ba8e507e4ef7bb148b23f7be560fb77389f5a5ed0f78faaaa7a5e6b9\" returns successfully" Mar 19 11:45:23.542437 containerd[1476]: time="2025-03-19T11:45:23.542296267Z" level=info msg="shim disconnected" id=3c2730ae58d976cb7cd9c6e8486f8eff44777926b448c2b93c8766d5d2719215 namespace=k8s.io Mar 19 11:45:23.542437 containerd[1476]: time="2025-03-19T11:45:23.542352625Z" level=warning msg="cleaning up after shim disconnected" id=3c2730ae58d976cb7cd9c6e8486f8eff44777926b448c2b93c8766d5d2719215 namespace=k8s.io Mar 19 11:45:23.542437 containerd[1476]: time="2025-03-19T11:45:23.542376105Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 19 11:45:23.554760 containerd[1476]: time="2025-03-19T11:45:23.554683138Z" level=info msg="TearDown network for sandbox \"3c2730ae58d976cb7cd9c6e8486f8eff44777926b448c2b93c8766d5d2719215\" successfully" Mar 19 11:45:23.554760 containerd[1476]: time="2025-03-19T11:45:23.554720096Z" level=info msg="StopPodSandbox for \"3c2730ae58d976cb7cd9c6e8486f8eff44777926b448c2b93c8766d5d2719215\" returns successfully" Mar 19 11:45:23.654004 kubelet[2569]: I0319 11:45:23.653799 2569 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/29f74f42-e131-48ab-8b9b-8fd9f5ae22d6-hostproc\") pod \"29f74f42-e131-48ab-8b9b-8fd9f5ae22d6\" (UID: \"29f74f42-e131-48ab-8b9b-8fd9f5ae22d6\") " Mar 19 11:45:23.654004 kubelet[2569]: I0319 11:45:23.653849 2569 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-274nb\" (UniqueName: \"kubernetes.io/projected/d2abdda0-cbe1-4dc3-87be-723205733bdd-kube-api-access-274nb\") pod \"d2abdda0-cbe1-4dc3-87be-723205733bdd\" (UID: \"d2abdda0-cbe1-4dc3-87be-723205733bdd\") " Mar 19 11:45:23.654004 kubelet[2569]: I0319 11:45:23.653871 2569 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/29f74f42-e131-48ab-8b9b-8fd9f5ae22d6-clustermesh-secrets\") pod \"29f74f42-e131-48ab-8b9b-8fd9f5ae22d6\" (UID: \"29f74f42-e131-48ab-8b9b-8fd9f5ae22d6\") " Mar 19 11:45:23.654004 kubelet[2569]: I0319 11:45:23.653887 2569 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/29f74f42-e131-48ab-8b9b-8fd9f5ae22d6-etc-cni-netd\") pod \"29f74f42-e131-48ab-8b9b-8fd9f5ae22d6\" (UID: \"29f74f42-e131-48ab-8b9b-8fd9f5ae22d6\") " Mar 19 11:45:23.654004 kubelet[2569]: I0319 11:45:23.653905 2569 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jjd2c\" (UniqueName: \"kubernetes.io/projected/29f74f42-e131-48ab-8b9b-8fd9f5ae22d6-kube-api-access-jjd2c\") pod \"29f74f42-e131-48ab-8b9b-8fd9f5ae22d6\" (UID: \"29f74f42-e131-48ab-8b9b-8fd9f5ae22d6\") " Mar 19 11:45:23.654004 kubelet[2569]: I0319 11:45:23.653920 2569 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/29f74f42-e131-48ab-8b9b-8fd9f5ae22d6-xtables-lock\") pod 
\"29f74f42-e131-48ab-8b9b-8fd9f5ae22d6\" (UID: \"29f74f42-e131-48ab-8b9b-8fd9f5ae22d6\") " Mar 19 11:45:23.655103 kubelet[2569]: I0319 11:45:23.653934 2569 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/29f74f42-e131-48ab-8b9b-8fd9f5ae22d6-lib-modules\") pod \"29f74f42-e131-48ab-8b9b-8fd9f5ae22d6\" (UID: \"29f74f42-e131-48ab-8b9b-8fd9f5ae22d6\") " Mar 19 11:45:23.655103 kubelet[2569]: I0319 11:45:23.653948 2569 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/29f74f42-e131-48ab-8b9b-8fd9f5ae22d6-cilium-cgroup\") pod \"29f74f42-e131-48ab-8b9b-8fd9f5ae22d6\" (UID: \"29f74f42-e131-48ab-8b9b-8fd9f5ae22d6\") " Mar 19 11:45:23.655103 kubelet[2569]: I0319 11:45:23.653964 2569 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/29f74f42-e131-48ab-8b9b-8fd9f5ae22d6-cni-path\") pod \"29f74f42-e131-48ab-8b9b-8fd9f5ae22d6\" (UID: \"29f74f42-e131-48ab-8b9b-8fd9f5ae22d6\") " Mar 19 11:45:23.655103 kubelet[2569]: I0319 11:45:23.653978 2569 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/29f74f42-e131-48ab-8b9b-8fd9f5ae22d6-host-proc-sys-kernel\") pod \"29f74f42-e131-48ab-8b9b-8fd9f5ae22d6\" (UID: \"29f74f42-e131-48ab-8b9b-8fd9f5ae22d6\") " Mar 19 11:45:23.655103 kubelet[2569]: I0319 11:45:23.653992 2569 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/29f74f42-e131-48ab-8b9b-8fd9f5ae22d6-bpf-maps\") pod \"29f74f42-e131-48ab-8b9b-8fd9f5ae22d6\" (UID: \"29f74f42-e131-48ab-8b9b-8fd9f5ae22d6\") " Mar 19 11:45:23.655103 kubelet[2569]: I0319 11:45:23.654009 2569 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: 
\"kubernetes.io/projected/29f74f42-e131-48ab-8b9b-8fd9f5ae22d6-hubble-tls\") pod \"29f74f42-e131-48ab-8b9b-8fd9f5ae22d6\" (UID: \"29f74f42-e131-48ab-8b9b-8fd9f5ae22d6\") " Mar 19 11:45:23.655241 kubelet[2569]: I0319 11:45:23.654024 2569 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/29f74f42-e131-48ab-8b9b-8fd9f5ae22d6-cilium-config-path\") pod \"29f74f42-e131-48ab-8b9b-8fd9f5ae22d6\" (UID: \"29f74f42-e131-48ab-8b9b-8fd9f5ae22d6\") " Mar 19 11:45:23.655241 kubelet[2569]: I0319 11:45:23.654038 2569 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/29f74f42-e131-48ab-8b9b-8fd9f5ae22d6-cilium-run\") pod \"29f74f42-e131-48ab-8b9b-8fd9f5ae22d6\" (UID: \"29f74f42-e131-48ab-8b9b-8fd9f5ae22d6\") " Mar 19 11:45:23.655241 kubelet[2569]: I0319 11:45:23.654054 2569 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d2abdda0-cbe1-4dc3-87be-723205733bdd-cilium-config-path\") pod \"d2abdda0-cbe1-4dc3-87be-723205733bdd\" (UID: \"d2abdda0-cbe1-4dc3-87be-723205733bdd\") " Mar 19 11:45:23.655241 kubelet[2569]: I0319 11:45:23.654069 2569 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/29f74f42-e131-48ab-8b9b-8fd9f5ae22d6-host-proc-sys-net\") pod \"29f74f42-e131-48ab-8b9b-8fd9f5ae22d6\" (UID: \"29f74f42-e131-48ab-8b9b-8fd9f5ae22d6\") " Mar 19 11:45:23.656070 kubelet[2569]: I0319 11:45:23.656039 2569 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/29f74f42-e131-48ab-8b9b-8fd9f5ae22d6-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "29f74f42-e131-48ab-8b9b-8fd9f5ae22d6" (UID: "29f74f42-e131-48ab-8b9b-8fd9f5ae22d6"). InnerVolumeSpecName "host-proc-sys-net". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 19 11:45:23.656144 kubelet[2569]: I0319 11:45:23.656097 2569 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/29f74f42-e131-48ab-8b9b-8fd9f5ae22d6-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "29f74f42-e131-48ab-8b9b-8fd9f5ae22d6" (UID: "29f74f42-e131-48ab-8b9b-8fd9f5ae22d6"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 19 11:45:23.658935 kubelet[2569]: I0319 11:45:23.658165 2569 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/29f74f42-e131-48ab-8b9b-8fd9f5ae22d6-hostproc" (OuterVolumeSpecName: "hostproc") pod "29f74f42-e131-48ab-8b9b-8fd9f5ae22d6" (UID: "29f74f42-e131-48ab-8b9b-8fd9f5ae22d6"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 19 11:45:23.658935 kubelet[2569]: I0319 11:45:23.658219 2569 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/29f74f42-e131-48ab-8b9b-8fd9f5ae22d6-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "29f74f42-e131-48ab-8b9b-8fd9f5ae22d6" (UID: "29f74f42-e131-48ab-8b9b-8fd9f5ae22d6"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 19 11:45:23.658935 kubelet[2569]: I0319 11:45:23.658240 2569 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/29f74f42-e131-48ab-8b9b-8fd9f5ae22d6-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "29f74f42-e131-48ab-8b9b-8fd9f5ae22d6" (UID: "29f74f42-e131-48ab-8b9b-8fd9f5ae22d6"). InnerVolumeSpecName "xtables-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 19 11:45:23.658935 kubelet[2569]: I0319 11:45:23.658279 2569 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/29f74f42-e131-48ab-8b9b-8fd9f5ae22d6-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "29f74f42-e131-48ab-8b9b-8fd9f5ae22d6" (UID: "29f74f42-e131-48ab-8b9b-8fd9f5ae22d6"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 19 11:45:23.658935 kubelet[2569]: I0319 11:45:23.658294 2569 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/29f74f42-e131-48ab-8b9b-8fd9f5ae22d6-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "29f74f42-e131-48ab-8b9b-8fd9f5ae22d6" (UID: "29f74f42-e131-48ab-8b9b-8fd9f5ae22d6"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 19 11:45:23.659101 kubelet[2569]: I0319 11:45:23.658308 2569 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/29f74f42-e131-48ab-8b9b-8fd9f5ae22d6-cni-path" (OuterVolumeSpecName: "cni-path") pod "29f74f42-e131-48ab-8b9b-8fd9f5ae22d6" (UID: "29f74f42-e131-48ab-8b9b-8fd9f5ae22d6"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 19 11:45:23.660085 kubelet[2569]: I0319 11:45:23.660055 2569 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/29f74f42-e131-48ab-8b9b-8fd9f5ae22d6-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "29f74f42-e131-48ab-8b9b-8fd9f5ae22d6" (UID: "29f74f42-e131-48ab-8b9b-8fd9f5ae22d6"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 19 11:45:23.660123 kubelet[2569]: I0319 11:45:23.660107 2569 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/29f74f42-e131-48ab-8b9b-8fd9f5ae22d6-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "29f74f42-e131-48ab-8b9b-8fd9f5ae22d6" (UID: "29f74f42-e131-48ab-8b9b-8fd9f5ae22d6"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 19 11:45:23.670768 kubelet[2569]: I0319 11:45:23.670707 2569 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/29f74f42-e131-48ab-8b9b-8fd9f5ae22d6-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "29f74f42-e131-48ab-8b9b-8fd9f5ae22d6" (UID: "29f74f42-e131-48ab-8b9b-8fd9f5ae22d6"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 19 11:45:23.671486 kubelet[2569]: I0319 11:45:23.671457 2569 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/29f74f42-e131-48ab-8b9b-8fd9f5ae22d6-kube-api-access-jjd2c" (OuterVolumeSpecName: "kube-api-access-jjd2c") pod "29f74f42-e131-48ab-8b9b-8fd9f5ae22d6" (UID: "29f74f42-e131-48ab-8b9b-8fd9f5ae22d6"). InnerVolumeSpecName "kube-api-access-jjd2c". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 11:45:23.671577 kubelet[2569]: I0319 11:45:23.671466 2569 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/29f74f42-e131-48ab-8b9b-8fd9f5ae22d6-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "29f74f42-e131-48ab-8b9b-8fd9f5ae22d6" (UID: "29f74f42-e131-48ab-8b9b-8fd9f5ae22d6"). InnerVolumeSpecName "clustermesh-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 11:45:23.672060 kubelet[2569]: I0319 11:45:23.671985 2569 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/29f74f42-e131-48ab-8b9b-8fd9f5ae22d6-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "29f74f42-e131-48ab-8b9b-8fd9f5ae22d6" (UID: "29f74f42-e131-48ab-8b9b-8fd9f5ae22d6"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 11:45:23.672142 kubelet[2569]: I0319 11:45:23.672116 2569 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d2abdda0-cbe1-4dc3-87be-723205733bdd-kube-api-access-274nb" (OuterVolumeSpecName: "kube-api-access-274nb") pod "d2abdda0-cbe1-4dc3-87be-723205733bdd" (UID: "d2abdda0-cbe1-4dc3-87be-723205733bdd"). InnerVolumeSpecName "kube-api-access-274nb". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 11:45:23.672756 kubelet[2569]: I0319 11:45:23.672730 2569 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d2abdda0-cbe1-4dc3-87be-723205733bdd-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "d2abdda0-cbe1-4dc3-87be-723205733bdd" (UID: "d2abdda0-cbe1-4dc3-87be-723205733bdd"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 19 11:45:23.754469 kubelet[2569]: I0319 11:45:23.754425 2569 reconciler_common.go:288] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/29f74f42-e131-48ab-8b9b-8fd9f5ae22d6-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Mar 19 11:45:23.754469 kubelet[2569]: I0319 11:45:23.754459 2569 reconciler_common.go:288] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/29f74f42-e131-48ab-8b9b-8fd9f5ae22d6-hostproc\") on node \"localhost\" DevicePath \"\"" Mar 19 11:45:23.754469 kubelet[2569]: I0319 11:45:23.754470 2569 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-274nb\" (UniqueName: \"kubernetes.io/projected/d2abdda0-cbe1-4dc3-87be-723205733bdd-kube-api-access-274nb\") on node \"localhost\" DevicePath \"\"" Mar 19 11:45:23.754469 kubelet[2569]: I0319 11:45:23.754478 2569 reconciler_common.go:288] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/29f74f42-e131-48ab-8b9b-8fd9f5ae22d6-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Mar 19 11:45:23.754644 kubelet[2569]: I0319 11:45:23.754487 2569 reconciler_common.go:288] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/29f74f42-e131-48ab-8b9b-8fd9f5ae22d6-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Mar 19 11:45:23.754644 kubelet[2569]: I0319 11:45:23.754497 2569 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-jjd2c\" (UniqueName: \"kubernetes.io/projected/29f74f42-e131-48ab-8b9b-8fd9f5ae22d6-kube-api-access-jjd2c\") on node \"localhost\" DevicePath \"\"" Mar 19 11:45:23.754644 kubelet[2569]: I0319 11:45:23.754504 2569 reconciler_common.go:288] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/29f74f42-e131-48ab-8b9b-8fd9f5ae22d6-xtables-lock\") on node \"localhost\" DevicePath \"\"" Mar 19 11:45:23.754644 
kubelet[2569]: I0319 11:45:23.754511 2569 reconciler_common.go:288] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/29f74f42-e131-48ab-8b9b-8fd9f5ae22d6-lib-modules\") on node \"localhost\" DevicePath \"\"" Mar 19 11:45:23.754644 kubelet[2569]: I0319 11:45:23.754518 2569 reconciler_common.go:288] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/29f74f42-e131-48ab-8b9b-8fd9f5ae22d6-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Mar 19 11:45:23.754644 kubelet[2569]: I0319 11:45:23.754527 2569 reconciler_common.go:288] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/29f74f42-e131-48ab-8b9b-8fd9f5ae22d6-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Mar 19 11:45:23.754644 kubelet[2569]: I0319 11:45:23.754536 2569 reconciler_common.go:288] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/29f74f42-e131-48ab-8b9b-8fd9f5ae22d6-cni-path\") on node \"localhost\" DevicePath \"\"" Mar 19 11:45:23.754644 kubelet[2569]: I0319 11:45:23.754543 2569 reconciler_common.go:288] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/29f74f42-e131-48ab-8b9b-8fd9f5ae22d6-bpf-maps\") on node \"localhost\" DevicePath \"\"" Mar 19 11:45:23.754832 kubelet[2569]: I0319 11:45:23.754550 2569 reconciler_common.go:288] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/29f74f42-e131-48ab-8b9b-8fd9f5ae22d6-cilium-run\") on node \"localhost\" DevicePath \"\"" Mar 19 11:45:23.754832 kubelet[2569]: I0319 11:45:23.754560 2569 reconciler_common.go:288] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d2abdda0-cbe1-4dc3-87be-723205733bdd-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Mar 19 11:45:23.754832 kubelet[2569]: I0319 11:45:23.754568 2569 reconciler_common.go:288] "Volume detached for volume \"hubble-tls\" 
(UniqueName: \"kubernetes.io/projected/29f74f42-e131-48ab-8b9b-8fd9f5ae22d6-hubble-tls\") on node \"localhost\" DevicePath \"\"" Mar 19 11:45:23.754832 kubelet[2569]: I0319 11:45:23.754576 2569 reconciler_common.go:288] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/29f74f42-e131-48ab-8b9b-8fd9f5ae22d6-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Mar 19 11:45:24.003200 systemd[1]: Removed slice kubepods-burstable-pod29f74f42_e131_48ab_8b9b_8fd9f5ae22d6.slice - libcontainer container kubepods-burstable-pod29f74f42_e131_48ab_8b9b_8fd9f5ae22d6.slice. Mar 19 11:45:24.003322 systemd[1]: kubepods-burstable-pod29f74f42_e131_48ab_8b9b_8fd9f5ae22d6.slice: Consumed 6.550s CPU time, 122.1M memory peak, 164K read from disk, 12.9M written to disk. Mar 19 11:45:24.004130 systemd[1]: Removed slice kubepods-besteffort-podd2abdda0_cbe1_4dc3_87be_723205733bdd.slice - libcontainer container kubepods-besteffort-podd2abdda0_cbe1_4dc3_87be_723205733bdd.slice. 
Mar 19 11:45:24.204774 kubelet[2569]: I0319 11:45:24.204678 2569 scope.go:117] "RemoveContainer" containerID="bf70da2d14bbe116e862660afbad9261649c1ca2a0d3dec27714c99771ca8a3a" Mar 19 11:45:24.207396 containerd[1476]: time="2025-03-19T11:45:24.207346674Z" level=info msg="RemoveContainer for \"bf70da2d14bbe116e862660afbad9261649c1ca2a0d3dec27714c99771ca8a3a\"" Mar 19 11:45:24.211308 containerd[1476]: time="2025-03-19T11:45:24.211262112Z" level=info msg="RemoveContainer for \"bf70da2d14bbe116e862660afbad9261649c1ca2a0d3dec27714c99771ca8a3a\" returns successfully" Mar 19 11:45:24.211722 kubelet[2569]: I0319 11:45:24.211612 2569 scope.go:117] "RemoveContainer" containerID="6b6bfa6abc4479e3c35c40b1ca96264ad876221251c9253cf6f3b95364758395" Mar 19 11:45:24.212942 containerd[1476]: time="2025-03-19T11:45:24.212784465Z" level=info msg="RemoveContainer for \"6b6bfa6abc4479e3c35c40b1ca96264ad876221251c9253cf6f3b95364758395\"" Mar 19 11:45:24.219885 containerd[1476]: time="2025-03-19T11:45:24.219841805Z" level=info msg="RemoveContainer for \"6b6bfa6abc4479e3c35c40b1ca96264ad876221251c9253cf6f3b95364758395\" returns successfully" Mar 19 11:45:24.220155 kubelet[2569]: I0319 11:45:24.220128 2569 scope.go:117] "RemoveContainer" containerID="c89ee235d85ae8389e8b54264768afecca894d9e78e8365dd493091f45b8a044" Mar 19 11:45:24.223433 containerd[1476]: time="2025-03-19T11:45:24.223390254Z" level=info msg="RemoveContainer for \"c89ee235d85ae8389e8b54264768afecca894d9e78e8365dd493091f45b8a044\"" Mar 19 11:45:24.233293 containerd[1476]: time="2025-03-19T11:45:24.233230908Z" level=info msg="RemoveContainer for \"c89ee235d85ae8389e8b54264768afecca894d9e78e8365dd493091f45b8a044\" returns successfully" Mar 19 11:45:24.233718 kubelet[2569]: I0319 11:45:24.233609 2569 scope.go:117] "RemoveContainer" containerID="bdd2e46ba177457c1cb9d3032c46c0320e91694d2401ab2c17f0cebe6578be6b" Mar 19 11:45:24.234576 containerd[1476]: time="2025-03-19T11:45:24.234531947Z" level=info msg="RemoveContainer for 
\"bdd2e46ba177457c1cb9d3032c46c0320e91694d2401ab2c17f0cebe6578be6b\"" Mar 19 11:45:24.239823 containerd[1476]: time="2025-03-19T11:45:24.239795703Z" level=info msg="RemoveContainer for \"bdd2e46ba177457c1cb9d3032c46c0320e91694d2401ab2c17f0cebe6578be6b\" returns successfully" Mar 19 11:45:24.240086 kubelet[2569]: I0319 11:45:24.240069 2569 scope.go:117] "RemoveContainer" containerID="9dd87ae111433cfcdc4dfa793fed4d5e00d5b43b3cf49e61447bfe8be1801060" Mar 19 11:45:24.241258 containerd[1476]: time="2025-03-19T11:45:24.241219419Z" level=info msg="RemoveContainer for \"9dd87ae111433cfcdc4dfa793fed4d5e00d5b43b3cf49e61447bfe8be1801060\"" Mar 19 11:45:24.243533 containerd[1476]: time="2025-03-19T11:45:24.243498868Z" level=info msg="RemoveContainer for \"9dd87ae111433cfcdc4dfa793fed4d5e00d5b43b3cf49e61447bfe8be1801060\" returns successfully" Mar 19 11:45:24.243678 kubelet[2569]: I0319 11:45:24.243648 2569 scope.go:117] "RemoveContainer" containerID="bf70da2d14bbe116e862660afbad9261649c1ca2a0d3dec27714c99771ca8a3a" Mar 19 11:45:24.243867 containerd[1476]: time="2025-03-19T11:45:24.243834457Z" level=error msg="ContainerStatus for \"bf70da2d14bbe116e862660afbad9261649c1ca2a0d3dec27714c99771ca8a3a\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"bf70da2d14bbe116e862660afbad9261649c1ca2a0d3dec27714c99771ca8a3a\": not found" Mar 19 11:45:24.246149 kubelet[2569]: E0319 11:45:24.246121 2569 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"bf70da2d14bbe116e862660afbad9261649c1ca2a0d3dec27714c99771ca8a3a\": not found" containerID="bf70da2d14bbe116e862660afbad9261649c1ca2a0d3dec27714c99771ca8a3a" Mar 19 11:45:24.246362 kubelet[2569]: I0319 11:45:24.246271 2569 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"bf70da2d14bbe116e862660afbad9261649c1ca2a0d3dec27714c99771ca8a3a"} err="failed to get 
container status \"bf70da2d14bbe116e862660afbad9261649c1ca2a0d3dec27714c99771ca8a3a\": rpc error: code = NotFound desc = an error occurred when try to find container \"bf70da2d14bbe116e862660afbad9261649c1ca2a0d3dec27714c99771ca8a3a\": not found" Mar 19 11:45:24.246502 kubelet[2569]: I0319 11:45:24.246434 2569 scope.go:117] "RemoveContainer" containerID="6b6bfa6abc4479e3c35c40b1ca96264ad876221251c9253cf6f3b95364758395" Mar 19 11:45:24.246726 containerd[1476]: time="2025-03-19T11:45:24.246695728Z" level=error msg="ContainerStatus for \"6b6bfa6abc4479e3c35c40b1ca96264ad876221251c9253cf6f3b95364758395\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"6b6bfa6abc4479e3c35c40b1ca96264ad876221251c9253cf6f3b95364758395\": not found" Mar 19 11:45:24.246828 kubelet[2569]: E0319 11:45:24.246809 2569 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"6b6bfa6abc4479e3c35c40b1ca96264ad876221251c9253cf6f3b95364758395\": not found" containerID="6b6bfa6abc4479e3c35c40b1ca96264ad876221251c9253cf6f3b95364758395" Mar 19 11:45:24.246857 kubelet[2569]: I0319 11:45:24.246836 2569 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"6b6bfa6abc4479e3c35c40b1ca96264ad876221251c9253cf6f3b95364758395"} err="failed to get container status \"6b6bfa6abc4479e3c35c40b1ca96264ad876221251c9253cf6f3b95364758395\": rpc error: code = NotFound desc = an error occurred when try to find container \"6b6bfa6abc4479e3c35c40b1ca96264ad876221251c9253cf6f3b95364758395\": not found" Mar 19 11:45:24.246857 kubelet[2569]: I0319 11:45:24.246851 2569 scope.go:117] "RemoveContainer" containerID="c89ee235d85ae8389e8b54264768afecca894d9e78e8365dd493091f45b8a044" Mar 19 11:45:24.247078 containerd[1476]: time="2025-03-19T11:45:24.247023118Z" level=error msg="ContainerStatus for 
\"c89ee235d85ae8389e8b54264768afecca894d9e78e8365dd493091f45b8a044\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"c89ee235d85ae8389e8b54264768afecca894d9e78e8365dd493091f45b8a044\": not found" Mar 19 11:45:24.247318 kubelet[2569]: E0319 11:45:24.247139 2569 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"c89ee235d85ae8389e8b54264768afecca894d9e78e8365dd493091f45b8a044\": not found" containerID="c89ee235d85ae8389e8b54264768afecca894d9e78e8365dd493091f45b8a044" Mar 19 11:45:24.247318 kubelet[2569]: I0319 11:45:24.247170 2569 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"c89ee235d85ae8389e8b54264768afecca894d9e78e8365dd493091f45b8a044"} err="failed to get container status \"c89ee235d85ae8389e8b54264768afecca894d9e78e8365dd493091f45b8a044\": rpc error: code = NotFound desc = an error occurred when try to find container \"c89ee235d85ae8389e8b54264768afecca894d9e78e8365dd493091f45b8a044\": not found" Mar 19 11:45:24.247318 kubelet[2569]: I0319 11:45:24.247185 2569 scope.go:117] "RemoveContainer" containerID="bdd2e46ba177457c1cb9d3032c46c0320e91694d2401ab2c17f0cebe6578be6b" Mar 19 11:45:24.247390 containerd[1476]: time="2025-03-19T11:45:24.247333828Z" level=error msg="ContainerStatus for \"bdd2e46ba177457c1cb9d3032c46c0320e91694d2401ab2c17f0cebe6578be6b\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"bdd2e46ba177457c1cb9d3032c46c0320e91694d2401ab2c17f0cebe6578be6b\": not found" Mar 19 11:45:24.247467 kubelet[2569]: E0319 11:45:24.247448 2569 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"bdd2e46ba177457c1cb9d3032c46c0320e91694d2401ab2c17f0cebe6578be6b\": not found" 
containerID="bdd2e46ba177457c1cb9d3032c46c0320e91694d2401ab2c17f0cebe6578be6b" Mar 19 11:45:24.247510 kubelet[2569]: I0319 11:45:24.247471 2569 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"bdd2e46ba177457c1cb9d3032c46c0320e91694d2401ab2c17f0cebe6578be6b"} err="failed to get container status \"bdd2e46ba177457c1cb9d3032c46c0320e91694d2401ab2c17f0cebe6578be6b\": rpc error: code = NotFound desc = an error occurred when try to find container \"bdd2e46ba177457c1cb9d3032c46c0320e91694d2401ab2c17f0cebe6578be6b\": not found" Mar 19 11:45:24.247510 kubelet[2569]: I0319 11:45:24.247486 2569 scope.go:117] "RemoveContainer" containerID="9dd87ae111433cfcdc4dfa793fed4d5e00d5b43b3cf49e61447bfe8be1801060" Mar 19 11:45:24.247656 containerd[1476]: time="2025-03-19T11:45:24.247628619Z" level=error msg="ContainerStatus for \"9dd87ae111433cfcdc4dfa793fed4d5e00d5b43b3cf49e61447bfe8be1801060\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"9dd87ae111433cfcdc4dfa793fed4d5e00d5b43b3cf49e61447bfe8be1801060\": not found" Mar 19 11:45:24.247764 kubelet[2569]: E0319 11:45:24.247745 2569 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"9dd87ae111433cfcdc4dfa793fed4d5e00d5b43b3cf49e61447bfe8be1801060\": not found" containerID="9dd87ae111433cfcdc4dfa793fed4d5e00d5b43b3cf49e61447bfe8be1801060" Mar 19 11:45:24.247811 kubelet[2569]: I0319 11:45:24.247771 2569 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"9dd87ae111433cfcdc4dfa793fed4d5e00d5b43b3cf49e61447bfe8be1801060"} err="failed to get container status \"9dd87ae111433cfcdc4dfa793fed4d5e00d5b43b3cf49e61447bfe8be1801060\": rpc error: code = NotFound desc = an error occurred when try to find container \"9dd87ae111433cfcdc4dfa793fed4d5e00d5b43b3cf49e61447bfe8be1801060\": not found" Mar 19 
11:45:24.247811 kubelet[2569]: I0319 11:45:24.247788 2569 scope.go:117] "RemoveContainer" containerID="f18e3ee560b632389eacaf35c719967e4119070bc8af3c67e238ee3e14a80a77" Mar 19 11:45:24.248619 containerd[1476]: time="2025-03-19T11:45:24.248596749Z" level=info msg="RemoveContainer for \"f18e3ee560b632389eacaf35c719967e4119070bc8af3c67e238ee3e14a80a77\"" Mar 19 11:45:24.250746 containerd[1476]: time="2025-03-19T11:45:24.250714483Z" level=info msg="RemoveContainer for \"f18e3ee560b632389eacaf35c719967e4119070bc8af3c67e238ee3e14a80a77\" returns successfully" Mar 19 11:45:24.250937 kubelet[2569]: I0319 11:45:24.250879 2569 scope.go:117] "RemoveContainer" containerID="f18e3ee560b632389eacaf35c719967e4119070bc8af3c67e238ee3e14a80a77" Mar 19 11:45:24.251160 containerd[1476]: time="2025-03-19T11:45:24.251132710Z" level=error msg="ContainerStatus for \"f18e3ee560b632389eacaf35c719967e4119070bc8af3c67e238ee3e14a80a77\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"f18e3ee560b632389eacaf35c719967e4119070bc8af3c67e238ee3e14a80a77\": not found" Mar 19 11:45:24.251317 kubelet[2569]: E0319 11:45:24.251266 2569 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"f18e3ee560b632389eacaf35c719967e4119070bc8af3c67e238ee3e14a80a77\": not found" containerID="f18e3ee560b632389eacaf35c719967e4119070bc8af3c67e238ee3e14a80a77" Mar 19 11:45:24.251317 kubelet[2569]: I0319 11:45:24.251292 2569 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"f18e3ee560b632389eacaf35c719967e4119070bc8af3c67e238ee3e14a80a77"} err="failed to get container status \"f18e3ee560b632389eacaf35c719967e4119070bc8af3c67e238ee3e14a80a77\": rpc error: code = NotFound desc = an error occurred when try to find container \"f18e3ee560b632389eacaf35c719967e4119070bc8af3c67e238ee3e14a80a77\": not found" Mar 19 11:45:24.400737 systemd[1]: 
run-containerd-io.containerd.runtime.v2.task-k8s.io-8c1539e1ba8e507e4ef7bb148b23f7be560fb77389f5a5ed0f78faaaa7a5e6b9-rootfs.mount: Deactivated successfully. Mar 19 11:45:24.400843 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3c2730ae58d976cb7cd9c6e8486f8eff44777926b448c2b93c8766d5d2719215-rootfs.mount: Deactivated successfully. Mar 19 11:45:24.400900 systemd[1]: var-lib-kubelet-pods-d2abdda0\x2dcbe1\x2d4dc3\x2d87be\x2d723205733bdd-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d274nb.mount: Deactivated successfully. Mar 19 11:45:24.400962 systemd[1]: var-lib-kubelet-pods-29f74f42\x2de131\x2d48ab\x2d8b9b\x2d8fd9f5ae22d6-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2djjd2c.mount: Deactivated successfully. Mar 19 11:45:24.401018 systemd[1]: var-lib-kubelet-pods-29f74f42\x2de131\x2d48ab\x2d8b9b\x2d8fd9f5ae22d6-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Mar 19 11:45:24.401068 systemd[1]: var-lib-kubelet-pods-29f74f42\x2de131\x2d48ab\x2d8b9b\x2d8fd9f5ae22d6-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Mar 19 11:45:25.328146 sshd[4237]: Connection closed by 10.0.0.1 port 50232 Mar 19 11:45:25.327998 sshd-session[4234]: pam_unix(sshd:session): session closed for user core Mar 19 11:45:25.345466 systemd[1]: sshd@22-10.0.0.94:22-10.0.0.1:50232.service: Deactivated successfully. Mar 19 11:45:25.348101 systemd[1]: session-23.scope: Deactivated successfully. Mar 19 11:45:25.348445 systemd[1]: session-23.scope: Consumed 2.926s CPU time, 27.6M memory peak. Mar 19 11:45:25.350011 systemd-logind[1460]: Session 23 logged out. Waiting for processes to exit. Mar 19 11:45:25.361542 systemd[1]: Started sshd@23-10.0.0.94:22-10.0.0.1:45164.service - OpenSSH per-connection server daemon (10.0.0.1:45164). Mar 19 11:45:25.362643 systemd-logind[1460]: Removed session 23. 
Mar 19 11:45:25.399850 sshd[4400]: Accepted publickey for core from 10.0.0.1 port 45164 ssh2: RSA SHA256:m+OHt/J3MiNhmiRtwZE4O3bs/RIw4O7lQdgYDDHmuIE Mar 19 11:45:25.401028 sshd-session[4400]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 19 11:45:25.405291 systemd-logind[1460]: New session 24 of user core. Mar 19 11:45:25.420478 systemd[1]: Started session-24.scope - Session 24 of User core. Mar 19 11:45:25.997725 kubelet[2569]: I0319 11:45:25.997684 2569 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="29f74f42-e131-48ab-8b9b-8fd9f5ae22d6" path="/var/lib/kubelet/pods/29f74f42-e131-48ab-8b9b-8fd9f5ae22d6/volumes" Mar 19 11:45:25.998234 kubelet[2569]: I0319 11:45:25.998203 2569 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d2abdda0-cbe1-4dc3-87be-723205733bdd" path="/var/lib/kubelet/pods/d2abdda0-cbe1-4dc3-87be-723205733bdd/volumes" Mar 19 11:45:26.124283 sshd[4403]: Connection closed by 10.0.0.1 port 45164 Mar 19 11:45:26.124732 sshd-session[4400]: pam_unix(sshd:session): session closed for user core Mar 19 11:45:26.132575 systemd[1]: sshd@23-10.0.0.94:22-10.0.0.1:45164.service: Deactivated successfully. Mar 19 11:45:26.136132 systemd[1]: session-24.scope: Deactivated successfully. Mar 19 11:45:26.137456 systemd-logind[1460]: Session 24 logged out. Waiting for processes to exit. 
Mar 19 11:45:26.140273 kubelet[2569]: E0319 11:45:26.138175 2569 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="29f74f42-e131-48ab-8b9b-8fd9f5ae22d6" containerName="clean-cilium-state" Mar 19 11:45:26.140273 kubelet[2569]: E0319 11:45:26.138204 2569 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="29f74f42-e131-48ab-8b9b-8fd9f5ae22d6" containerName="mount-bpf-fs" Mar 19 11:45:26.140273 kubelet[2569]: E0319 11:45:26.138210 2569 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="29f74f42-e131-48ab-8b9b-8fd9f5ae22d6" containerName="cilium-agent" Mar 19 11:45:26.140273 kubelet[2569]: E0319 11:45:26.138217 2569 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="29f74f42-e131-48ab-8b9b-8fd9f5ae22d6" containerName="mount-cgroup" Mar 19 11:45:26.140273 kubelet[2569]: E0319 11:45:26.138222 2569 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="29f74f42-e131-48ab-8b9b-8fd9f5ae22d6" containerName="apply-sysctl-overwrites" Mar 19 11:45:26.140273 kubelet[2569]: E0319 11:45:26.138228 2569 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="d2abdda0-cbe1-4dc3-87be-723205733bdd" containerName="cilium-operator" Mar 19 11:45:26.140273 kubelet[2569]: I0319 11:45:26.138264 2569 memory_manager.go:354] "RemoveStaleState removing state" podUID="29f74f42-e131-48ab-8b9b-8fd9f5ae22d6" containerName="cilium-agent" Mar 19 11:45:26.140273 kubelet[2569]: I0319 11:45:26.138272 2569 memory_manager.go:354] "RemoveStaleState removing state" podUID="d2abdda0-cbe1-4dc3-87be-723205733bdd" containerName="cilium-operator" Mar 19 11:45:26.145551 systemd[1]: Started sshd@24-10.0.0.94:22-10.0.0.1:45170.service - OpenSSH per-connection server daemon (10.0.0.1:45170). Mar 19 11:45:26.149898 systemd-logind[1460]: Removed session 24. 
Mar 19 11:45:26.160301 systemd[1]: Created slice kubepods-burstable-pod30c6a453_505a_408d_8e3b_6420ab699a90.slice - libcontainer container kubepods-burstable-pod30c6a453_505a_408d_8e3b_6420ab699a90.slice. Mar 19 11:45:26.167023 kubelet[2569]: I0319 11:45:26.166985 2569 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/30c6a453-505a-408d-8e3b-6420ab699a90-bpf-maps\") pod \"cilium-bbjh9\" (UID: \"30c6a453-505a-408d-8e3b-6420ab699a90\") " pod="kube-system/cilium-bbjh9" Mar 19 11:45:26.167023 kubelet[2569]: I0319 11:45:26.167023 2569 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nvgz5\" (UniqueName: \"kubernetes.io/projected/30c6a453-505a-408d-8e3b-6420ab699a90-kube-api-access-nvgz5\") pod \"cilium-bbjh9\" (UID: \"30c6a453-505a-408d-8e3b-6420ab699a90\") " pod="kube-system/cilium-bbjh9" Mar 19 11:45:26.167147 kubelet[2569]: I0319 11:45:26.167044 2569 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/30c6a453-505a-408d-8e3b-6420ab699a90-hubble-tls\") pod \"cilium-bbjh9\" (UID: \"30c6a453-505a-408d-8e3b-6420ab699a90\") " pod="kube-system/cilium-bbjh9" Mar 19 11:45:26.167147 kubelet[2569]: I0319 11:45:26.167062 2569 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/30c6a453-505a-408d-8e3b-6420ab699a90-etc-cni-netd\") pod \"cilium-bbjh9\" (UID: \"30c6a453-505a-408d-8e3b-6420ab699a90\") " pod="kube-system/cilium-bbjh9" Mar 19 11:45:26.167147 kubelet[2569]: I0319 11:45:26.167077 2569 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/30c6a453-505a-408d-8e3b-6420ab699a90-cilium-run\") pod \"cilium-bbjh9\" (UID: 
\"30c6a453-505a-408d-8e3b-6420ab699a90\") " pod="kube-system/cilium-bbjh9" Mar 19 11:45:26.167147 kubelet[2569]: I0319 11:45:26.167094 2569 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/30c6a453-505a-408d-8e3b-6420ab699a90-cilium-cgroup\") pod \"cilium-bbjh9\" (UID: \"30c6a453-505a-408d-8e3b-6420ab699a90\") " pod="kube-system/cilium-bbjh9" Mar 19 11:45:26.167147 kubelet[2569]: I0319 11:45:26.167110 2569 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/30c6a453-505a-408d-8e3b-6420ab699a90-cilium-config-path\") pod \"cilium-bbjh9\" (UID: \"30c6a453-505a-408d-8e3b-6420ab699a90\") " pod="kube-system/cilium-bbjh9" Mar 19 11:45:26.167147 kubelet[2569]: I0319 11:45:26.167124 2569 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/30c6a453-505a-408d-8e3b-6420ab699a90-lib-modules\") pod \"cilium-bbjh9\" (UID: \"30c6a453-505a-408d-8e3b-6420ab699a90\") " pod="kube-system/cilium-bbjh9" Mar 19 11:45:26.167281 kubelet[2569]: I0319 11:45:26.167140 2569 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/30c6a453-505a-408d-8e3b-6420ab699a90-hostproc\") pod \"cilium-bbjh9\" (UID: \"30c6a453-505a-408d-8e3b-6420ab699a90\") " pod="kube-system/cilium-bbjh9" Mar 19 11:45:26.167281 kubelet[2569]: I0319 11:45:26.167156 2569 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/30c6a453-505a-408d-8e3b-6420ab699a90-cilium-ipsec-secrets\") pod \"cilium-bbjh9\" (UID: \"30c6a453-505a-408d-8e3b-6420ab699a90\") " pod="kube-system/cilium-bbjh9" Mar 19 11:45:26.167281 kubelet[2569]: I0319 
11:45:26.167173 2569 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/30c6a453-505a-408d-8e3b-6420ab699a90-cni-path\") pod \"cilium-bbjh9\" (UID: \"30c6a453-505a-408d-8e3b-6420ab699a90\") " pod="kube-system/cilium-bbjh9" Mar 19 11:45:26.167281 kubelet[2569]: I0319 11:45:26.167190 2569 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/30c6a453-505a-408d-8e3b-6420ab699a90-host-proc-sys-net\") pod \"cilium-bbjh9\" (UID: \"30c6a453-505a-408d-8e3b-6420ab699a90\") " pod="kube-system/cilium-bbjh9" Mar 19 11:45:26.167281 kubelet[2569]: I0319 11:45:26.167205 2569 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/30c6a453-505a-408d-8e3b-6420ab699a90-host-proc-sys-kernel\") pod \"cilium-bbjh9\" (UID: \"30c6a453-505a-408d-8e3b-6420ab699a90\") " pod="kube-system/cilium-bbjh9" Mar 19 11:45:26.167281 kubelet[2569]: I0319 11:45:26.167220 2569 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/30c6a453-505a-408d-8e3b-6420ab699a90-xtables-lock\") pod \"cilium-bbjh9\" (UID: \"30c6a453-505a-408d-8e3b-6420ab699a90\") " pod="kube-system/cilium-bbjh9" Mar 19 11:45:26.167393 kubelet[2569]: I0319 11:45:26.167236 2569 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/30c6a453-505a-408d-8e3b-6420ab699a90-clustermesh-secrets\") pod \"cilium-bbjh9\" (UID: \"30c6a453-505a-408d-8e3b-6420ab699a90\") " pod="kube-system/cilium-bbjh9" Mar 19 11:45:26.194016 sshd[4414]: Accepted publickey for core from 10.0.0.1 port 45170 ssh2: RSA SHA256:m+OHt/J3MiNhmiRtwZE4O3bs/RIw4O7lQdgYDDHmuIE Mar 19 
11:45:26.195223 sshd-session[4414]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 19 11:45:26.199302 systemd-logind[1460]: New session 25 of user core. Mar 19 11:45:26.205402 systemd[1]: Started session-25.scope - Session 25 of User core. Mar 19 11:45:26.254400 sshd[4417]: Connection closed by 10.0.0.1 port 45170 Mar 19 11:45:26.254125 sshd-session[4414]: pam_unix(sshd:session): session closed for user core Mar 19 11:45:26.267490 systemd[1]: sshd@24-10.0.0.94:22-10.0.0.1:45170.service: Deactivated successfully. Mar 19 11:45:26.270768 systemd[1]: session-25.scope: Deactivated successfully. Mar 19 11:45:26.272623 systemd-logind[1460]: Session 25 logged out. Waiting for processes to exit. Mar 19 11:45:26.293493 systemd[1]: Started sshd@25-10.0.0.94:22-10.0.0.1:45176.service - OpenSSH per-connection server daemon (10.0.0.1:45176). Mar 19 11:45:26.294288 systemd-logind[1460]: Removed session 25. Mar 19 11:45:26.331113 sshd[4427]: Accepted publickey for core from 10.0.0.1 port 45176 ssh2: RSA SHA256:m+OHt/J3MiNhmiRtwZE4O3bs/RIw4O7lQdgYDDHmuIE Mar 19 11:45:26.332230 sshd-session[4427]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 19 11:45:26.335896 systemd-logind[1460]: New session 26 of user core. Mar 19 11:45:26.343460 systemd[1]: Started session-26.scope - Session 26 of User core. Mar 19 11:45:26.464619 containerd[1476]: time="2025-03-19T11:45:26.464569944Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-bbjh9,Uid:30c6a453-505a-408d-8e3b-6420ab699a90,Namespace:kube-system,Attempt:0,}" Mar 19 11:45:26.480779 containerd[1476]: time="2025-03-19T11:45:26.480704540Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 19 11:45:26.480779 containerd[1476]: time="2025-03-19T11:45:26.480751379Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 19 11:45:26.480947 containerd[1476]: time="2025-03-19T11:45:26.480761858Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 19 11:45:26.480947 containerd[1476]: time="2025-03-19T11:45:26.480824817Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 19 11:45:26.501425 systemd[1]: Started cri-containerd-5445eeabfc2986465edb3f6c70d9fcbb58049e8b7bf8e9e82055810da4c4f62b.scope - libcontainer container 5445eeabfc2986465edb3f6c70d9fcbb58049e8b7bf8e9e82055810da4c4f62b. Mar 19 11:45:26.519284 containerd[1476]: time="2025-03-19T11:45:26.519022925Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-bbjh9,Uid:30c6a453-505a-408d-8e3b-6420ab699a90,Namespace:kube-system,Attempt:0,} returns sandbox id \"5445eeabfc2986465edb3f6c70d9fcbb58049e8b7bf8e9e82055810da4c4f62b\"" Mar 19 11:45:26.522375 containerd[1476]: time="2025-03-19T11:45:26.522340634Z" level=info msg="CreateContainer within sandbox \"5445eeabfc2986465edb3f6c70d9fcbb58049e8b7bf8e9e82055810da4c4f62b\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Mar 19 11:45:26.539105 containerd[1476]: time="2025-03-19T11:45:26.539066893Z" level=info msg="CreateContainer within sandbox \"5445eeabfc2986465edb3f6c70d9fcbb58049e8b7bf8e9e82055810da4c4f62b\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"b931ae92933b9e2bd2149691d4a273c4fac86016bd28729fc941b4a2f11099fd\"" Mar 19 11:45:26.539896 containerd[1476]: time="2025-03-19T11:45:26.539689596Z" level=info msg="StartContainer for \"b931ae92933b9e2bd2149691d4a273c4fac86016bd28729fc941b4a2f11099fd\"" Mar 19 11:45:26.563399 systemd[1]: Started cri-containerd-b931ae92933b9e2bd2149691d4a273c4fac86016bd28729fc941b4a2f11099fd.scope - libcontainer container 
b931ae92933b9e2bd2149691d4a273c4fac86016bd28729fc941b4a2f11099fd. Mar 19 11:45:26.584611 containerd[1476]: time="2025-03-19T11:45:26.584562481Z" level=info msg="StartContainer for \"b931ae92933b9e2bd2149691d4a273c4fac86016bd28729fc941b4a2f11099fd\" returns successfully" Mar 19 11:45:26.611616 systemd[1]: cri-containerd-b931ae92933b9e2bd2149691d4a273c4fac86016bd28729fc941b4a2f11099fd.scope: Deactivated successfully. Mar 19 11:45:26.638010 containerd[1476]: time="2025-03-19T11:45:26.637922451Z" level=info msg="shim disconnected" id=b931ae92933b9e2bd2149691d4a273c4fac86016bd28729fc941b4a2f11099fd namespace=k8s.io Mar 19 11:45:26.638010 containerd[1476]: time="2025-03-19T11:45:26.637972690Z" level=warning msg="cleaning up after shim disconnected" id=b931ae92933b9e2bd2149691d4a273c4fac86016bd28729fc941b4a2f11099fd namespace=k8s.io Mar 19 11:45:26.638010 containerd[1476]: time="2025-03-19T11:45:26.637980650Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 19 11:45:27.219325 containerd[1476]: time="2025-03-19T11:45:27.219288936Z" level=info msg="CreateContainer within sandbox \"5445eeabfc2986465edb3f6c70d9fcbb58049e8b7bf8e9e82055810da4c4f62b\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Mar 19 11:45:27.228429 containerd[1476]: time="2025-03-19T11:45:27.228371541Z" level=info msg="CreateContainer within sandbox \"5445eeabfc2986465edb3f6c70d9fcbb58049e8b7bf8e9e82055810da4c4f62b\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"c31cd496c6ebbf095aba65a26bcbd1c0487501c766558266142c6735a7885d23\"" Mar 19 11:45:27.228938 containerd[1476]: time="2025-03-19T11:45:27.228906287Z" level=info msg="StartContainer for \"c31cd496c6ebbf095aba65a26bcbd1c0487501c766558266142c6735a7885d23\"" Mar 19 11:45:27.256401 systemd[1]: Started cri-containerd-c31cd496c6ebbf095aba65a26bcbd1c0487501c766558266142c6735a7885d23.scope - libcontainer container c31cd496c6ebbf095aba65a26bcbd1c0487501c766558266142c6735a7885d23. 
Mar 19 11:45:27.278592 containerd[1476]: time="2025-03-19T11:45:27.278534967Z" level=info msg="StartContainer for \"c31cd496c6ebbf095aba65a26bcbd1c0487501c766558266142c6735a7885d23\" returns successfully"
Mar 19 11:45:27.286921 systemd[1]: cri-containerd-c31cd496c6ebbf095aba65a26bcbd1c0487501c766558266142c6735a7885d23.scope: Deactivated successfully.
Mar 19 11:45:27.301988 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c31cd496c6ebbf095aba65a26bcbd1c0487501c766558266142c6735a7885d23-rootfs.mount: Deactivated successfully.
Mar 19 11:45:27.305985 containerd[1476]: time="2025-03-19T11:45:27.305932660Z" level=info msg="shim disconnected" id=c31cd496c6ebbf095aba65a26bcbd1c0487501c766558266142c6735a7885d23 namespace=k8s.io
Mar 19 11:45:27.305985 containerd[1476]: time="2025-03-19T11:45:27.305982658Z" level=warning msg="cleaning up after shim disconnected" id=c31cd496c6ebbf095aba65a26bcbd1c0487501c766558266142c6735a7885d23 namespace=k8s.io
Mar 19 11:45:27.305985 containerd[1476]: time="2025-03-19T11:45:27.305990498Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 19 11:45:28.057720 kubelet[2569]: E0319 11:45:28.057671 2569 kubelet.go:2901] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Mar 19 11:45:28.229941 containerd[1476]: time="2025-03-19T11:45:28.229885802Z" level=info msg="CreateContainer within sandbox \"5445eeabfc2986465edb3f6c70d9fcbb58049e8b7bf8e9e82055810da4c4f62b\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Mar 19 11:45:28.241278 containerd[1476]: time="2025-03-19T11:45:28.241183769Z" level=info msg="CreateContainer within sandbox \"5445eeabfc2986465edb3f6c70d9fcbb58049e8b7bf8e9e82055810da4c4f62b\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"f8547a9e3b3c4435d005af07385ddf501c4433eb5a8cc93130c540d8a467d0f9\""
Mar 19 11:45:28.241929 containerd[1476]: time="2025-03-19T11:45:28.241899992Z" level=info msg="StartContainer for \"f8547a9e3b3c4435d005af07385ddf501c4433eb5a8cc93130c540d8a467d0f9\""
Mar 19 11:45:28.267412 systemd[1]: Started cri-containerd-f8547a9e3b3c4435d005af07385ddf501c4433eb5a8cc93130c540d8a467d0f9.scope - libcontainer container f8547a9e3b3c4435d005af07385ddf501c4433eb5a8cc93130c540d8a467d0f9.
Mar 19 11:45:28.295757 containerd[1476]: time="2025-03-19T11:45:28.295653815Z" level=info msg="StartContainer for \"f8547a9e3b3c4435d005af07385ddf501c4433eb5a8cc93130c540d8a467d0f9\" returns successfully"
Mar 19 11:45:28.298606 systemd[1]: cri-containerd-f8547a9e3b3c4435d005af07385ddf501c4433eb5a8cc93130c540d8a467d0f9.scope: Deactivated successfully.
Mar 19 11:45:28.315541 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f8547a9e3b3c4435d005af07385ddf501c4433eb5a8cc93130c540d8a467d0f9-rootfs.mount: Deactivated successfully.
Mar 19 11:45:28.321062 containerd[1476]: time="2025-03-19T11:45:28.320842887Z" level=info msg="shim disconnected" id=f8547a9e3b3c4435d005af07385ddf501c4433eb5a8cc93130c540d8a467d0f9 namespace=k8s.io
Mar 19 11:45:28.321062 containerd[1476]: time="2025-03-19T11:45:28.320971924Z" level=warning msg="cleaning up after shim disconnected" id=f8547a9e3b3c4435d005af07385ddf501c4433eb5a8cc93130c540d8a467d0f9 namespace=k8s.io
Mar 19 11:45:28.321062 containerd[1476]: time="2025-03-19T11:45:28.320991083Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 19 11:45:29.229854 containerd[1476]: time="2025-03-19T11:45:29.229770925Z" level=info msg="CreateContainer within sandbox \"5445eeabfc2986465edb3f6c70d9fcbb58049e8b7bf8e9e82055810da4c4f62b\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Mar 19 11:45:29.239938 containerd[1476]: time="2025-03-19T11:45:29.239890058Z" level=info msg="CreateContainer within sandbox \"5445eeabfc2986465edb3f6c70d9fcbb58049e8b7bf8e9e82055810da4c4f62b\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"fcd5e960981308394277a13da17f6dc39ac180db00770317be2f3a27d954cc1c\""
Mar 19 11:45:29.240618 containerd[1476]: time="2025-03-19T11:45:29.240309688Z" level=info msg="StartContainer for \"fcd5e960981308394277a13da17f6dc39ac180db00770317be2f3a27d954cc1c\""
Mar 19 11:45:29.269392 systemd[1]: Started cri-containerd-fcd5e960981308394277a13da17f6dc39ac180db00770317be2f3a27d954cc1c.scope - libcontainer container fcd5e960981308394277a13da17f6dc39ac180db00770317be2f3a27d954cc1c.
Mar 19 11:45:29.289291 systemd[1]: cri-containerd-fcd5e960981308394277a13da17f6dc39ac180db00770317be2f3a27d954cc1c.scope: Deactivated successfully.
Mar 19 11:45:29.290214 containerd[1476]: time="2025-03-19T11:45:29.290031769Z" level=info msg="StartContainer for \"fcd5e960981308394277a13da17f6dc39ac180db00770317be2f3a27d954cc1c\" returns successfully"
Mar 19 11:45:29.304823 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-fcd5e960981308394277a13da17f6dc39ac180db00770317be2f3a27d954cc1c-rootfs.mount: Deactivated successfully.
Mar 19 11:45:29.307854 containerd[1476]: time="2025-03-19T11:45:29.307668532Z" level=info msg="shim disconnected" id=fcd5e960981308394277a13da17f6dc39ac180db00770317be2f3a27d954cc1c namespace=k8s.io
Mar 19 11:45:29.307854 containerd[1476]: time="2025-03-19T11:45:29.307719971Z" level=warning msg="cleaning up after shim disconnected" id=fcd5e960981308394277a13da17f6dc39ac180db00770317be2f3a27d954cc1c namespace=k8s.io
Mar 19 11:45:29.307854 containerd[1476]: time="2025-03-19T11:45:29.307730011Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 19 11:45:30.053878 kubelet[2569]: I0319 11:45:30.053802 2569 setters.go:600] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-03-19T11:45:30Z","lastTransitionTime":"2025-03-19T11:45:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Mar 19 11:45:30.233837 containerd[1476]: time="2025-03-19T11:45:30.233785012Z" level=info msg="CreateContainer within sandbox \"5445eeabfc2986465edb3f6c70d9fcbb58049e8b7bf8e9e82055810da4c4f62b\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Mar 19 11:45:30.254224 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2312209060.mount: Deactivated successfully.
Mar 19 11:45:30.256014 containerd[1476]: time="2025-03-19T11:45:30.255964588Z" level=info msg="CreateContainer within sandbox \"5445eeabfc2986465edb3f6c70d9fcbb58049e8b7bf8e9e82055810da4c4f62b\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"f8d5730019d873bb9d40f2ff1229741935d0272f88e1737e590a41dca47b1b0c\""
Mar 19 11:45:30.257207 containerd[1476]: time="2025-03-19T11:45:30.256634574Z" level=info msg="StartContainer for \"f8d5730019d873bb9d40f2ff1229741935d0272f88e1737e590a41dca47b1b0c\""
Mar 19 11:45:30.287456 systemd[1]: Started cri-containerd-f8d5730019d873bb9d40f2ff1229741935d0272f88e1737e590a41dca47b1b0c.scope - libcontainer container f8d5730019d873bb9d40f2ff1229741935d0272f88e1737e590a41dca47b1b0c.
Mar 19 11:45:30.313622 containerd[1476]: time="2025-03-19T11:45:30.313495303Z" level=info msg="StartContainer for \"f8d5730019d873bb9d40f2ff1229741935d0272f88e1737e590a41dca47b1b0c\" returns successfully"
Mar 19 11:45:30.575283 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce))
Mar 19 11:45:31.248826 kubelet[2569]: I0319 11:45:31.248747 2569 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-bbjh9" podStartSLOduration=5.248732248 podStartE2EDuration="5.248732248s" podCreationTimestamp="2025-03-19 11:45:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-19 11:45:31.248293576 +0000 UTC m=+83.326361060" watchObservedRunningTime="2025-03-19 11:45:31.248732248 +0000 UTC m=+83.326799692"
Mar 19 11:45:33.349740 systemd-networkd[1398]: lxc_health: Link UP
Mar 19 11:45:33.357453 systemd-networkd[1398]: lxc_health: Gained carrier
Mar 19 11:45:34.907647 systemd-networkd[1398]: lxc_health: Gained IPv6LL
Mar 19 11:45:39.046443 sshd[4430]: Connection closed by 10.0.0.1 port 45176
Mar 19 11:45:39.046758 sshd-session[4427]: pam_unix(sshd:session): session closed for user core
Mar 19 11:45:39.049939 systemd[1]: sshd@25-10.0.0.94:22-10.0.0.1:45176.service: Deactivated successfully.
Mar 19 11:45:39.051670 systemd[1]: session-26.scope: Deactivated successfully.
Mar 19 11:45:39.052291 systemd-logind[1460]: Session 26 logged out. Waiting for processes to exit.
Mar 19 11:45:39.053120 systemd-logind[1460]: Removed session 26.