Mar 19 11:43:35.891624 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Mar 19 11:43:35.891648 kernel: Linux version 6.6.83-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.43 p3) 2.43.1) #1 SMP PREEMPT Wed Mar 19 10:15:40 -00 2025
Mar 19 11:43:35.891658 kernel: KASLR enabled
Mar 19 11:43:35.891663 kernel: efi: EFI v2.7 by EDK II
Mar 19 11:43:35.891669 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdbbae018 ACPI 2.0=0xd9b43018 RNG=0xd9b43a18 MEMRESERVE=0xd9b40218
Mar 19 11:43:35.891674 kernel: random: crng init done
Mar 19 11:43:35.891683 kernel: secureboot: Secure boot disabled
Mar 19 11:43:35.891689 kernel: ACPI: Early table checksum verification disabled
Mar 19 11:43:35.891695 kernel: ACPI: RSDP 0x00000000D9B43018 000024 (v02 BOCHS )
Mar 19 11:43:35.891702 kernel: ACPI: XSDT 0x00000000D9B43F18 000064 (v01 BOCHS BXPC 00000001 01000013)
Mar 19 11:43:35.891708 kernel: ACPI: FACP 0x00000000D9B43B18 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Mar 19 11:43:35.891714 kernel: ACPI: DSDT 0x00000000D9B41018 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Mar 19 11:43:35.891719 kernel: ACPI: APIC 0x00000000D9B43C98 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Mar 19 11:43:35.891725 kernel: ACPI: PPTT 0x00000000D9B43098 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
Mar 19 11:43:35.891732 kernel: ACPI: GTDT 0x00000000D9B43818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Mar 19 11:43:35.891739 kernel: ACPI: MCFG 0x00000000D9B43A98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 19 11:43:35.891746 kernel: ACPI: SPCR 0x00000000D9B43918 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Mar 19 11:43:35.891752 kernel: ACPI: DBG2 0x00000000D9B43998 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Mar 19 11:43:35.891758 kernel: ACPI: IORT 0x00000000D9B43198 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Mar 19 11:43:35.891764 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
Mar 19 11:43:35.891777 kernel: NUMA: Failed to initialise from firmware
Mar 19 11:43:35.891784 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
Mar 19 11:43:35.891790 kernel: NUMA: NODE_DATA [mem 0xdc958800-0xdc95dfff]
Mar 19 11:43:35.891796 kernel: Zone ranges:
Mar 19 11:43:35.891802 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff]
Mar 19 11:43:35.891810 kernel: DMA32 empty
Mar 19 11:43:35.891816 kernel: Normal empty
Mar 19 11:43:35.891822 kernel: Movable zone start for each node
Mar 19 11:43:35.891828 kernel: Early memory node ranges
Mar 19 11:43:35.891834 kernel: node 0: [mem 0x0000000040000000-0x00000000d967ffff]
Mar 19 11:43:35.891840 kernel: node 0: [mem 0x00000000d9680000-0x00000000d968ffff]
Mar 19 11:43:35.891846 kernel: node 0: [mem 0x00000000d9690000-0x00000000d976ffff]
Mar 19 11:43:35.891852 kernel: node 0: [mem 0x00000000d9770000-0x00000000d9b3ffff]
Mar 19 11:43:35.891858 kernel: node 0: [mem 0x00000000d9b40000-0x00000000dce1ffff]
Mar 19 11:43:35.891864 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff]
Mar 19 11:43:35.891870 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff]
Mar 19 11:43:35.891876 kernel: node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff]
Mar 19 11:43:35.891883 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff]
Mar 19 11:43:35.891889 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
Mar 19 11:43:35.891895 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
Mar 19 11:43:35.891904 kernel: psci: probing for conduit method from ACPI.
Mar 19 11:43:35.891911 kernel: psci: PSCIv1.1 detected in firmware.
Mar 19 11:43:35.891917 kernel: psci: Using standard PSCI v0.2 function IDs
Mar 19 11:43:35.891925 kernel: psci: Trusted OS migration not required
Mar 19 11:43:35.891932 kernel: psci: SMC Calling Convention v1.1
Mar 19 11:43:35.891938 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Mar 19 11:43:35.891945 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976
Mar 19 11:43:35.891951 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096
Mar 19 11:43:35.891958 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
Mar 19 11:43:35.891964 kernel: Detected PIPT I-cache on CPU0
Mar 19 11:43:35.891970 kernel: CPU features: detected: GIC system register CPU interface
Mar 19 11:43:35.891977 kernel: CPU features: detected: Hardware dirty bit management
Mar 19 11:43:35.891983 kernel: CPU features: detected: Spectre-v4
Mar 19 11:43:35.891991 kernel: CPU features: detected: Spectre-BHB
Mar 19 11:43:35.891997 kernel: CPU features: kernel page table isolation forced ON by KASLR
Mar 19 11:43:35.892004 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Mar 19 11:43:35.892010 kernel: CPU features: detected: ARM erratum 1418040
Mar 19 11:43:35.892017 kernel: CPU features: detected: SSBS not fully self-synchronizing
Mar 19 11:43:35.892023 kernel: alternatives: applying boot alternatives
Mar 19 11:43:35.892031 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=41cc5bdd62754423bbb4bbec0e0356d6a1ab5d0ac0f2396a30318f9fb189e7eb
Mar 19 11:43:35.892038 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Mar 19 11:43:35.892045 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Mar 19 11:43:35.892051 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Mar 19 11:43:35.892057 kernel: Fallback order for Node 0: 0
Mar 19 11:43:35.892065 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024
Mar 19 11:43:35.892072 kernel: Policy zone: DMA
Mar 19 11:43:35.892078 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Mar 19 11:43:35.892084 kernel: software IO TLB: area num 4.
Mar 19 11:43:35.892091 kernel: software IO TLB: mapped [mem 0x00000000d2e00000-0x00000000d6e00000] (64MB)
Mar 19 11:43:35.892097 kernel: Memory: 2387540K/2572288K available (10304K kernel code, 2186K rwdata, 8096K rodata, 38336K init, 897K bss, 184748K reserved, 0K cma-reserved)
Mar 19 11:43:35.892104 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Mar 19 11:43:35.892111 kernel: rcu: Preemptible hierarchical RCU implementation.
Mar 19 11:43:35.892118 kernel: rcu: RCU event tracing is enabled.
Mar 19 11:43:35.892124 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Mar 19 11:43:35.892131 kernel: Trampoline variant of Tasks RCU enabled.
Mar 19 11:43:35.892137 kernel: Tracing variant of Tasks RCU enabled.
Mar 19 11:43:35.892145 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Mar 19 11:43:35.892151 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Mar 19 11:43:35.892158 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Mar 19 11:43:35.892164 kernel: GICv3: 256 SPIs implemented
Mar 19 11:43:35.892170 kernel: GICv3: 0 Extended SPIs implemented
Mar 19 11:43:35.892176 kernel: Root IRQ handler: gic_handle_irq
Mar 19 11:43:35.892183 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Mar 19 11:43:35.892189 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Mar 19 11:43:35.892195 kernel: ITS [mem 0x08080000-0x0809ffff]
Mar 19 11:43:35.892202 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400c0000 (indirect, esz 8, psz 64K, shr 1)
Mar 19 11:43:35.892208 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400d0000 (flat, esz 8, psz 64K, shr 1)
Mar 19 11:43:35.892216 kernel: GICv3: using LPI property table @0x00000000400f0000
Mar 19 11:43:35.892223 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040100000
Mar 19 11:43:35.892240 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Mar 19 11:43:35.892247 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Mar 19 11:43:35.892254 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Mar 19 11:43:35.892260 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Mar 19 11:43:35.892267 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Mar 19 11:43:35.892273 kernel: arm-pv: using stolen time PV
Mar 19 11:43:35.892280 kernel: Console: colour dummy device 80x25
Mar 19 11:43:35.892286 kernel: ACPI: Core revision 20230628
Mar 19 11:43:35.892293 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Mar 19 11:43:35.892302 kernel: pid_max: default: 32768 minimum: 301
Mar 19 11:43:35.892308 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Mar 19 11:43:35.892315 kernel: landlock: Up and running.
Mar 19 11:43:35.892321 kernel: SELinux: Initializing.
Mar 19 11:43:35.892328 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Mar 19 11:43:35.892334 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Mar 19 11:43:35.892341 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Mar 19 11:43:35.892348 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Mar 19 11:43:35.892354 kernel: rcu: Hierarchical SRCU implementation.
Mar 19 11:43:35.892362 kernel: rcu: Max phase no-delay instances is 400.
Mar 19 11:43:35.892369 kernel: Platform MSI: ITS@0x8080000 domain created
Mar 19 11:43:35.892376 kernel: PCI/MSI: ITS@0x8080000 domain created
Mar 19 11:43:35.892382 kernel: Remapping and enabling EFI services.
Mar 19 11:43:35.892388 kernel: smp: Bringing up secondary CPUs ...
Mar 19 11:43:35.892395 kernel: Detected PIPT I-cache on CPU1
Mar 19 11:43:35.892402 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Mar 19 11:43:35.892408 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040110000
Mar 19 11:43:35.892415 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Mar 19 11:43:35.892423 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Mar 19 11:43:35.892429 kernel: Detected PIPT I-cache on CPU2
Mar 19 11:43:35.892441 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
Mar 19 11:43:35.892450 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040120000
Mar 19 11:43:35.892457 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Mar 19 11:43:35.892463 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
Mar 19 11:43:35.892470 kernel: Detected PIPT I-cache on CPU3
Mar 19 11:43:35.892477 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
Mar 19 11:43:35.892484 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040130000
Mar 19 11:43:35.892492 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Mar 19 11:43:35.892499 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
Mar 19 11:43:35.892506 kernel: smp: Brought up 1 node, 4 CPUs
Mar 19 11:43:35.892513 kernel: SMP: Total of 4 processors activated.
Mar 19 11:43:35.892520 kernel: CPU features: detected: 32-bit EL0 Support
Mar 19 11:43:35.892527 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Mar 19 11:43:35.892533 kernel: CPU features: detected: Common not Private translations
Mar 19 11:43:35.892541 kernel: CPU features: detected: CRC32 instructions
Mar 19 11:43:35.892549 kernel: CPU features: detected: Enhanced Virtualization Traps
Mar 19 11:43:35.892556 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Mar 19 11:43:35.892563 kernel: CPU features: detected: LSE atomic instructions
Mar 19 11:43:35.892570 kernel: CPU features: detected: Privileged Access Never
Mar 19 11:43:35.892577 kernel: CPU features: detected: RAS Extension Support
Mar 19 11:43:35.892584 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Mar 19 11:43:35.892590 kernel: CPU: All CPU(s) started at EL1
Mar 19 11:43:35.892597 kernel: alternatives: applying system-wide alternatives
Mar 19 11:43:35.892604 kernel: devtmpfs: initialized
Mar 19 11:43:35.892611 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Mar 19 11:43:35.892620 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Mar 19 11:43:35.892626 kernel: pinctrl core: initialized pinctrl subsystem
Mar 19 11:43:35.892633 kernel: SMBIOS 3.0.0 present.
Mar 19 11:43:35.892640 kernel: DMI: QEMU KVM Virtual Machine, BIOS unknown 02/02/2022
Mar 19 11:43:35.892647 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Mar 19 11:43:35.892654 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Mar 19 11:43:35.892661 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Mar 19 11:43:35.892668 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Mar 19 11:43:35.892676 kernel: audit: initializing netlink subsys (disabled)
Mar 19 11:43:35.892683 kernel: audit: type=2000 audit(0.018:1): state=initialized audit_enabled=0 res=1
Mar 19 11:43:35.892690 kernel: thermal_sys: Registered thermal governor 'step_wise'
Mar 19 11:43:35.892697 kernel: cpuidle: using governor menu
Mar 19 11:43:35.892703 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Mar 19 11:43:35.892710 kernel: ASID allocator initialised with 32768 entries
Mar 19 11:43:35.892717 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Mar 19 11:43:35.892724 kernel: Serial: AMBA PL011 UART driver
Mar 19 11:43:35.892731 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Mar 19 11:43:35.892739 kernel: Modules: 0 pages in range for non-PLT usage
Mar 19 11:43:35.892746 kernel: Modules: 509280 pages in range for PLT usage
Mar 19 11:43:35.892753 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Mar 19 11:43:35.892760 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Mar 19 11:43:35.892767 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Mar 19 11:43:35.892779 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Mar 19 11:43:35.892786 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Mar 19 11:43:35.892793 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Mar 19 11:43:35.892800 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Mar 19 11:43:35.892808 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Mar 19 11:43:35.892815 kernel: ACPI: Added _OSI(Module Device)
Mar 19 11:43:35.892822 kernel: ACPI: Added _OSI(Processor Device)
Mar 19 11:43:35.892829 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Mar 19 11:43:35.892835 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Mar 19 11:43:35.892842 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Mar 19 11:43:35.892849 kernel: ACPI: Interpreter enabled
Mar 19 11:43:35.892856 kernel: ACPI: Using GIC for interrupt routing
Mar 19 11:43:35.892862 kernel: ACPI: MCFG table detected, 1 entries
Mar 19 11:43:35.892869 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Mar 19 11:43:35.892878 kernel: printk: console [ttyAMA0] enabled
Mar 19 11:43:35.892885 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Mar 19 11:43:35.893021 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Mar 19 11:43:35.893095 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Mar 19 11:43:35.893160 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Mar 19 11:43:35.893223 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Mar 19 11:43:35.893318 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Mar 19 11:43:35.893332 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Mar 19 11:43:35.893339 kernel: PCI host bridge to bus 0000:00
Mar 19 11:43:35.893411 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Mar 19 11:43:35.893471 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Mar 19 11:43:35.893529 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Mar 19 11:43:35.893586 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Mar 19 11:43:35.893664 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000
Mar 19 11:43:35.893749 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00
Mar 19 11:43:35.893832 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f]
Mar 19 11:43:35.893900 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff]
Mar 19 11:43:35.893965 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref]
Mar 19 11:43:35.894029 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref]
Mar 19 11:43:35.894094 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff]
Mar 19 11:43:35.894157 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f]
Mar 19 11:43:35.894220 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Mar 19 11:43:35.894316 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Mar 19 11:43:35.894400 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Mar 19 11:43:35.894411 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Mar 19 11:43:35.894418 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Mar 19 11:43:35.894425 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Mar 19 11:43:35.894432 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Mar 19 11:43:35.894443 kernel: iommu: Default domain type: Translated
Mar 19 11:43:35.894450 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Mar 19 11:43:35.894457 kernel: efivars: Registered efivars operations
Mar 19 11:43:35.894464 kernel: vgaarb: loaded
Mar 19 11:43:35.894471 kernel: clocksource: Switched to clocksource arch_sys_counter
Mar 19 11:43:35.894477 kernel: VFS: Disk quotas dquot_6.6.0
Mar 19 11:43:35.894484 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Mar 19 11:43:35.894491 kernel: pnp: PnP ACPI init
Mar 19 11:43:35.894563 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Mar 19 11:43:35.894575 kernel: pnp: PnP ACPI: found 1 devices
Mar 19 11:43:35.894582 kernel: NET: Registered PF_INET protocol family
Mar 19 11:43:35.894589 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Mar 19 11:43:35.894596 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Mar 19 11:43:35.894603 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Mar 19 11:43:35.894610 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Mar 19 11:43:35.894617 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Mar 19 11:43:35.894624 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Mar 19 11:43:35.894631 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Mar 19 11:43:35.894640 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Mar 19 11:43:35.894647 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Mar 19 11:43:35.894654 kernel: PCI: CLS 0 bytes, default 64
Mar 19 11:43:35.894660 kernel: kvm [1]: HYP mode not available
Mar 19 11:43:35.894667 kernel: Initialise system trusted keyrings
Mar 19 11:43:35.894674 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Mar 19 11:43:35.894681 kernel: Key type asymmetric registered
Mar 19 11:43:35.894688 kernel: Asymmetric key parser 'x509' registered
Mar 19 11:43:35.894695 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Mar 19 11:43:35.894703 kernel: io scheduler mq-deadline registered
Mar 19 11:43:35.894710 kernel: io scheduler kyber registered
Mar 19 11:43:35.894717 kernel: io scheduler bfq registered
Mar 19 11:43:35.894724 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Mar 19 11:43:35.894731 kernel: ACPI: button: Power Button [PWRB]
Mar 19 11:43:35.894738 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Mar 19 11:43:35.894813 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
Mar 19 11:43:35.894824 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Mar 19 11:43:35.894831 kernel: thunder_xcv, ver 1.0
Mar 19 11:43:35.894840 kernel: thunder_bgx, ver 1.0
Mar 19 11:43:35.894847 kernel: nicpf, ver 1.0
Mar 19 11:43:35.894854 kernel: nicvf, ver 1.0
Mar 19 11:43:35.894928 kernel: rtc-efi rtc-efi.0: registered as rtc0
Mar 19 11:43:35.894991 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-03-19T11:43:35 UTC (1742384615)
Mar 19 11:43:35.895000 kernel: hid: raw HID events driver (C) Jiri Kosina
Mar 19 11:43:35.895007 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available
Mar 19 11:43:35.895015 kernel: watchdog: Delayed init of the lockup detector failed: -19
Mar 19 11:43:35.895024 kernel: watchdog: Hard watchdog permanently disabled
Mar 19 11:43:35.895031 kernel: NET: Registered PF_INET6 protocol family
Mar 19 11:43:35.895038 kernel: Segment Routing with IPv6
Mar 19 11:43:35.895045 kernel: In-situ OAM (IOAM) with IPv6
Mar 19 11:43:35.895052 kernel: NET: Registered PF_PACKET protocol family
Mar 19 11:43:35.895059 kernel: Key type dns_resolver registered
Mar 19 11:43:35.895066 kernel: registered taskstats version 1
Mar 19 11:43:35.895073 kernel: Loading compiled-in X.509 certificates
Mar 19 11:43:35.895081 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.83-flatcar: 36392d496708ee63c4af5364493015d5256162ff'
Mar 19 11:43:35.895089 kernel: Key type .fscrypt registered
Mar 19 11:43:35.895097 kernel: Key type fscrypt-provisioning registered
Mar 19 11:43:35.895104 kernel: ima: No TPM chip found, activating TPM-bypass!
Mar 19 11:43:35.895111 kernel: ima: Allocated hash algorithm: sha1
Mar 19 11:43:35.895118 kernel: ima: No architecture policies found
Mar 19 11:43:35.895125 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Mar 19 11:43:35.895132 kernel: clk: Disabling unused clocks
Mar 19 11:43:35.895144 kernel: Freeing unused kernel memory: 38336K
Mar 19 11:43:35.895151 kernel: Run /init as init process
Mar 19 11:43:35.895160 kernel: with arguments:
Mar 19 11:43:35.895166 kernel: /init
Mar 19 11:43:35.895173 kernel: with environment:
Mar 19 11:43:35.895180 kernel: HOME=/
Mar 19 11:43:35.895186 kernel: TERM=linux
Mar 19 11:43:35.895193 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Mar 19 11:43:35.895201 systemd[1]: Successfully made /usr/ read-only.
Mar 19 11:43:35.895211 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Mar 19 11:43:35.895221 systemd[1]: Detected virtualization kvm.
Mar 19 11:43:35.895228 systemd[1]: Detected architecture arm64.
Mar 19 11:43:35.895255 systemd[1]: Running in initrd.
Mar 19 11:43:35.895262 systemd[1]: No hostname configured, using default hostname.
Mar 19 11:43:35.895270 systemd[1]: Hostname set to .
Mar 19 11:43:35.895277 systemd[1]: Initializing machine ID from VM UUID.
Mar 19 11:43:35.895285 systemd[1]: Queued start job for default target initrd.target.
Mar 19 11:43:35.895292 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 19 11:43:35.895302 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 19 11:43:35.895310 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Mar 19 11:43:35.895317 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Mar 19 11:43:35.895325 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Mar 19 11:43:35.895333 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Mar 19 11:43:35.895341 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Mar 19 11:43:35.895351 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Mar 19 11:43:35.895358 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 19 11:43:35.895366 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Mar 19 11:43:35.895373 systemd[1]: Reached target paths.target - Path Units.
Mar 19 11:43:35.895380 systemd[1]: Reached target slices.target - Slice Units.
Mar 19 11:43:35.895388 systemd[1]: Reached target swap.target - Swaps.
Mar 19 11:43:35.895395 systemd[1]: Reached target timers.target - Timer Units.
Mar 19 11:43:35.895402 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Mar 19 11:43:35.895410 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Mar 19 11:43:35.895419 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Mar 19 11:43:35.895427 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Mar 19 11:43:35.895434 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Mar 19 11:43:35.895442 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Mar 19 11:43:35.895449 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 19 11:43:35.895457 systemd[1]: Reached target sockets.target - Socket Units.
Mar 19 11:43:35.895464 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Mar 19 11:43:35.895471 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Mar 19 11:43:35.895480 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Mar 19 11:43:35.895488 systemd[1]: Starting systemd-fsck-usr.service...
Mar 19 11:43:35.895495 systemd[1]: Starting systemd-journald.service - Journal Service...
Mar 19 11:43:35.895503 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Mar 19 11:43:35.895510 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 19 11:43:35.895518 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Mar 19 11:43:35.895525 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 19 11:43:35.895534 systemd[1]: Finished systemd-fsck-usr.service.
Mar 19 11:43:35.895559 systemd-journald[238]: Collecting audit messages is disabled.
Mar 19 11:43:35.895580 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Mar 19 11:43:35.895588 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 19 11:43:35.895595 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Mar 19 11:43:35.895604 systemd-journald[238]: Journal started
Mar 19 11:43:35.895623 systemd-journald[238]: Runtime Journal (/run/log/journal/d7926182d4e84c918e738b67135844a4) is 5.9M, max 47.3M, 41.4M free.
Mar 19 11:43:35.884740 systemd-modules-load[239]: Inserted module 'overlay'
Mar 19 11:43:35.897650 systemd[1]: Started systemd-journald.service - Journal Service.
Mar 19 11:43:35.899261 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Mar 19 11:43:35.900161 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 19 11:43:35.902357 systemd-modules-load[239]: Inserted module 'br_netfilter'
Mar 19 11:43:35.903288 kernel: Bridge firewalling registered
Mar 19 11:43:35.903423 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Mar 19 11:43:35.905767 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Mar 19 11:43:35.907109 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Mar 19 11:43:35.912021 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Mar 19 11:43:35.913144 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 19 11:43:35.917753 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 19 11:43:35.920958 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Mar 19 11:43:35.935458 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Mar 19 11:43:35.936400 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 19 11:43:35.938927 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Mar 19 11:43:35.950928 dracut-cmdline[282]: dracut-dracut-053
Mar 19 11:43:35.953367 dracut-cmdline[282]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=41cc5bdd62754423bbb4bbec0e0356d6a1ab5d0ac0f2396a30318f9fb189e7eb
Mar 19 11:43:35.967701 systemd-resolved[278]: Positive Trust Anchors:
Mar 19 11:43:35.967719 systemd-resolved[278]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Mar 19 11:43:35.967750 systemd-resolved[278]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Mar 19 11:43:35.972408 systemd-resolved[278]: Defaulting to hostname 'linux'.
Mar 19 11:43:35.973523 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Mar 19 11:43:35.974632 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Mar 19 11:43:36.023266 kernel: SCSI subsystem initialized
Mar 19 11:43:36.031251 kernel: Loading iSCSI transport class v2.0-870.
Mar 19 11:43:36.039272 kernel: iscsi: registered transport (tcp)
Mar 19 11:43:36.053259 kernel: iscsi: registered transport (qla4xxx)
Mar 19 11:43:36.053283 kernel: QLogic iSCSI HBA Driver
Mar 19 11:43:36.096009 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Mar 19 11:43:36.109434 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Mar 19 11:43:36.128204 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Mar 19 11:43:36.128260 kernel: device-mapper: uevent: version 1.0.3
Mar 19 11:43:36.129434 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Mar 19 11:43:36.174263 kernel: raid6: neonx8 gen() 15785 MB/s
Mar 19 11:43:36.191254 kernel: raid6: neonx4 gen() 15789 MB/s
Mar 19 11:43:36.211255 kernel: raid6: neonx2 gen() 12577 MB/s
Mar 19 11:43:36.228251 kernel: raid6: neonx1 gen() 5175 MB/s
Mar 19 11:43:36.245284 kernel: raid6: int64x8 gen() 5884 MB/s
Mar 19 11:43:36.262265 kernel: raid6: int64x4 gen() 4829 MB/s
Mar 19 11:43:36.279251 kernel: raid6: int64x2 gen() 6106 MB/s
Mar 19 11:43:36.296247 kernel: raid6: int64x1 gen() 5059 MB/s
Mar 19 11:43:36.296263 kernel: raid6: using algorithm neonx4 gen() 15789 MB/s
Mar 19 11:43:36.313255 kernel: raid6: .... xor() 12416 MB/s, rmw enabled
Mar 19 11:43:36.313267 kernel: raid6: using neon recovery algorithm
Mar 19 11:43:36.318247 kernel: xor: measuring software checksum speed
Mar 19 11:43:36.318259 kernel: 8regs : 21101 MB/sec
Mar 19 11:43:36.319712 kernel: 32regs : 19482 MB/sec
Mar 19 11:43:36.319732 kernel: arm64_neon : 27159 MB/sec
Mar 19 11:43:36.319741 kernel: xor: using function: arm64_neon (27159 MB/sec)
Mar 19 11:43:36.370275 kernel: Btrfs loaded, zoned=no, fsverity=no
Mar 19 11:43:36.383548 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Mar 19 11:43:36.396997 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 19 11:43:36.410634 systemd-udevd[463]: Using default interface naming scheme 'v255'.
Mar 19 11:43:36.414321 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 19 11:43:36.416585 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Mar 19 11:43:36.431924 dracut-pre-trigger[470]: rd.md=0: removing MD RAID activation
Mar 19 11:43:36.460002 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Mar 19 11:43:36.470408 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Mar 19 11:43:36.512286 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Mar 19 11:43:36.519430 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Mar 19 11:43:36.529747 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Mar 19 11:43:36.531571 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Mar 19 11:43:36.534369 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Mar 19 11:43:36.536244 systemd[1]: Reached target remote-fs.target - Remote File Systems. Mar 19 11:43:36.544653 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Mar 19 11:43:36.554055 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Mar 19 11:43:36.567900 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues Mar 19 11:43:36.580358 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Mar 19 11:43:36.580470 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Mar 19 11:43:36.580490 kernel: GPT:9289727 != 19775487 Mar 19 11:43:36.580500 kernel: GPT:Alternate GPT header not at the end of the disk. Mar 19 11:43:36.580511 kernel: GPT:9289727 != 19775487 Mar 19 11:43:36.580520 kernel: GPT: Use GNU Parted to correct GPT errors. Mar 19 11:43:36.580528 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Mar 19 11:43:36.574439 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Mar 19 11:43:36.574563 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Mar 19 11:43:36.581162 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Mar 19 11:43:36.582023 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. 
Mar 19 11:43:36.582173 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Mar 19 11:43:36.584480 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Mar 19 11:43:36.597628 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Mar 19 11:43:36.601369 kernel: BTRFS: device fsid 7c80927c-98c3-4e81-a933-b7f5e1234bd2 devid 1 transid 38 /dev/vda3 scanned by (udev-worker) (525) Mar 19 11:43:36.603281 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by (udev-worker) (509) Mar 19 11:43:36.613165 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Mar 19 11:43:36.625751 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Mar 19 11:43:36.633188 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Mar 19 11:43:36.640241 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Mar 19 11:43:36.641169 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Mar 19 11:43:36.649596 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Mar 19 11:43:36.662363 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Mar 19 11:43:36.663941 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Mar 19 11:43:36.668315 disk-uuid[552]: Primary Header is updated. Mar 19 11:43:36.668315 disk-uuid[552]: Secondary Entries is updated. Mar 19 11:43:36.668315 disk-uuid[552]: Secondary Header is updated. Mar 19 11:43:36.674275 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Mar 19 11:43:36.685641 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Mar 19 11:43:37.679264 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Mar 19 11:43:37.680285 disk-uuid[553]: The operation has completed successfully. Mar 19 11:43:37.703690 systemd[1]: disk-uuid.service: Deactivated successfully. Mar 19 11:43:37.703821 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Mar 19 11:43:37.742420 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Mar 19 11:43:37.745039 sh[574]: Success Mar 19 11:43:37.758294 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Mar 19 11:43:37.786335 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Mar 19 11:43:37.799605 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Mar 19 11:43:37.802264 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Mar 19 11:43:37.811929 kernel: BTRFS info (device dm-0): first mount of filesystem 7c80927c-98c3-4e81-a933-b7f5e1234bd2 Mar 19 11:43:37.811968 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Mar 19 11:43:37.811979 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Mar 19 11:43:37.811989 kernel: BTRFS info (device dm-0): disabling log replay at mount time Mar 19 11:43:37.812485 kernel: BTRFS info (device dm-0): using free space tree Mar 19 11:43:37.816077 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Mar 19 11:43:37.817302 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Mar 19 11:43:37.818058 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Mar 19 11:43:37.820111 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... 
Mar 19 11:43:37.832271 kernel: BTRFS info (device vda6): first mount of filesystem eeeb6da4-27b1-474b-8015-c667e85f7b18 Mar 19 11:43:37.832312 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Mar 19 11:43:37.832323 kernel: BTRFS info (device vda6): using free space tree Mar 19 11:43:37.835258 kernel: BTRFS info (device vda6): auto enabling async discard Mar 19 11:43:37.843257 kernel: BTRFS info (device vda6): last unmount of filesystem eeeb6da4-27b1-474b-8015-c667e85f7b18 Mar 19 11:43:37.848689 systemd[1]: Finished ignition-setup.service - Ignition (setup). Mar 19 11:43:37.861439 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Mar 19 11:43:37.873792 systemd[1]: mnt-oem.mount: Deactivated successfully. Mar 19 11:43:37.913043 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Mar 19 11:43:37.925587 systemd[1]: Starting systemd-networkd.service - Network Configuration... Mar 19 11:43:37.980999 systemd-networkd[761]: lo: Link UP Mar 19 11:43:37.981012 systemd-networkd[761]: lo: Gained carrier Mar 19 11:43:37.982014 systemd-networkd[761]: Enumeration completed Mar 19 11:43:37.982126 systemd[1]: Started systemd-networkd.service - Network Configuration. Mar 19 11:43:37.982709 systemd-networkd[761]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 19 11:43:37.982712 systemd-networkd[761]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Mar 19 11:43:37.983274 systemd[1]: Reached target network.target - Network. Mar 19 11:43:37.983882 systemd-networkd[761]: eth0: Link UP Mar 19 11:43:37.983885 systemd-networkd[761]: eth0: Gained carrier Mar 19 11:43:37.983892 systemd-networkd[761]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Mar 19 11:43:38.004736 ignition[674]: Ignition 2.20.0 Mar 19 11:43:38.004747 ignition[674]: Stage: fetch-offline Mar 19 11:43:38.004789 ignition[674]: no configs at "/usr/lib/ignition/base.d" Mar 19 11:43:38.004798 ignition[674]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Mar 19 11:43:38.005070 ignition[674]: parsed url from cmdline: "" Mar 19 11:43:38.005073 ignition[674]: no config URL provided Mar 19 11:43:38.005078 ignition[674]: reading system config file "/usr/lib/ignition/user.ign" Mar 19 11:43:38.005085 ignition[674]: no config at "/usr/lib/ignition/user.ign" Mar 19 11:43:38.005109 ignition[674]: op(1): [started] loading QEMU firmware config module Mar 19 11:43:38.005113 ignition[674]: op(1): executing: "modprobe" "qemu_fw_cfg" Mar 19 11:43:38.011032 ignition[674]: op(1): [finished] loading QEMU firmware config module Mar 19 11:43:38.012290 systemd-networkd[761]: eth0: DHCPv4 address 10.0.0.95/16, gateway 10.0.0.1 acquired from 10.0.0.1 Mar 19 11:43:38.048989 ignition[674]: parsing config with SHA512: b947e7a7fffb902e6aac3e36da99f1b3dec78c963bdffa57e1514adefdd1003d1fbc72e637c5f54481a29513ce0e84c73ab6d5b3a1ebc252849fadf988bac8be Mar 19 11:43:38.057224 unknown[674]: fetched base config from "system" Mar 19 11:43:38.057249 unknown[674]: fetched user config from "qemu" Mar 19 11:43:38.057731 ignition[674]: fetch-offline: fetch-offline passed Mar 19 11:43:38.057815 ignition[674]: Ignition finished successfully Mar 19 11:43:38.060155 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Mar 19 11:43:38.061636 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Mar 19 11:43:38.071401 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... 
Mar 19 11:43:38.083766 ignition[773]: Ignition 2.20.0 Mar 19 11:43:38.083786 ignition[773]: Stage: kargs Mar 19 11:43:38.083947 ignition[773]: no configs at "/usr/lib/ignition/base.d" Mar 19 11:43:38.083957 ignition[773]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Mar 19 11:43:38.084859 ignition[773]: kargs: kargs passed Mar 19 11:43:38.087866 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Mar 19 11:43:38.084907 ignition[773]: Ignition finished successfully Mar 19 11:43:38.102412 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Mar 19 11:43:38.111592 ignition[782]: Ignition 2.20.0 Mar 19 11:43:38.111601 ignition[782]: Stage: disks Mar 19 11:43:38.111766 ignition[782]: no configs at "/usr/lib/ignition/base.d" Mar 19 11:43:38.111785 ignition[782]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Mar 19 11:43:38.112652 ignition[782]: disks: disks passed Mar 19 11:43:38.114053 systemd[1]: Finished ignition-disks.service - Ignition (disks). Mar 19 11:43:38.112695 ignition[782]: Ignition finished successfully Mar 19 11:43:38.115580 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Mar 19 11:43:38.117143 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Mar 19 11:43:38.118700 systemd[1]: Reached target local-fs.target - Local File Systems. Mar 19 11:43:38.120186 systemd[1]: Reached target sysinit.target - System Initialization. Mar 19 11:43:38.121880 systemd[1]: Reached target basic.target - Basic System. Mar 19 11:43:38.131465 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Mar 19 11:43:38.140913 systemd-fsck[793]: ROOT: clean, 14/553520 files, 52654/553472 blocks Mar 19 11:43:38.145684 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Mar 19 11:43:38.159380 systemd[1]: Mounting sysroot.mount - /sysroot... 
Mar 19 11:43:38.202245 kernel: EXT4-fs (vda9): mounted filesystem 45bb9a4a-80dc-4ce4-9ca9-c4944d8ff0e6 r/w with ordered data mode. Quota mode: none. Mar 19 11:43:38.202624 systemd[1]: Mounted sysroot.mount - /sysroot. Mar 19 11:43:38.203851 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Mar 19 11:43:38.213309 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Mar 19 11:43:38.214980 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Mar 19 11:43:38.216357 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Mar 19 11:43:38.216402 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Mar 19 11:43:38.225364 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 scanned by mount (801) Mar 19 11:43:38.225386 kernel: BTRFS info (device vda6): first mount of filesystem eeeb6da4-27b1-474b-8015-c667e85f7b18 Mar 19 11:43:38.225396 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Mar 19 11:43:38.225405 kernel: BTRFS info (device vda6): using free space tree Mar 19 11:43:38.225414 kernel: BTRFS info (device vda6): auto enabling async discard Mar 19 11:43:38.216426 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Mar 19 11:43:38.223028 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Mar 19 11:43:38.227024 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Mar 19 11:43:38.238389 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... 
Mar 19 11:43:38.276185 initrd-setup-root[825]: cut: /sysroot/etc/passwd: No such file or directory Mar 19 11:43:38.280296 initrd-setup-root[832]: cut: /sysroot/etc/group: No such file or directory Mar 19 11:43:38.283907 initrd-setup-root[839]: cut: /sysroot/etc/shadow: No such file or directory Mar 19 11:43:38.286939 initrd-setup-root[846]: cut: /sysroot/etc/gshadow: No such file or directory Mar 19 11:43:38.358348 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Mar 19 11:43:38.368330 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Mar 19 11:43:38.369689 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Mar 19 11:43:38.374252 kernel: BTRFS info (device vda6): last unmount of filesystem eeeb6da4-27b1-474b-8015-c667e85f7b18 Mar 19 11:43:38.387179 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Mar 19 11:43:38.390648 ignition[914]: INFO : Ignition 2.20.0 Mar 19 11:43:38.390648 ignition[914]: INFO : Stage: mount Mar 19 11:43:38.391797 ignition[914]: INFO : no configs at "/usr/lib/ignition/base.d" Mar 19 11:43:38.391797 ignition[914]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Mar 19 11:43:38.391797 ignition[914]: INFO : mount: mount passed Mar 19 11:43:38.391797 ignition[914]: INFO : Ignition finished successfully Mar 19 11:43:38.392984 systemd[1]: Finished ignition-mount.service - Ignition (mount). Mar 19 11:43:38.406357 systemd[1]: Starting ignition-files.service - Ignition (files)... Mar 19 11:43:38.874812 systemd[1]: sysroot-oem.mount: Deactivated successfully. Mar 19 11:43:38.884407 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... 
Mar 19 11:43:38.891444 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/vda6 scanned by mount (927) Mar 19 11:43:38.891481 kernel: BTRFS info (device vda6): first mount of filesystem eeeb6da4-27b1-474b-8015-c667e85f7b18 Mar 19 11:43:38.891492 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Mar 19 11:43:38.892620 kernel: BTRFS info (device vda6): using free space tree Mar 19 11:43:38.896254 kernel: BTRFS info (device vda6): auto enabling async discard Mar 19 11:43:38.896860 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Mar 19 11:43:38.913664 ignition[944]: INFO : Ignition 2.20.0 Mar 19 11:43:38.913664 ignition[944]: INFO : Stage: files Mar 19 11:43:38.914821 ignition[944]: INFO : no configs at "/usr/lib/ignition/base.d" Mar 19 11:43:38.914821 ignition[944]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Mar 19 11:43:38.914821 ignition[944]: DEBUG : files: compiled without relabeling support, skipping Mar 19 11:43:38.917332 ignition[944]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Mar 19 11:43:38.917332 ignition[944]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Mar 19 11:43:38.922661 ignition[944]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Mar 19 11:43:38.923596 ignition[944]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Mar 19 11:43:38.923596 ignition[944]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Mar 19 11:43:38.923211 unknown[944]: wrote ssh authorized keys file for user: core Mar 19 11:43:38.926980 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz" Mar 19 11:43:38.928308 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-arm64.tar.gz: attempt #1 Mar 19 11:43:39.044981 
ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Mar 19 11:43:39.162484 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz" Mar 19 11:43:39.162484 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Mar 19 11:43:39.166419 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1 Mar 19 11:43:39.470211 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Mar 19 11:43:39.524388 systemd-networkd[761]: eth0: Gained IPv6LL Mar 19 11:43:39.544257 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Mar 19 11:43:39.545895 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Mar 19 11:43:39.545895 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Mar 19 11:43:39.545895 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Mar 19 11:43:39.545895 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Mar 19 11:43:39.545895 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Mar 19 11:43:39.545895 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Mar 19 11:43:39.545895 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file 
"/sysroot/home/core/nfs-pvc.yaml" Mar 19 11:43:39.545895 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Mar 19 11:43:39.545895 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Mar 19 11:43:39.545895 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Mar 19 11:43:39.545895 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-arm64.raw" Mar 19 11:43:39.545895 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-arm64.raw" Mar 19 11:43:39.545895 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-arm64.raw" Mar 19 11:43:39.545895 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.32.0-arm64.raw: attempt #1 Mar 19 11:43:39.801219 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Mar 19 11:43:40.041554 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-arm64.raw" Mar 19 11:43:40.041554 ignition[944]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Mar 19 11:43:40.044455 ignition[944]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Mar 19 11:43:40.044455 ignition[944]: INFO : files: op(c): op(d): [finished] writing unit 
"prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Mar 19 11:43:40.044455 ignition[944]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" Mar 19 11:43:40.044455 ignition[944]: INFO : files: op(e): [started] processing unit "coreos-metadata.service" Mar 19 11:43:40.044455 ignition[944]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Mar 19 11:43:40.044455 ignition[944]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Mar 19 11:43:40.044455 ignition[944]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service" Mar 19 11:43:40.044455 ignition[944]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service" Mar 19 11:43:40.060369 ignition[944]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service" Mar 19 11:43:40.063271 ignition[944]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service" Mar 19 11:43:40.064424 ignition[944]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service" Mar 19 11:43:40.064424 ignition[944]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service" Mar 19 11:43:40.064424 ignition[944]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service" Mar 19 11:43:40.064424 ignition[944]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json" Mar 19 11:43:40.064424 ignition[944]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json" Mar 19 11:43:40.064424 ignition[944]: INFO : files: files passed Mar 19 11:43:40.064424 ignition[944]: INFO : Ignition finished successfully Mar 19 
11:43:40.067043 systemd[1]: Finished ignition-files.service - Ignition (files). Mar 19 11:43:40.077398 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Mar 19 11:43:40.079532 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Mar 19 11:43:40.080567 systemd[1]: ignition-quench.service: Deactivated successfully. Mar 19 11:43:40.080647 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Mar 19 11:43:40.086842 initrd-setup-root-after-ignition[973]: grep: /sysroot/oem/oem-release: No such file or directory Mar 19 11:43:40.089853 initrd-setup-root-after-ignition[975]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Mar 19 11:43:40.089853 initrd-setup-root-after-ignition[975]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Mar 19 11:43:40.093270 initrd-setup-root-after-ignition[979]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Mar 19 11:43:40.092348 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Mar 19 11:43:40.094783 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Mar 19 11:43:40.100449 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Mar 19 11:43:40.118213 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Mar 19 11:43:40.118326 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Mar 19 11:43:40.120175 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Mar 19 11:43:40.121815 systemd[1]: Reached target initrd.target - Initrd Default Target. Mar 19 11:43:40.123344 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Mar 19 11:43:40.124052 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... 
Mar 19 11:43:40.138700 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Mar 19 11:43:40.153432 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Mar 19 11:43:40.160302 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Mar 19 11:43:40.161187 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Mar 19 11:43:40.162974 systemd[1]: Stopped target timers.target - Timer Units. Mar 19 11:43:40.164511 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Mar 19 11:43:40.164614 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Mar 19 11:43:40.166846 systemd[1]: Stopped target initrd.target - Initrd Default Target. Mar 19 11:43:40.168590 systemd[1]: Stopped target basic.target - Basic System. Mar 19 11:43:40.170094 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Mar 19 11:43:40.171544 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Mar 19 11:43:40.173143 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Mar 19 11:43:40.174878 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Mar 19 11:43:40.176198 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Mar 19 11:43:40.177645 systemd[1]: Stopped target sysinit.target - System Initialization. Mar 19 11:43:40.179138 systemd[1]: Stopped target local-fs.target - Local File Systems. Mar 19 11:43:40.180409 systemd[1]: Stopped target swap.target - Swaps. Mar 19 11:43:40.181502 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Mar 19 11:43:40.181605 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Mar 19 11:43:40.183316 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Mar 19 11:43:40.184714 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). 
Mar 19 11:43:40.186092 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Mar 19 11:43:40.186173 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Mar 19 11:43:40.187648 systemd[1]: dracut-initqueue.service: Deactivated successfully. Mar 19 11:43:40.187752 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Mar 19 11:43:40.189921 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Mar 19 11:43:40.190033 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Mar 19 11:43:40.191412 systemd[1]: Stopped target paths.target - Path Units. Mar 19 11:43:40.192530 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Mar 19 11:43:40.192618 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Mar 19 11:43:40.194020 systemd[1]: Stopped target slices.target - Slice Units. Mar 19 11:43:40.195305 systemd[1]: Stopped target sockets.target - Socket Units. Mar 19 11:43:40.196554 systemd[1]: iscsid.socket: Deactivated successfully. Mar 19 11:43:40.196630 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Mar 19 11:43:40.197872 systemd[1]: iscsiuio.socket: Deactivated successfully. Mar 19 11:43:40.197943 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Mar 19 11:43:40.199472 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Mar 19 11:43:40.199569 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Mar 19 11:43:40.200813 systemd[1]: ignition-files.service: Deactivated successfully. Mar 19 11:43:40.200908 systemd[1]: Stopped ignition-files.service - Ignition (files). Mar 19 11:43:40.214379 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Mar 19 11:43:40.215032 systemd[1]: kmod-static-nodes.service: Deactivated successfully. 
Mar 19 11:43:40.215149 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Mar 19 11:43:40.217276 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Mar 19 11:43:40.218561 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Mar 19 11:43:40.218674 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Mar 19 11:43:40.220005 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Mar 19 11:43:40.220090 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Mar 19 11:43:40.225031 ignition[1000]: INFO : Ignition 2.20.0 Mar 19 11:43:40.225031 ignition[1000]: INFO : Stage: umount Mar 19 11:43:40.227807 ignition[1000]: INFO : no configs at "/usr/lib/ignition/base.d" Mar 19 11:43:40.227807 ignition[1000]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Mar 19 11:43:40.227807 ignition[1000]: INFO : umount: umount passed Mar 19 11:43:40.227807 ignition[1000]: INFO : Ignition finished successfully Mar 19 11:43:40.225738 systemd[1]: initrd-cleanup.service: Deactivated successfully. Mar 19 11:43:40.225823 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Mar 19 11:43:40.229415 systemd[1]: ignition-mount.service: Deactivated successfully. Mar 19 11:43:40.229498 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Mar 19 11:43:40.230787 systemd[1]: Stopped target network.target - Network. Mar 19 11:43:40.231904 systemd[1]: ignition-disks.service: Deactivated successfully. Mar 19 11:43:40.231970 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Mar 19 11:43:40.234425 systemd[1]: ignition-kargs.service: Deactivated successfully. Mar 19 11:43:40.234473 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Mar 19 11:43:40.235869 systemd[1]: ignition-setup.service: Deactivated successfully. Mar 19 11:43:40.235911 systemd[1]: Stopped ignition-setup.service - Ignition (setup). 
Mar 19 11:43:40.237469 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Mar 19 11:43:40.237509 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Mar 19 11:43:40.238916 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Mar 19 11:43:40.240179 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Mar 19 11:43:40.242277 systemd[1]: sysroot-boot.mount: Deactivated successfully. Mar 19 11:43:40.242763 systemd[1]: sysroot-boot.service: Deactivated successfully. Mar 19 11:43:40.242861 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Mar 19 11:43:40.244321 systemd[1]: systemd-resolved.service: Deactivated successfully. Mar 19 11:43:40.245320 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Mar 19 11:43:40.249567 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Mar 19 11:43:40.250685 systemd[1]: systemd-networkd.service: Deactivated successfully. Mar 19 11:43:40.250786 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Mar 19 11:43:40.254053 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Mar 19 11:43:40.254570 systemd[1]: systemd-networkd.socket: Deactivated successfully. Mar 19 11:43:40.254621 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Mar 19 11:43:40.255898 systemd[1]: initrd-setup-root.service: Deactivated successfully. Mar 19 11:43:40.255947 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Mar 19 11:43:40.271346 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Mar 19 11:43:40.272077 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Mar 19 11:43:40.272139 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Mar 19 11:43:40.273572 systemd[1]: systemd-sysctl.service: Deactivated successfully. 
Mar 19 11:43:40.273613 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Mar 19 11:43:40.276562 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Mar 19 11:43:40.276611 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Mar 19 11:43:40.277965 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Mar 19 11:43:40.278005 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 19 11:43:40.281157 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 19 11:43:40.283825 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Mar 19 11:43:40.283889 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Mar 19 11:43:40.289814 systemd[1]: network-cleanup.service: Deactivated successfully.
Mar 19 11:43:40.289925 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Mar 19 11:43:40.292186 systemd[1]: systemd-udevd.service: Deactivated successfully.
Mar 19 11:43:40.292336 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 19 11:43:40.293977 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Mar 19 11:43:40.294015 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Mar 19 11:43:40.295309 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Mar 19 11:43:40.295341 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 19 11:43:40.296061 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Mar 19 11:43:40.296102 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Mar 19 11:43:40.297579 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Mar 19 11:43:40.297619 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Mar 19 11:43:40.299541 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Mar 19 11:43:40.299578 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 19 11:43:40.316381 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Mar 19 11:43:40.317128 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Mar 19 11:43:40.317179 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 19 11:43:40.319529 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 19 11:43:40.319569 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 19 11:43:40.322447 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Mar 19 11:43:40.322495 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Mar 19 11:43:40.322737 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Mar 19 11:43:40.322833 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Mar 19 11:43:40.324686 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Mar 19 11:43:40.326149 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Mar 19 11:43:40.334752 systemd[1]: Switching root.
Mar 19 11:43:40.360869 systemd-journald[238]: Journal stopped
Mar 19 11:43:41.112339 systemd-journald[238]: Received SIGTERM from PID 1 (systemd).
Mar 19 11:43:41.112397 kernel: SELinux: policy capability network_peer_controls=1
Mar 19 11:43:41.112412 kernel: SELinux: policy capability open_perms=1
Mar 19 11:43:41.112425 kernel: SELinux: policy capability extended_socket_class=1
Mar 19 11:43:41.112438 kernel: SELinux: policy capability always_check_network=0
Mar 19 11:43:41.112447 kernel: SELinux: policy capability cgroup_seclabel=1
Mar 19 11:43:41.112456 kernel: SELinux: policy capability nnp_nosuid_transition=1
Mar 19 11:43:41.112465 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Mar 19 11:43:41.112477 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Mar 19 11:43:41.112486 kernel: audit: type=1403 audit(1742384620.529:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Mar 19 11:43:41.112498 systemd[1]: Successfully loaded SELinux policy in 36.797ms.
Mar 19 11:43:41.112511 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 9.332ms.
Mar 19 11:43:41.112521 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Mar 19 11:43:41.112532 systemd[1]: Detected virtualization kvm.
Mar 19 11:43:41.112542 systemd[1]: Detected architecture arm64.
Mar 19 11:43:41.112552 systemd[1]: Detected first boot.
Mar 19 11:43:41.112561 systemd[1]: Initializing machine ID from VM UUID.
Mar 19 11:43:41.112576 zram_generator::config[1047]: No configuration found.
Mar 19 11:43:41.112586 kernel: NET: Registered PF_VSOCK protocol family
Mar 19 11:43:41.112597 systemd[1]: Populated /etc with preset unit settings.
Mar 19 11:43:41.112607 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Mar 19 11:43:41.112617 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Mar 19 11:43:41.112627 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Mar 19 11:43:41.112641 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Mar 19 11:43:41.112651 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Mar 19 11:43:41.112661 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Mar 19 11:43:41.112675 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Mar 19 11:43:41.112687 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Mar 19 11:43:41.112697 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Mar 19 11:43:41.112709 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Mar 19 11:43:41.112719 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Mar 19 11:43:41.112729 systemd[1]: Created slice user.slice - User and Session Slice.
Mar 19 11:43:41.112739 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 19 11:43:41.112749 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 19 11:43:41.112760 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Mar 19 11:43:41.112778 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Mar 19 11:43:41.112793 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Mar 19 11:43:41.112804 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Mar 19 11:43:41.112814 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0...
Mar 19 11:43:41.112825 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 19 11:43:41.112835 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Mar 19 11:43:41.112846 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Mar 19 11:43:41.112856 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Mar 19 11:43:41.112868 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Mar 19 11:43:41.112878 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 19 11:43:41.112888 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Mar 19 11:43:41.112898 systemd[1]: Reached target slices.target - Slice Units.
Mar 19 11:43:41.112908 systemd[1]: Reached target swap.target - Swaps.
Mar 19 11:43:41.112919 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Mar 19 11:43:41.112929 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Mar 19 11:43:41.112941 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Mar 19 11:43:41.112951 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Mar 19 11:43:41.112961 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Mar 19 11:43:41.112974 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 19 11:43:41.112984 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Mar 19 11:43:41.112994 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Mar 19 11:43:41.113004 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Mar 19 11:43:41.113014 systemd[1]: Mounting media.mount - External Media Directory...
Mar 19 11:43:41.113025 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Mar 19 11:43:41.113038 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Mar 19 11:43:41.113049 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Mar 19 11:43:41.113059 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Mar 19 11:43:41.113072 systemd[1]: Reached target machines.target - Containers.
Mar 19 11:43:41.113082 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Mar 19 11:43:41.113092 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 19 11:43:41.113102 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Mar 19 11:43:41.113113 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Mar 19 11:43:41.113123 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 19 11:43:41.113133 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Mar 19 11:43:41.113143 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 19 11:43:41.113156 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Mar 19 11:43:41.113167 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Mar 19 11:43:41.113178 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Mar 19 11:43:41.113188 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Mar 19 11:43:41.113198 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Mar 19 11:43:41.113208 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Mar 19 11:43:41.113218 systemd[1]: Stopped systemd-fsck-usr.service.
Mar 19 11:43:41.113265 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Mar 19 11:43:41.113281 kernel: loop: module loaded
Mar 19 11:43:41.113291 kernel: fuse: init (API version 7.39)
Mar 19 11:43:41.113301 systemd[1]: Starting systemd-journald.service - Journal Service...
Mar 19 11:43:41.113311 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Mar 19 11:43:41.113321 kernel: ACPI: bus type drm_connector registered
Mar 19 11:43:41.113330 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Mar 19 11:43:41.113341 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Mar 19 11:43:41.113351 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Mar 19 11:43:41.113361 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Mar 19 11:43:41.113373 systemd[1]: verity-setup.service: Deactivated successfully.
Mar 19 11:43:41.113384 systemd[1]: Stopped verity-setup.service.
Mar 19 11:43:41.113415 systemd-journald[1116]: Collecting audit messages is disabled.
Mar 19 11:43:41.113436 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Mar 19 11:43:41.113449 systemd-journald[1116]: Journal started
Mar 19 11:43:41.113471 systemd-journald[1116]: Runtime Journal (/run/log/journal/d7926182d4e84c918e738b67135844a4) is 5.9M, max 47.3M, 41.4M free.
Mar 19 11:43:41.113509 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Mar 19 11:43:40.933989 systemd[1]: Queued start job for default target multi-user.target.
Mar 19 11:43:40.944296 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Mar 19 11:43:40.944687 systemd[1]: systemd-journald.service: Deactivated successfully.
Mar 19 11:43:41.115790 systemd[1]: Started systemd-journald.service - Journal Service.
Mar 19 11:43:41.117150 systemd[1]: Mounted media.mount - External Media Directory.
Mar 19 11:43:41.118057 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Mar 19 11:43:41.119029 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Mar 19 11:43:41.120000 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Mar 19 11:43:41.122266 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Mar 19 11:43:41.123509 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 19 11:43:41.124845 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Mar 19 11:43:41.126273 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Mar 19 11:43:41.127484 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 19 11:43:41.127726 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 19 11:43:41.129613 systemd[1]: modprobe@drm.service: Deactivated successfully.
Mar 19 11:43:41.129804 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Mar 19 11:43:41.130905 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 19 11:43:41.131066 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Mar 19 11:43:41.132362 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Mar 19 11:43:41.132514 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Mar 19 11:43:41.133720 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 19 11:43:41.133903 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Mar 19 11:43:41.135161 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Mar 19 11:43:41.136419 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Mar 19 11:43:41.137659 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Mar 19 11:43:41.138860 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Mar 19 11:43:41.150728 systemd[1]: Reached target network-pre.target - Preparation for Network.
Mar 19 11:43:41.163341 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Mar 19 11:43:41.165219 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Mar 19 11:43:41.166051 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Mar 19 11:43:41.166089 systemd[1]: Reached target local-fs.target - Local File Systems.
Mar 19 11:43:41.167865 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Mar 19 11:43:41.169806 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Mar 19 11:43:41.171622 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Mar 19 11:43:41.172471 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 19 11:43:41.173745 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Mar 19 11:43:41.175384 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Mar 19 11:43:41.176358 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Mar 19 11:43:41.180122 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Mar 19 11:43:41.181442 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Mar 19 11:43:41.184205 systemd-journald[1116]: Time spent on flushing to /var/log/journal/d7926182d4e84c918e738b67135844a4 is 17.962ms for 870 entries.
Mar 19 11:43:41.184205 systemd-journald[1116]: System Journal (/var/log/journal/d7926182d4e84c918e738b67135844a4) is 8M, max 195.6M, 187.6M free.
Mar 19 11:43:41.211311 systemd-journald[1116]: Received client request to flush runtime journal.
Mar 19 11:43:41.211351 kernel: loop0: detected capacity change from 0 to 113512
Mar 19 11:43:41.185413 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Mar 19 11:43:41.187393 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Mar 19 11:43:41.189667 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Mar 19 11:43:41.192425 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 19 11:43:41.193881 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Mar 19 11:43:41.195557 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Mar 19 11:43:41.198284 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Mar 19 11:43:41.199565 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Mar 19 11:43:41.203638 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Mar 19 11:43:41.214744 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Mar 19 11:43:41.218508 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Mar 19 11:43:41.223826 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Mar 19 11:43:41.226301 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Mar 19 11:43:41.227620 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Mar 19 11:43:41.237872 udevadm[1179]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Mar 19 11:43:41.244513 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Mar 19 11:43:41.246334 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Mar 19 11:43:41.247006 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Mar 19 11:43:41.256257 kernel: loop1: detected capacity change from 0 to 201592
Mar 19 11:43:41.258448 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Mar 19 11:43:41.280915 systemd-tmpfiles[1185]: ACLs are not supported, ignoring.
Mar 19 11:43:41.281317 systemd-tmpfiles[1185]: ACLs are not supported, ignoring.
Mar 19 11:43:41.285612 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 19 11:43:41.301270 kernel: loop2: detected capacity change from 0 to 123192
Mar 19 11:43:41.344281 kernel: loop3: detected capacity change from 0 to 113512
Mar 19 11:43:41.349260 kernel: loop4: detected capacity change from 0 to 201592
Mar 19 11:43:41.354273 kernel: loop5: detected capacity change from 0 to 123192
Mar 19 11:43:41.357553 (sd-merge)[1191]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Mar 19 11:43:41.357935 (sd-merge)[1191]: Merged extensions into '/usr'.
Mar 19 11:43:41.361587 systemd[1]: Reload requested from client PID 1165 ('systemd-sysext') (unit systemd-sysext.service)...
Mar 19 11:43:41.361608 systemd[1]: Reloading...
Mar 19 11:43:41.429269 zram_generator::config[1219]: No configuration found.
Mar 19 11:43:41.451674 ldconfig[1160]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Mar 19 11:43:41.528810 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Mar 19 11:43:41.589288 systemd[1]: Reloading finished in 227 ms.
Mar 19 11:43:41.609675 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Mar 19 11:43:41.610945 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Mar 19 11:43:41.624441 systemd[1]: Starting ensure-sysext.service...
Mar 19 11:43:41.626018 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Mar 19 11:43:41.640046 systemd[1]: Reload requested from client PID 1254 ('systemctl') (unit ensure-sysext.service)...
Mar 19 11:43:41.640062 systemd[1]: Reloading...
Mar 19 11:43:41.644785 systemd-tmpfiles[1255]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Mar 19 11:43:41.645007 systemd-tmpfiles[1255]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Mar 19 11:43:41.645687 systemd-tmpfiles[1255]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Mar 19 11:43:41.645921 systemd-tmpfiles[1255]: ACLs are not supported, ignoring.
Mar 19 11:43:41.645971 systemd-tmpfiles[1255]: ACLs are not supported, ignoring.
Mar 19 11:43:41.648549 systemd-tmpfiles[1255]: Detected autofs mount point /boot during canonicalization of boot.
Mar 19 11:43:41.648560 systemd-tmpfiles[1255]: Skipping /boot
Mar 19 11:43:41.657286 systemd-tmpfiles[1255]: Detected autofs mount point /boot during canonicalization of boot.
Mar 19 11:43:41.657298 systemd-tmpfiles[1255]: Skipping /boot
Mar 19 11:43:41.687501 zram_generator::config[1283]: No configuration found.
Mar 19 11:43:41.771608 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Mar 19 11:43:41.831678 systemd[1]: Reloading finished in 191 ms.
Mar 19 11:43:41.841761 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Mar 19 11:43:41.860541 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 19 11:43:41.868508 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Mar 19 11:43:41.870992 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Mar 19 11:43:41.873349 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Mar 19 11:43:41.876499 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Mar 19 11:43:41.884537 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 19 11:43:41.889398 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Mar 19 11:43:41.893766 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 19 11:43:41.897518 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 19 11:43:41.900473 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 19 11:43:41.906182 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Mar 19 11:43:41.907369 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 19 11:43:41.907489 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Mar 19 11:43:41.910507 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Mar 19 11:43:41.914309 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Mar 19 11:43:41.915735 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 19 11:43:41.915908 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 19 11:43:41.918502 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 19 11:43:41.918687 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Mar 19 11:43:41.920554 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 19 11:43:41.920743 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Mar 19 11:43:41.923625 systemd-udevd[1325]: Using default interface naming scheme 'v255'.
Mar 19 11:43:41.930205 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Mar 19 11:43:41.934146 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 19 11:43:41.941614 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 19 11:43:41.947523 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 19 11:43:41.950365 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Mar 19 11:43:41.951285 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 19 11:43:41.951404 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Mar 19 11:43:41.952411 augenrules[1356]: No rules
Mar 19 11:43:41.954039 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Mar 19 11:43:41.957316 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Mar 19 11:43:41.958501 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 19 11:43:41.960268 systemd[1]: audit-rules.service: Deactivated successfully.
Mar 19 11:43:41.960475 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Mar 19 11:43:41.962299 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 19 11:43:41.962456 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 19 11:43:41.963998 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 19 11:43:41.964150 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Mar 19 11:43:41.965703 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 19 11:43:41.965913 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Mar 19 11:43:41.969374 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Mar 19 11:43:41.971912 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Mar 19 11:43:41.993261 systemd[1]: Finished ensure-sysext.service.
Mar 19 11:43:42.000578 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped.
Mar 19 11:43:42.011444 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Mar 19 11:43:42.012369 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 19 11:43:42.016141 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 19 11:43:42.019325 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Mar 19 11:43:42.025393 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 19 11:43:42.031067 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Mar 19 11:43:42.033181 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 19 11:43:42.033228 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Mar 19 11:43:42.037423 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Mar 19 11:43:42.041429 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Mar 19 11:43:42.042785 systemd-resolved[1324]: Positive Trust Anchors:
Mar 19 11:43:42.043051 systemd-resolved[1324]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Mar 19 11:43:42.043137 systemd-resolved[1324]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Mar 19 11:43:42.043240 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Mar 19 11:43:42.043844 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 19 11:43:42.045028 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 19 11:43:42.046482 systemd[1]: modprobe@drm.service: Deactivated successfully.
Mar 19 11:43:42.047332 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Mar 19 11:43:42.049585 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 19 11:43:42.049788 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Mar 19 11:43:42.050106 systemd-resolved[1324]: Defaulting to hostname 'linux'.
Mar 19 11:43:42.052676 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (1386)
Mar 19 11:43:42.052601 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 19 11:43:42.054259 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Mar 19 11:43:42.056870 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Mar 19 11:43:42.064321 augenrules[1397]: /sbin/augenrules: No change
Mar 19 11:43:42.077020 augenrules[1428]: No rules
Mar 19 11:43:42.081339 systemd[1]: audit-rules.service: Deactivated successfully.
Mar 19 11:43:42.081576 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Mar 19 11:43:42.083452 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Mar 19 11:43:42.084968 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Mar 19 11:43:42.085039 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Mar 19 11:43:42.090427 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Mar 19 11:43:42.104463 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Mar 19 11:43:42.133276 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Mar 19 11:43:42.134384 systemd[1]: Reached target time-set.target - System Time Set.
Mar 19 11:43:42.136590 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Mar 19 11:43:42.153207 systemd-networkd[1414]: lo: Link UP
Mar 19 11:43:42.153221 systemd-networkd[1414]: lo: Gained carrier
Mar 19 11:43:42.154108 systemd-networkd[1414]: Enumeration completed
Mar 19 11:43:42.155503 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 19 11:43:42.156542 systemd[1]: Started systemd-networkd.service - Network Configuration.
Mar 19 11:43:42.157854 systemd[1]: Reached target network.target - Network.
Mar 19 11:43:42.159981 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Mar 19 11:43:42.163079 systemd-networkd[1414]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 19 11:43:42.163083 systemd-networkd[1414]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Mar 19 11:43:42.163898 systemd-networkd[1414]: eth0: Link UP
Mar 19 11:43:42.163905 systemd-networkd[1414]: eth0: Gained carrier
Mar 19 11:43:42.163919 systemd-networkd[1414]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 19 11:43:42.164046 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Mar 19 11:43:42.179632 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Mar 19 11:43:42.181364 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
Mar 19 11:43:42.186368 systemd-networkd[1414]: eth0: DHCPv4 address 10.0.0.95/16, gateway 10.0.0.1 acquired from 10.0.0.1
Mar 19 11:43:42.187815 systemd-timesyncd[1416]: Network configuration changed, trying to establish connection.
Mar 19 11:43:42.188862 systemd-timesyncd[1416]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Mar 19 11:43:42.188899 systemd-timesyncd[1416]: Initial clock synchronization to Wed 2025-03-19 11:43:41.807539 UTC.
Mar 19 11:43:42.192432 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Mar 19 11:43:42.210338 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 19 11:43:42.216104 lvm[1449]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Mar 19 11:43:42.248877 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Mar 19 11:43:42.250418 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Mar 19 11:43:42.251523 systemd[1]: Reached target sysinit.target - System Initialization.
Mar 19 11:43:42.252697 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Mar 19 11:43:42.253928 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Mar 19 11:43:42.255314 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Mar 19 11:43:42.256425 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Mar 19 11:43:42.257628 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Mar 19 11:43:42.258827 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Mar 19 11:43:42.258860 systemd[1]: Reached target paths.target - Path Units.
Mar 19 11:43:42.259795 systemd[1]: Reached target timers.target - Timer Units.
Mar 19 11:43:42.261742 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Mar 19 11:43:42.264170 systemd[1]: Starting docker.socket - Docker Socket for the API...
Mar 19 11:43:42.267515 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
Mar 19 11:43:42.268902 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
Mar 19 11:43:42.270153 systemd[1]: Reached target ssh-access.target - SSH Access Available.
Mar 19 11:43:42.276174 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Mar 19 11:43:42.277597 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
Mar 19 11:43:42.279984 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Mar 19 11:43:42.281651 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Mar 19 11:43:42.282816 systemd[1]: Reached target sockets.target - Socket Units.
Mar 19 11:43:42.283798 systemd[1]: Reached target basic.target - Basic System.
Mar 19 11:43:42.284792 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Mar 19 11:43:42.284820 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Mar 19 11:43:42.285722 systemd[1]: Starting containerd.service - containerd container runtime...
Mar 19 11:43:42.287675 lvm[1456]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Mar 19 11:43:42.287826 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Mar 19 11:43:42.291405 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Mar 19 11:43:42.294418 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Mar 19 11:43:42.295657 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Mar 19 11:43:42.298458 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Mar 19 11:43:42.301344 jq[1459]: false
Mar 19 11:43:42.302052 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Mar 19 11:43:42.304138 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Mar 19 11:43:42.310751 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Mar 19 11:43:42.314151 extend-filesystems[1460]: Found loop3
Mar 19 11:43:42.315002 extend-filesystems[1460]: Found loop4
Mar 19 11:43:42.315002 extend-filesystems[1460]: Found loop5
Mar 19 11:43:42.315002 extend-filesystems[1460]: Found vda
Mar 19 11:43:42.315002 extend-filesystems[1460]: Found vda1
Mar 19 11:43:42.315002 extend-filesystems[1460]: Found vda2
Mar 19 11:43:42.315002 extend-filesystems[1460]: Found vda3
Mar 19 11:43:42.315002 extend-filesystems[1460]: Found usr
Mar 19 11:43:42.315002 extend-filesystems[1460]: Found vda4
Mar 19 11:43:42.315002 extend-filesystems[1460]: Found vda6
Mar 19 11:43:42.315002 extend-filesystems[1460]: Found vda7
Mar 19 11:43:42.315002 extend-filesystems[1460]: Found vda9
Mar 19 11:43:42.315002 extend-filesystems[1460]: Checking size of /dev/vda9
Mar 19 11:43:42.315642 systemd[1]: Starting systemd-logind.service - User Login Management...
Mar 19 11:43:42.322507 dbus-daemon[1458]: [system] SELinux support is enabled
Mar 19 11:43:42.319955 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Mar 19 11:43:42.321099 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Mar 19 11:43:42.325459 systemd[1]: Starting update-engine.service - Update Engine...
Mar 19 11:43:42.327951 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Mar 19 11:43:42.330448 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Mar 19 11:43:42.336542 extend-filesystems[1460]: Resized partition /dev/vda9
Mar 19 11:43:42.336449 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Mar 19 11:43:42.341755 jq[1478]: true
Mar 19 11:43:42.341424 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Mar 19 11:43:42.341610 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Mar 19 11:43:42.341971 systemd[1]: motdgen.service: Deactivated successfully.
Mar 19 11:43:42.342136 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Mar 19 11:43:42.342694 extend-filesystems[1481]: resize2fs 1.47.1 (20-May-2024)
Mar 19 11:43:42.344402 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Mar 19 11:43:42.344568 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Mar 19 11:43:42.354248 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
Mar 19 11:43:42.360733 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Mar 19 11:43:42.360784 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Mar 19 11:43:42.363392 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Mar 19 11:43:42.363417 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Mar 19 11:43:42.370291 tar[1483]: linux-arm64/LICENSE
Mar 19 11:43:42.370291 tar[1483]: linux-arm64/helm
Mar 19 11:43:42.374649 jq[1490]: true
Mar 19 11:43:42.377691 (ntainerd)[1491]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Mar 19 11:43:42.383668 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (1386)
Mar 19 11:43:42.389249 kernel: EXT4-fs (vda9): resized filesystem to 1864699
Mar 19 11:43:42.406405 update_engine[1474]: I20250319 11:43:42.394836 1474 main.cc:92] Flatcar Update Engine starting
Mar 19 11:43:42.406405 update_engine[1474]: I20250319 11:43:42.402367 1474 update_check_scheduler.cc:74] Next update check in 3m5s
Mar 19 11:43:42.401073 systemd[1]: Started update-engine.service - Update Engine.
Mar 19 11:43:42.409057 extend-filesystems[1481]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Mar 19 11:43:42.409057 extend-filesystems[1481]: old_desc_blocks = 1, new_desc_blocks = 1
Mar 19 11:43:42.409057 extend-filesystems[1481]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
Mar 19 11:43:42.405945 systemd-logind[1471]: Watching system buttons on /dev/input/event0 (Power Button)
Mar 19 11:43:42.422077 extend-filesystems[1460]: Resized filesystem in /dev/vda9
Mar 19 11:43:42.408443 systemd-logind[1471]: New seat seat0.
Mar 19 11:43:42.408747 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Mar 19 11:43:42.410831 systemd[1]: Started systemd-logind.service - User Login Management.
Mar 19 11:43:42.411939 systemd[1]: extend-filesystems.service: Deactivated successfully.
Mar 19 11:43:42.413882 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Mar 19 11:43:42.430719 bash[1512]: Updated "/home/core/.ssh/authorized_keys"
Mar 19 11:43:42.434330 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Mar 19 11:43:42.437840 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Mar 19 11:43:42.475701 locksmithd[1511]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Mar 19 11:43:42.582142 containerd[1491]: time="2025-03-19T11:43:42.581992800Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23
Mar 19 11:43:42.615416 containerd[1491]: time="2025-03-19T11:43:42.615359440Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Mar 19 11:43:42.616858 containerd[1491]: time="2025-03-19T11:43:42.616817400Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.83-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Mar 19 11:43:42.616858 containerd[1491]: time="2025-03-19T11:43:42.616852080Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Mar 19 11:43:42.616923 containerd[1491]: time="2025-03-19T11:43:42.616870800Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Mar 19 11:43:42.617049 containerd[1491]: time="2025-03-19T11:43:42.617028280Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Mar 19 11:43:42.617076 containerd[1491]: time="2025-03-19T11:43:42.617050240Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Mar 19 11:43:42.617126 containerd[1491]: time="2025-03-19T11:43:42.617107400Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Mar 19 11:43:42.617126 containerd[1491]: time="2025-03-19T11:43:42.617123440Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Mar 19 11:43:42.617380 containerd[1491]: time="2025-03-19T11:43:42.617357040Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Mar 19 11:43:42.617380 containerd[1491]: time="2025-03-19T11:43:42.617377480Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Mar 19 11:43:42.617441 containerd[1491]: time="2025-03-19T11:43:42.617391280Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Mar 19 11:43:42.617441 containerd[1491]: time="2025-03-19T11:43:42.617402280Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Mar 19 11:43:42.617492 containerd[1491]: time="2025-03-19T11:43:42.617473400Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Mar 19 11:43:42.617783 containerd[1491]: time="2025-03-19T11:43:42.617666960Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Mar 19 11:43:42.617820 containerd[1491]: time="2025-03-19T11:43:42.617805800Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Mar 19 11:43:42.617840 containerd[1491]: time="2025-03-19T11:43:42.617820480Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Mar 19 11:43:42.617915 containerd[1491]: time="2025-03-19T11:43:42.617894360Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Mar 19 11:43:42.617957 containerd[1491]: time="2025-03-19T11:43:42.617940880Z" level=info msg="metadata content store policy set" policy=shared
Mar 19 11:43:42.621099 containerd[1491]: time="2025-03-19T11:43:42.621064000Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Mar 19 11:43:42.621172 containerd[1491]: time="2025-03-19T11:43:42.621119840Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Mar 19 11:43:42.621172 containerd[1491]: time="2025-03-19T11:43:42.621136160Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Mar 19 11:43:42.621172 containerd[1491]: time="2025-03-19T11:43:42.621152280Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Mar 19 11:43:42.621221 containerd[1491]: time="2025-03-19T11:43:42.621174040Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Mar 19 11:43:42.621396 containerd[1491]: time="2025-03-19T11:43:42.621371240Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Mar 19 11:43:42.621638 containerd[1491]: time="2025-03-19T11:43:42.621615360Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Mar 19 11:43:42.622204 containerd[1491]: time="2025-03-19T11:43:42.621718600Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Mar 19 11:43:42.622204 containerd[1491]: time="2025-03-19T11:43:42.621738920Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Mar 19 11:43:42.622204 containerd[1491]: time="2025-03-19T11:43:42.621754600Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Mar 19 11:43:42.622204 containerd[1491]: time="2025-03-19T11:43:42.621767920Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Mar 19 11:43:42.622204 containerd[1491]: time="2025-03-19T11:43:42.621788720Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Mar 19 11:43:42.622204 containerd[1491]: time="2025-03-19T11:43:42.621801200Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Mar 19 11:43:42.622204 containerd[1491]: time="2025-03-19T11:43:42.621814520Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Mar 19 11:43:42.622204 containerd[1491]: time="2025-03-19T11:43:42.621829480Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Mar 19 11:43:42.622204 containerd[1491]: time="2025-03-19T11:43:42.621863560Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Mar 19 11:43:42.622204 containerd[1491]: time="2025-03-19T11:43:42.621875560Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Mar 19 11:43:42.622204 containerd[1491]: time="2025-03-19T11:43:42.621887160Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Mar 19 11:43:42.622204 containerd[1491]: time="2025-03-19T11:43:42.621907600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Mar 19 11:43:42.622204 containerd[1491]: time="2025-03-19T11:43:42.621920240Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Mar 19 11:43:42.622204 containerd[1491]: time="2025-03-19T11:43:42.621932360Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Mar 19 11:43:42.622464 containerd[1491]: time="2025-03-19T11:43:42.621943480Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Mar 19 11:43:42.622464 containerd[1491]: time="2025-03-19T11:43:42.621954920Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Mar 19 11:43:42.622464 containerd[1491]: time="2025-03-19T11:43:42.621967440Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Mar 19 11:43:42.622464 containerd[1491]: time="2025-03-19T11:43:42.621978440Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Mar 19 11:43:42.622464 containerd[1491]: time="2025-03-19T11:43:42.621990640Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Mar 19 11:43:42.622464 containerd[1491]: time="2025-03-19T11:43:42.622007120Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Mar 19 11:43:42.622464 containerd[1491]: time="2025-03-19T11:43:42.622022920Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Mar 19 11:43:42.622464 containerd[1491]: time="2025-03-19T11:43:42.622035280Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Mar 19 11:43:42.622464 containerd[1491]: time="2025-03-19T11:43:42.622050680Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Mar 19 11:43:42.622464 containerd[1491]: time="2025-03-19T11:43:42.622063040Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Mar 19 11:43:42.622464 containerd[1491]: time="2025-03-19T11:43:42.622078240Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Mar 19 11:43:42.622464 containerd[1491]: time="2025-03-19T11:43:42.622098280Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Mar 19 11:43:42.622464 containerd[1491]: time="2025-03-19T11:43:42.622111760Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Mar 19 11:43:42.622464 containerd[1491]: time="2025-03-19T11:43:42.622122360Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Mar 19 11:43:42.622687 containerd[1491]: time="2025-03-19T11:43:42.622326400Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Mar 19 11:43:42.622687 containerd[1491]: time="2025-03-19T11:43:42.622367920Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Mar 19 11:43:42.622687 containerd[1491]: time="2025-03-19T11:43:42.622379200Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Mar 19 11:43:42.622687 containerd[1491]: time="2025-03-19T11:43:42.622390880Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Mar 19 11:43:42.622687 containerd[1491]: time="2025-03-19T11:43:42.622402560Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Mar 19 11:43:42.622687 containerd[1491]: time="2025-03-19T11:43:42.622414760Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Mar 19 11:43:42.622687 containerd[1491]: time="2025-03-19T11:43:42.622426000Z" level=info msg="NRI interface is disabled by configuration."
Mar 19 11:43:42.622687 containerd[1491]: time="2025-03-19T11:43:42.622444160Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Mar 19 11:43:42.623163 containerd[1491]: time="2025-03-19T11:43:42.622837240Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Mar 19 11:43:42.623163 containerd[1491]: time="2025-03-19T11:43:42.622890920Z" level=info msg="Connect containerd service"
Mar 19 11:43:42.626182 containerd[1491]: time="2025-03-19T11:43:42.625907120Z" level=info msg="using legacy CRI server"
Mar 19 11:43:42.626182 containerd[1491]: time="2025-03-19T11:43:42.625926480Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Mar 19 11:43:42.626182 containerd[1491]: time="2025-03-19T11:43:42.626161800Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Mar 19 11:43:42.626845 containerd[1491]: time="2025-03-19T11:43:42.626814600Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Mar 19 11:43:42.627081 containerd[1491]: time="2025-03-19T11:43:42.627027360Z" level=info msg="Start subscribing containerd event"
Mar 19 11:43:42.627081 containerd[1491]: time="2025-03-19T11:43:42.627078080Z" level=info msg="Start recovering state"
Mar 19 11:43:42.628271 containerd[1491]: time="2025-03-19T11:43:42.627138800Z" level=info msg="Start event monitor"
Mar 19 11:43:42.628271 containerd[1491]: time="2025-03-19T11:43:42.627154160Z" level=info msg="Start snapshots syncer"
Mar 19 11:43:42.628271 containerd[1491]: time="2025-03-19T11:43:42.627163120Z" level=info msg="Start cni network conf syncer for default"
Mar 19 11:43:42.628271 containerd[1491]: time="2025-03-19T11:43:42.627175000Z" level=info msg="Start streaming server"
Mar 19 11:43:42.628271 containerd[1491]: time="2025-03-19T11:43:42.627424920Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Mar 19 11:43:42.628271 containerd[1491]: time="2025-03-19T11:43:42.627511040Z" level=info msg=serving... address=/run/containerd/containerd.sock
Mar 19 11:43:42.628271 containerd[1491]: time="2025-03-19T11:43:42.627568000Z" level=info msg="containerd successfully booted in 0.048516s"
Mar 19 11:43:42.627657 systemd[1]: Started containerd.service - containerd container runtime.
Mar 19 11:43:42.719319 sshd_keygen[1477]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Mar 19 11:43:42.738260 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Mar 19 11:43:42.752471 systemd[1]: Starting issuegen.service - Generate /run/issue...
Mar 19 11:43:42.758463 systemd[1]: issuegen.service: Deactivated successfully.
Mar 19 11:43:42.758672 systemd[1]: Finished issuegen.service - Generate /run/issue.
Mar 19 11:43:42.761738 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Mar 19 11:43:42.774075 tar[1483]: linux-arm64/README.md
Mar 19 11:43:42.784802 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Mar 19 11:43:42.787513 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Mar 19 11:43:42.793057 systemd[1]: Started getty@tty1.service - Getty on tty1.
Mar 19 11:43:42.795061 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0.
Mar 19 11:43:42.796163 systemd[1]: Reached target getty.target - Login Prompts.
Mar 19 11:43:43.556358 systemd-networkd[1414]: eth0: Gained IPv6LL
Mar 19 11:43:43.557935 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Mar 19 11:43:43.559784 systemd[1]: Reached target network-online.target - Network is Online.
Mar 19 11:43:43.575515 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent...
Mar 19 11:43:43.577541 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 19 11:43:43.579285 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Mar 19 11:43:43.590905 systemd[1]: coreos-metadata.service: Deactivated successfully.
Mar 19 11:43:43.591101 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent.
Mar 19 11:43:43.592805 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Mar 19 11:43:43.600205 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Mar 19 11:43:44.061770 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 19 11:43:44.062983 systemd[1]: Reached target multi-user.target - Multi-User System.
Mar 19 11:43:44.065077 (kubelet)[1572]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 19 11:43:44.065079 systemd[1]: Startup finished in 524ms (kernel) + 4.824s (initrd) + 3.575s (userspace) = 8.924s.
Mar 19 11:43:44.428629 kubelet[1572]: E0319 11:43:44.428513 1572 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 19 11:43:44.430701 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 19 11:43:44.430838 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 19 11:43:44.431125 systemd[1]: kubelet.service: Consumed 765ms CPU time, 249.2M memory peak.
Mar 19 11:43:48.429154 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Mar 19 11:43:48.430360 systemd[1]: Started sshd@0-10.0.0.95:22-10.0.0.1:57976.service - OpenSSH per-connection server daemon (10.0.0.1:57976).
Mar 19 11:43:48.498210 sshd[1585]: Accepted publickey for core from 10.0.0.1 port 57976 ssh2: RSA SHA256:m+OHt/J3MiNhmiRtwZE4O3bs/RIw4O7lQdgYDDHmuIE
Mar 19 11:43:48.499870 sshd-session[1585]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 19 11:43:48.513781 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Mar 19 11:43:48.526456 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Mar 19 11:43:48.528483 systemd-logind[1471]: New session 1 of user core.
Mar 19 11:43:48.534917 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Mar 19 11:43:48.536909 systemd[1]: Starting user@500.service - User Manager for UID 500...
Mar 19 11:43:48.543672 (systemd)[1589]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Mar 19 11:43:48.546026 systemd-logind[1471]: New session c1 of user core.
Mar 19 11:43:48.636251 systemd[1589]: Queued start job for default target default.target.
Mar 19 11:43:48.648122 systemd[1589]: Created slice app.slice - User Application Slice.
Mar 19 11:43:48.648149 systemd[1589]: Reached target paths.target - Paths.
Mar 19 11:43:48.648186 systemd[1589]: Reached target timers.target - Timers.
Mar 19 11:43:48.649434 systemd[1589]: Starting dbus.socket - D-Bus User Message Bus Socket...
Mar 19 11:43:48.657981 systemd[1589]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Mar 19 11:43:48.658047 systemd[1589]: Reached target sockets.target - Sockets.
Mar 19 11:43:48.658085 systemd[1589]: Reached target basic.target - Basic System.
Mar 19 11:43:48.658111 systemd[1589]: Reached target default.target - Main User Target.
Mar 19 11:43:48.658135 systemd[1589]: Startup finished in 107ms.
Mar 19 11:43:48.658424 systemd[1]: Started user@500.service - User Manager for UID 500.
Mar 19 11:43:48.660763 systemd[1]: Started session-1.scope - Session 1 of User core.
Mar 19 11:43:48.724006 systemd[1]: Started sshd@1-10.0.0.95:22-10.0.0.1:57986.service - OpenSSH per-connection server daemon (10.0.0.1:57986).
Mar 19 11:43:48.767488 sshd[1600]: Accepted publickey for core from 10.0.0.1 port 57986 ssh2: RSA SHA256:m+OHt/J3MiNhmiRtwZE4O3bs/RIw4O7lQdgYDDHmuIE
Mar 19 11:43:48.768599 sshd-session[1600]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 19 11:43:48.772832 systemd-logind[1471]: New session 2 of user core.
Mar 19 11:43:48.780379 systemd[1]: Started session-2.scope - Session 2 of User core.
Mar 19 11:43:48.829496 sshd[1602]: Connection closed by 10.0.0.1 port 57986 Mar 19 11:43:48.829793 sshd-session[1600]: pam_unix(sshd:session): session closed for user core Mar 19 11:43:48.840363 systemd[1]: sshd@1-10.0.0.95:22-10.0.0.1:57986.service: Deactivated successfully. Mar 19 11:43:48.841813 systemd[1]: session-2.scope: Deactivated successfully. Mar 19 11:43:48.843979 systemd-logind[1471]: Session 2 logged out. Waiting for processes to exit. Mar 19 11:43:48.845072 systemd[1]: Started sshd@2-10.0.0.95:22-10.0.0.1:57992.service - OpenSSH per-connection server daemon (10.0.0.1:57992). Mar 19 11:43:48.847293 systemd-logind[1471]: Removed session 2. Mar 19 11:43:48.888485 sshd[1607]: Accepted publickey for core from 10.0.0.1 port 57992 ssh2: RSA SHA256:m+OHt/J3MiNhmiRtwZE4O3bs/RIw4O7lQdgYDDHmuIE Mar 19 11:43:48.889507 sshd-session[1607]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 19 11:43:48.893285 systemd-logind[1471]: New session 3 of user core. Mar 19 11:43:48.901375 systemd[1]: Started session-3.scope - Session 3 of User core. Mar 19 11:43:48.947475 sshd[1610]: Connection closed by 10.0.0.1 port 57992 Mar 19 11:43:48.947863 sshd-session[1607]: pam_unix(sshd:session): session closed for user core Mar 19 11:43:48.965474 systemd[1]: sshd@2-10.0.0.95:22-10.0.0.1:57992.service: Deactivated successfully. Mar 19 11:43:48.966976 systemd[1]: session-3.scope: Deactivated successfully. Mar 19 11:43:48.967610 systemd-logind[1471]: Session 3 logged out. Waiting for processes to exit. Mar 19 11:43:48.969280 systemd[1]: Started sshd@3-10.0.0.95:22-10.0.0.1:57994.service - OpenSSH per-connection server daemon (10.0.0.1:57994). Mar 19 11:43:48.970886 systemd-logind[1471]: Removed session 3. 
Mar 19 11:43:49.012931 sshd[1615]: Accepted publickey for core from 10.0.0.1 port 57994 ssh2: RSA SHA256:m+OHt/J3MiNhmiRtwZE4O3bs/RIw4O7lQdgYDDHmuIE Mar 19 11:43:49.013819 sshd-session[1615]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 19 11:43:49.017794 systemd-logind[1471]: New session 4 of user core. Mar 19 11:43:49.028372 systemd[1]: Started session-4.scope - Session 4 of User core. Mar 19 11:43:49.077319 sshd[1618]: Connection closed by 10.0.0.1 port 57994 Mar 19 11:43:49.077561 sshd-session[1615]: pam_unix(sshd:session): session closed for user core Mar 19 11:43:49.089266 systemd[1]: sshd@3-10.0.0.95:22-10.0.0.1:57994.service: Deactivated successfully. Mar 19 11:43:49.090697 systemd[1]: session-4.scope: Deactivated successfully. Mar 19 11:43:49.092362 systemd-logind[1471]: Session 4 logged out. Waiting for processes to exit. Mar 19 11:43:49.094101 systemd[1]: Started sshd@4-10.0.0.95:22-10.0.0.1:57996.service - OpenSSH per-connection server daemon (10.0.0.1:57996). Mar 19 11:43:49.095037 systemd-logind[1471]: Removed session 4. Mar 19 11:43:49.138025 sshd[1623]: Accepted publickey for core from 10.0.0.1 port 57996 ssh2: RSA SHA256:m+OHt/J3MiNhmiRtwZE4O3bs/RIw4O7lQdgYDDHmuIE Mar 19 11:43:49.139053 sshd-session[1623]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 19 11:43:49.143259 systemd-logind[1471]: New session 5 of user core. Mar 19 11:43:49.149370 systemd[1]: Started session-5.scope - Session 5 of User core. 
Mar 19 11:43:49.206508 sudo[1627]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Mar 19 11:43:49.206770 sudo[1627]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 19 11:43:49.220977 sudo[1627]: pam_unix(sudo:session): session closed for user root Mar 19 11:43:49.222211 sshd[1626]: Connection closed by 10.0.0.1 port 57996 Mar 19 11:43:49.222565 sshd-session[1623]: pam_unix(sshd:session): session closed for user core Mar 19 11:43:49.237399 systemd[1]: sshd@4-10.0.0.95:22-10.0.0.1:57996.service: Deactivated successfully. Mar 19 11:43:49.238784 systemd[1]: session-5.scope: Deactivated successfully. Mar 19 11:43:49.242380 systemd-logind[1471]: Session 5 logged out. Waiting for processes to exit. Mar 19 11:43:49.258503 systemd[1]: Started sshd@5-10.0.0.95:22-10.0.0.1:58000.service - OpenSSH per-connection server daemon (10.0.0.1:58000). Mar 19 11:43:49.259529 systemd-logind[1471]: Removed session 5. Mar 19 11:43:49.299075 sshd[1632]: Accepted publickey for core from 10.0.0.1 port 58000 ssh2: RSA SHA256:m+OHt/J3MiNhmiRtwZE4O3bs/RIw4O7lQdgYDDHmuIE Mar 19 11:43:49.300073 sshd-session[1632]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 19 11:43:49.304289 systemd-logind[1471]: New session 6 of user core. Mar 19 11:43:49.323393 systemd[1]: Started session-6.scope - Session 6 of User core. 
Mar 19 11:43:49.372347 sudo[1637]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Mar 19 11:43:49.372909 sudo[1637]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 19 11:43:49.376095 sudo[1637]: pam_unix(sudo:session): session closed for user root Mar 19 11:43:49.380575 sudo[1636]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Mar 19 11:43:49.380826 sudo[1636]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 19 11:43:49.398552 systemd[1]: Starting audit-rules.service - Load Audit Rules... Mar 19 11:43:49.419612 augenrules[1659]: No rules Mar 19 11:43:49.420763 systemd[1]: audit-rules.service: Deactivated successfully. Mar 19 11:43:49.422269 systemd[1]: Finished audit-rules.service - Load Audit Rules. Mar 19 11:43:49.423428 sudo[1636]: pam_unix(sudo:session): session closed for user root Mar 19 11:43:49.424672 sshd[1635]: Connection closed by 10.0.0.1 port 58000 Mar 19 11:43:49.424992 sshd-session[1632]: pam_unix(sshd:session): session closed for user core Mar 19 11:43:49.435324 systemd[1]: sshd@5-10.0.0.95:22-10.0.0.1:58000.service: Deactivated successfully. Mar 19 11:43:49.436666 systemd[1]: session-6.scope: Deactivated successfully. Mar 19 11:43:49.438852 systemd-logind[1471]: Session 6 logged out. Waiting for processes to exit. Mar 19 11:43:49.439219 systemd[1]: Started sshd@6-10.0.0.95:22-10.0.0.1:58002.service - OpenSSH per-connection server daemon (10.0.0.1:58002). Mar 19 11:43:49.440847 systemd-logind[1471]: Removed session 6. Mar 19 11:43:49.482978 sshd[1667]: Accepted publickey for core from 10.0.0.1 port 58002 ssh2: RSA SHA256:m+OHt/J3MiNhmiRtwZE4O3bs/RIw4O7lQdgYDDHmuIE Mar 19 11:43:49.484045 sshd-session[1667]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 19 11:43:49.487785 systemd-logind[1471]: New session 7 of user core. 
Mar 19 11:43:49.500410 systemd[1]: Started session-7.scope - Session 7 of User core. Mar 19 11:43:49.548769 sudo[1671]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Mar 19 11:43:49.549046 sudo[1671]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 19 11:43:49.876483 systemd[1]: Starting docker.service - Docker Application Container Engine... Mar 19 11:43:49.876539 (dockerd)[1691]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Mar 19 11:43:50.119832 dockerd[1691]: time="2025-03-19T11:43:50.119769982Z" level=info msg="Starting up" Mar 19 11:43:50.272580 dockerd[1691]: time="2025-03-19T11:43:50.272454248Z" level=info msg="Loading containers: start." Mar 19 11:43:50.477341 kernel: Initializing XFRM netlink socket Mar 19 11:43:50.574131 systemd-networkd[1414]: docker0: Link UP Mar 19 11:43:50.613629 dockerd[1691]: time="2025-03-19T11:43:50.613524304Z" level=info msg="Loading containers: done." Mar 19 11:43:50.632664 dockerd[1691]: time="2025-03-19T11:43:50.632610819Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Mar 19 11:43:50.632828 dockerd[1691]: time="2025-03-19T11:43:50.632717387Z" level=info msg="Docker daemon" commit=41ca978a0a5400cc24b274137efa9f25517fcc0b containerd-snapshotter=false storage-driver=overlay2 version=27.3.1 Mar 19 11:43:50.632966 dockerd[1691]: time="2025-03-19T11:43:50.632937548Z" level=info msg="Daemon has completed initialization" Mar 19 11:43:50.661251 dockerd[1691]: time="2025-03-19T11:43:50.661157240Z" level=info msg="API listen on /run/docker.sock" Mar 19 11:43:50.661359 systemd[1]: Started docker.service - Docker Application Container Engine. 
Mar 19 11:43:51.135494 containerd[1491]: time="2025-03-19T11:43:51.135378386Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.3\"" Mar 19 11:43:51.846136 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1012857369.mount: Deactivated successfully. Mar 19 11:43:53.037184 containerd[1491]: time="2025-03-19T11:43:53.037137263Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 19 11:43:53.038159 containerd[1491]: time="2025-03-19T11:43:53.038105806Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.3: active requests=0, bytes read=26231952" Mar 19 11:43:53.039014 containerd[1491]: time="2025-03-19T11:43:53.038979909Z" level=info msg="ImageCreate event name:\"sha256:25dd33975ea35cef2fa9b105778dbe3369de267e9ddf81427b7b82e98ff374e5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 19 11:43:53.042251 containerd[1491]: time="2025-03-19T11:43:53.042196628Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:279e45cf07e4f56925c3c5237179eb63616788426a96e94df5fedf728b18926e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 19 11:43:53.044302 containerd[1491]: time="2025-03-19T11:43:53.043660640Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.3\" with image id \"sha256:25dd33975ea35cef2fa9b105778dbe3369de267e9ddf81427b7b82e98ff374e5\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.3\", repo digest \"registry.k8s.io/kube-apiserver@sha256:279e45cf07e4f56925c3c5237179eb63616788426a96e94df5fedf728b18926e\", size \"26228750\" in 1.908237332s" Mar 19 11:43:53.044302 containerd[1491]: time="2025-03-19T11:43:53.043696386Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.3\" returns image reference \"sha256:25dd33975ea35cef2fa9b105778dbe3369de267e9ddf81427b7b82e98ff374e5\""
Mar 19 11:43:53.044495 containerd[1491]: time="2025-03-19T11:43:53.044443106Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.3\"" Mar 19 11:43:54.274680 containerd[1491]: time="2025-03-19T11:43:54.274633806Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 19 11:43:54.275558 containerd[1491]: time="2025-03-19T11:43:54.275332338Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.3: active requests=0, bytes read=22530034" Mar 19 11:43:54.276195 containerd[1491]: time="2025-03-19T11:43:54.276163321Z" level=info msg="ImageCreate event name:\"sha256:9e29b4db8c5cdf9970961ed3a47137ea71ad067643b8e5cccb58085f22a9b315\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 19 11:43:54.279442 containerd[1491]: time="2025-03-19T11:43:54.279412759Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:54456a96a1bbdc35dcc2e70fcc1355bf655af67694e40b650ac12e83521f6411\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 19 11:43:54.280648 containerd[1491]: time="2025-03-19T11:43:54.280531194Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.3\" with image id \"sha256:9e29b4db8c5cdf9970961ed3a47137ea71ad067643b8e5cccb58085f22a9b315\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.3\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:54456a96a1bbdc35dcc2e70fcc1355bf655af67694e40b650ac12e83521f6411\", size \"23970828\" in 1.236054621s" Mar 19 11:43:54.280648 containerd[1491]: time="2025-03-19T11:43:54.280559401Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.3\" returns image reference \"sha256:9e29b4db8c5cdf9970961ed3a47137ea71ad067643b8e5cccb58085f22a9b315\"" Mar 19 11:43:54.281316 containerd[1491]: time="2025-03-19T11:43:54.281119785Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.3\""
Mar 19 11:43:54.681255 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Mar 19 11:43:54.691497 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 19 11:43:54.785664 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 19 11:43:54.788810 (kubelet)[1952]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 19 11:43:54.823776 kubelet[1952]: E0319 11:43:54.823688 1952 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 19 11:43:54.827430 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 19 11:43:54.827576 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 19 11:43:54.827897 systemd[1]: kubelet.service: Consumed 127ms CPU time, 104.5M memory peak.
Mar 19 11:43:55.533066 containerd[1491]: time="2025-03-19T11:43:55.533020129Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 19 11:43:55.534070 containerd[1491]: time="2025-03-19T11:43:55.534027327Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.3: active requests=0, bytes read=17482563" Mar 19 11:43:55.534890 containerd[1491]: time="2025-03-19T11:43:55.534823848Z" level=info msg="ImageCreate event name:\"sha256:6b8dfebcc65dc9d4765a91d2923c304e13beca7111c57dfc99f1c3267a6e9f30\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 19 11:43:55.537416 containerd[1491]: time="2025-03-19T11:43:55.537361374Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:aafae2e3a8d65bc6dc3a0c6095c24bc72b1ff608e1417f0f5e860ce4a61c27df\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 19 11:43:55.538584 containerd[1491]: time="2025-03-19T11:43:55.538533732Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.3\" with image id \"sha256:6b8dfebcc65dc9d4765a91d2923c304e13beca7111c57dfc99f1c3267a6e9f30\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.3\", repo digest \"registry.k8s.io/kube-scheduler@sha256:aafae2e3a8d65bc6dc3a0c6095c24bc72b1ff608e1417f0f5e860ce4a61c27df\", size \"18923375\" in 1.25738185s" Mar 19 11:43:55.538584 containerd[1491]: time="2025-03-19T11:43:55.538565227Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.3\" returns image reference \"sha256:6b8dfebcc65dc9d4765a91d2923c304e13beca7111c57dfc99f1c3267a6e9f30\"" Mar 19 11:43:55.539329 containerd[1491]: time="2025-03-19T11:43:55.539038044Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.3\"" Mar 19 11:43:56.552647 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3458798611.mount: Deactivated successfully. 
Mar 19 11:43:56.763006 containerd[1491]: time="2025-03-19T11:43:56.762455182Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 19 11:43:56.763006 containerd[1491]: time="2025-03-19T11:43:56.762966689Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.3: active requests=0, bytes read=27370097" Mar 19 11:43:56.763772 containerd[1491]: time="2025-03-19T11:43:56.763743866Z" level=info msg="ImageCreate event name:\"sha256:2a637602f3e88e76046aa1a75bccdb37b25b2fcba99a380412e2c27ccd55c547\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 19 11:43:56.765579 containerd[1491]: time="2025-03-19T11:43:56.765543503Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:5015269547a0b7dd2c062758e9a64467b58978ff2502cad4c3f5cdf4aa554ad3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 19 11:43:56.766748 containerd[1491]: time="2025-03-19T11:43:56.766716290Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.3\" with image id \"sha256:2a637602f3e88e76046aa1a75bccdb37b25b2fcba99a380412e2c27ccd55c547\", repo tag \"registry.k8s.io/kube-proxy:v1.32.3\", repo digest \"registry.k8s.io/kube-proxy@sha256:5015269547a0b7dd2c062758e9a64467b58978ff2502cad4c3f5cdf4aa554ad3\", size \"27369114\" in 1.227648722s" Mar 19 11:43:56.766748 containerd[1491]: time="2025-03-19T11:43:56.766747228Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.3\" returns image reference \"sha256:2a637602f3e88e76046aa1a75bccdb37b25b2fcba99a380412e2c27ccd55c547\"" Mar 19 11:43:56.767201 containerd[1491]: time="2025-03-19T11:43:56.767162272Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Mar 19 11:43:57.297731 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount229209109.mount: Deactivated successfully. 
Mar 19 11:43:58.385126 containerd[1491]: time="2025-03-19T11:43:58.385058506Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 19 11:43:58.385641 containerd[1491]: time="2025-03-19T11:43:58.385595272Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=16951624" Mar 19 11:43:58.386474 containerd[1491]: time="2025-03-19T11:43:58.386434083Z" level=info msg="ImageCreate event name:\"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 19 11:43:58.389601 containerd[1491]: time="2025-03-19T11:43:58.389573286Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 19 11:43:58.390989 containerd[1491]: time="2025-03-19T11:43:58.390864767Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"16948420\" in 1.623668328s" Mar 19 11:43:58.390989 containerd[1491]: time="2025-03-19T11:43:58.390900456Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\"" Mar 19 11:43:58.391469 containerd[1491]: time="2025-03-19T11:43:58.391440521Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Mar 19 11:43:58.791791 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3301051181.mount: Deactivated successfully. 
Mar 19 11:43:58.795573 containerd[1491]: time="2025-03-19T11:43:58.795527802Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 19 11:43:58.796122 containerd[1491]: time="2025-03-19T11:43:58.796087023Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268705" Mar 19 11:43:58.796722 containerd[1491]: time="2025-03-19T11:43:58.796685708Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 19 11:43:58.799274 containerd[1491]: time="2025-03-19T11:43:58.798900017Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 19 11:43:58.799918 containerd[1491]: time="2025-03-19T11:43:58.799656243Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 408.081232ms" Mar 19 11:43:58.799918 containerd[1491]: time="2025-03-19T11:43:58.799681082Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Mar 19 11:43:58.800141 containerd[1491]: time="2025-03-19T11:43:58.800115987Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Mar 19 11:43:59.288845 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2176654701.mount: Deactivated successfully. 
Mar 19 11:44:01.119481 containerd[1491]: time="2025-03-19T11:44:01.119435238Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 19 11:44:01.120396 containerd[1491]: time="2025-03-19T11:44:01.120205107Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=67812431" Mar 19 11:44:01.121084 containerd[1491]: time="2025-03-19T11:44:01.121058534Z" level=info msg="ImageCreate event name:\"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 19 11:44:01.124297 containerd[1491]: time="2025-03-19T11:44:01.124269323Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 19 11:44:01.126185 containerd[1491]: time="2025-03-19T11:44:01.126153929Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"67941650\" in 2.326005536s" Mar 19 11:44:01.126821 containerd[1491]: time="2025-03-19T11:44:01.126798898Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\"" Mar 19 11:44:05.056487 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Mar 19 11:44:05.069499 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 19 11:44:05.078729 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Mar 19 11:44:05.078820 systemd[1]: kubelet.service: Failed with result 'signal'. 
Mar 19 11:44:05.079105 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 19 11:44:05.090553 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 19 11:44:05.111188 systemd[1]: Reload requested from client PID 2117 ('systemctl') (unit session-7.scope)... Mar 19 11:44:05.111204 systemd[1]: Reloading... Mar 19 11:44:05.181340 zram_generator::config[2161]: No configuration found. Mar 19 11:44:05.296685 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 19 11:44:05.382992 systemd[1]: Reloading finished in 271 ms. Mar 19 11:44:05.426851 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 19 11:44:05.429553 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Mar 19 11:44:05.430399 systemd[1]: kubelet.service: Deactivated successfully. Mar 19 11:44:05.430594 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 19 11:44:05.430635 systemd[1]: kubelet.service: Consumed 82ms CPU time, 90.2M memory peak. Mar 19 11:44:05.432168 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 19 11:44:05.528111 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 19 11:44:05.531421 (kubelet)[2208]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Mar 19 11:44:05.566018 kubelet[2208]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 19 11:44:05.566018 kubelet[2208]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Mar 19 11:44:05.566018 kubelet[2208]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 19 11:44:05.566391 kubelet[2208]: I0319 11:44:05.566092 2208 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Mar 19 11:44:06.658856 kubelet[2208]: I0319 11:44:06.658805 2208 server.go:520] "Kubelet version" kubeletVersion="v1.32.0" Mar 19 11:44:06.658856 kubelet[2208]: I0319 11:44:06.658838 2208 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Mar 19 11:44:06.660078 kubelet[2208]: I0319 11:44:06.659279 2208 server.go:954] "Client rotation is on, will bootstrap in background" Mar 19 11:44:06.688786 kubelet[2208]: E0319 11:44:06.688740 2208 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.95:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.95:6443: connect: connection refused" logger="UnhandledError" Mar 19 11:44:06.690359 kubelet[2208]: I0319 11:44:06.690332 2208 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Mar 19 11:44:06.698121 kubelet[2208]: E0319 11:44:06.698097 2208 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Mar 19 11:44:06.698228 kubelet[2208]: I0319 11:44:06.698143 2208 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Mar 19 11:44:06.701175 kubelet[2208]: I0319 11:44:06.701152 2208 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Mar 19 11:44:06.702300 kubelet[2208]: I0319 11:44:06.702260 2208 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Mar 19 11:44:06.702466 kubelet[2208]: I0319 11:44:06.702296 2208 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Mar 19 11:44:06.702566 kubelet[2208]: I0319 11:44:06.702529 2208 topology_manager.go:138] "Creating topology manager with none policy" Mar 19 11:44:06.702566 kubelet[2208]: I0319 11:44:06.702538 2208 container_manager_linux.go:304] "Creating device plugin manager" Mar 19 11:44:06.702752 kubelet[2208]: I0319 11:44:06.702722 2208 state_mem.go:36] "Initialized new in-memory state store" Mar 19 11:44:06.710326 kubelet[2208]: I0319 11:44:06.710306 2208 kubelet.go:446] "Attempting to sync node with API server" Mar 19 11:44:06.710359 kubelet[2208]: I0319 11:44:06.710334 2208 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Mar 19 11:44:06.710359 kubelet[2208]: I0319 11:44:06.710352 2208 kubelet.go:352] "Adding apiserver pod source" Mar 19 11:44:06.710359 kubelet[2208]: I0319 11:44:06.710361 2208 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Mar 19 11:44:06.713810 kubelet[2208]: W0319 11:44:06.713650 2208 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.95:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.95:6443: connect: connection refused Mar 19 11:44:06.713810 kubelet[2208]: E0319 11:44:06.713708 2208 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.95:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.95:6443: connect: connection refused" logger="UnhandledError" Mar 19 11:44:06.714715 kubelet[2208]: W0319 11:44:06.714675 2208 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.95:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.95:6443: connect: connection refused
Mar 19 11:44:06.714764 kubelet[2208]: E0319 11:44:06.714724 2208 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.95:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.95:6443: connect: connection refused" logger="UnhandledError" Mar 19 11:44:06.715097 kubelet[2208]: I0319 11:44:06.715060 2208 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Mar 19 11:44:06.718163 kubelet[2208]: I0319 11:44:06.718071 2208 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Mar 19 11:44:06.720476 kubelet[2208]: W0319 11:44:06.720459 2208 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Mar 19 11:44:06.725932 kubelet[2208]: I0319 11:44:06.725904 2208 watchdog_linux.go:99] "Systemd watchdog is not enabled" Mar 19 11:44:06.725989 kubelet[2208]: I0319 11:44:06.725951 2208 server.go:1287] "Started kubelet" Mar 19 11:44:06.726264 kubelet[2208]: I0319 11:44:06.726055 2208 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Mar 19 11:44:06.726264 kubelet[2208]: I0319 11:44:06.726201 2208 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Mar 19 11:44:06.729122 kubelet[2208]: I0319 11:44:06.729087 2208 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Mar 19 11:44:06.729756 kubelet[2208]: I0319 11:44:06.729725 2208 server.go:490] "Adding debug handlers to kubelet server" Mar 19 11:44:06.731168 kubelet[2208]: I0319 11:44:06.731055 2208 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Mar 19 11:44:06.733341 kubelet[2208]: I0319 11:44:06.733165 2208 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Mar 19 11:44:06.734787 kubelet[2208]: E0319 11:44:06.734760 2208 kubelet.go:1561] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Mar 19 11:44:06.735300 kubelet[2208]: E0319 11:44:06.735203 2208 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.95:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.95:6443: connect: connection refused" interval="200ms" Mar 19 11:44:06.735508 kubelet[2208]: I0319 11:44:06.735479 2208 volume_manager.go:297] "Starting Kubelet Volume Manager" Mar 19 11:44:06.735566 kubelet[2208]: I0319 11:44:06.735555 2208 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Mar 19 11:44:06.735592 kubelet[2208]: I0319 11:44:06.735585 2208 reconciler.go:26] "Reconciler: start to sync state" Mar 19 11:44:06.735977 kubelet[2208]: W0319 11:44:06.735860 2208 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.95:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.95:6443: connect: connection refused Mar 19 11:44:06.735977 kubelet[2208]: E0319 11:44:06.735908 2208 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.95:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.95:6443: connect: connection refused" logger="UnhandledError"
Mar 19 11:44:06.736103 kubelet[2208]: E0319 11:44:06.735673 2208 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.95:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.95:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.182e319aca317e6c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-03-19 11:44:06.725926508 +0000 UTC m=+1.191652608,LastTimestamp:2025-03-19 11:44:06.725926508 +0000 UTC m=+1.191652608,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Mar 19 11:44:06.736187 kubelet[2208]: I0319 11:44:06.736126 2208 factory.go:221] Registration of the systemd container factory successfully Mar 19 11:44:06.736336 kubelet[2208]: I0319 11:44:06.736316 2208 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Mar 19 11:44:06.736448 kubelet[2208]: E0319 11:44:06.736426 2208 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 19 11:44:06.737276 kubelet[2208]: I0319 11:44:06.737253 2208 factory.go:221] Registration of the containerd container factory successfully Mar 19 11:44:06.744085 kubelet[2208]: I0319 11:44:06.744043 2208 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Mar 19 11:44:06.745660 kubelet[2208]: I0319 11:44:06.745642 2208 cpu_manager.go:221] "Starting CPU manager" policy="none" Mar 19 11:44:06.745660 kubelet[2208]: I0319 11:44:06.745658 2208 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Mar 19 11:44:06.745770 kubelet[2208]: I0319 11:44:06.745674 2208 state_mem.go:36] "Initialized new in-memory state store" Mar 19 11:44:06.746549 kubelet[2208]: I0319 11:44:06.746410 2208 kubelet_network_linux.go:50] "Initialized iptables rules."
protocol="IPv6" Mar 19 11:44:06.746549 kubelet[2208]: I0319 11:44:06.746444 2208 status_manager.go:227] "Starting to sync pod status with apiserver" Mar 19 11:44:06.746549 kubelet[2208]: I0319 11:44:06.746461 2208 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Mar 19 11:44:06.746549 kubelet[2208]: I0319 11:44:06.746467 2208 kubelet.go:2388] "Starting kubelet main sync loop" Mar 19 11:44:06.746549 kubelet[2208]: E0319 11:44:06.746507 2208 kubelet.go:2412] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Mar 19 11:44:06.748179 kubelet[2208]: W0319 11:44:06.748138 2208 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.95:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.95:6443: connect: connection refused Mar 19 11:44:06.748325 kubelet[2208]: E0319 11:44:06.748189 2208 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.95:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.95:6443: connect: connection refused" logger="UnhandledError" Mar 19 11:44:06.748325 kubelet[2208]: I0319 11:44:06.748320 2208 policy_none.go:49] "None policy: Start" Mar 19 11:44:06.748379 kubelet[2208]: I0319 11:44:06.748334 2208 memory_manager.go:186] "Starting memorymanager" policy="None" Mar 19 11:44:06.748379 kubelet[2208]: I0319 11:44:06.748344 2208 state_mem.go:35] "Initializing new in-memory state store" Mar 19 11:44:06.753781 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Mar 19 11:44:06.766972 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. 
Mar 19 11:44:06.770034 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Mar 19 11:44:06.781060 kubelet[2208]: I0319 11:44:06.781026 2208 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Mar 19 11:44:06.781486 kubelet[2208]: I0319 11:44:06.781212 2208 eviction_manager.go:189] "Eviction manager: starting control loop" Mar 19 11:44:06.781486 kubelet[2208]: I0319 11:44:06.781246 2208 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 19 11:44:06.781486 kubelet[2208]: I0319 11:44:06.781426 2208 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Mar 19 11:44:06.782459 kubelet[2208]: E0319 11:44:06.782431 2208 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Mar 19 11:44:06.782513 kubelet[2208]: E0319 11:44:06.782475 2208 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Mar 19 11:44:06.853951 systemd[1]: Created slice kubepods-burstable-pod6f9153565b634bc8b85c4dba7500c00f.slice - libcontainer container kubepods-burstable-pod6f9153565b634bc8b85c4dba7500c00f.slice. 
Mar 19 11:44:06.882490 kubelet[2208]: E0319 11:44:06.882449 2208 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 19 11:44:06.883448 kubelet[2208]: I0319 11:44:06.883313 2208 kubelet_node_status.go:76] "Attempting to register node" node="localhost" Mar 19 11:44:06.883914 kubelet[2208]: E0319 11:44:06.883877 2208 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://10.0.0.95:6443/api/v1/nodes\": dial tcp 10.0.0.95:6443: connect: connection refused" node="localhost" Mar 19 11:44:06.885185 systemd[1]: Created slice kubepods-burstable-podcbbb394ff48414687df77e1bc213eeb5.slice - libcontainer container kubepods-burstable-podcbbb394ff48414687df77e1bc213eeb5.slice. Mar 19 11:44:06.897461 kubelet[2208]: E0319 11:44:06.897288 2208 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 19 11:44:06.899463 systemd[1]: Created slice kubepods-burstable-pod3700e556aa2777679a324159272023f1.slice - libcontainer container kubepods-burstable-pod3700e556aa2777679a324159272023f1.slice. 
Mar 19 11:44:06.902710 kubelet[2208]: E0319 11:44:06.902687 2208 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 19 11:44:06.936060 kubelet[2208]: E0319 11:44:06.935969 2208 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.95:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.95:6443: connect: connection refused" interval="400ms" Mar 19 11:44:06.937454 kubelet[2208]: I0319 11:44:06.937424 2208 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/cbbb394ff48414687df77e1bc213eeb5-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"cbbb394ff48414687df77e1bc213eeb5\") " pod="kube-system/kube-controller-manager-localhost" Mar 19 11:44:06.937508 kubelet[2208]: I0319 11:44:06.937462 2208 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/cbbb394ff48414687df77e1bc213eeb5-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"cbbb394ff48414687df77e1bc213eeb5\") " pod="kube-system/kube-controller-manager-localhost" Mar 19 11:44:06.937508 kubelet[2208]: I0319 11:44:06.937483 2208 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/cbbb394ff48414687df77e1bc213eeb5-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"cbbb394ff48414687df77e1bc213eeb5\") " pod="kube-system/kube-controller-manager-localhost" Mar 19 11:44:06.937549 kubelet[2208]: I0319 11:44:06.937511 2208 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: 
\"kubernetes.io/host-path/3700e556aa2777679a324159272023f1-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"3700e556aa2777679a324159272023f1\") " pod="kube-system/kube-scheduler-localhost" Mar 19 11:44:06.937571 kubelet[2208]: I0319 11:44:06.937550 2208 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/cbbb394ff48414687df77e1bc213eeb5-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"cbbb394ff48414687df77e1bc213eeb5\") " pod="kube-system/kube-controller-manager-localhost" Mar 19 11:44:06.937615 kubelet[2208]: I0319 11:44:06.937598 2208 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/6f9153565b634bc8b85c4dba7500c00f-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"6f9153565b634bc8b85c4dba7500c00f\") " pod="kube-system/kube-apiserver-localhost" Mar 19 11:44:06.937641 kubelet[2208]: I0319 11:44:06.937627 2208 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/6f9153565b634bc8b85c4dba7500c00f-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"6f9153565b634bc8b85c4dba7500c00f\") " pod="kube-system/kube-apiserver-localhost" Mar 19 11:44:06.937661 kubelet[2208]: I0319 11:44:06.937646 2208 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/cbbb394ff48414687df77e1bc213eeb5-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"cbbb394ff48414687df77e1bc213eeb5\") " pod="kube-system/kube-controller-manager-localhost" Mar 19 11:44:06.937684 kubelet[2208]: I0319 11:44:06.937670 2208 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: 
\"kubernetes.io/host-path/6f9153565b634bc8b85c4dba7500c00f-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"6f9153565b634bc8b85c4dba7500c00f\") " pod="kube-system/kube-apiserver-localhost" Mar 19 11:44:07.085735 kubelet[2208]: I0319 11:44:07.085703 2208 kubelet_node_status.go:76] "Attempting to register node" node="localhost" Mar 19 11:44:07.086011 kubelet[2208]: E0319 11:44:07.085981 2208 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://10.0.0.95:6443/api/v1/nodes\": dial tcp 10.0.0.95:6443: connect: connection refused" node="localhost" Mar 19 11:44:07.186251 containerd[1491]: time="2025-03-19T11:44:07.186128074Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:6f9153565b634bc8b85c4dba7500c00f,Namespace:kube-system,Attempt:0,}" Mar 19 11:44:07.198308 containerd[1491]: time="2025-03-19T11:44:07.198274346Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:cbbb394ff48414687df77e1bc213eeb5,Namespace:kube-system,Attempt:0,}" Mar 19 11:44:07.203951 containerd[1491]: time="2025-03-19T11:44:07.203858648Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:3700e556aa2777679a324159272023f1,Namespace:kube-system,Attempt:0,}" Mar 19 11:44:07.337298 kubelet[2208]: E0319 11:44:07.337260 2208 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.95:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.95:6443: connect: connection refused" interval="800ms" Mar 19 11:44:07.488346 kubelet[2208]: I0319 11:44:07.487962 2208 kubelet_node_status.go:76] "Attempting to register node" node="localhost" Mar 19 11:44:07.488346 kubelet[2208]: E0319 11:44:07.488306 2208 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://10.0.0.95:6443/api/v1/nodes\": dial tcp 10.0.0.95:6443: connect: connection 
refused" node="localhost" Mar 19 11:44:07.588583 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1788928859.mount: Deactivated successfully. Mar 19 11:44:07.593020 containerd[1491]: time="2025-03-19T11:44:07.592962072Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 19 11:44:07.594647 containerd[1491]: time="2025-03-19T11:44:07.594602854Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 19 11:44:07.595685 containerd[1491]: time="2025-03-19T11:44:07.595646991Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269175" Mar 19 11:44:07.596139 containerd[1491]: time="2025-03-19T11:44:07.596111452Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Mar 19 11:44:07.597670 containerd[1491]: time="2025-03-19T11:44:07.597633064Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 19 11:44:07.598783 containerd[1491]: time="2025-03-19T11:44:07.598740758Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Mar 19 11:44:07.602262 containerd[1491]: time="2025-03-19T11:44:07.602212234Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 19 11:44:07.603254 containerd[1491]: time="2025-03-19T11:44:07.603151295Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id 
\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 404.813872ms" Mar 19 11:44:07.603872 containerd[1491]: time="2025-03-19T11:44:07.603722269Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 399.751638ms" Mar 19 11:44:07.604649 containerd[1491]: time="2025-03-19T11:44:07.604620529Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 418.412649ms" Mar 19 11:44:07.605291 containerd[1491]: time="2025-03-19T11:44:07.605226635Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 19 11:44:07.637521 kubelet[2208]: W0319 11:44:07.637414 2208 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.95:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.95:6443: connect: connection refused Mar 19 11:44:07.637521 kubelet[2208]: E0319 11:44:07.637474 2208 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get 
\"https://10.0.0.95:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.95:6443: connect: connection refused" logger="UnhandledError" Mar 19 11:44:07.674391 kubelet[2208]: W0319 11:44:07.674300 2208 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.95:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.95:6443: connect: connection refused Mar 19 11:44:07.674391 kubelet[2208]: E0319 11:44:07.674363 2208 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.95:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.95:6443: connect: connection refused" logger="UnhandledError" Mar 19 11:44:07.742128 containerd[1491]: time="2025-03-19T11:44:07.741329269Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 19 11:44:07.742128 containerd[1491]: time="2025-03-19T11:44:07.741738955Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 19 11:44:07.742128 containerd[1491]: time="2025-03-19T11:44:07.741824190Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 19 11:44:07.742128 containerd[1491]: time="2025-03-19T11:44:07.741963161Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 19 11:44:07.743430 containerd[1491]: time="2025-03-19T11:44:07.743224318Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 19 11:44:07.743430 containerd[1491]: time="2025-03-19T11:44:07.743291907Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 19 11:44:07.743430 containerd[1491]: time="2025-03-19T11:44:07.743302367Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 19 11:44:07.743430 containerd[1491]: time="2025-03-19T11:44:07.743373948Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 19 11:44:07.748166 containerd[1491]: time="2025-03-19T11:44:07.747544150Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 19 11:44:07.748166 containerd[1491]: time="2025-03-19T11:44:07.747596888Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 19 11:44:07.748166 containerd[1491]: time="2025-03-19T11:44:07.747612538Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 19 11:44:07.748166 containerd[1491]: time="2025-03-19T11:44:07.747763166Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 19 11:44:07.763504 systemd[1]: Started cri-containerd-485fa0e43109e3ec6a096ddd24eaab50c2462d168f68132b27becbe838411814.scope - libcontainer container 485fa0e43109e3ec6a096ddd24eaab50c2462d168f68132b27becbe838411814. Mar 19 11:44:07.764978 systemd[1]: Started cri-containerd-7f1efc9bf27d83028efae3c03042b6b403b55f08ce50a00f36556644e5689f11.scope - libcontainer container 7f1efc9bf27d83028efae3c03042b6b403b55f08ce50a00f36556644e5689f11. 
Mar 19 11:44:07.769224 systemd[1]: Started cri-containerd-dcc60f74f61b4abbe873cfcff3503a62c48cd6df56cdeff6ddf6890df7f6d142.scope - libcontainer container dcc60f74f61b4abbe873cfcff3503a62c48cd6df56cdeff6ddf6890df7f6d142. Mar 19 11:44:07.800512 containerd[1491]: time="2025-03-19T11:44:07.800477253Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:cbbb394ff48414687df77e1bc213eeb5,Namespace:kube-system,Attempt:0,} returns sandbox id \"dcc60f74f61b4abbe873cfcff3503a62c48cd6df56cdeff6ddf6890df7f6d142\"" Mar 19 11:44:07.801730 containerd[1491]: time="2025-03-19T11:44:07.801704595Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:6f9153565b634bc8b85c4dba7500c00f,Namespace:kube-system,Attempt:0,} returns sandbox id \"485fa0e43109e3ec6a096ddd24eaab50c2462d168f68132b27becbe838411814\"" Mar 19 11:44:07.803804 containerd[1491]: time="2025-03-19T11:44:07.803773508Z" level=info msg="CreateContainer within sandbox \"dcc60f74f61b4abbe873cfcff3503a62c48cd6df56cdeff6ddf6890df7f6d142\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Mar 19 11:44:07.804128 containerd[1491]: time="2025-03-19T11:44:07.804035600Z" level=info msg="CreateContainer within sandbox \"485fa0e43109e3ec6a096ddd24eaab50c2462d168f68132b27becbe838411814\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Mar 19 11:44:07.806416 containerd[1491]: time="2025-03-19T11:44:07.806317460Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:3700e556aa2777679a324159272023f1,Namespace:kube-system,Attempt:0,} returns sandbox id \"7f1efc9bf27d83028efae3c03042b6b403b55f08ce50a00f36556644e5689f11\"" Mar 19 11:44:07.808281 containerd[1491]: time="2025-03-19T11:44:07.808256344Z" level=info msg="CreateContainer within sandbox \"7f1efc9bf27d83028efae3c03042b6b403b55f08ce50a00f36556644e5689f11\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Mar 19 
11:44:07.820610 containerd[1491]: time="2025-03-19T11:44:07.820484058Z" level=info msg="CreateContainer within sandbox \"dcc60f74f61b4abbe873cfcff3503a62c48cd6df56cdeff6ddf6890df7f6d142\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"47762c6379e719807f682d2259e1b715d86abc0d914352fdb94668aac2b62f27\"" Mar 19 11:44:07.821138 containerd[1491]: time="2025-03-19T11:44:07.821114596Z" level=info msg="StartContainer for \"47762c6379e719807f682d2259e1b715d86abc0d914352fdb94668aac2b62f27\"" Mar 19 11:44:07.821487 containerd[1491]: time="2025-03-19T11:44:07.821372936Z" level=info msg="CreateContainer within sandbox \"485fa0e43109e3ec6a096ddd24eaab50c2462d168f68132b27becbe838411814\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"fb080aa95fd5c6405de8e67ac2d457305f24c5825b3a6770100ee682f0a5f8c4\"" Mar 19 11:44:07.821725 containerd[1491]: time="2025-03-19T11:44:07.821699703Z" level=info msg="StartContainer for \"fb080aa95fd5c6405de8e67ac2d457305f24c5825b3a6770100ee682f0a5f8c4\"" Mar 19 11:44:07.822625 containerd[1491]: time="2025-03-19T11:44:07.822574608Z" level=info msg="CreateContainer within sandbox \"7f1efc9bf27d83028efae3c03042b6b403b55f08ce50a00f36556644e5689f11\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"aa362f259d375cf31f49de60f9e9ef0343f9bbcd97c635c77500bdc67ecb34d7\"" Mar 19 11:44:07.822997 containerd[1491]: time="2025-03-19T11:44:07.822906605Z" level=info msg="StartContainer for \"aa362f259d375cf31f49de60f9e9ef0343f9bbcd97c635c77500bdc67ecb34d7\"" Mar 19 11:44:07.856449 systemd[1]: Started cri-containerd-47762c6379e719807f682d2259e1b715d86abc0d914352fdb94668aac2b62f27.scope - libcontainer container 47762c6379e719807f682d2259e1b715d86abc0d914352fdb94668aac2b62f27. 
Mar 19 11:44:07.857722 systemd[1]: Started cri-containerd-aa362f259d375cf31f49de60f9e9ef0343f9bbcd97c635c77500bdc67ecb34d7.scope - libcontainer container aa362f259d375cf31f49de60f9e9ef0343f9bbcd97c635c77500bdc67ecb34d7. Mar 19 11:44:07.858612 systemd[1]: Started cri-containerd-fb080aa95fd5c6405de8e67ac2d457305f24c5825b3a6770100ee682f0a5f8c4.scope - libcontainer container fb080aa95fd5c6405de8e67ac2d457305f24c5825b3a6770100ee682f0a5f8c4. Mar 19 11:44:07.906449 containerd[1491]: time="2025-03-19T11:44:07.902941409Z" level=info msg="StartContainer for \"47762c6379e719807f682d2259e1b715d86abc0d914352fdb94668aac2b62f27\" returns successfully" Mar 19 11:44:07.906553 containerd[1491]: time="2025-03-19T11:44:07.902942566Z" level=info msg="StartContainer for \"aa362f259d375cf31f49de60f9e9ef0343f9bbcd97c635c77500bdc67ecb34d7\" returns successfully" Mar 19 11:44:07.910518 containerd[1491]: time="2025-03-19T11:44:07.910481203Z" level=info msg="StartContainer for \"fb080aa95fd5c6405de8e67ac2d457305f24c5825b3a6770100ee682f0a5f8c4\" returns successfully" Mar 19 11:44:07.912238 kubelet[2208]: W0319 11:44:07.912181 2208 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.95:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.95:6443: connect: connection refused Mar 19 11:44:07.912294 kubelet[2208]: E0319 11:44:07.912264 2208 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.95:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.95:6443: connect: connection refused" logger="UnhandledError" Mar 19 11:44:08.055187 kubelet[2208]: W0319 11:44:08.055007 2208 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get 
"https://10.0.0.95:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.95:6443: connect: connection refused Mar 19 11:44:08.055187 kubelet[2208]: E0319 11:44:08.055074 2208 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.95:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.95:6443: connect: connection refused" logger="UnhandledError" Mar 19 11:44:08.290336 kubelet[2208]: I0319 11:44:08.289593 2208 kubelet_node_status.go:76] "Attempting to register node" node="localhost" Mar 19 11:44:08.756617 kubelet[2208]: E0319 11:44:08.756493 2208 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 19 11:44:08.759356 kubelet[2208]: E0319 11:44:08.759014 2208 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 19 11:44:08.760271 kubelet[2208]: E0319 11:44:08.760064 2208 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 19 11:44:09.673822 kubelet[2208]: E0319 11:44:09.673764 2208 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Mar 19 11:44:09.712816 kubelet[2208]: I0319 11:44:09.712612 2208 apiserver.go:52] "Watching apiserver" Mar 19 11:44:09.723820 kubelet[2208]: I0319 11:44:09.723654 2208 kubelet_node_status.go:79] "Successfully registered node" node="localhost" Mar 19 11:44:09.723820 kubelet[2208]: E0319 11:44:09.723688 2208 kubelet_node_status.go:549] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Mar 19 11:44:09.735170 kubelet[2208]: I0319 
11:44:09.734950 2208 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Mar 19 11:44:09.736025 kubelet[2208]: I0319 11:44:09.735987 2208 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Mar 19 11:44:09.761129 kubelet[2208]: I0319 11:44:09.760807 2208 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Mar 19 11:44:09.761129 kubelet[2208]: I0319 11:44:09.761127 2208 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Mar 19 11:44:09.761435 kubelet[2208]: I0319 11:44:09.761422 2208 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Mar 19 11:44:09.774981 kubelet[2208]: E0319 11:44:09.774862 2208 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{localhost.182e319aca317e6c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-03-19 11:44:06.725926508 +0000 UTC m=+1.191652608,LastTimestamp:2025-03-19 11:44:06.725926508 +0000 UTC m=+1.191652608,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Mar 19 11:44:09.792981 kubelet[2208]: E0319 11:44:09.792754 2208 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Mar 19 11:44:09.792981 kubelet[2208]: I0319 11:44:09.792786 2208 kubelet.go:3200] "Creating a mirror pod for static pod" 
pod="kube-system/kube-controller-manager-localhost" Mar 19 11:44:09.795075 kubelet[2208]: E0319 11:44:09.795032 2208 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Mar 19 11:44:09.795172 kubelet[2208]: I0319 11:44:09.795157 2208 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Mar 19 11:44:09.797343 kubelet[2208]: E0319 11:44:09.797303 2208 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Mar 19 11:44:09.817845 kubelet[2208]: E0319 11:44:09.817816 2208 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Mar 19 11:44:09.818020 kubelet[2208]: E0319 11:44:09.817825 2208 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Mar 19 11:44:09.818843 kubelet[2208]: E0319 11:44:09.818802 2208 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Mar 19 11:44:11.494218 systemd[1]: Reload requested from client PID 2492 ('systemctl') (unit session-7.scope)... Mar 19 11:44:11.494248 systemd[1]: Reloading... Mar 19 11:44:11.570258 zram_generator::config[2542]: No configuration found. 
Mar 19 11:44:11.733617 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Mar 19 11:44:11.833457 systemd[1]: Reloading finished in 338 ms.
Mar 19 11:44:11.854266 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 19 11:44:11.868627 systemd[1]: kubelet.service: Deactivated successfully.
Mar 19 11:44:11.868848 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 19 11:44:11.868893 systemd[1]: kubelet.service: Consumed 1.577s CPU time, 123.5M memory peak.
Mar 19 11:44:11.879541 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 19 11:44:11.976921 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 19 11:44:11.980056 (kubelet)[2578]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Mar 19 11:44:12.015562 kubelet[2578]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 19 11:44:12.015562 kubelet[2578]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Mar 19 11:44:12.015562 kubelet[2578]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 19 11:44:12.016221 kubelet[2578]: I0319 11:44:12.015639 2578 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Mar 19 11:44:12.022181 kubelet[2578]: I0319 11:44:12.022137 2578 server.go:520] "Kubelet version" kubeletVersion="v1.32.0"
Mar 19 11:44:12.022181 kubelet[2578]: I0319 11:44:12.022167 2578 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Mar 19 11:44:12.022445 kubelet[2578]: I0319 11:44:12.022411 2578 server.go:954] "Client rotation is on, will bootstrap in background"
Mar 19 11:44:12.023638 kubelet[2578]: I0319 11:44:12.023610 2578 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Mar 19 11:44:12.025933 kubelet[2578]: I0319 11:44:12.025909 2578 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Mar 19 11:44:12.028826 kubelet[2578]: E0319 11:44:12.028796 2578 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Mar 19 11:44:12.028826 kubelet[2578]: I0319 11:44:12.028827 2578 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Mar 19 11:44:12.031722 kubelet[2578]: I0319 11:44:12.031692 2578 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Mar 19 11:44:12.032011 kubelet[2578]: I0319 11:44:12.031891 2578 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Mar 19 11:44:12.032168 kubelet[2578]: I0319 11:44:12.031927 2578 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Mar 19 11:44:12.032267 kubelet[2578]: I0319 11:44:12.032182 2578 topology_manager.go:138] "Creating topology manager with none policy"
Mar 19 11:44:12.032267 kubelet[2578]: I0319 11:44:12.032193 2578 container_manager_linux.go:304] "Creating device plugin manager"
Mar 19 11:44:12.032267 kubelet[2578]: I0319 11:44:12.032256 2578 state_mem.go:36] "Initialized new in-memory state store"
Mar 19 11:44:12.032497 kubelet[2578]: I0319 11:44:12.032400 2578 kubelet.go:446] "Attempting to sync node with API server"
Mar 19 11:44:12.032497 kubelet[2578]: I0319 11:44:12.032422 2578 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests"
Mar 19 11:44:12.032497 kubelet[2578]: I0319 11:44:12.032448 2578 kubelet.go:352] "Adding apiserver pod source"
Mar 19 11:44:12.032497 kubelet[2578]: I0319 11:44:12.032457 2578 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Mar 19 11:44:12.033216 kubelet[2578]: I0319 11:44:12.033183 2578 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1"
Mar 19 11:44:12.037238 kubelet[2578]: I0319 11:44:12.034274 2578 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Mar 19 11:44:12.037973 kubelet[2578]: I0319 11:44:12.037952 2578 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Mar 19 11:44:12.038299 kubelet[2578]: I0319 11:44:12.037989 2578 server.go:1287] "Started kubelet"
Mar 19 11:44:12.038299 kubelet[2578]: I0319 11:44:12.038080 2578 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Mar 19 11:44:12.038557 kubelet[2578]: I0319 11:44:12.038535 2578 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Mar 19 11:44:12.038707 kubelet[2578]: I0319 11:44:12.038677 2578 server.go:169] "Starting to listen" address="0.0.0.0" port=10250
Mar 19 11:44:12.041505 kubelet[2578]: I0319 11:44:12.041470 2578 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Mar 19 11:44:12.046056 kubelet[2578]: I0319 11:44:12.046030 2578 server.go:490] "Adding debug handlers to kubelet server"
Mar 19 11:44:12.046400 kubelet[2578]: I0319 11:44:12.046373 2578 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Mar 19 11:44:12.046854 kubelet[2578]: I0319 11:44:12.046827 2578 volume_manager.go:297] "Starting Kubelet Volume Manager"
Mar 19 11:44:12.049290 kubelet[2578]: E0319 11:44:12.048686 2578 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found"
Mar 19 11:44:12.049290 kubelet[2578]: I0319 11:44:12.049151 2578 desired_state_of_world_populator.go:149] "Desired state populator starts to run"
Mar 19 11:44:12.049290 kubelet[2578]: I0319 11:44:12.049285 2578 reconciler.go:26] "Reconciler: start to sync state"
Mar 19 11:44:12.049781 kubelet[2578]: I0319 11:44:12.049749 2578 factory.go:221] Registration of the systemd container factory successfully
Mar 19 11:44:12.049893 kubelet[2578]: I0319 11:44:12.049867 2578 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Mar 19 11:44:12.052203 kubelet[2578]: E0319 11:44:12.052007 2578 kubelet.go:1561] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Mar 19 11:44:12.053200 kubelet[2578]: I0319 11:44:12.052384 2578 factory.go:221] Registration of the containerd container factory successfully
Mar 19 11:44:12.061630 kubelet[2578]: I0319 11:44:12.061578 2578 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Mar 19 11:44:12.064767 kubelet[2578]: I0319 11:44:12.064733 2578 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Mar 19 11:44:12.064767 kubelet[2578]: I0319 11:44:12.064762 2578 status_manager.go:227] "Starting to sync pod status with apiserver"
Mar 19 11:44:12.064767 kubelet[2578]: I0319 11:44:12.064781 2578 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Mar 19 11:44:12.064905 kubelet[2578]: I0319 11:44:12.064788 2578 kubelet.go:2388] "Starting kubelet main sync loop"
Mar 19 11:44:12.064905 kubelet[2578]: E0319 11:44:12.064844 2578 kubelet.go:2412] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Mar 19 11:44:12.089168 kubelet[2578]: I0319 11:44:12.088212 2578 cpu_manager.go:221] "Starting CPU manager" policy="none"
Mar 19 11:44:12.089168 kubelet[2578]: I0319 11:44:12.088242 2578 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Mar 19 11:44:12.089168 kubelet[2578]: I0319 11:44:12.088261 2578 state_mem.go:36] "Initialized new in-memory state store"
Mar 19 11:44:12.089168 kubelet[2578]: I0319 11:44:12.088401 2578 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Mar 19 11:44:12.089168 kubelet[2578]: I0319 11:44:12.088412 2578 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Mar 19 11:44:12.089168 kubelet[2578]: I0319 11:44:12.088429 2578 policy_none.go:49] "None policy: Start"
Mar 19 11:44:12.089168 kubelet[2578]: I0319 11:44:12.088437 2578 memory_manager.go:186] "Starting memorymanager" policy="None"
Mar 19 11:44:12.089168 kubelet[2578]: I0319 11:44:12.088445 2578 state_mem.go:35] "Initializing new in-memory state store"
Mar 19 11:44:12.089168 kubelet[2578]: I0319 11:44:12.088741 2578 state_mem.go:75] "Updated machine memory state"
Mar 19 11:44:12.094394 kubelet[2578]: I0319 11:44:12.094365 2578 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Mar 19 11:44:12.094574 kubelet[2578]: I0319 11:44:12.094556 2578 eviction_manager.go:189] "Eviction manager: starting control loop"
Mar 19 11:44:12.094636 kubelet[2578]: I0319 11:44:12.094574 2578 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Mar 19 11:44:12.095273 kubelet[2578]: I0319 11:44:12.095119 2578 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Mar 19 11:44:12.095864 kubelet[2578]: E0319 11:44:12.095835 2578 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Mar 19 11:44:12.166134 kubelet[2578]: I0319 11:44:12.166066 2578 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Mar 19 11:44:12.166298 kubelet[2578]: I0319 11:44:12.166181 2578 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
Mar 19 11:44:12.166298 kubelet[2578]: I0319 11:44:12.166077 2578 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost"
Mar 19 11:44:12.196213 kubelet[2578]: I0319 11:44:12.196187 2578 kubelet_node_status.go:76] "Attempting to register node" node="localhost"
Mar 19 11:44:12.203263 kubelet[2578]: I0319 11:44:12.202873 2578 kubelet_node_status.go:125] "Node was previously registered" node="localhost"
Mar 19 11:44:12.203263 kubelet[2578]: I0319 11:44:12.202944 2578 kubelet_node_status.go:79] "Successfully registered node" node="localhost"
Mar 19 11:44:12.250439 kubelet[2578]: I0319 11:44:12.250375 2578 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/cbbb394ff48414687df77e1bc213eeb5-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"cbbb394ff48414687df77e1bc213eeb5\") " pod="kube-system/kube-controller-manager-localhost"
Mar 19 11:44:12.250439 kubelet[2578]: I0319 11:44:12.250447 2578 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/cbbb394ff48414687df77e1bc213eeb5-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"cbbb394ff48414687df77e1bc213eeb5\") " pod="kube-system/kube-controller-manager-localhost"
Mar 19 11:44:12.250614 kubelet[2578]: I0319 11:44:12.250469 2578 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/cbbb394ff48414687df77e1bc213eeb5-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"cbbb394ff48414687df77e1bc213eeb5\") " pod="kube-system/kube-controller-manager-localhost"
Mar 19 11:44:12.250614 kubelet[2578]: I0319 11:44:12.250489 2578 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/6f9153565b634bc8b85c4dba7500c00f-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"6f9153565b634bc8b85c4dba7500c00f\") " pod="kube-system/kube-apiserver-localhost"
Mar 19 11:44:12.250614 kubelet[2578]: I0319 11:44:12.250507 2578 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/6f9153565b634bc8b85c4dba7500c00f-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"6f9153565b634bc8b85c4dba7500c00f\") " pod="kube-system/kube-apiserver-localhost"
Mar 19 11:44:12.250614 kubelet[2578]: I0319 11:44:12.250522 2578 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/cbbb394ff48414687df77e1bc213eeb5-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"cbbb394ff48414687df77e1bc213eeb5\") " pod="kube-system/kube-controller-manager-localhost"
Mar 19 11:44:12.250614 kubelet[2578]: I0319 11:44:12.250537 2578 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/cbbb394ff48414687df77e1bc213eeb5-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"cbbb394ff48414687df77e1bc213eeb5\") " pod="kube-system/kube-controller-manager-localhost"
Mar 19 11:44:12.250720 kubelet[2578]: I0319 11:44:12.250560 2578 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/3700e556aa2777679a324159272023f1-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"3700e556aa2777679a324159272023f1\") " pod="kube-system/kube-scheduler-localhost"
Mar 19 11:44:12.250720 kubelet[2578]: I0319 11:44:12.250595 2578 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/6f9153565b634bc8b85c4dba7500c00f-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"6f9153565b634bc8b85c4dba7500c00f\") " pod="kube-system/kube-apiserver-localhost"
Mar 19 11:44:12.494354 sudo[2611]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin
Mar 19 11:44:12.494617 sudo[2611]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0)
Mar 19 11:44:12.924088 sudo[2611]: pam_unix(sudo:session): session closed for user root
Mar 19 11:44:13.033228 kubelet[2578]: I0319 11:44:13.033187 2578 apiserver.go:52] "Watching apiserver"
Mar 19 11:44:13.049763 kubelet[2578]: I0319 11:44:13.049701 2578 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
Mar 19 11:44:13.076355 kubelet[2578]: I0319 11:44:13.076047 2578 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost"
Mar 19 11:44:13.076486 kubelet[2578]: I0319 11:44:13.076370 2578 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
Mar 19 11:44:13.087248 kubelet[2578]: E0319 11:44:13.087196 2578 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost"
Mar 19 11:44:13.087456 kubelet[2578]: E0319 11:44:13.087431 2578 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost"
Mar 19 11:44:13.118106 kubelet[2578]: I0319 11:44:13.118048 2578 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.118030527 podStartE2EDuration="1.118030527s" podCreationTimestamp="2025-03-19 11:44:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-19 11:44:13.104885503 +0000 UTC m=+1.120409509" watchObservedRunningTime="2025-03-19 11:44:13.118030527 +0000 UTC m=+1.133554534"
Mar 19 11:44:13.125846 kubelet[2578]: I0319 11:44:13.125794 2578 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.125777999 podStartE2EDuration="1.125777999s" podCreationTimestamp="2025-03-19 11:44:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-19 11:44:13.118532731 +0000 UTC m=+1.134056738" watchObservedRunningTime="2025-03-19 11:44:13.125777999 +0000 UTC m=+1.141302006"
Mar 19 11:44:13.126013 kubelet[2578]: I0319 11:44:13.125929 2578 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.125923193 podStartE2EDuration="1.125923193s" podCreationTimestamp="2025-03-19 11:44:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-19 11:44:13.125256052 +0000 UTC m=+1.140780059" watchObservedRunningTime="2025-03-19 11:44:13.125923193 +0000 UTC m=+1.141447160"
Mar 19 11:44:14.701918 sudo[1671]: pam_unix(sudo:session): session closed for user root
Mar 19 11:44:14.703611 sshd[1670]: Connection closed by 10.0.0.1 port 58002
Mar 19 11:44:14.704044 sshd-session[1667]: pam_unix(sshd:session): session closed for user core
Mar 19 11:44:14.707139 systemd[1]: sshd@6-10.0.0.95:22-10.0.0.1:58002.service: Deactivated successfully.
Mar 19 11:44:14.708836 systemd[1]: session-7.scope: Deactivated successfully.
Mar 19 11:44:14.709009 systemd[1]: session-7.scope: Consumed 6.418s CPU time, 261.8M memory peak.
Mar 19 11:44:14.710559 systemd-logind[1471]: Session 7 logged out. Waiting for processes to exit.
Mar 19 11:44:14.711545 systemd-logind[1471]: Removed session 7.
Mar 19 11:44:18.750005 kubelet[2578]: I0319 11:44:18.749776 2578 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Mar 19 11:44:18.750368 kubelet[2578]: I0319 11:44:18.750336 2578 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Mar 19 11:44:18.750398 containerd[1491]: time="2025-03-19T11:44:18.750141520Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Mar 19 11:44:19.467779 systemd[1]: Created slice kubepods-besteffort-pod07da77ca_df5d_4af0_952d_a4948473d1be.slice - libcontainer container kubepods-besteffort-pod07da77ca_df5d_4af0_952d_a4948473d1be.slice.
Mar 19 11:44:19.483791 systemd[1]: Created slice kubepods-burstable-pod65589817_8584_4d99_b7ad_f59a59741a65.slice - libcontainer container kubepods-burstable-pod65589817_8584_4d99_b7ad_f59a59741a65.slice.
Mar 19 11:44:19.597714 kubelet[2578]: I0319 11:44:19.597668 2578 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/65589817-8584-4d99-b7ad-f59a59741a65-cilium-run\") pod \"cilium-6xnxm\" (UID: \"65589817-8584-4d99-b7ad-f59a59741a65\") " pod="kube-system/cilium-6xnxm"
Mar 19 11:44:19.597714 kubelet[2578]: I0319 11:44:19.597710 2578 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/65589817-8584-4d99-b7ad-f59a59741a65-hostproc\") pod \"cilium-6xnxm\" (UID: \"65589817-8584-4d99-b7ad-f59a59741a65\") " pod="kube-system/cilium-6xnxm"
Mar 19 11:44:19.597714 kubelet[2578]: I0319 11:44:19.597726 2578 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/65589817-8584-4d99-b7ad-f59a59741a65-xtables-lock\") pod \"cilium-6xnxm\" (UID: \"65589817-8584-4d99-b7ad-f59a59741a65\") " pod="kube-system/cilium-6xnxm"
Mar 19 11:44:19.597714 kubelet[2578]: I0319 11:44:19.597744 2578 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/07da77ca-df5d-4af0-952d-a4948473d1be-kube-proxy\") pod \"kube-proxy-dnxgb\" (UID: \"07da77ca-df5d-4af0-952d-a4948473d1be\") " pod="kube-system/kube-proxy-dnxgb"
Mar 19 11:44:19.597714 kubelet[2578]: I0319 11:44:19.597758 2578 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/65589817-8584-4d99-b7ad-f59a59741a65-host-proc-sys-net\") pod \"cilium-6xnxm\" (UID: \"65589817-8584-4d99-b7ad-f59a59741a65\") " pod="kube-system/cilium-6xnxm"
Mar 19 11:44:19.597714 kubelet[2578]: I0319 11:44:19.597775 2578 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/65589817-8584-4d99-b7ad-f59a59741a65-hubble-tls\") pod \"cilium-6xnxm\" (UID: \"65589817-8584-4d99-b7ad-f59a59741a65\") " pod="kube-system/cilium-6xnxm"
Mar 19 11:44:19.598067 kubelet[2578]: I0319 11:44:19.597792 2578 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7fwqs\" (UniqueName: \"kubernetes.io/projected/65589817-8584-4d99-b7ad-f59a59741a65-kube-api-access-7fwqs\") pod \"cilium-6xnxm\" (UID: \"65589817-8584-4d99-b7ad-f59a59741a65\") " pod="kube-system/cilium-6xnxm"
Mar 19 11:44:19.598067 kubelet[2578]: I0319 11:44:19.597811 2578 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/07da77ca-df5d-4af0-952d-a4948473d1be-xtables-lock\") pod \"kube-proxy-dnxgb\" (UID: \"07da77ca-df5d-4af0-952d-a4948473d1be\") " pod="kube-system/kube-proxy-dnxgb"
Mar 19 11:44:19.598067 kubelet[2578]: I0319 11:44:19.597837 2578 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/07da77ca-df5d-4af0-952d-a4948473d1be-lib-modules\") pod \"kube-proxy-dnxgb\" (UID: \"07da77ca-df5d-4af0-952d-a4948473d1be\") " pod="kube-system/kube-proxy-dnxgb"
Mar 19 11:44:19.598067 kubelet[2578]: I0319 11:44:19.597853 2578 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/65589817-8584-4d99-b7ad-f59a59741a65-host-proc-sys-kernel\") pod \"cilium-6xnxm\" (UID: \"65589817-8584-4d99-b7ad-f59a59741a65\") " pod="kube-system/cilium-6xnxm"
Mar 19 11:44:19.598067 kubelet[2578]: I0319 11:44:19.597884 2578 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b4twt\" (UniqueName: \"kubernetes.io/projected/07da77ca-df5d-4af0-952d-a4948473d1be-kube-api-access-b4twt\") pod \"kube-proxy-dnxgb\" (UID: \"07da77ca-df5d-4af0-952d-a4948473d1be\") " pod="kube-system/kube-proxy-dnxgb"
Mar 19 11:44:19.598171 kubelet[2578]: I0319 11:44:19.597905 2578 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/65589817-8584-4d99-b7ad-f59a59741a65-cilium-config-path\") pod \"cilium-6xnxm\" (UID: \"65589817-8584-4d99-b7ad-f59a59741a65\") " pod="kube-system/cilium-6xnxm"
Mar 19 11:44:19.598171 kubelet[2578]: I0319 11:44:19.597920 2578 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/65589817-8584-4d99-b7ad-f59a59741a65-bpf-maps\") pod \"cilium-6xnxm\" (UID: \"65589817-8584-4d99-b7ad-f59a59741a65\") " pod="kube-system/cilium-6xnxm"
Mar 19 11:44:19.598171 kubelet[2578]: I0319 11:44:19.597935 2578 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/65589817-8584-4d99-b7ad-f59a59741a65-etc-cni-netd\") pod \"cilium-6xnxm\" (UID: \"65589817-8584-4d99-b7ad-f59a59741a65\") " pod="kube-system/cilium-6xnxm"
Mar 19 11:44:19.598171 kubelet[2578]: I0319 11:44:19.597952 2578 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/65589817-8584-4d99-b7ad-f59a59741a65-clustermesh-secrets\") pod \"cilium-6xnxm\" (UID: \"65589817-8584-4d99-b7ad-f59a59741a65\") " pod="kube-system/cilium-6xnxm"
Mar 19 11:44:19.598171 kubelet[2578]: I0319 11:44:19.597998 2578 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/65589817-8584-4d99-b7ad-f59a59741a65-lib-modules\") pod \"cilium-6xnxm\" (UID: \"65589817-8584-4d99-b7ad-f59a59741a65\") " pod="kube-system/cilium-6xnxm"
Mar 19 11:44:19.598171 kubelet[2578]: I0319 11:44:19.598034 2578 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/65589817-8584-4d99-b7ad-f59a59741a65-cilium-cgroup\") pod \"cilium-6xnxm\" (UID: \"65589817-8584-4d99-b7ad-f59a59741a65\") " pod="kube-system/cilium-6xnxm"
Mar 19 11:44:19.598341 kubelet[2578]: I0319 11:44:19.598049 2578 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/65589817-8584-4d99-b7ad-f59a59741a65-cni-path\") pod \"cilium-6xnxm\" (UID: \"65589817-8584-4d99-b7ad-f59a59741a65\") " pod="kube-system/cilium-6xnxm"
Mar 19 11:44:19.782264 containerd[1491]: time="2025-03-19T11:44:19.781693653Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-dnxgb,Uid:07da77ca-df5d-4af0-952d-a4948473d1be,Namespace:kube-system,Attempt:0,}"
Mar 19 11:44:19.796996 containerd[1491]: time="2025-03-19T11:44:19.796711945Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-6xnxm,Uid:65589817-8584-4d99-b7ad-f59a59741a65,Namespace:kube-system,Attempt:0,}"
Mar 19 11:44:19.805675 containerd[1491]: time="2025-03-19T11:44:19.805435047Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 19 11:44:19.805675 containerd[1491]: time="2025-03-19T11:44:19.805497421Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 19 11:44:19.805675 containerd[1491]: time="2025-03-19T11:44:19.805511865Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 19 11:44:19.810459 containerd[1491]: time="2025-03-19T11:44:19.809980480Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 19 11:44:19.828239 systemd[1]: Created slice kubepods-besteffort-pode08a3a4a_7164_4344_b0f6_ed58bc991168.slice - libcontainer container kubepods-besteffort-pode08a3a4a_7164_4344_b0f6_ed58bc991168.slice.
Mar 19 11:44:19.844436 systemd[1]: Started cri-containerd-505d8ebf6ef95ad9a8c03536d181dd19f16335731c2cd6f86940e4883d2a5dba.scope - libcontainer container 505d8ebf6ef95ad9a8c03536d181dd19f16335731c2cd6f86940e4883d2a5dba.
Mar 19 11:44:19.847661 containerd[1491]: time="2025-03-19T11:44:19.847567220Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 19 11:44:19.847661 containerd[1491]: time="2025-03-19T11:44:19.847630515Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 19 11:44:19.847661 containerd[1491]: time="2025-03-19T11:44:19.847644998Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 19 11:44:19.847848 containerd[1491]: time="2025-03-19T11:44:19.847743060Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 19 11:44:19.865443 systemd[1]: Started cri-containerd-c298a95023bca70f299f6753affb341a9df90f19934827972a8cdb74bee9b075.scope - libcontainer container c298a95023bca70f299f6753affb341a9df90f19934827972a8cdb74bee9b075.
Mar 19 11:44:19.873817 containerd[1491]: time="2025-03-19T11:44:19.873776456Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-dnxgb,Uid:07da77ca-df5d-4af0-952d-a4948473d1be,Namespace:kube-system,Attempt:0,} returns sandbox id \"505d8ebf6ef95ad9a8c03536d181dd19f16335731c2cd6f86940e4883d2a5dba\""
Mar 19 11:44:19.877683 containerd[1491]: time="2025-03-19T11:44:19.877589802Z" level=info msg="CreateContainer within sandbox \"505d8ebf6ef95ad9a8c03536d181dd19f16335731c2cd6f86940e4883d2a5dba\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Mar 19 11:44:19.892623 containerd[1491]: time="2025-03-19T11:44:19.892576447Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-6xnxm,Uid:65589817-8584-4d99-b7ad-f59a59741a65,Namespace:kube-system,Attempt:0,} returns sandbox id \"c298a95023bca70f299f6753affb341a9df90f19934827972a8cdb74bee9b075\""
Mar 19 11:44:19.895379 containerd[1491]: time="2025-03-19T11:44:19.895257896Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\""
Mar 19 11:44:19.901498 kubelet[2578]: I0319 11:44:19.901464 2578 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c46bl\" (UniqueName: \"kubernetes.io/projected/e08a3a4a-7164-4344-b0f6-ed58bc991168-kube-api-access-c46bl\") pod \"cilium-operator-6c4d7847fc-s7kjp\" (UID: \"e08a3a4a-7164-4344-b0f6-ed58bc991168\") " pod="kube-system/cilium-operator-6c4d7847fc-s7kjp"
Mar 19 11:44:19.901927 kubelet[2578]: I0319 11:44:19.901874 2578 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e08a3a4a-7164-4344-b0f6-ed58bc991168-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-s7kjp\" (UID: \"e08a3a4a-7164-4344-b0f6-ed58bc991168\") " pod="kube-system/cilium-operator-6c4d7847fc-s7kjp"
Mar 19 11:44:19.931421 containerd[1491]: time="2025-03-19T11:44:19.931377503Z" level=info msg="CreateContainer within sandbox \"505d8ebf6ef95ad9a8c03536d181dd19f16335731c2cd6f86940e4883d2a5dba\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"cba8cd2f529b6be520ee9a5590946418fec86b3251d2652117afa580e185f062\""
Mar 19 11:44:19.932320 containerd[1491]: time="2025-03-19T11:44:19.931902343Z" level=info msg="StartContainer for \"cba8cd2f529b6be520ee9a5590946418fec86b3251d2652117afa580e185f062\""
Mar 19 11:44:19.963474 systemd[1]: Started cri-containerd-cba8cd2f529b6be520ee9a5590946418fec86b3251d2652117afa580e185f062.scope - libcontainer container cba8cd2f529b6be520ee9a5590946418fec86b3251d2652117afa580e185f062.
Mar 19 11:44:19.988314 containerd[1491]: time="2025-03-19T11:44:19.988269670Z" level=info msg="StartContainer for \"cba8cd2f529b6be520ee9a5590946418fec86b3251d2652117afa580e185f062\" returns successfully"
Mar 19 11:44:20.100441 kubelet[2578]: I0319 11:44:20.100070 2578 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-dnxgb" podStartSLOduration=1.100038299 podStartE2EDuration="1.100038299s" podCreationTimestamp="2025-03-19 11:44:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-19 11:44:20.099864741 +0000 UTC m=+8.115388748" watchObservedRunningTime="2025-03-19 11:44:20.100038299 +0000 UTC m=+8.115562306"
Mar 19 11:44:20.132721 containerd[1491]: time="2025-03-19T11:44:20.132673035Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-s7kjp,Uid:e08a3a4a-7164-4344-b0f6-ed58bc991168,Namespace:kube-system,Attempt:0,}"
Mar 19 11:44:20.159546 containerd[1491]: time="2025-03-19T11:44:20.159283996Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 19 11:44:20.159546 containerd[1491]: time="2025-03-19T11:44:20.159352011Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 19 11:44:20.159546 containerd[1491]: time="2025-03-19T11:44:20.159367094Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 19 11:44:20.159546 containerd[1491]: time="2025-03-19T11:44:20.159450032Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 19 11:44:20.180540 systemd[1]: Started cri-containerd-bcaf3c324290590e60840ec64d04d1526cb2eba2feb87b0cfe00b34e3277e217.scope - libcontainer container bcaf3c324290590e60840ec64d04d1526cb2eba2feb87b0cfe00b34e3277e217.
Mar 19 11:44:20.212340 containerd[1491]: time="2025-03-19T11:44:20.212274109Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-s7kjp,Uid:e08a3a4a-7164-4344-b0f6-ed58bc991168,Namespace:kube-system,Attempt:0,} returns sandbox id \"bcaf3c324290590e60840ec64d04d1526cb2eba2feb87b0cfe00b34e3277e217\""
Mar 19 11:44:28.145624 update_engine[1474]: I20250319 11:44:28.145282 1474 update_attempter.cc:509] Updating boot flags...
Mar 19 11:44:28.186613 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (2952)
Mar 19 11:44:28.226354 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (2955)
Mar 19 11:44:28.253271 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (2955)
Mar 19 11:44:32.898223 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount595302516.mount: Deactivated successfully.
Mar 19 11:44:34.403099 containerd[1491]: time="2025-03-19T11:44:34.403050806Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 19 11:44:34.404138 containerd[1491]: time="2025-03-19T11:44:34.404093836Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157646710" Mar 19 11:44:34.405170 containerd[1491]: time="2025-03-19T11:44:34.405130986Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 19 11:44:34.406608 containerd[1491]: time="2025-03-19T11:44:34.406478569Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 14.511180704s" Mar 19 11:44:34.406608 containerd[1491]: time="2025-03-19T11:44:34.406519573Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" Mar 19 11:44:34.411653 containerd[1491]: time="2025-03-19T11:44:34.411452136Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Mar 19 11:44:34.415590 containerd[1491]: time="2025-03-19T11:44:34.415561171Z" level=info msg="CreateContainer within sandbox \"c298a95023bca70f299f6753affb341a9df90f19934827972a8cdb74bee9b075\" for container 
&ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Mar 19 11:44:34.435512 containerd[1491]: time="2025-03-19T11:44:34.435423955Z" level=info msg="CreateContainer within sandbox \"c298a95023bca70f299f6753affb341a9df90f19934827972a8cdb74bee9b075\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"751bf077e1ba7f49b98a15eb7e14fdfdeb4cf5334f2c66ed08e89715db26e547\"" Mar 19 11:44:34.436326 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1505532962.mount: Deactivated successfully. Mar 19 11:44:34.437294 containerd[1491]: time="2025-03-19T11:44:34.436695770Z" level=info msg="StartContainer for \"751bf077e1ba7f49b98a15eb7e14fdfdeb4cf5334f2c66ed08e89715db26e547\"" Mar 19 11:44:34.457140 systemd[1]: run-containerd-runc-k8s.io-751bf077e1ba7f49b98a15eb7e14fdfdeb4cf5334f2c66ed08e89715db26e547-runc.0c6aTt.mount: Deactivated successfully. Mar 19 11:44:34.470468 systemd[1]: Started cri-containerd-751bf077e1ba7f49b98a15eb7e14fdfdeb4cf5334f2c66ed08e89715db26e547.scope - libcontainer container 751bf077e1ba7f49b98a15eb7e14fdfdeb4cf5334f2c66ed08e89715db26e547. Mar 19 11:44:34.491001 containerd[1491]: time="2025-03-19T11:44:34.490966000Z" level=info msg="StartContainer for \"751bf077e1ba7f49b98a15eb7e14fdfdeb4cf5334f2c66ed08e89715db26e547\" returns successfully" Mar 19 11:44:34.536482 systemd[1]: cri-containerd-751bf077e1ba7f49b98a15eb7e14fdfdeb4cf5334f2c66ed08e89715db26e547.scope: Deactivated successfully. 
Mar 19 11:44:34.677410 containerd[1491]: time="2025-03-19T11:44:34.667209631Z" level=info msg="shim disconnected" id=751bf077e1ba7f49b98a15eb7e14fdfdeb4cf5334f2c66ed08e89715db26e547 namespace=k8s.io Mar 19 11:44:34.677410 containerd[1491]: time="2025-03-19T11:44:34.677398230Z" level=warning msg="cleaning up after shim disconnected" id=751bf077e1ba7f49b98a15eb7e14fdfdeb4cf5334f2c66ed08e89715db26e547 namespace=k8s.io Mar 19 11:44:34.677410 containerd[1491]: time="2025-03-19T11:44:34.677408831Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 19 11:44:35.142861 containerd[1491]: time="2025-03-19T11:44:35.142591778Z" level=info msg="CreateContainer within sandbox \"c298a95023bca70f299f6753affb341a9df90f19934827972a8cdb74bee9b075\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Mar 19 11:44:35.152976 containerd[1491]: time="2025-03-19T11:44:35.152923665Z" level=info msg="CreateContainer within sandbox \"c298a95023bca70f299f6753affb341a9df90f19934827972a8cdb74bee9b075\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"b0ff1aaaf3a9b67b831b5a6b03300d39d02a7a9e55dea50ab86bf64470eef735\"" Mar 19 11:44:35.154386 containerd[1491]: time="2025-03-19T11:44:35.153382151Z" level=info msg="StartContainer for \"b0ff1aaaf3a9b67b831b5a6b03300d39d02a7a9e55dea50ab86bf64470eef735\"" Mar 19 11:44:35.181375 systemd[1]: Started cri-containerd-b0ff1aaaf3a9b67b831b5a6b03300d39d02a7a9e55dea50ab86bf64470eef735.scope - libcontainer container b0ff1aaaf3a9b67b831b5a6b03300d39d02a7a9e55dea50ab86bf64470eef735. Mar 19 11:44:35.202923 containerd[1491]: time="2025-03-19T11:44:35.202880925Z" level=info msg="StartContainer for \"b0ff1aaaf3a9b67b831b5a6b03300d39d02a7a9e55dea50ab86bf64470eef735\" returns successfully" Mar 19 11:44:35.230932 systemd[1]: systemd-sysctl.service: Deactivated successfully. Mar 19 11:44:35.231152 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. 
Mar 19 11:44:35.231696 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Mar 19 11:44:35.240551 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Mar 19 11:44:35.240761 systemd[1]: cri-containerd-b0ff1aaaf3a9b67b831b5a6b03300d39d02a7a9e55dea50ab86bf64470eef735.scope: Deactivated successfully. Mar 19 11:44:35.256705 containerd[1491]: time="2025-03-19T11:44:35.256612688Z" level=info msg="shim disconnected" id=b0ff1aaaf3a9b67b831b5a6b03300d39d02a7a9e55dea50ab86bf64470eef735 namespace=k8s.io Mar 19 11:44:35.256705 containerd[1491]: time="2025-03-19T11:44:35.256681615Z" level=warning msg="cleaning up after shim disconnected" id=b0ff1aaaf3a9b67b831b5a6b03300d39d02a7a9e55dea50ab86bf64470eef735 namespace=k8s.io Mar 19 11:44:35.256705 containerd[1491]: time="2025-03-19T11:44:35.256689576Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 19 11:44:35.257124 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Mar 19 11:44:35.434373 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-751bf077e1ba7f49b98a15eb7e14fdfdeb4cf5334f2c66ed08e89715db26e547-rootfs.mount: Deactivated successfully. Mar 19 11:44:35.510856 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1538232361.mount: Deactivated successfully. 
Mar 19 11:44:36.145048 containerd[1491]: time="2025-03-19T11:44:36.144999895Z" level=info msg="CreateContainer within sandbox \"c298a95023bca70f299f6753affb341a9df90f19934827972a8cdb74bee9b075\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Mar 19 11:44:36.159798 containerd[1491]: time="2025-03-19T11:44:36.159749405Z" level=info msg="CreateContainer within sandbox \"c298a95023bca70f299f6753affb341a9df90f19934827972a8cdb74bee9b075\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"b87341741450e20eca4067fd6c9ac9ca5a9129883bb36c075120880c24f7ecd4\"" Mar 19 11:44:36.160581 containerd[1491]: time="2025-03-19T11:44:36.160494317Z" level=info msg="StartContainer for \"b87341741450e20eca4067fd6c9ac9ca5a9129883bb36c075120880c24f7ecd4\"" Mar 19 11:44:36.184405 systemd[1]: Started cri-containerd-b87341741450e20eca4067fd6c9ac9ca5a9129883bb36c075120880c24f7ecd4.scope - libcontainer container b87341741450e20eca4067fd6c9ac9ca5a9129883bb36c075120880c24f7ecd4. Mar 19 11:44:36.218591 containerd[1491]: time="2025-03-19T11:44:36.218525863Z" level=info msg="StartContainer for \"b87341741450e20eca4067fd6c9ac9ca5a9129883bb36c075120880c24f7ecd4\" returns successfully" Mar 19 11:44:36.252419 systemd[1]: cri-containerd-b87341741450e20eca4067fd6c9ac9ca5a9129883bb36c075120880c24f7ecd4.scope: Deactivated successfully. 
Mar 19 11:44:36.275466 containerd[1491]: time="2025-03-19T11:44:36.275393256Z" level=info msg="shim disconnected" id=b87341741450e20eca4067fd6c9ac9ca5a9129883bb36c075120880c24f7ecd4 namespace=k8s.io Mar 19 11:44:36.275466 containerd[1491]: time="2025-03-19T11:44:36.275441580Z" level=warning msg="cleaning up after shim disconnected" id=b87341741450e20eca4067fd6c9ac9ca5a9129883bb36c075120880c24f7ecd4 namespace=k8s.io Mar 19 11:44:36.275466 containerd[1491]: time="2025-03-19T11:44:36.275450341Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 19 11:44:37.150046 containerd[1491]: time="2025-03-19T11:44:37.149864984Z" level=info msg="CreateContainer within sandbox \"c298a95023bca70f299f6753affb341a9df90f19934827972a8cdb74bee9b075\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Mar 19 11:44:37.161886 containerd[1491]: time="2025-03-19T11:44:37.161839016Z" level=info msg="CreateContainer within sandbox \"c298a95023bca70f299f6753affb341a9df90f19934827972a8cdb74bee9b075\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"0c5dd1613674f5ec9ddc21270f29d61d4f509c30657ba7c0a28e2b1f29cd8a16\"" Mar 19 11:44:37.162481 containerd[1491]: time="2025-03-19T11:44:37.162450513Z" level=info msg="StartContainer for \"0c5dd1613674f5ec9ddc21270f29d61d4f509c30657ba7c0a28e2b1f29cd8a16\"" Mar 19 11:44:37.191416 systemd[1]: Started cri-containerd-0c5dd1613674f5ec9ddc21270f29d61d4f509c30657ba7c0a28e2b1f29cd8a16.scope - libcontainer container 0c5dd1613674f5ec9ddc21270f29d61d4f509c30657ba7c0a28e2b1f29cd8a16. Mar 19 11:44:37.210461 systemd[1]: cri-containerd-0c5dd1613674f5ec9ddc21270f29d61d4f509c30657ba7c0a28e2b1f29cd8a16.scope: Deactivated successfully. 
Mar 19 11:44:37.211784 containerd[1491]: time="2025-03-19T11:44:37.211730529Z" level=info msg="StartContainer for \"0c5dd1613674f5ec9ddc21270f29d61d4f509c30657ba7c0a28e2b1f29cd8a16\" returns successfully" Mar 19 11:44:37.231562 containerd[1491]: time="2025-03-19T11:44:37.231501685Z" level=info msg="shim disconnected" id=0c5dd1613674f5ec9ddc21270f29d61d4f509c30657ba7c0a28e2b1f29cd8a16 namespace=k8s.io Mar 19 11:44:37.231562 containerd[1491]: time="2025-03-19T11:44:37.231559251Z" level=warning msg="cleaning up after shim disconnected" id=0c5dd1613674f5ec9ddc21270f29d61d4f509c30657ba7c0a28e2b1f29cd8a16 namespace=k8s.io Mar 19 11:44:37.231774 containerd[1491]: time="2025-03-19T11:44:37.231569252Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 19 11:44:37.433992 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0c5dd1613674f5ec9ddc21270f29d61d4f509c30657ba7c0a28e2b1f29cd8a16-rootfs.mount: Deactivated successfully. Mar 19 11:44:38.161530 containerd[1491]: time="2025-03-19T11:44:38.161485112Z" level=info msg="CreateContainer within sandbox \"c298a95023bca70f299f6753affb341a9df90f19934827972a8cdb74bee9b075\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Mar 19 11:44:38.191919 containerd[1491]: time="2025-03-19T11:44:38.191865137Z" level=info msg="CreateContainer within sandbox \"c298a95023bca70f299f6753affb341a9df90f19934827972a8cdb74bee9b075\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"6a0db4a8192712fd251f894b86d3d475140c6133c49e2b1ab1bd6bb219d3e83c\"" Mar 19 11:44:38.192935 containerd[1491]: time="2025-03-19T11:44:38.192904909Z" level=info msg="StartContainer for \"6a0db4a8192712fd251f894b86d3d475140c6133c49e2b1ab1bd6bb219d3e83c\"" Mar 19 11:44:38.227411 systemd[1]: Started cri-containerd-6a0db4a8192712fd251f894b86d3d475140c6133c49e2b1ab1bd6bb219d3e83c.scope - libcontainer container 6a0db4a8192712fd251f894b86d3d475140c6133c49e2b1ab1bd6bb219d3e83c. 
Mar 19 11:44:38.253024 containerd[1491]: time="2025-03-19T11:44:38.252971057Z" level=info msg="StartContainer for \"6a0db4a8192712fd251f894b86d3d475140c6133c49e2b1ab1bd6bb219d3e83c\" returns successfully" Mar 19 11:44:38.382291 kubelet[2578]: I0319 11:44:38.382015 2578 kubelet_node_status.go:502] "Fast updating node status as it just became ready" Mar 19 11:44:38.410934 systemd[1]: Created slice kubepods-burstable-pod109055c4_c095_4e8d_9af0_7497d75ab4b5.slice - libcontainer container kubepods-burstable-pod109055c4_c095_4e8d_9af0_7497d75ab4b5.slice. Mar 19 11:44:38.417751 systemd[1]: Created slice kubepods-burstable-podcadb6a38_d74c_4d3e_9a1b_1591d4b2042b.slice - libcontainer container kubepods-burstable-podcadb6a38_d74c_4d3e_9a1b_1591d4b2042b.slice. Mar 19 11:44:38.545101 kubelet[2578]: I0319 11:44:38.545053 2578 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gfmpq\" (UniqueName: \"kubernetes.io/projected/109055c4-c095-4e8d-9af0-7497d75ab4b5-kube-api-access-gfmpq\") pod \"coredns-668d6bf9bc-w2nnl\" (UID: \"109055c4-c095-4e8d-9af0-7497d75ab4b5\") " pod="kube-system/coredns-668d6bf9bc-w2nnl" Mar 19 11:44:38.545101 kubelet[2578]: I0319 11:44:38.545101 2578 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cadb6a38-d74c-4d3e-9a1b-1591d4b2042b-config-volume\") pod \"coredns-668d6bf9bc-5mvjg\" (UID: \"cadb6a38-d74c-4d3e-9a1b-1591d4b2042b\") " pod="kube-system/coredns-668d6bf9bc-5mvjg" Mar 19 11:44:38.545277 kubelet[2578]: I0319 11:44:38.545122 2578 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/109055c4-c095-4e8d-9af0-7497d75ab4b5-config-volume\") pod \"coredns-668d6bf9bc-w2nnl\" (UID: \"109055c4-c095-4e8d-9af0-7497d75ab4b5\") " pod="kube-system/coredns-668d6bf9bc-w2nnl" Mar 19 11:44:38.545277 
kubelet[2578]: I0319 11:44:38.545140 2578 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vvlc8\" (UniqueName: \"kubernetes.io/projected/cadb6a38-d74c-4d3e-9a1b-1591d4b2042b-kube-api-access-vvlc8\") pod \"coredns-668d6bf9bc-5mvjg\" (UID: \"cadb6a38-d74c-4d3e-9a1b-1591d4b2042b\") " pod="kube-system/coredns-668d6bf9bc-5mvjg" Mar 19 11:44:38.679084 systemd[1]: Started sshd@7-10.0.0.95:22-10.0.0.1:37516.service - OpenSSH per-connection server daemon (10.0.0.1:37516). Mar 19 11:44:38.716767 containerd[1491]: time="2025-03-19T11:44:38.716710388Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-w2nnl,Uid:109055c4-c095-4e8d-9af0-7497d75ab4b5,Namespace:kube-system,Attempt:0,}" Mar 19 11:44:38.721815 containerd[1491]: time="2025-03-19T11:44:38.721769598Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-5mvjg,Uid:cadb6a38-d74c-4d3e-9a1b-1591d4b2042b,Namespace:kube-system,Attempt:0,}" Mar 19 11:44:38.728894 sshd[3324]: Accepted publickey for core from 10.0.0.1 port 37516 ssh2: RSA SHA256:m+OHt/J3MiNhmiRtwZE4O3bs/RIw4O7lQdgYDDHmuIE Mar 19 11:44:38.730158 sshd-session[3324]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 19 11:44:38.735991 systemd-logind[1471]: New session 8 of user core. Mar 19 11:44:38.741379 systemd[1]: Started session-8.scope - Session 8 of User core. Mar 19 11:44:38.908663 sshd[3358]: Connection closed by 10.0.0.1 port 37516 Mar 19 11:44:38.909088 sshd-session[3324]: pam_unix(sshd:session): session closed for user core Mar 19 11:44:38.912453 systemd[1]: sshd@7-10.0.0.95:22-10.0.0.1:37516.service: Deactivated successfully. Mar 19 11:44:38.915273 systemd[1]: session-8.scope: Deactivated successfully. Mar 19 11:44:38.916377 systemd-logind[1471]: Session 8 logged out. Waiting for processes to exit. Mar 19 11:44:38.917196 systemd-logind[1471]: Removed session 8. 
Mar 19 11:44:39.177603 kubelet[2578]: I0319 11:44:39.177538 2578 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-6xnxm" podStartSLOduration=5.660237511 podStartE2EDuration="20.177521943s" podCreationTimestamp="2025-03-19 11:44:19 +0000 UTC" firstStartedPulling="2025-03-19 11:44:19.893963522 +0000 UTC m=+7.909487529" lastFinishedPulling="2025-03-19 11:44:34.411247954 +0000 UTC m=+22.426771961" observedRunningTime="2025-03-19 11:44:39.177130109 +0000 UTC m=+27.192654116" watchObservedRunningTime="2025-03-19 11:44:39.177521943 +0000 UTC m=+27.193045950" Mar 19 11:44:39.654857 containerd[1491]: time="2025-03-19T11:44:39.654731721Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 19 11:44:39.655498 containerd[1491]: time="2025-03-19T11:44:39.655346014Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17135306" Mar 19 11:44:39.656508 containerd[1491]: time="2025-03-19T11:44:39.656478351Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 19 11:44:39.658111 containerd[1491]: time="2025-03-19T11:44:39.658044924Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 5.246556905s" Mar 19 11:44:39.658111 containerd[1491]: time="2025-03-19T11:44:39.658076567Z" level=info msg="PullImage 
\"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" Mar 19 11:44:39.660166 containerd[1491]: time="2025-03-19T11:44:39.660138183Z" level=info msg="CreateContainer within sandbox \"bcaf3c324290590e60840ec64d04d1526cb2eba2feb87b0cfe00b34e3277e217\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Mar 19 11:44:39.672423 containerd[1491]: time="2025-03-19T11:44:39.672386670Z" level=info msg="CreateContainer within sandbox \"bcaf3c324290590e60840ec64d04d1526cb2eba2feb87b0cfe00b34e3277e217\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"69a4aa7628ca5fcf5e78be74764bf43046e8ef5b474b9d781ea87cf543427832\"" Mar 19 11:44:39.673049 containerd[1491]: time="2025-03-19T11:44:39.673020564Z" level=info msg="StartContainer for \"69a4aa7628ca5fcf5e78be74764bf43046e8ef5b474b9d781ea87cf543427832\"" Mar 19 11:44:39.708548 systemd[1]: Started cri-containerd-69a4aa7628ca5fcf5e78be74764bf43046e8ef5b474b9d781ea87cf543427832.scope - libcontainer container 69a4aa7628ca5fcf5e78be74764bf43046e8ef5b474b9d781ea87cf543427832. 
Mar 19 11:44:39.733324 containerd[1491]: time="2025-03-19T11:44:39.733280953Z" level=info msg="StartContainer for \"69a4aa7628ca5fcf5e78be74764bf43046e8ef5b474b9d781ea87cf543427832\" returns successfully" Mar 19 11:44:40.176111 kubelet[2578]: I0319 11:44:40.176043 2578 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-s7kjp" podStartSLOduration=1.730796818 podStartE2EDuration="21.176016236s" podCreationTimestamp="2025-03-19 11:44:19 +0000 UTC" firstStartedPulling="2025-03-19 11:44:20.213431037 +0000 UTC m=+8.228955045" lastFinishedPulling="2025-03-19 11:44:39.658650496 +0000 UTC m=+27.674174463" observedRunningTime="2025-03-19 11:44:40.175904667 +0000 UTC m=+28.191428674" watchObservedRunningTime="2025-03-19 11:44:40.176016236 +0000 UTC m=+28.191540203" Mar 19 11:44:42.282511 systemd-networkd[1414]: cilium_host: Link UP Mar 19 11:44:42.282629 systemd-networkd[1414]: cilium_net: Link UP Mar 19 11:44:42.282751 systemd-networkd[1414]: cilium_net: Gained carrier Mar 19 11:44:42.282856 systemd-networkd[1414]: cilium_host: Gained carrier Mar 19 11:44:42.371995 systemd-networkd[1414]: cilium_vxlan: Link UP Mar 19 11:44:42.372001 systemd-networkd[1414]: cilium_vxlan: Gained carrier Mar 19 11:44:42.628337 systemd-networkd[1414]: cilium_host: Gained IPv6LL Mar 19 11:44:42.683705 kernel: NET: Registered PF_ALG protocol family Mar 19 11:44:42.780409 systemd-networkd[1414]: cilium_net: Gained IPv6LL Mar 19 11:44:43.246541 systemd-networkd[1414]: lxc_health: Link UP Mar 19 11:44:43.249370 systemd-networkd[1414]: lxc_health: Gained carrier Mar 19 11:44:43.356963 systemd-networkd[1414]: lxce5a1f4278817: Link UP Mar 19 11:44:43.368340 kernel: eth0: renamed from tmp205f5 Mar 19 11:44:43.379062 systemd-networkd[1414]: lxce5a1f4278817: Gained carrier Mar 19 11:44:43.379361 systemd-networkd[1414]: lxc1d828b0a1dfa: Link UP Mar 19 11:44:43.384263 kernel: eth0: renamed from tmp8fe3d Mar 19 11:44:43.389357 
systemd-networkd[1414]: lxc1d828b0a1dfa: Gained carrier Mar 19 11:44:43.524677 systemd-networkd[1414]: cilium_vxlan: Gained IPv6LL Mar 19 11:44:43.922623 systemd[1]: Started sshd@8-10.0.0.95:22-10.0.0.1:36334.service - OpenSSH per-connection server daemon (10.0.0.1:36334). Mar 19 11:44:43.970903 sshd[3827]: Accepted publickey for core from 10.0.0.1 port 36334 ssh2: RSA SHA256:m+OHt/J3MiNhmiRtwZE4O3bs/RIw4O7lQdgYDDHmuIE Mar 19 11:44:43.971416 sshd-session[3827]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 19 11:44:43.975294 systemd-logind[1471]: New session 9 of user core. Mar 19 11:44:43.986422 systemd[1]: Started session-9.scope - Session 9 of User core. Mar 19 11:44:44.124954 sshd[3829]: Connection closed by 10.0.0.1 port 36334 Mar 19 11:44:44.124445 sshd-session[3827]: pam_unix(sshd:session): session closed for user core Mar 19 11:44:44.127044 systemd[1]: sshd@8-10.0.0.95:22-10.0.0.1:36334.service: Deactivated successfully. Mar 19 11:44:44.128877 systemd[1]: session-9.scope: Deactivated successfully. Mar 19 11:44:44.130322 systemd-logind[1471]: Session 9 logged out. Waiting for processes to exit. Mar 19 11:44:44.131193 systemd-logind[1471]: Removed session 9. Mar 19 11:44:44.420728 systemd-networkd[1414]: lxce5a1f4278817: Gained IPv6LL Mar 19 11:44:45.255636 systemd-networkd[1414]: lxc_health: Gained IPv6LL Mar 19 11:44:45.444592 systemd-networkd[1414]: lxc1d828b0a1dfa: Gained IPv6LL Mar 19 11:44:46.840749 containerd[1491]: time="2025-03-19T11:44:46.840505286Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 19 11:44:46.840749 containerd[1491]: time="2025-03-19T11:44:46.840574851Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 19 11:44:46.840749 containerd[1491]: time="2025-03-19T11:44:46.840588852Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 19 11:44:46.840749 containerd[1491]: time="2025-03-19T11:44:46.840663937Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 19 11:44:46.841124 containerd[1491]: time="2025-03-19T11:44:46.840902632Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 19 11:44:46.841124 containerd[1491]: time="2025-03-19T11:44:46.840952836Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 19 11:44:46.841124 containerd[1491]: time="2025-03-19T11:44:46.840969997Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 19 11:44:46.841124 containerd[1491]: time="2025-03-19T11:44:46.841050362Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 19 11:44:46.856855 systemd[1]: run-containerd-runc-k8s.io-8fe3dbaf043d0aff7416adbd1d393b03fdde54fe71cafe8b18766ea837cbce94-runc.WKL1KF.mount: Deactivated successfully. Mar 19 11:44:46.869454 systemd[1]: Started cri-containerd-205f5005f6327492f54c57bb744f71b11b61946402c307b341d24698e910b90f.scope - libcontainer container 205f5005f6327492f54c57bb744f71b11b61946402c307b341d24698e910b90f. Mar 19 11:44:46.870765 systemd[1]: Started cri-containerd-8fe3dbaf043d0aff7416adbd1d393b03fdde54fe71cafe8b18766ea837cbce94.scope - libcontainer container 8fe3dbaf043d0aff7416adbd1d393b03fdde54fe71cafe8b18766ea837cbce94. 
Mar 19 11:44:46.883017 systemd-resolved[1324]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Mar 19 11:44:46.884808 systemd-resolved[1324]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Mar 19 11:44:46.900925 containerd[1491]: time="2025-03-19T11:44:46.900833062Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-w2nnl,Uid:109055c4-c095-4e8d-9af0-7497d75ab4b5,Namespace:kube-system,Attempt:0,} returns sandbox id \"205f5005f6327492f54c57bb744f71b11b61946402c307b341d24698e910b90f\""
Mar 19 11:44:46.903333 containerd[1491]: time="2025-03-19T11:44:46.903239740Z" level=info msg="CreateContainer within sandbox \"205f5005f6327492f54c57bb744f71b11b61946402c307b341d24698e910b90f\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Mar 19 11:44:46.907442 containerd[1491]: time="2025-03-19T11:44:46.907397854Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-5mvjg,Uid:cadb6a38-d74c-4d3e-9a1b-1591d4b2042b,Namespace:kube-system,Attempt:0,} returns sandbox id \"8fe3dbaf043d0aff7416adbd1d393b03fdde54fe71cafe8b18766ea837cbce94\""
Mar 19 11:44:46.911221 containerd[1491]: time="2025-03-19T11:44:46.911133580Z" level=info msg="CreateContainer within sandbox \"8fe3dbaf043d0aff7416adbd1d393b03fdde54fe71cafe8b18766ea837cbce94\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Mar 19 11:44:46.916225 containerd[1491]: time="2025-03-19T11:44:46.916183113Z" level=info msg="CreateContainer within sandbox \"205f5005f6327492f54c57bb744f71b11b61946402c307b341d24698e910b90f\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"4e849eca634f2d55663bc8ab529a2f18723c9c0f78b93e99979b582da9e10ef1\""
Mar 19 11:44:46.916842 containerd[1491]: time="2025-03-19T11:44:46.916619902Z" level=info msg="StartContainer for \"4e849eca634f2d55663bc8ab529a2f18723c9c0f78b93e99979b582da9e10ef1\""
Mar 19 11:44:46.922208 containerd[1491]: time="2025-03-19T11:44:46.922171788Z" level=info msg="CreateContainer within sandbox \"8fe3dbaf043d0aff7416adbd1d393b03fdde54fe71cafe8b18766ea837cbce94\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"25a35eafd6fcded34ee853d16f2bca42db9a13893dbc273cf4f4d3551a12acda\""
Mar 19 11:44:46.923963 containerd[1491]: time="2025-03-19T11:44:46.923887421Z" level=info msg="StartContainer for \"25a35eafd6fcded34ee853d16f2bca42db9a13893dbc273cf4f4d3551a12acda\""
Mar 19 11:44:46.943416 systemd[1]: Started cri-containerd-4e849eca634f2d55663bc8ab529a2f18723c9c0f78b93e99979b582da9e10ef1.scope - libcontainer container 4e849eca634f2d55663bc8ab529a2f18723c9c0f78b93e99979b582da9e10ef1.
Mar 19 11:44:46.946887 systemd[1]: Started cri-containerd-25a35eafd6fcded34ee853d16f2bca42db9a13893dbc273cf4f4d3551a12acda.scope - libcontainer container 25a35eafd6fcded34ee853d16f2bca42db9a13893dbc273cf4f4d3551a12acda.
Mar 19 11:44:46.976457 containerd[1491]: time="2025-03-19T11:44:46.976406362Z" level=info msg="StartContainer for \"4e849eca634f2d55663bc8ab529a2f18723c9c0f78b93e99979b582da9e10ef1\" returns successfully"
Mar 19 11:44:46.976588 containerd[1491]: time="2025-03-19T11:44:46.976475006Z" level=info msg="StartContainer for \"25a35eafd6fcded34ee853d16f2bca42db9a13893dbc273cf4f4d3551a12acda\" returns successfully"
Mar 19 11:44:47.188554 kubelet[2578]: I0319 11:44:47.188481 2578 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-5mvjg" podStartSLOduration=28.188449214 podStartE2EDuration="28.188449214s" podCreationTimestamp="2025-03-19 11:44:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-19 11:44:47.187682485 +0000 UTC m=+35.203206492" watchObservedRunningTime="2025-03-19 11:44:47.188449214 +0000 UTC m=+35.203973221"
Mar 19 11:44:47.211162 kubelet[2578]: I0319 11:44:47.211094 2578 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-w2nnl" podStartSLOduration=28.211078817 podStartE2EDuration="28.211078817s" podCreationTimestamp="2025-03-19 11:44:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-19 11:44:47.197410305 +0000 UTC m=+35.212934312" watchObservedRunningTime="2025-03-19 11:44:47.211078817 +0000 UTC m=+35.226602784"
Mar 19 11:44:49.135700 systemd[1]: Started sshd@9-10.0.0.95:22-10.0.0.1:36338.service - OpenSSH per-connection server daemon (10.0.0.1:36338).
Mar 19 11:44:49.186972 sshd[4022]: Accepted publickey for core from 10.0.0.1 port 36338 ssh2: RSA SHA256:m+OHt/J3MiNhmiRtwZE4O3bs/RIw4O7lQdgYDDHmuIE
Mar 19 11:44:49.188494 sshd-session[4022]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 19 11:44:49.198159 systemd-logind[1471]: New session 10 of user core.
Mar 19 11:44:49.214420 systemd[1]: Started session-10.scope - Session 10 of User core.
Mar 19 11:44:49.326118 sshd[4024]: Connection closed by 10.0.0.1 port 36338
Mar 19 11:44:49.326468 sshd-session[4022]: pam_unix(sshd:session): session closed for user core
Mar 19 11:44:49.329548 systemd[1]: sshd@9-10.0.0.95:22-10.0.0.1:36338.service: Deactivated successfully.
Mar 19 11:44:49.331456 systemd[1]: session-10.scope: Deactivated successfully.
Mar 19 11:44:49.332131 systemd-logind[1471]: Session 10 logged out. Waiting for processes to exit.
Mar 19 11:44:49.332836 systemd-logind[1471]: Removed session 10.
Mar 19 11:44:54.339620 systemd[1]: Started sshd@10-10.0.0.95:22-10.0.0.1:35080.service - OpenSSH per-connection server daemon (10.0.0.1:35080).
Mar 19 11:44:54.383916 sshd[4043]: Accepted publickey for core from 10.0.0.1 port 35080 ssh2: RSA SHA256:m+OHt/J3MiNhmiRtwZE4O3bs/RIw4O7lQdgYDDHmuIE
Mar 19 11:44:54.385113 sshd-session[4043]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 19 11:44:54.389291 systemd-logind[1471]: New session 11 of user core.
Mar 19 11:44:54.395373 systemd[1]: Started session-11.scope - Session 11 of User core.
Mar 19 11:44:54.507655 sshd[4045]: Connection closed by 10.0.0.1 port 35080
Mar 19 11:44:54.508173 sshd-session[4043]: pam_unix(sshd:session): session closed for user core
Mar 19 11:44:54.512079 systemd[1]: sshd@10-10.0.0.95:22-10.0.0.1:35080.service: Deactivated successfully.
Mar 19 11:44:54.514278 systemd[1]: session-11.scope: Deactivated successfully.
Mar 19 11:44:54.515077 systemd-logind[1471]: Session 11 logged out. Waiting for processes to exit.
Mar 19 11:44:54.515854 systemd-logind[1471]: Removed session 11.
Mar 19 11:44:59.523619 systemd[1]: Started sshd@11-10.0.0.95:22-10.0.0.1:35090.service - OpenSSH per-connection server daemon (10.0.0.1:35090).
Mar 19 11:44:59.568399 sshd[4060]: Accepted publickey for core from 10.0.0.1 port 35090 ssh2: RSA SHA256:m+OHt/J3MiNhmiRtwZE4O3bs/RIw4O7lQdgYDDHmuIE
Mar 19 11:44:59.569486 sshd-session[4060]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 19 11:44:59.573053 systemd-logind[1471]: New session 12 of user core.
Mar 19 11:44:59.584382 systemd[1]: Started session-12.scope - Session 12 of User core.
Mar 19 11:44:59.689570 sshd[4062]: Connection closed by 10.0.0.1 port 35090
Mar 19 11:44:59.690107 sshd-session[4060]: pam_unix(sshd:session): session closed for user core
Mar 19 11:44:59.699383 systemd[1]: sshd@11-10.0.0.95:22-10.0.0.1:35090.service: Deactivated successfully.
Mar 19 11:44:59.700753 systemd[1]: session-12.scope: Deactivated successfully.
Mar 19 11:44:59.701386 systemd-logind[1471]: Session 12 logged out. Waiting for processes to exit.
Mar 19 11:44:59.712490 systemd[1]: Started sshd@12-10.0.0.95:22-10.0.0.1:35094.service - OpenSSH per-connection server daemon (10.0.0.1:35094).
Mar 19 11:44:59.713967 systemd-logind[1471]: Removed session 12.
Mar 19 11:44:59.755524 sshd[4075]: Accepted publickey for core from 10.0.0.1 port 35094 ssh2: RSA SHA256:m+OHt/J3MiNhmiRtwZE4O3bs/RIw4O7lQdgYDDHmuIE
Mar 19 11:44:59.756671 sshd-session[4075]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 19 11:44:59.761183 systemd-logind[1471]: New session 13 of user core.
Mar 19 11:44:59.771369 systemd[1]: Started session-13.scope - Session 13 of User core.
Mar 19 11:44:59.919177 sshd[4078]: Connection closed by 10.0.0.1 port 35094
Mar 19 11:44:59.919538 sshd-session[4075]: pam_unix(sshd:session): session closed for user core
Mar 19 11:44:59.934497 systemd[1]: sshd@12-10.0.0.95:22-10.0.0.1:35094.service: Deactivated successfully.
Mar 19 11:44:59.937202 systemd[1]: session-13.scope: Deactivated successfully.
Mar 19 11:44:59.939588 systemd-logind[1471]: Session 13 logged out. Waiting for processes to exit.
Mar 19 11:44:59.944489 systemd[1]: Started sshd@13-10.0.0.95:22-10.0.0.1:35110.service - OpenSSH per-connection server daemon (10.0.0.1:35110).
Mar 19 11:44:59.945737 systemd-logind[1471]: Removed session 13.
Mar 19 11:44:59.987890 sshd[4089]: Accepted publickey for core from 10.0.0.1 port 35110 ssh2: RSA SHA256:m+OHt/J3MiNhmiRtwZE4O3bs/RIw4O7lQdgYDDHmuIE
Mar 19 11:44:59.989076 sshd-session[4089]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 19 11:44:59.993392 systemd-logind[1471]: New session 14 of user core.
Mar 19 11:45:00.009380 systemd[1]: Started session-14.scope - Session 14 of User core.
Mar 19 11:45:00.118023 sshd[4092]: Connection closed by 10.0.0.1 port 35110
Mar 19 11:45:00.118379 sshd-session[4089]: pam_unix(sshd:session): session closed for user core
Mar 19 11:45:00.121430 systemd[1]: sshd@13-10.0.0.95:22-10.0.0.1:35110.service: Deactivated successfully.
Mar 19 11:45:00.122936 systemd[1]: session-14.scope: Deactivated successfully.
Mar 19 11:45:00.123579 systemd-logind[1471]: Session 14 logged out. Waiting for processes to exit.
Mar 19 11:45:00.124460 systemd-logind[1471]: Removed session 14.
Mar 19 11:45:05.133169 systemd[1]: Started sshd@14-10.0.0.95:22-10.0.0.1:59548.service - OpenSSH per-connection server daemon (10.0.0.1:59548).
Mar 19 11:45:05.178448 sshd[4105]: Accepted publickey for core from 10.0.0.1 port 59548 ssh2: RSA SHA256:m+OHt/J3MiNhmiRtwZE4O3bs/RIw4O7lQdgYDDHmuIE
Mar 19 11:45:05.179655 sshd-session[4105]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 19 11:45:05.183958 systemd-logind[1471]: New session 15 of user core.
Mar 19 11:45:05.193386 systemd[1]: Started session-15.scope - Session 15 of User core.
Mar 19 11:45:05.301372 sshd[4107]: Connection closed by 10.0.0.1 port 59548
Mar 19 11:45:05.301709 sshd-session[4105]: pam_unix(sshd:session): session closed for user core
Mar 19 11:45:05.304769 systemd[1]: sshd@14-10.0.0.95:22-10.0.0.1:59548.service: Deactivated successfully.
Mar 19 11:45:05.306501 systemd[1]: session-15.scope: Deactivated successfully.
Mar 19 11:45:05.308501 systemd-logind[1471]: Session 15 logged out. Waiting for processes to exit.
Mar 19 11:45:05.309256 systemd-logind[1471]: Removed session 15.
Mar 19 11:45:10.313723 systemd[1]: Started sshd@15-10.0.0.95:22-10.0.0.1:59558.service - OpenSSH per-connection server daemon (10.0.0.1:59558).
Mar 19 11:45:10.358692 sshd[4123]: Accepted publickey for core from 10.0.0.1 port 59558 ssh2: RSA SHA256:m+OHt/J3MiNhmiRtwZE4O3bs/RIw4O7lQdgYDDHmuIE
Mar 19 11:45:10.359785 sshd-session[4123]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 19 11:45:10.363254 systemd-logind[1471]: New session 16 of user core.
Mar 19 11:45:10.374367 systemd[1]: Started session-16.scope - Session 16 of User core.
Mar 19 11:45:10.481380 sshd[4125]: Connection closed by 10.0.0.1 port 59558
Mar 19 11:45:10.481734 sshd-session[4123]: pam_unix(sshd:session): session closed for user core
Mar 19 11:45:10.494587 systemd[1]: sshd@15-10.0.0.95:22-10.0.0.1:59558.service: Deactivated successfully.
Mar 19 11:45:10.496211 systemd[1]: session-16.scope: Deactivated successfully.
Mar 19 11:45:10.497003 systemd-logind[1471]: Session 16 logged out. Waiting for processes to exit.
Mar 19 11:45:10.508591 systemd[1]: Started sshd@16-10.0.0.95:22-10.0.0.1:59566.service - OpenSSH per-connection server daemon (10.0.0.1:59566).
Mar 19 11:45:10.509788 systemd-logind[1471]: Removed session 16.
Mar 19 11:45:10.550531 sshd[4138]: Accepted publickey for core from 10.0.0.1 port 59566 ssh2: RSA SHA256:m+OHt/J3MiNhmiRtwZE4O3bs/RIw4O7lQdgYDDHmuIE
Mar 19 11:45:10.551638 sshd-session[4138]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 19 11:45:10.555192 systemd-logind[1471]: New session 17 of user core.
Mar 19 11:45:10.561356 systemd[1]: Started session-17.scope - Session 17 of User core.
Mar 19 11:45:10.766669 sshd[4141]: Connection closed by 10.0.0.1 port 59566
Mar 19 11:45:10.767283 sshd-session[4138]: pam_unix(sshd:session): session closed for user core
Mar 19 11:45:10.781620 systemd[1]: sshd@16-10.0.0.95:22-10.0.0.1:59566.service: Deactivated successfully.
Mar 19 11:45:10.783134 systemd[1]: session-17.scope: Deactivated successfully.
Mar 19 11:45:10.785300 systemd-logind[1471]: Session 17 logged out. Waiting for processes to exit.
Mar 19 11:45:10.787052 systemd-logind[1471]: Removed session 17.
Mar 19 11:45:10.788715 systemd[1]: Started sshd@17-10.0.0.95:22-10.0.0.1:59576.service - OpenSSH per-connection server daemon (10.0.0.1:59576).
Mar 19 11:45:10.840313 sshd[4151]: Accepted publickey for core from 10.0.0.1 port 59576 ssh2: RSA SHA256:m+OHt/J3MiNhmiRtwZE4O3bs/RIw4O7lQdgYDDHmuIE
Mar 19 11:45:10.841616 sshd-session[4151]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 19 11:45:10.846129 systemd-logind[1471]: New session 18 of user core.
Mar 19 11:45:10.856375 systemd[1]: Started session-18.scope - Session 18 of User core.
Mar 19 11:45:11.585497 sshd[4154]: Connection closed by 10.0.0.1 port 59576
Mar 19 11:45:11.586391 sshd-session[4151]: pam_unix(sshd:session): session closed for user core
Mar 19 11:45:11.596587 systemd[1]: sshd@17-10.0.0.95:22-10.0.0.1:59576.service: Deactivated successfully.
Mar 19 11:45:11.598127 systemd[1]: session-18.scope: Deactivated successfully.
Mar 19 11:45:11.599580 systemd-logind[1471]: Session 18 logged out. Waiting for processes to exit.
Mar 19 11:45:11.606606 systemd[1]: Started sshd@18-10.0.0.95:22-10.0.0.1:59578.service - OpenSSH per-connection server daemon (10.0.0.1:59578).
Mar 19 11:45:11.610889 systemd-logind[1471]: Removed session 18.
Mar 19 11:45:11.650182 sshd[4174]: Accepted publickey for core from 10.0.0.1 port 59578 ssh2: RSA SHA256:m+OHt/J3MiNhmiRtwZE4O3bs/RIw4O7lQdgYDDHmuIE
Mar 19 11:45:11.650782 sshd-session[4174]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 19 11:45:11.655154 systemd-logind[1471]: New session 19 of user core.
Mar 19 11:45:11.668443 systemd[1]: Started session-19.scope - Session 19 of User core.
Mar 19 11:45:11.880725 sshd[4177]: Connection closed by 10.0.0.1 port 59578
Mar 19 11:45:11.881748 sshd-session[4174]: pam_unix(sshd:session): session closed for user core
Mar 19 11:45:11.895102 systemd[1]: sshd@18-10.0.0.95:22-10.0.0.1:59578.service: Deactivated successfully.
Mar 19 11:45:11.896975 systemd[1]: session-19.scope: Deactivated successfully.
Mar 19 11:45:11.897941 systemd-logind[1471]: Session 19 logged out. Waiting for processes to exit.
Mar 19 11:45:11.911559 systemd[1]: Started sshd@19-10.0.0.95:22-10.0.0.1:59594.service - OpenSSH per-connection server daemon (10.0.0.1:59594).
Mar 19 11:45:11.913077 systemd-logind[1471]: Removed session 19.
Mar 19 11:45:11.953966 sshd[4187]: Accepted publickey for core from 10.0.0.1 port 59594 ssh2: RSA SHA256:m+OHt/J3MiNhmiRtwZE4O3bs/RIw4O7lQdgYDDHmuIE
Mar 19 11:45:11.955367 sshd-session[4187]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 19 11:45:11.959853 systemd-logind[1471]: New session 20 of user core.
Mar 19 11:45:11.969370 systemd[1]: Started session-20.scope - Session 20 of User core.
Mar 19 11:45:12.079107 sshd[4190]: Connection closed by 10.0.0.1 port 59594
Mar 19 11:45:12.079434 sshd-session[4187]: pam_unix(sshd:session): session closed for user core
Mar 19 11:45:12.083277 systemd[1]: sshd@19-10.0.0.95:22-10.0.0.1:59594.service: Deactivated successfully.
Mar 19 11:45:12.085488 systemd[1]: session-20.scope: Deactivated successfully.
Mar 19 11:45:12.087303 systemd-logind[1471]: Session 20 logged out. Waiting for processes to exit.
Mar 19 11:45:12.089090 systemd-logind[1471]: Removed session 20.
Mar 19 11:45:17.091664 systemd[1]: Started sshd@20-10.0.0.95:22-10.0.0.1:33954.service - OpenSSH per-connection server daemon (10.0.0.1:33954).
Mar 19 11:45:17.136126 sshd[4208]: Accepted publickey for core from 10.0.0.1 port 33954 ssh2: RSA SHA256:m+OHt/J3MiNhmiRtwZE4O3bs/RIw4O7lQdgYDDHmuIE
Mar 19 11:45:17.137260 sshd-session[4208]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 19 11:45:17.141288 systemd-logind[1471]: New session 21 of user core.
Mar 19 11:45:17.147374 systemd[1]: Started session-21.scope - Session 21 of User core.
Mar 19 11:45:17.252557 sshd[4210]: Connection closed by 10.0.0.1 port 33954
Mar 19 11:45:17.253442 sshd-session[4208]: pam_unix(sshd:session): session closed for user core
Mar 19 11:45:17.256701 systemd[1]: sshd@20-10.0.0.95:22-10.0.0.1:33954.service: Deactivated successfully.
Mar 19 11:45:17.258978 systemd[1]: session-21.scope: Deactivated successfully.
Mar 19 11:45:17.259929 systemd-logind[1471]: Session 21 logged out. Waiting for processes to exit.
Mar 19 11:45:17.260770 systemd-logind[1471]: Removed session 21.
Mar 19 11:45:22.265112 systemd[1]: Started sshd@21-10.0.0.95:22-10.0.0.1:33968.service - OpenSSH per-connection server daemon (10.0.0.1:33968).
Mar 19 11:45:22.313205 sshd[4225]: Accepted publickey for core from 10.0.0.1 port 33968 ssh2: RSA SHA256:m+OHt/J3MiNhmiRtwZE4O3bs/RIw4O7lQdgYDDHmuIE
Mar 19 11:45:22.314491 sshd-session[4225]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 19 11:45:22.318926 systemd-logind[1471]: New session 22 of user core.
Mar 19 11:45:22.328411 systemd[1]: Started session-22.scope - Session 22 of User core.
Mar 19 11:45:22.437690 sshd[4227]: Connection closed by 10.0.0.1 port 33968
Mar 19 11:45:22.438199 sshd-session[4225]: pam_unix(sshd:session): session closed for user core
Mar 19 11:45:22.441357 systemd[1]: sshd@21-10.0.0.95:22-10.0.0.1:33968.service: Deactivated successfully.
Mar 19 11:45:22.442888 systemd[1]: session-22.scope: Deactivated successfully.
Mar 19 11:45:22.443726 systemd-logind[1471]: Session 22 logged out. Waiting for processes to exit.
Mar 19 11:45:22.444593 systemd-logind[1471]: Removed session 22.
Mar 19 11:45:27.450119 systemd[1]: Started sshd@22-10.0.0.95:22-10.0.0.1:38792.service - OpenSSH per-connection server daemon (10.0.0.1:38792).
Mar 19 11:45:27.499073 sshd[4241]: Accepted publickey for core from 10.0.0.1 port 38792 ssh2: RSA SHA256:m+OHt/J3MiNhmiRtwZE4O3bs/RIw4O7lQdgYDDHmuIE
Mar 19 11:45:27.499569 sshd-session[4241]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 19 11:45:27.503910 systemd-logind[1471]: New session 23 of user core.
Mar 19 11:45:27.510416 systemd[1]: Started session-23.scope - Session 23 of User core.
Mar 19 11:45:27.629443 sshd[4243]: Connection closed by 10.0.0.1 port 38792
Mar 19 11:45:27.628710 sshd-session[4241]: pam_unix(sshd:session): session closed for user core
Mar 19 11:45:27.638780 systemd[1]: sshd@22-10.0.0.95:22-10.0.0.1:38792.service: Deactivated successfully.
Mar 19 11:45:27.640555 systemd[1]: session-23.scope: Deactivated successfully.
Mar 19 11:45:27.641451 systemd-logind[1471]: Session 23 logged out. Waiting for processes to exit.
Mar 19 11:45:27.647533 systemd[1]: Started sshd@23-10.0.0.95:22-10.0.0.1:38794.service - OpenSSH per-connection server daemon (10.0.0.1:38794).
Mar 19 11:45:27.648699 systemd-logind[1471]: Removed session 23.
Mar 19 11:45:27.694247 sshd[4255]: Accepted publickey for core from 10.0.0.1 port 38794 ssh2: RSA SHA256:m+OHt/J3MiNhmiRtwZE4O3bs/RIw4O7lQdgYDDHmuIE
Mar 19 11:45:27.695412 sshd-session[4255]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 19 11:45:27.699783 systemd-logind[1471]: New session 24 of user core.
Mar 19 11:45:27.705378 systemd[1]: Started session-24.scope - Session 24 of User core.
Mar 19 11:45:29.478169 containerd[1491]: time="2025-03-19T11:45:29.478126550Z" level=info msg="StopContainer for \"69a4aa7628ca5fcf5e78be74764bf43046e8ef5b474b9d781ea87cf543427832\" with timeout 30 (s)"
Mar 19 11:45:29.479931 containerd[1491]: time="2025-03-19T11:45:29.479895137Z" level=info msg="Stop container \"69a4aa7628ca5fcf5e78be74764bf43046e8ef5b474b9d781ea87cf543427832\" with signal terminated"
Mar 19 11:45:29.489530 systemd[1]: cri-containerd-69a4aa7628ca5fcf5e78be74764bf43046e8ef5b474b9d781ea87cf543427832.scope: Deactivated successfully.
Mar 19 11:45:29.509913 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-69a4aa7628ca5fcf5e78be74764bf43046e8ef5b474b9d781ea87cf543427832-rootfs.mount: Deactivated successfully.
Mar 19 11:45:29.512867 containerd[1491]: time="2025-03-19T11:45:29.512823836Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Mar 19 11:45:29.513104 containerd[1491]: time="2025-03-19T11:45:29.513036954Z" level=info msg="StopContainer for \"6a0db4a8192712fd251f894b86d3d475140c6133c49e2b1ab1bd6bb219d3e83c\" with timeout 2 (s)"
Mar 19 11:45:29.513374 containerd[1491]: time="2025-03-19T11:45:29.513350472Z" level=info msg="Stop container \"6a0db4a8192712fd251f894b86d3d475140c6133c49e2b1ab1bd6bb219d3e83c\" with signal terminated"
Mar 19 11:45:29.518154 containerd[1491]: time="2025-03-19T11:45:29.517599278Z" level=info msg="shim disconnected" id=69a4aa7628ca5fcf5e78be74764bf43046e8ef5b474b9d781ea87cf543427832 namespace=k8s.io
Mar 19 11:45:29.518154 containerd[1491]: time="2025-03-19T11:45:29.518146034Z" level=warning msg="cleaning up after shim disconnected" id=69a4aa7628ca5fcf5e78be74764bf43046e8ef5b474b9d781ea87cf543427832 namespace=k8s.io
Mar 19 11:45:29.518154 containerd[1491]: time="2025-03-19T11:45:29.518159194Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 19 11:45:29.518895 systemd-networkd[1414]: lxc_health: Link DOWN
Mar 19 11:45:29.518901 systemd-networkd[1414]: lxc_health: Lost carrier
Mar 19 11:45:29.534517 systemd[1]: cri-containerd-6a0db4a8192712fd251f894b86d3d475140c6133c49e2b1ab1bd6bb219d3e83c.scope: Deactivated successfully.
Mar 19 11:45:29.534840 systemd[1]: cri-containerd-6a0db4a8192712fd251f894b86d3d475140c6133c49e2b1ab1bd6bb219d3e83c.scope: Consumed 6.412s CPU time, 124.2M memory peak, 136K read from disk, 12.9M written to disk.
Mar 19 11:45:29.551043 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6a0db4a8192712fd251f894b86d3d475140c6133c49e2b1ab1bd6bb219d3e83c-rootfs.mount: Deactivated successfully.
Mar 19 11:45:29.558056 containerd[1491]: time="2025-03-19T11:45:29.557992159Z" level=info msg="shim disconnected" id=6a0db4a8192712fd251f894b86d3d475140c6133c49e2b1ab1bd6bb219d3e83c namespace=k8s.io
Mar 19 11:45:29.558353 containerd[1491]: time="2025-03-19T11:45:29.558150758Z" level=warning msg="cleaning up after shim disconnected" id=6a0db4a8192712fd251f894b86d3d475140c6133c49e2b1ab1bd6bb219d3e83c namespace=k8s.io
Mar 19 11:45:29.558353 containerd[1491]: time="2025-03-19T11:45:29.558163518Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 19 11:45:29.565736 containerd[1491]: time="2025-03-19T11:45:29.565692858Z" level=info msg="StopContainer for \"69a4aa7628ca5fcf5e78be74764bf43046e8ef5b474b9d781ea87cf543427832\" returns successfully"
Mar 19 11:45:29.566531 containerd[1491]: time="2025-03-19T11:45:29.566503132Z" level=info msg="StopPodSandbox for \"bcaf3c324290590e60840ec64d04d1526cb2eba2feb87b0cfe00b34e3277e217\""
Mar 19 11:45:29.570734 containerd[1491]: time="2025-03-19T11:45:29.570679059Z" level=info msg="Container to stop \"69a4aa7628ca5fcf5e78be74764bf43046e8ef5b474b9d781ea87cf543427832\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 19 11:45:29.571688 containerd[1491]: time="2025-03-19T11:45:29.571645091Z" level=info msg="StopContainer for \"6a0db4a8192712fd251f894b86d3d475140c6133c49e2b1ab1bd6bb219d3e83c\" returns successfully"
Mar 19 11:45:29.572411 containerd[1491]: time="2025-03-19T11:45:29.572383405Z" level=info msg="StopPodSandbox for \"c298a95023bca70f299f6753affb341a9df90f19934827972a8cdb74bee9b075\""
Mar 19 11:45:29.572477 containerd[1491]: time="2025-03-19T11:45:29.572430365Z" level=info msg="Container to stop \"751bf077e1ba7f49b98a15eb7e14fdfdeb4cf5334f2c66ed08e89715db26e547\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 19 11:45:29.572477 containerd[1491]: time="2025-03-19T11:45:29.572443285Z" level=info msg="Container to stop \"b0ff1aaaf3a9b67b831b5a6b03300d39d02a7a9e55dea50ab86bf64470eef735\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 19 11:45:29.572477 containerd[1491]: time="2025-03-19T11:45:29.572451805Z" level=info msg="Container to stop \"b87341741450e20eca4067fd6c9ac9ca5a9129883bb36c075120880c24f7ecd4\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 19 11:45:29.572477 containerd[1491]: time="2025-03-19T11:45:29.572459845Z" level=info msg="Container to stop \"0c5dd1613674f5ec9ddc21270f29d61d4f509c30657ba7c0a28e2b1f29cd8a16\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 19 11:45:29.572477 containerd[1491]: time="2025-03-19T11:45:29.572467165Z" level=info msg="Container to stop \"6a0db4a8192712fd251f894b86d3d475140c6133c49e2b1ab1bd6bb219d3e83c\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 19 11:45:29.572553 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-bcaf3c324290590e60840ec64d04d1526cb2eba2feb87b0cfe00b34e3277e217-shm.mount: Deactivated successfully.
Mar 19 11:45:29.575243 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-c298a95023bca70f299f6753affb341a9df90f19934827972a8cdb74bee9b075-shm.mount: Deactivated successfully.
Mar 19 11:45:29.578592 systemd[1]: cri-containerd-bcaf3c324290590e60840ec64d04d1526cb2eba2feb87b0cfe00b34e3277e217.scope: Deactivated successfully.
Mar 19 11:45:29.580879 systemd[1]: cri-containerd-c298a95023bca70f299f6753affb341a9df90f19934827972a8cdb74bee9b075.scope: Deactivated successfully.
Mar 19 11:45:29.605610 containerd[1491]: time="2025-03-19T11:45:29.605549023Z" level=info msg="shim disconnected" id=bcaf3c324290590e60840ec64d04d1526cb2eba2feb87b0cfe00b34e3277e217 namespace=k8s.io
Mar 19 11:45:29.605610 containerd[1491]: time="2025-03-19T11:45:29.605603783Z" level=warning msg="cleaning up after shim disconnected" id=bcaf3c324290590e60840ec64d04d1526cb2eba2feb87b0cfe00b34e3277e217 namespace=k8s.io
Mar 19 11:45:29.605610 containerd[1491]: time="2025-03-19T11:45:29.605612622Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 19 11:45:29.612643 containerd[1491]: time="2025-03-19T11:45:29.612554168Z" level=info msg="shim disconnected" id=c298a95023bca70f299f6753affb341a9df90f19934827972a8cdb74bee9b075 namespace=k8s.io
Mar 19 11:45:29.612643 containerd[1491]: time="2025-03-19T11:45:29.612640247Z" level=warning msg="cleaning up after shim disconnected" id=c298a95023bca70f299f6753affb341a9df90f19934827972a8cdb74bee9b075 namespace=k8s.io
Mar 19 11:45:29.612643 containerd[1491]: time="2025-03-19T11:45:29.612649607Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 19 11:45:29.616045 containerd[1491]: time="2025-03-19T11:45:29.616011740Z" level=info msg="TearDown network for sandbox \"bcaf3c324290590e60840ec64d04d1526cb2eba2feb87b0cfe00b34e3277e217\" successfully"
Mar 19 11:45:29.616045 containerd[1491]: time="2025-03-19T11:45:29.616042780Z" level=info msg="StopPodSandbox for \"bcaf3c324290590e60840ec64d04d1526cb2eba2feb87b0cfe00b34e3277e217\" returns successfully"
Mar 19 11:45:29.627480 containerd[1491]: time="2025-03-19T11:45:29.627446610Z" level=info msg="TearDown network for sandbox \"c298a95023bca70f299f6753affb341a9df90f19934827972a8cdb74bee9b075\" successfully"
Mar 19 11:45:29.627480 containerd[1491]: time="2025-03-19T11:45:29.627476610Z" level=info msg="StopPodSandbox for \"c298a95023bca70f299f6753affb341a9df90f19934827972a8cdb74bee9b075\" returns successfully"
Mar 19 11:45:29.657410 kubelet[2578]: I0319 11:45:29.657371 2578 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c46bl\" (UniqueName: \"kubernetes.io/projected/e08a3a4a-7164-4344-b0f6-ed58bc991168-kube-api-access-c46bl\") pod \"e08a3a4a-7164-4344-b0f6-ed58bc991168\" (UID: \"e08a3a4a-7164-4344-b0f6-ed58bc991168\") "
Mar 19 11:45:29.657410 kubelet[2578]: I0319 11:45:29.657417 2578 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e08a3a4a-7164-4344-b0f6-ed58bc991168-cilium-config-path\") pod \"e08a3a4a-7164-4344-b0f6-ed58bc991168\" (UID: \"e08a3a4a-7164-4344-b0f6-ed58bc991168\") "
Mar 19 11:45:29.675794 kubelet[2578]: I0319 11:45:29.675737 2578 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e08a3a4a-7164-4344-b0f6-ed58bc991168-kube-api-access-c46bl" (OuterVolumeSpecName: "kube-api-access-c46bl") pod "e08a3a4a-7164-4344-b0f6-ed58bc991168" (UID: "e08a3a4a-7164-4344-b0f6-ed58bc991168"). InnerVolumeSpecName "kube-api-access-c46bl". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Mar 19 11:45:29.675999 kubelet[2578]: I0319 11:45:29.675956 2578 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e08a3a4a-7164-4344-b0f6-ed58bc991168-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "e08a3a4a-7164-4344-b0f6-ed58bc991168" (UID: "e08a3a4a-7164-4344-b0f6-ed58bc991168"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Mar 19 11:45:29.758873 kubelet[2578]: I0319 11:45:29.758065 2578 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/65589817-8584-4d99-b7ad-f59a59741a65-cilium-config-path\") pod \"65589817-8584-4d99-b7ad-f59a59741a65\" (UID: \"65589817-8584-4d99-b7ad-f59a59741a65\") "
Mar 19 11:45:29.758873 kubelet[2578]: I0319 11:45:29.758110 2578 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/65589817-8584-4d99-b7ad-f59a59741a65-hostproc\") pod \"65589817-8584-4d99-b7ad-f59a59741a65\" (UID: \"65589817-8584-4d99-b7ad-f59a59741a65\") "
Mar 19 11:45:29.758873 kubelet[2578]: I0319 11:45:29.758131 2578 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7fwqs\" (UniqueName: \"kubernetes.io/projected/65589817-8584-4d99-b7ad-f59a59741a65-kube-api-access-7fwqs\") pod \"65589817-8584-4d99-b7ad-f59a59741a65\" (UID: \"65589817-8584-4d99-b7ad-f59a59741a65\") "
Mar 19 11:45:29.758873 kubelet[2578]: I0319 11:45:29.758146 2578 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/65589817-8584-4d99-b7ad-f59a59741a65-lib-modules\") pod \"65589817-8584-4d99-b7ad-f59a59741a65\" (UID: \"65589817-8584-4d99-b7ad-f59a59741a65\") "
Mar 19 11:45:29.758873 kubelet[2578]: I0319 11:45:29.758164 2578 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/65589817-8584-4d99-b7ad-f59a59741a65-host-proc-sys-kernel\") pod \"65589817-8584-4d99-b7ad-f59a59741a65\" (UID: \"65589817-8584-4d99-b7ad-f59a59741a65\") "
Mar 19 11:45:29.758873 kubelet[2578]: I0319 11:45:29.758181 2578 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/65589817-8584-4d99-b7ad-f59a59741a65-xtables-lock\") pod \"65589817-8584-4d99-b7ad-f59a59741a65\" (UID: \"65589817-8584-4d99-b7ad-f59a59741a65\") "
Mar 19 11:45:29.759076 kubelet[2578]: I0319 11:45:29.758194 2578 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/65589817-8584-4d99-b7ad-f59a59741a65-bpf-maps\") pod \"65589817-8584-4d99-b7ad-f59a59741a65\" (UID: \"65589817-8584-4d99-b7ad-f59a59741a65\") "
Mar 19 11:45:29.759076 kubelet[2578]: I0319 11:45:29.758211 2578 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/65589817-8584-4d99-b7ad-f59a59741a65-clustermesh-secrets\") pod \"65589817-8584-4d99-b7ad-f59a59741a65\" (UID: \"65589817-8584-4d99-b7ad-f59a59741a65\") "
Mar 19 11:45:29.759076 kubelet[2578]: I0319 11:45:29.758226 2578 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/65589817-8584-4d99-b7ad-f59a59741a65-cilium-cgroup\") pod \"65589817-8584-4d99-b7ad-f59a59741a65\" (UID: \"65589817-8584-4d99-b7ad-f59a59741a65\") "
Mar 19 11:45:29.759076 kubelet[2578]: I0319 11:45:29.758260 2578 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/65589817-8584-4d99-b7ad-f59a59741a65-cilium-run\") pod \"65589817-8584-4d99-b7ad-f59a59741a65\" (UID: \"65589817-8584-4d99-b7ad-f59a59741a65\") "
Mar 19 11:45:29.759076 kubelet[2578]: I0319 11:45:29.758277 2578 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/65589817-8584-4d99-b7ad-f59a59741a65-etc-cni-netd\") pod \"65589817-8584-4d99-b7ad-f59a59741a65\" (UID: \"65589817-8584-4d99-b7ad-f59a59741a65\") "
Mar 19 11:45:29.759076 kubelet[2578]: I0319 11:45:29.758292 2578 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/65589817-8584-4d99-b7ad-f59a59741a65-cni-path\") pod \"65589817-8584-4d99-b7ad-f59a59741a65\" (UID: \"65589817-8584-4d99-b7ad-f59a59741a65\") "
Mar 19 11:45:29.759193 kubelet[2578]: I0319 11:45:29.758308 2578 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/65589817-8584-4d99-b7ad-f59a59741a65-host-proc-sys-net\") pod \"65589817-8584-4d99-b7ad-f59a59741a65\" (UID: \"65589817-8584-4d99-b7ad-f59a59741a65\") "
Mar 19 11:45:29.759193 kubelet[2578]: I0319 11:45:29.758330 2578 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/65589817-8584-4d99-b7ad-f59a59741a65-hubble-tls\") pod \"65589817-8584-4d99-b7ad-f59a59741a65\" (UID: \"65589817-8584-4d99-b7ad-f59a59741a65\") "
Mar 19 11:45:29.759193 kubelet[2578]: I0319 11:45:29.758318 2578 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/65589817-8584-4d99-b7ad-f59a59741a65-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "65589817-8584-4d99-b7ad-f59a59741a65" (UID: "65589817-8584-4d99-b7ad-f59a59741a65"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Mar 19 11:45:29.759193 kubelet[2578]: I0319 11:45:29.758345 2578 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/65589817-8584-4d99-b7ad-f59a59741a65-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "65589817-8584-4d99-b7ad-f59a59741a65" (UID: "65589817-8584-4d99-b7ad-f59a59741a65"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Mar 19 11:45:29.759193 kubelet[2578]: I0319 11:45:29.758403 2578 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-c46bl\" (UniqueName: \"kubernetes.io/projected/e08a3a4a-7164-4344-b0f6-ed58bc991168-kube-api-access-c46bl\") on node \"localhost\" DevicePath \"\""
Mar 19 11:45:29.759193 kubelet[2578]: I0319 11:45:29.758416 2578 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e08a3a4a-7164-4344-b0f6-ed58bc991168-cilium-config-path\") on node \"localhost\" DevicePath \"\""
Mar 19 11:45:29.759341 kubelet[2578]: I0319 11:45:29.758425 2578 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/65589817-8584-4d99-b7ad-f59a59741a65-xtables-lock\") on node \"localhost\" DevicePath \"\""
Mar 19 11:45:29.759341 kubelet[2578]: I0319 11:45:29.758434 2578 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/65589817-8584-4d99-b7ad-f59a59741a65-bpf-maps\") on node \"localhost\" DevicePath \"\""
Mar 19 11:45:29.759341 kubelet[2578]: I0319 11:45:29.758451 2578 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/65589817-8584-4d99-b7ad-f59a59741a65-hostproc" (OuterVolumeSpecName: "hostproc") pod "65589817-8584-4d99-b7ad-f59a59741a65" (UID: "65589817-8584-4d99-b7ad-f59a59741a65"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Mar 19 11:45:29.760326 kubelet[2578]: I0319 11:45:29.760107 2578 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/65589817-8584-4d99-b7ad-f59a59741a65-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "65589817-8584-4d99-b7ad-f59a59741a65" (UID: "65589817-8584-4d99-b7ad-f59a59741a65"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Mar 19 11:45:29.760326 kubelet[2578]: I0319 11:45:29.760160 2578 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/65589817-8584-4d99-b7ad-f59a59741a65-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "65589817-8584-4d99-b7ad-f59a59741a65" (UID: "65589817-8584-4d99-b7ad-f59a59741a65"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Mar 19 11:45:29.760326 kubelet[2578]: I0319 11:45:29.760178 2578 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/65589817-8584-4d99-b7ad-f59a59741a65-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "65589817-8584-4d99-b7ad-f59a59741a65" (UID: "65589817-8584-4d99-b7ad-f59a59741a65"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Mar 19 11:45:29.760326 kubelet[2578]: I0319 11:45:29.760194 2578 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/65589817-8584-4d99-b7ad-f59a59741a65-cni-path" (OuterVolumeSpecName: "cni-path") pod "65589817-8584-4d99-b7ad-f59a59741a65" (UID: "65589817-8584-4d99-b7ad-f59a59741a65"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Mar 19 11:45:29.760326 kubelet[2578]: I0319 11:45:29.760211 2578 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/65589817-8584-4d99-b7ad-f59a59741a65-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "65589817-8584-4d99-b7ad-f59a59741a65" (UID: "65589817-8584-4d99-b7ad-f59a59741a65"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Mar 19 11:45:29.760504 kubelet[2578]: I0319 11:45:29.760225 2578 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/65589817-8584-4d99-b7ad-f59a59741a65-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "65589817-8584-4d99-b7ad-f59a59741a65" (UID: "65589817-8584-4d99-b7ad-f59a59741a65"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Mar 19 11:45:29.760504 kubelet[2578]: I0319 11:45:29.760270 2578 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/65589817-8584-4d99-b7ad-f59a59741a65-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "65589817-8584-4d99-b7ad-f59a59741a65" (UID: "65589817-8584-4d99-b7ad-f59a59741a65"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Mar 19 11:45:29.760504 kubelet[2578]: I0319 11:45:29.758323 2578 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/65589817-8584-4d99-b7ad-f59a59741a65-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "65589817-8584-4d99-b7ad-f59a59741a65" (UID: "65589817-8584-4d99-b7ad-f59a59741a65"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Mar 19 11:45:29.760767 kubelet[2578]: I0319 11:45:29.760737 2578 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/65589817-8584-4d99-b7ad-f59a59741a65-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "65589817-8584-4d99-b7ad-f59a59741a65" (UID: "65589817-8584-4d99-b7ad-f59a59741a65"). InnerVolumeSpecName "hubble-tls".
PluginName "kubernetes.io/projected", VolumeGIDValue "" Mar 19 11:45:29.761168 kubelet[2578]: I0319 11:45:29.761139 2578 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/65589817-8584-4d99-b7ad-f59a59741a65-kube-api-access-7fwqs" (OuterVolumeSpecName: "kube-api-access-7fwqs") pod "65589817-8584-4d99-b7ad-f59a59741a65" (UID: "65589817-8584-4d99-b7ad-f59a59741a65"). InnerVolumeSpecName "kube-api-access-7fwqs". PluginName "kubernetes.io/projected", VolumeGIDValue "" Mar 19 11:45:29.765894 kubelet[2578]: I0319 11:45:29.765858 2578 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/65589817-8584-4d99-b7ad-f59a59741a65-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "65589817-8584-4d99-b7ad-f59a59741a65" (UID: "65589817-8584-4d99-b7ad-f59a59741a65"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Mar 19 11:45:29.858983 kubelet[2578]: I0319 11:45:29.858828 2578 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/65589817-8584-4d99-b7ad-f59a59741a65-hostproc\") on node \"localhost\" DevicePath \"\"" Mar 19 11:45:29.858983 kubelet[2578]: I0319 11:45:29.858873 2578 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-7fwqs\" (UniqueName: \"kubernetes.io/projected/65589817-8584-4d99-b7ad-f59a59741a65-kube-api-access-7fwqs\") on node \"localhost\" DevicePath \"\"" Mar 19 11:45:29.858983 kubelet[2578]: I0319 11:45:29.858886 2578 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/65589817-8584-4d99-b7ad-f59a59741a65-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Mar 19 11:45:29.858983 kubelet[2578]: I0319 11:45:29.858896 2578 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: 
\"kubernetes.io/host-path/65589817-8584-4d99-b7ad-f59a59741a65-lib-modules\") on node \"localhost\" DevicePath \"\"" Mar 19 11:45:29.858983 kubelet[2578]: I0319 11:45:29.858905 2578 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/65589817-8584-4d99-b7ad-f59a59741a65-cilium-run\") on node \"localhost\" DevicePath \"\"" Mar 19 11:45:29.858983 kubelet[2578]: I0319 11:45:29.858913 2578 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/65589817-8584-4d99-b7ad-f59a59741a65-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Mar 19 11:45:29.858983 kubelet[2578]: I0319 11:45:29.858923 2578 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/65589817-8584-4d99-b7ad-f59a59741a65-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Mar 19 11:45:29.858983 kubelet[2578]: I0319 11:45:29.858931 2578 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/65589817-8584-4d99-b7ad-f59a59741a65-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Mar 19 11:45:29.859271 kubelet[2578]: I0319 11:45:29.858940 2578 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/65589817-8584-4d99-b7ad-f59a59741a65-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Mar 19 11:45:29.859271 kubelet[2578]: I0319 11:45:29.858949 2578 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/65589817-8584-4d99-b7ad-f59a59741a65-cni-path\") on node \"localhost\" DevicePath \"\"" Mar 19 11:45:29.859271 kubelet[2578]: I0319 11:45:29.858956 2578 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/65589817-8584-4d99-b7ad-f59a59741a65-hubble-tls\") on node \"localhost\" DevicePath \"\"" Mar 19 
11:45:29.859271 kubelet[2578]: I0319 11:45:29.858963 2578 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/65589817-8584-4d99-b7ad-f59a59741a65-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Mar 19 11:45:30.073434 systemd[1]: Removed slice kubepods-burstable-pod65589817_8584_4d99_b7ad_f59a59741a65.slice - libcontainer container kubepods-burstable-pod65589817_8584_4d99_b7ad_f59a59741a65.slice. Mar 19 11:45:30.073760 systemd[1]: kubepods-burstable-pod65589817_8584_4d99_b7ad_f59a59741a65.slice: Consumed 6.584s CPU time, 124.5M memory peak, 156K read from disk, 12.9M written to disk. Mar 19 11:45:30.076324 systemd[1]: Removed slice kubepods-besteffort-pode08a3a4a_7164_4344_b0f6_ed58bc991168.slice - libcontainer container kubepods-besteffort-pode08a3a4a_7164_4344_b0f6_ed58bc991168.slice. Mar 19 11:45:30.260824 kubelet[2578]: I0319 11:45:30.260776 2578 scope.go:117] "RemoveContainer" containerID="6a0db4a8192712fd251f894b86d3d475140c6133c49e2b1ab1bd6bb219d3e83c" Mar 19 11:45:30.263179 containerd[1491]: time="2025-03-19T11:45:30.262856222Z" level=info msg="RemoveContainer for \"6a0db4a8192712fd251f894b86d3d475140c6133c49e2b1ab1bd6bb219d3e83c\"" Mar 19 11:45:30.266598 containerd[1491]: time="2025-03-19T11:45:30.266564317Z" level=info msg="RemoveContainer for \"6a0db4a8192712fd251f894b86d3d475140c6133c49e2b1ab1bd6bb219d3e83c\" returns successfully" Mar 19 11:45:30.266959 kubelet[2578]: I0319 11:45:30.266935 2578 scope.go:117] "RemoveContainer" containerID="0c5dd1613674f5ec9ddc21270f29d61d4f509c30657ba7c0a28e2b1f29cd8a16" Mar 19 11:45:30.268122 containerd[1491]: time="2025-03-19T11:45:30.268087466Z" level=info msg="RemoveContainer for \"0c5dd1613674f5ec9ddc21270f29d61d4f509c30657ba7c0a28e2b1f29cd8a16\"" Mar 19 11:45:30.270839 containerd[1491]: time="2025-03-19T11:45:30.270760488Z" level=info msg="RemoveContainer for \"0c5dd1613674f5ec9ddc21270f29d61d4f509c30657ba7c0a28e2b1f29cd8a16\" returns 
successfully" Mar 19 11:45:30.271311 kubelet[2578]: I0319 11:45:30.271004 2578 scope.go:117] "RemoveContainer" containerID="b87341741450e20eca4067fd6c9ac9ca5a9129883bb36c075120880c24f7ecd4" Mar 19 11:45:30.272149 containerd[1491]: time="2025-03-19T11:45:30.272125479Z" level=info msg="RemoveContainer for \"b87341741450e20eca4067fd6c9ac9ca5a9129883bb36c075120880c24f7ecd4\"" Mar 19 11:45:30.276696 containerd[1491]: time="2025-03-19T11:45:30.276066492Z" level=info msg="RemoveContainer for \"b87341741450e20eca4067fd6c9ac9ca5a9129883bb36c075120880c24f7ecd4\" returns successfully" Mar 19 11:45:30.277201 kubelet[2578]: I0319 11:45:30.277065 2578 scope.go:117] "RemoveContainer" containerID="b0ff1aaaf3a9b67b831b5a6b03300d39d02a7a9e55dea50ab86bf64470eef735" Mar 19 11:45:30.278608 containerd[1491]: time="2025-03-19T11:45:30.278518395Z" level=info msg="RemoveContainer for \"b0ff1aaaf3a9b67b831b5a6b03300d39d02a7a9e55dea50ab86bf64470eef735\"" Mar 19 11:45:30.281631 containerd[1491]: time="2025-03-19T11:45:30.281592974Z" level=info msg="RemoveContainer for \"b0ff1aaaf3a9b67b831b5a6b03300d39d02a7a9e55dea50ab86bf64470eef735\" returns successfully" Mar 19 11:45:30.281791 kubelet[2578]: I0319 11:45:30.281745 2578 scope.go:117] "RemoveContainer" containerID="751bf077e1ba7f49b98a15eb7e14fdfdeb4cf5334f2c66ed08e89715db26e547" Mar 19 11:45:30.282835 containerd[1491]: time="2025-03-19T11:45:30.282803885Z" level=info msg="RemoveContainer for \"751bf077e1ba7f49b98a15eb7e14fdfdeb4cf5334f2c66ed08e89715db26e547\"" Mar 19 11:45:30.285581 containerd[1491]: time="2025-03-19T11:45:30.285512107Z" level=info msg="RemoveContainer for \"751bf077e1ba7f49b98a15eb7e14fdfdeb4cf5334f2c66ed08e89715db26e547\" returns successfully" Mar 19 11:45:30.285690 kubelet[2578]: I0319 11:45:30.285653 2578 scope.go:117] "RemoveContainer" containerID="6a0db4a8192712fd251f894b86d3d475140c6133c49e2b1ab1bd6bb219d3e83c" Mar 19 11:45:30.285979 containerd[1491]: time="2025-03-19T11:45:30.285824305Z" level=error 
msg="ContainerStatus for \"6a0db4a8192712fd251f894b86d3d475140c6133c49e2b1ab1bd6bb219d3e83c\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"6a0db4a8192712fd251f894b86d3d475140c6133c49e2b1ab1bd6bb219d3e83c\": not found" Mar 19 11:45:30.286032 kubelet[2578]: E0319 11:45:30.285985 2578 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"6a0db4a8192712fd251f894b86d3d475140c6133c49e2b1ab1bd6bb219d3e83c\": not found" containerID="6a0db4a8192712fd251f894b86d3d475140c6133c49e2b1ab1bd6bb219d3e83c" Mar 19 11:45:30.286089 kubelet[2578]: I0319 11:45:30.286007 2578 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"6a0db4a8192712fd251f894b86d3d475140c6133c49e2b1ab1bd6bb219d3e83c"} err="failed to get container status \"6a0db4a8192712fd251f894b86d3d475140c6133c49e2b1ab1bd6bb219d3e83c\": rpc error: code = NotFound desc = an error occurred when try to find container \"6a0db4a8192712fd251f894b86d3d475140c6133c49e2b1ab1bd6bb219d3e83c\": not found" Mar 19 11:45:30.286089 kubelet[2578]: I0319 11:45:30.286084 2578 scope.go:117] "RemoveContainer" containerID="0c5dd1613674f5ec9ddc21270f29d61d4f509c30657ba7c0a28e2b1f29cd8a16" Mar 19 11:45:30.286513 containerd[1491]: time="2025-03-19T11:45:30.286470380Z" level=error msg="ContainerStatus for \"0c5dd1613674f5ec9ddc21270f29d61d4f509c30657ba7c0a28e2b1f29cd8a16\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"0c5dd1613674f5ec9ddc21270f29d61d4f509c30657ba7c0a28e2b1f29cd8a16\": not found" Mar 19 11:45:30.287345 kubelet[2578]: E0319 11:45:30.286727 2578 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"0c5dd1613674f5ec9ddc21270f29d61d4f509c30657ba7c0a28e2b1f29cd8a16\": not found" 
containerID="0c5dd1613674f5ec9ddc21270f29d61d4f509c30657ba7c0a28e2b1f29cd8a16" Mar 19 11:45:30.287345 kubelet[2578]: I0319 11:45:30.286749 2578 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"0c5dd1613674f5ec9ddc21270f29d61d4f509c30657ba7c0a28e2b1f29cd8a16"} err="failed to get container status \"0c5dd1613674f5ec9ddc21270f29d61d4f509c30657ba7c0a28e2b1f29cd8a16\": rpc error: code = NotFound desc = an error occurred when try to find container \"0c5dd1613674f5ec9ddc21270f29d61d4f509c30657ba7c0a28e2b1f29cd8a16\": not found" Mar 19 11:45:30.287345 kubelet[2578]: I0319 11:45:30.286762 2578 scope.go:117] "RemoveContainer" containerID="b87341741450e20eca4067fd6c9ac9ca5a9129883bb36c075120880c24f7ecd4" Mar 19 11:45:30.287345 kubelet[2578]: E0319 11:45:30.287091 2578 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"b87341741450e20eca4067fd6c9ac9ca5a9129883bb36c075120880c24f7ecd4\": not found" containerID="b87341741450e20eca4067fd6c9ac9ca5a9129883bb36c075120880c24f7ecd4" Mar 19 11:45:30.287345 kubelet[2578]: I0319 11:45:30.287119 2578 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"b87341741450e20eca4067fd6c9ac9ca5a9129883bb36c075120880c24f7ecd4"} err="failed to get container status \"b87341741450e20eca4067fd6c9ac9ca5a9129883bb36c075120880c24f7ecd4\": rpc error: code = NotFound desc = an error occurred when try to find container \"b87341741450e20eca4067fd6c9ac9ca5a9129883bb36c075120880c24f7ecd4\": not found" Mar 19 11:45:30.287345 kubelet[2578]: I0319 11:45:30.287204 2578 scope.go:117] "RemoveContainer" containerID="b0ff1aaaf3a9b67b831b5a6b03300d39d02a7a9e55dea50ab86bf64470eef735" Mar 19 11:45:30.287588 containerd[1491]: time="2025-03-19T11:45:30.286957497Z" level=error msg="ContainerStatus for \"b87341741450e20eca4067fd6c9ac9ca5a9129883bb36c075120880c24f7ecd4\" failed" error="rpc 
error: code = NotFound desc = an error occurred when try to find container \"b87341741450e20eca4067fd6c9ac9ca5a9129883bb36c075120880c24f7ecd4\": not found" Mar 19 11:45:30.287674 containerd[1491]: time="2025-03-19T11:45:30.287641412Z" level=error msg="ContainerStatus for \"b0ff1aaaf3a9b67b831b5a6b03300d39d02a7a9e55dea50ab86bf64470eef735\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"b0ff1aaaf3a9b67b831b5a6b03300d39d02a7a9e55dea50ab86bf64470eef735\": not found" Mar 19 11:45:30.287928 kubelet[2578]: E0319 11:45:30.287844 2578 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"b0ff1aaaf3a9b67b831b5a6b03300d39d02a7a9e55dea50ab86bf64470eef735\": not found" containerID="b0ff1aaaf3a9b67b831b5a6b03300d39d02a7a9e55dea50ab86bf64470eef735" Mar 19 11:45:30.287928 kubelet[2578]: I0319 11:45:30.287871 2578 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"b0ff1aaaf3a9b67b831b5a6b03300d39d02a7a9e55dea50ab86bf64470eef735"} err="failed to get container status \"b0ff1aaaf3a9b67b831b5a6b03300d39d02a7a9e55dea50ab86bf64470eef735\": rpc error: code = NotFound desc = an error occurred when try to find container \"b0ff1aaaf3a9b67b831b5a6b03300d39d02a7a9e55dea50ab86bf64470eef735\": not found" Mar 19 11:45:30.287928 kubelet[2578]: I0319 11:45:30.287886 2578 scope.go:117] "RemoveContainer" containerID="751bf077e1ba7f49b98a15eb7e14fdfdeb4cf5334f2c66ed08e89715db26e547" Mar 19 11:45:30.288279 containerd[1491]: time="2025-03-19T11:45:30.288255648Z" level=error msg="ContainerStatus for \"751bf077e1ba7f49b98a15eb7e14fdfdeb4cf5334f2c66ed08e89715db26e547\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"751bf077e1ba7f49b98a15eb7e14fdfdeb4cf5334f2c66ed08e89715db26e547\": not found" Mar 19 11:45:30.288530 kubelet[2578]: E0319 11:45:30.288414 2578 log.go:32] 
"ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"751bf077e1ba7f49b98a15eb7e14fdfdeb4cf5334f2c66ed08e89715db26e547\": not found" containerID="751bf077e1ba7f49b98a15eb7e14fdfdeb4cf5334f2c66ed08e89715db26e547" Mar 19 11:45:30.288530 kubelet[2578]: I0319 11:45:30.288453 2578 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"751bf077e1ba7f49b98a15eb7e14fdfdeb4cf5334f2c66ed08e89715db26e547"} err="failed to get container status \"751bf077e1ba7f49b98a15eb7e14fdfdeb4cf5334f2c66ed08e89715db26e547\": rpc error: code = NotFound desc = an error occurred when try to find container \"751bf077e1ba7f49b98a15eb7e14fdfdeb4cf5334f2c66ed08e89715db26e547\": not found" Mar 19 11:45:30.288530 kubelet[2578]: I0319 11:45:30.288466 2578 scope.go:117] "RemoveContainer" containerID="69a4aa7628ca5fcf5e78be74764bf43046e8ef5b474b9d781ea87cf543427832" Mar 19 11:45:30.289440 containerd[1491]: time="2025-03-19T11:45:30.289415800Z" level=info msg="RemoveContainer for \"69a4aa7628ca5fcf5e78be74764bf43046e8ef5b474b9d781ea87cf543427832\"" Mar 19 11:45:30.291469 containerd[1491]: time="2025-03-19T11:45:30.291429786Z" level=info msg="RemoveContainer for \"69a4aa7628ca5fcf5e78be74764bf43046e8ef5b474b9d781ea87cf543427832\" returns successfully" Mar 19 11:45:30.291705 kubelet[2578]: I0319 11:45:30.291620 2578 scope.go:117] "RemoveContainer" containerID="69a4aa7628ca5fcf5e78be74764bf43046e8ef5b474b9d781ea87cf543427832" Mar 19 11:45:30.291826 containerd[1491]: time="2025-03-19T11:45:30.291773024Z" level=error msg="ContainerStatus for \"69a4aa7628ca5fcf5e78be74764bf43046e8ef5b474b9d781ea87cf543427832\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"69a4aa7628ca5fcf5e78be74764bf43046e8ef5b474b9d781ea87cf543427832\": not found" Mar 19 11:45:30.292001 kubelet[2578]: E0319 11:45:30.291947 2578 log.go:32] "ContainerStatus from runtime 
service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"69a4aa7628ca5fcf5e78be74764bf43046e8ef5b474b9d781ea87cf543427832\": not found" containerID="69a4aa7628ca5fcf5e78be74764bf43046e8ef5b474b9d781ea87cf543427832" Mar 19 11:45:30.292102 kubelet[2578]: I0319 11:45:30.292077 2578 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"69a4aa7628ca5fcf5e78be74764bf43046e8ef5b474b9d781ea87cf543427832"} err="failed to get container status \"69a4aa7628ca5fcf5e78be74764bf43046e8ef5b474b9d781ea87cf543427832\": rpc error: code = NotFound desc = an error occurred when try to find container \"69a4aa7628ca5fcf5e78be74764bf43046e8ef5b474b9d781ea87cf543427832\": not found" Mar 19 11:45:30.496092 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-bcaf3c324290590e60840ec64d04d1526cb2eba2feb87b0cfe00b34e3277e217-rootfs.mount: Deactivated successfully. Mar 19 11:45:30.496210 systemd[1]: var-lib-kubelet-pods-e08a3a4a\x2d7164\x2d4344\x2db0f6\x2ded58bc991168-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dc46bl.mount: Deactivated successfully. Mar 19 11:45:30.496289 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c298a95023bca70f299f6753affb341a9df90f19934827972a8cdb74bee9b075-rootfs.mount: Deactivated successfully. Mar 19 11:45:30.496342 systemd[1]: var-lib-kubelet-pods-65589817\x2d8584\x2d4d99\x2db7ad\x2df59a59741a65-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d7fwqs.mount: Deactivated successfully. Mar 19 11:45:30.496402 systemd[1]: var-lib-kubelet-pods-65589817\x2d8584\x2d4d99\x2db7ad\x2df59a59741a65-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Mar 19 11:45:30.496452 systemd[1]: var-lib-kubelet-pods-65589817\x2d8584\x2d4d99\x2db7ad\x2df59a59741a65-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. 
Mar 19 11:45:31.442588 sshd[4258]: Connection closed by 10.0.0.1 port 38794 Mar 19 11:45:31.443142 sshd-session[4255]: pam_unix(sshd:session): session closed for user core Mar 19 11:45:31.453627 systemd[1]: sshd@23-10.0.0.95:22-10.0.0.1:38794.service: Deactivated successfully. Mar 19 11:45:31.455326 systemd[1]: session-24.scope: Deactivated successfully. Mar 19 11:45:31.455617 systemd[1]: session-24.scope: Consumed 1.117s CPU time, 24.9M memory peak. Mar 19 11:45:31.456771 systemd-logind[1471]: Session 24 logged out. Waiting for processes to exit. Mar 19 11:45:31.463489 systemd[1]: Started sshd@24-10.0.0.95:22-10.0.0.1:38802.service - OpenSSH per-connection server daemon (10.0.0.1:38802). Mar 19 11:45:31.464569 systemd-logind[1471]: Removed session 24. Mar 19 11:45:31.505630 sshd[4417]: Accepted publickey for core from 10.0.0.1 port 38802 ssh2: RSA SHA256:m+OHt/J3MiNhmiRtwZE4O3bs/RIw4O7lQdgYDDHmuIE Mar 19 11:45:31.506651 sshd-session[4417]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 19 11:45:31.510979 systemd-logind[1471]: New session 25 of user core. Mar 19 11:45:31.521380 systemd[1]: Started session-25.scope - Session 25 of User core. 
Mar 19 11:45:32.069519 kubelet[2578]: I0319 11:45:32.068694 2578 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="65589817-8584-4d99-b7ad-f59a59741a65" path="/var/lib/kubelet/pods/65589817-8584-4d99-b7ad-f59a59741a65/volumes" Mar 19 11:45:32.069519 kubelet[2578]: I0319 11:45:32.069267 2578 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e08a3a4a-7164-4344-b0f6-ed58bc991168" path="/var/lib/kubelet/pods/e08a3a4a-7164-4344-b0f6-ed58bc991168/volumes" Mar 19 11:45:32.123868 kubelet[2578]: E0319 11:45:32.123814 2578 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Mar 19 11:45:32.205443 sshd[4420]: Connection closed by 10.0.0.1 port 38802 Mar 19 11:45:32.206111 sshd-session[4417]: pam_unix(sshd:session): session closed for user core Mar 19 11:45:32.218799 systemd[1]: sshd@24-10.0.0.95:22-10.0.0.1:38802.service: Deactivated successfully. Mar 19 11:45:32.220458 systemd[1]: session-25.scope: Deactivated successfully. Mar 19 11:45:32.221408 systemd-logind[1471]: Session 25 logged out. Waiting for processes to exit. Mar 19 11:45:32.227295 kubelet[2578]: I0319 11:45:32.224368 2578 memory_manager.go:355] "RemoveStaleState removing state" podUID="65589817-8584-4d99-b7ad-f59a59741a65" containerName="cilium-agent" Mar 19 11:45:32.227295 kubelet[2578]: I0319 11:45:32.224398 2578 memory_manager.go:355] "RemoveStaleState removing state" podUID="e08a3a4a-7164-4344-b0f6-ed58bc991168" containerName="cilium-operator" Mar 19 11:45:32.230907 systemd[1]: Started sshd@25-10.0.0.95:22-10.0.0.1:38810.service - OpenSSH per-connection server daemon (10.0.0.1:38810). Mar 19 11:45:32.232791 systemd-logind[1471]: Removed session 25. 
Mar 19 11:45:32.246000 systemd[1]: Created slice kubepods-burstable-pod965d81d9_df39_4655_a3c7_f5a9d16cd70b.slice - libcontainer container kubepods-burstable-pod965d81d9_df39_4655_a3c7_f5a9d16cd70b.slice. Mar 19 11:45:32.276414 kubelet[2578]: I0319 11:45:32.274993 2578 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/965d81d9-df39-4655-a3c7-f5a9d16cd70b-hostproc\") pod \"cilium-522z5\" (UID: \"965d81d9-df39-4655-a3c7-f5a9d16cd70b\") " pod="kube-system/cilium-522z5" Mar 19 11:45:32.276536 kubelet[2578]: I0319 11:45:32.276433 2578 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/965d81d9-df39-4655-a3c7-f5a9d16cd70b-lib-modules\") pod \"cilium-522z5\" (UID: \"965d81d9-df39-4655-a3c7-f5a9d16cd70b\") " pod="kube-system/cilium-522z5" Mar 19 11:45:32.276536 kubelet[2578]: I0319 11:45:32.276459 2578 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/965d81d9-df39-4655-a3c7-f5a9d16cd70b-xtables-lock\") pod \"cilium-522z5\" (UID: \"965d81d9-df39-4655-a3c7-f5a9d16cd70b\") " pod="kube-system/cilium-522z5" Mar 19 11:45:32.276536 kubelet[2578]: I0319 11:45:32.276477 2578 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/965d81d9-df39-4655-a3c7-f5a9d16cd70b-hubble-tls\") pod \"cilium-522z5\" (UID: \"965d81d9-df39-4655-a3c7-f5a9d16cd70b\") " pod="kube-system/cilium-522z5" Mar 19 11:45:32.276536 kubelet[2578]: I0319 11:45:32.276513 2578 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/965d81d9-df39-4655-a3c7-f5a9d16cd70b-bpf-maps\") pod \"cilium-522z5\" (UID: 
\"965d81d9-df39-4655-a3c7-f5a9d16cd70b\") " pod="kube-system/cilium-522z5" Mar 19 11:45:32.276536 kubelet[2578]: I0319 11:45:32.276536 2578 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/965d81d9-df39-4655-a3c7-f5a9d16cd70b-clustermesh-secrets\") pod \"cilium-522z5\" (UID: \"965d81d9-df39-4655-a3c7-f5a9d16cd70b\") " pod="kube-system/cilium-522z5" Mar 19 11:45:32.276907 kubelet[2578]: I0319 11:45:32.276590 2578 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/965d81d9-df39-4655-a3c7-f5a9d16cd70b-cilium-config-path\") pod \"cilium-522z5\" (UID: \"965d81d9-df39-4655-a3c7-f5a9d16cd70b\") " pod="kube-system/cilium-522z5" Mar 19 11:45:32.276907 kubelet[2578]: I0319 11:45:32.276625 2578 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/965d81d9-df39-4655-a3c7-f5a9d16cd70b-cni-path\") pod \"cilium-522z5\" (UID: \"965d81d9-df39-4655-a3c7-f5a9d16cd70b\") " pod="kube-system/cilium-522z5" Mar 19 11:45:32.276907 kubelet[2578]: I0319 11:45:32.276642 2578 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/965d81d9-df39-4655-a3c7-f5a9d16cd70b-host-proc-sys-net\") pod \"cilium-522z5\" (UID: \"965d81d9-df39-4655-a3c7-f5a9d16cd70b\") " pod="kube-system/cilium-522z5" Mar 19 11:45:32.276907 kubelet[2578]: I0319 11:45:32.276659 2578 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/965d81d9-df39-4655-a3c7-f5a9d16cd70b-host-proc-sys-kernel\") pod \"cilium-522z5\" (UID: \"965d81d9-df39-4655-a3c7-f5a9d16cd70b\") " pod="kube-system/cilium-522z5" Mar 19 11:45:32.277992 
kubelet[2578]: I0319 11:45:32.277403 2578 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/965d81d9-df39-4655-a3c7-f5a9d16cd70b-cilium-run\") pod \"cilium-522z5\" (UID: \"965d81d9-df39-4655-a3c7-f5a9d16cd70b\") " pod="kube-system/cilium-522z5"
Mar 19 11:45:32.277992 kubelet[2578]: I0319 11:45:32.277440 2578 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/965d81d9-df39-4655-a3c7-f5a9d16cd70b-cilium-cgroup\") pod \"cilium-522z5\" (UID: \"965d81d9-df39-4655-a3c7-f5a9d16cd70b\") " pod="kube-system/cilium-522z5"
Mar 19 11:45:32.277992 kubelet[2578]: I0319 11:45:32.277460 2578 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/965d81d9-df39-4655-a3c7-f5a9d16cd70b-etc-cni-netd\") pod \"cilium-522z5\" (UID: \"965d81d9-df39-4655-a3c7-f5a9d16cd70b\") " pod="kube-system/cilium-522z5"
Mar 19 11:45:32.277992 kubelet[2578]: I0319 11:45:32.277476 2578 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/965d81d9-df39-4655-a3c7-f5a9d16cd70b-cilium-ipsec-secrets\") pod \"cilium-522z5\" (UID: \"965d81d9-df39-4655-a3c7-f5a9d16cd70b\") " pod="kube-system/cilium-522z5"
Mar 19 11:45:32.277992 kubelet[2578]: I0319 11:45:32.277493 2578 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ng9wk\" (UniqueName: \"kubernetes.io/projected/965d81d9-df39-4655-a3c7-f5a9d16cd70b-kube-api-access-ng9wk\") pod \"cilium-522z5\" (UID: \"965d81d9-df39-4655-a3c7-f5a9d16cd70b\") " pod="kube-system/cilium-522z5"
Mar 19 11:45:32.290224 sshd[4431]: Accepted publickey for core from 10.0.0.1 port 38810 ssh2: RSA SHA256:m+OHt/J3MiNhmiRtwZE4O3bs/RIw4O7lQdgYDDHmuIE
Mar 19 11:45:32.291515 sshd-session[4431]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 19 11:45:32.295952 systemd-logind[1471]: New session 26 of user core.
Mar 19 11:45:32.304416 systemd[1]: Started session-26.scope - Session 26 of User core.
Mar 19 11:45:32.356210 sshd[4434]: Connection closed by 10.0.0.1 port 38810
Mar 19 11:45:32.356313 sshd-session[4431]: pam_unix(sshd:session): session closed for user core
Mar 19 11:45:32.367094 systemd[1]: sshd@25-10.0.0.95:22-10.0.0.1:38810.service: Deactivated successfully.
Mar 19 11:45:32.369090 systemd[1]: session-26.scope: Deactivated successfully.
Mar 19 11:45:32.370860 systemd-logind[1471]: Session 26 logged out. Waiting for processes to exit.
Mar 19 11:45:32.372559 systemd[1]: Started sshd@26-10.0.0.95:22-10.0.0.1:38822.service - OpenSSH per-connection server daemon (10.0.0.1:38822).
Mar 19 11:45:32.373480 systemd-logind[1471]: Removed session 26.
Mar 19 11:45:32.425197 sshd[4440]: Accepted publickey for core from 10.0.0.1 port 38822 ssh2: RSA SHA256:m+OHt/J3MiNhmiRtwZE4O3bs/RIw4O7lQdgYDDHmuIE
Mar 19 11:45:32.426434 sshd-session[4440]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 19 11:45:32.430480 systemd-logind[1471]: New session 27 of user core.
Mar 19 11:45:32.446402 systemd[1]: Started session-27.scope - Session 27 of User core.
Mar 19 11:45:32.550508 containerd[1491]: time="2025-03-19T11:45:32.550456036Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-522z5,Uid:965d81d9-df39-4655-a3c7-f5a9d16cd70b,Namespace:kube-system,Attempt:0,}"
Mar 19 11:45:32.567067 containerd[1491]: time="2025-03-19T11:45:32.566963676Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 19 11:45:32.567067 containerd[1491]: time="2025-03-19T11:45:32.567042636Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 19 11:45:32.567067 containerd[1491]: time="2025-03-19T11:45:32.567060956Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 19 11:45:32.567220 containerd[1491]: time="2025-03-19T11:45:32.567146315Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 19 11:45:32.585442 systemd[1]: Started cri-containerd-35a8409cb6105f77ed7571ab8348de3375ea186220c266987392042d0d29abfb.scope - libcontainer container 35a8409cb6105f77ed7571ab8348de3375ea186220c266987392042d0d29abfb.
Mar 19 11:45:32.604206 containerd[1491]: time="2025-03-19T11:45:32.604161696Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-522z5,Uid:965d81d9-df39-4655-a3c7-f5a9d16cd70b,Namespace:kube-system,Attempt:0,} returns sandbox id \"35a8409cb6105f77ed7571ab8348de3375ea186220c266987392042d0d29abfb\""
Mar 19 11:45:32.614389 containerd[1491]: time="2025-03-19T11:45:32.614265807Z" level=info msg="CreateContainer within sandbox \"35a8409cb6105f77ed7571ab8348de3375ea186220c266987392042d0d29abfb\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Mar 19 11:45:32.635315 containerd[1491]: time="2025-03-19T11:45:32.635054347Z" level=info msg="CreateContainer within sandbox \"35a8409cb6105f77ed7571ab8348de3375ea186220c266987392042d0d29abfb\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"503f8d92d81d61f2b1ccefbd39e9de30bd63a5de40a9e954ba39b7167e2cd439\""
Mar 19 11:45:32.635753 containerd[1491]: time="2025-03-19T11:45:32.635647064Z" level=info msg="StartContainer for \"503f8d92d81d61f2b1ccefbd39e9de30bd63a5de40a9e954ba39b7167e2cd439\""
Mar 19 11:45:32.664490 systemd[1]: Started cri-containerd-503f8d92d81d61f2b1ccefbd39e9de30bd63a5de40a9e954ba39b7167e2cd439.scope - libcontainer container 503f8d92d81d61f2b1ccefbd39e9de30bd63a5de40a9e954ba39b7167e2cd439.
Mar 19 11:45:32.692313 containerd[1491]: time="2025-03-19T11:45:32.692248190Z" level=info msg="StartContainer for \"503f8d92d81d61f2b1ccefbd39e9de30bd63a5de40a9e954ba39b7167e2cd439\" returns successfully"
Mar 19 11:45:32.710349 systemd[1]: cri-containerd-503f8d92d81d61f2b1ccefbd39e9de30bd63a5de40a9e954ba39b7167e2cd439.scope: Deactivated successfully.
Mar 19 11:45:32.737457 containerd[1491]: time="2025-03-19T11:45:32.737185292Z" level=info msg="shim disconnected" id=503f8d92d81d61f2b1ccefbd39e9de30bd63a5de40a9e954ba39b7167e2cd439 namespace=k8s.io
Mar 19 11:45:32.737457 containerd[1491]: time="2025-03-19T11:45:32.737256092Z" level=warning msg="cleaning up after shim disconnected" id=503f8d92d81d61f2b1ccefbd39e9de30bd63a5de40a9e954ba39b7167e2cd439 namespace=k8s.io
Mar 19 11:45:32.737457 containerd[1491]: time="2025-03-19T11:45:32.737266332Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 19 11:45:32.746821 containerd[1491]: time="2025-03-19T11:45:32.746774326Z" level=warning msg="cleanup warnings time=\"2025-03-19T11:45:32Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Mar 19 11:45:33.269572 containerd[1491]: time="2025-03-19T11:45:33.269406014Z" level=info msg="CreateContainer within sandbox \"35a8409cb6105f77ed7571ab8348de3375ea186220c266987392042d0d29abfb\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Mar 19 11:45:33.280204 containerd[1491]: time="2025-03-19T11:45:33.280121933Z" level=info msg="CreateContainer within sandbox \"35a8409cb6105f77ed7571ab8348de3375ea186220c266987392042d0d29abfb\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"f7f69fe82dc127041c04719a4b20dcd796e0d04ebfc143f61d5704677995151c\""
Mar 19 11:45:33.280861 containerd[1491]: time="2025-03-19T11:45:33.280823650Z" level=info msg="StartContainer for \"f7f69fe82dc127041c04719a4b20dcd796e0d04ebfc143f61d5704677995151c\""
Mar 19 11:45:33.304422 systemd[1]: Started cri-containerd-f7f69fe82dc127041c04719a4b20dcd796e0d04ebfc143f61d5704677995151c.scope - libcontainer container f7f69fe82dc127041c04719a4b20dcd796e0d04ebfc143f61d5704677995151c.
Mar 19 11:45:33.324695 containerd[1491]: time="2025-03-19T11:45:33.324614640Z" level=info msg="StartContainer for \"f7f69fe82dc127041c04719a4b20dcd796e0d04ebfc143f61d5704677995151c\" returns successfully"
Mar 19 11:45:33.332634 systemd[1]: cri-containerd-f7f69fe82dc127041c04719a4b20dcd796e0d04ebfc143f61d5704677995151c.scope: Deactivated successfully.
Mar 19 11:45:33.351666 containerd[1491]: time="2025-03-19T11:45:33.351610575Z" level=info msg="shim disconnected" id=f7f69fe82dc127041c04719a4b20dcd796e0d04ebfc143f61d5704677995151c namespace=k8s.io
Mar 19 11:45:33.351666 containerd[1491]: time="2025-03-19T11:45:33.351662255Z" level=warning msg="cleaning up after shim disconnected" id=f7f69fe82dc127041c04719a4b20dcd796e0d04ebfc143f61d5704677995151c namespace=k8s.io
Mar 19 11:45:33.351666 containerd[1491]: time="2025-03-19T11:45:33.351670335Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 19 11:45:34.241376 kubelet[2578]: I0319 11:45:34.241320 2578 setters.go:602] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-03-19T11:45:34Z","lastTransitionTime":"2025-03-19T11:45:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Mar 19 11:45:34.273012 containerd[1491]: time="2025-03-19T11:45:34.272969653Z" level=info msg="CreateContainer within sandbox \"35a8409cb6105f77ed7571ab8348de3375ea186220c266987392042d0d29abfb\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Mar 19 11:45:34.284937 containerd[1491]: time="2025-03-19T11:45:34.284888018Z" level=info msg="CreateContainer within sandbox \"35a8409cb6105f77ed7571ab8348de3375ea186220c266987392042d0d29abfb\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"0c285310e76e813ef960b645a526e4e78bb86bdea851d513f20c753590c340f8\""
Mar 19 11:45:34.287355 containerd[1491]: time="2025-03-19T11:45:34.286709573Z" level=info msg="StartContainer for \"0c285310e76e813ef960b645a526e4e78bb86bdea851d513f20c753590c340f8\""
Mar 19 11:45:34.312398 systemd[1]: Started cri-containerd-0c285310e76e813ef960b645a526e4e78bb86bdea851d513f20c753590c340f8.scope - libcontainer container 0c285310e76e813ef960b645a526e4e78bb86bdea851d513f20c753590c340f8.
Mar 19 11:45:34.336500 containerd[1491]: time="2025-03-19T11:45:34.336460786Z" level=info msg="StartContainer for \"0c285310e76e813ef960b645a526e4e78bb86bdea851d513f20c753590c340f8\" returns successfully"
Mar 19 11:45:34.339119 systemd[1]: cri-containerd-0c285310e76e813ef960b645a526e4e78bb86bdea851d513f20c753590c340f8.scope: Deactivated successfully.
Mar 19 11:45:34.361369 containerd[1491]: time="2025-03-19T11:45:34.361309633Z" level=info msg="shim disconnected" id=0c285310e76e813ef960b645a526e4e78bb86bdea851d513f20c753590c340f8 namespace=k8s.io
Mar 19 11:45:34.361541 containerd[1491]: time="2025-03-19T11:45:34.361457152Z" level=warning msg="cleaning up after shim disconnected" id=0c285310e76e813ef960b645a526e4e78bb86bdea851d513f20c753590c340f8 namespace=k8s.io
Mar 19 11:45:34.361541 containerd[1491]: time="2025-03-19T11:45:34.361470232Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 19 11:45:34.382348 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0c285310e76e813ef960b645a526e4e78bb86bdea851d513f20c753590c340f8-rootfs.mount: Deactivated successfully.
Mar 19 11:45:35.276832 containerd[1491]: time="2025-03-19T11:45:35.276786540Z" level=info msg="CreateContainer within sandbox \"35a8409cb6105f77ed7571ab8348de3375ea186220c266987392042d0d29abfb\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Mar 19 11:45:35.292566 containerd[1491]: time="2025-03-19T11:45:35.292523508Z" level=info msg="CreateContainer within sandbox \"35a8409cb6105f77ed7571ab8348de3375ea186220c266987392042d0d29abfb\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"3db96cbb9b1f7bd89a611c65119891e9be36a2c8e79ce73be1c2f7bbbecc5c89\""
Mar 19 11:45:35.304220 containerd[1491]: time="2025-03-19T11:45:35.304116084Z" level=info msg="StartContainer for \"3db96cbb9b1f7bd89a611c65119891e9be36a2c8e79ce73be1c2f7bbbecc5c89\""
Mar 19 11:45:35.341392 systemd[1]: Started cri-containerd-3db96cbb9b1f7bd89a611c65119891e9be36a2c8e79ce73be1c2f7bbbecc5c89.scope - libcontainer container 3db96cbb9b1f7bd89a611c65119891e9be36a2c8e79ce73be1c2f7bbbecc5c89.
Mar 19 11:45:35.359691 systemd[1]: cri-containerd-3db96cbb9b1f7bd89a611c65119891e9be36a2c8e79ce73be1c2f7bbbecc5c89.scope: Deactivated successfully.
Mar 19 11:45:35.361080 containerd[1491]: time="2025-03-19T11:45:35.360975247Z" level=info msg="StartContainer for \"3db96cbb9b1f7bd89a611c65119891e9be36a2c8e79ce73be1c2f7bbbecc5c89\" returns successfully"
Mar 19 11:45:35.378396 containerd[1491]: time="2025-03-19T11:45:35.378342652Z" level=info msg="shim disconnected" id=3db96cbb9b1f7bd89a611c65119891e9be36a2c8e79ce73be1c2f7bbbecc5c89 namespace=k8s.io
Mar 19 11:45:35.378396 containerd[1491]: time="2025-03-19T11:45:35.378392972Z" level=warning msg="cleaning up after shim disconnected" id=3db96cbb9b1f7bd89a611c65119891e9be36a2c8e79ce73be1c2f7bbbecc5c89 namespace=k8s.io
Mar 19 11:45:35.378396 containerd[1491]: time="2025-03-19T11:45:35.378401212Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 19 11:45:35.382440 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3db96cbb9b1f7bd89a611c65119891e9be36a2c8e79ce73be1c2f7bbbecc5c89-rootfs.mount: Deactivated successfully.
Mar 19 11:45:36.281262 containerd[1491]: time="2025-03-19T11:45:36.281092645Z" level=info msg="CreateContainer within sandbox \"35a8409cb6105f77ed7571ab8348de3375ea186220c266987392042d0d29abfb\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Mar 19 11:45:36.297254 containerd[1491]: time="2025-03-19T11:45:36.297201386Z" level=info msg="CreateContainer within sandbox \"35a8409cb6105f77ed7571ab8348de3375ea186220c266987392042d0d29abfb\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"07e4925f148f73fbe6df10ecc89a9c60590d296770a0b0f5c2ecb57b15767baf\""
Mar 19 11:45:36.297734 containerd[1491]: time="2025-03-19T11:45:36.297698745Z" level=info msg="StartContainer for \"07e4925f148f73fbe6df10ecc89a9c60590d296770a0b0f5c2ecb57b15767baf\""
Mar 19 11:45:36.324409 systemd[1]: Started cri-containerd-07e4925f148f73fbe6df10ecc89a9c60590d296770a0b0f5c2ecb57b15767baf.scope - libcontainer container 07e4925f148f73fbe6df10ecc89a9c60590d296770a0b0f5c2ecb57b15767baf.
Mar 19 11:45:36.346772 containerd[1491]: time="2025-03-19T11:45:36.346718128Z" level=info msg="StartContainer for \"07e4925f148f73fbe6df10ecc89a9c60590d296770a0b0f5c2ecb57b15767baf\" returns successfully"
Mar 19 11:45:36.603272 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce))
Mar 19 11:45:37.296750 kubelet[2578]: I0319 11:45:37.296691 2578 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-522z5" podStartSLOduration=5.296674618 podStartE2EDuration="5.296674618s" podCreationTimestamp="2025-03-19 11:45:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-19 11:45:37.295919978 +0000 UTC m=+85.311443985" watchObservedRunningTime="2025-03-19 11:45:37.296674618 +0000 UTC m=+85.312198625"
Mar 19 11:45:39.364313 systemd-networkd[1414]: lxc_health: Link UP
Mar 19 11:45:39.374977 systemd-networkd[1414]: lxc_health: Gained carrier
Mar 19 11:45:40.880439 systemd[1]: run-containerd-runc-k8s.io-07e4925f148f73fbe6df10ecc89a9c60590d296770a0b0f5c2ecb57b15767baf-runc.ihFpBL.mount: Deactivated successfully.
Mar 19 11:45:40.932659 systemd-networkd[1414]: lxc_health: Gained IPv6LL
Mar 19 11:45:45.110610 sshd[4447]: Connection closed by 10.0.0.1 port 38822
Mar 19 11:45:45.110965 sshd-session[4440]: pam_unix(sshd:session): session closed for user core
Mar 19 11:45:45.113525 systemd[1]: sshd@26-10.0.0.95:22-10.0.0.1:38822.service: Deactivated successfully.
Mar 19 11:45:45.115196 systemd[1]: session-27.scope: Deactivated successfully.
Mar 19 11:45:45.116443 systemd-logind[1471]: Session 27 logged out. Waiting for processes to exit.
Mar 19 11:45:45.117514 systemd-logind[1471]: Removed session 27.